Want to decide whether content is allowed or not? Fine, provide an API that allows content to be scanned and returns a bool.
Want to age-gate content? Fine, provide an identity service.
While both of these will reduce privacy, they'll achieve one of two objectives: either those making these policies will realize the law they wrote is impossible to satisfy, or, if they succeed, it will at least provide a level playing field for startups vs incumbents.
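To make the first point concrete, here's a minimal sketch of what calling such a government-provided check endpoint might look like from a platform's side. The endpoint URL, request shape, and "allowed" field are all invented for illustration; no such official API exists.

    # Hypothetical sketch only: the endpoint, request shape, and "allowed"
    # flag are made up; this is what the proposed API *could* look like.
    import json
    import urllib.request

    CHECK_ENDPOINT = "https://example.gov/content-check"  # placeholder URL

    def is_content_allowed(text: str) -> bool:
        """Submit content to the (hypothetical) official scanner and return its verdict."""
        payload = json.dumps({"content": text}).encode("utf-8")
        req = urllib.request.Request(
            CHECK_ENDPOINT,
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return bool(json.load(resp).get("allowed", False))

The point being: if the state defines what's allowed, the state should expose the oracle, and every platform, big or small, calls the same endpoint.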
Now things become interesting when a user pays for ranking or 'verification' checkmarks. What makes that content different from a paid advertisement?
But user-generated content? LOL, no.
If these companies aren't willing to put basic measures in place to stop even the most obviously illegal ads from running, I have a hard time feeling sympathy when they get their just deserts in court.
[0]: https://www.msn.com/en-us/money/personalfinance/meta-showed-...
I never really understood how that system is supposed to work.
So on the one hand, Section 230 absolves the host of liability and tells an affected party to go after the author directly.
But on the other hand, we all rally for the importance of anonymity on the internet, so it's very likely that there will be no way to find the author.
Isn't this a massive vacuum of responsibility?
The Russmedia ruling of the ECJ: Towards a “Cleannet”?
A change in liability privileges for online providers will lead to a “cleaner”, but also more rigid and more closely monitored, internet, says Joerg Heidrich.
And for quick reference, what the judgement actually entails:
On those grounds, the Court (Grand Chamber) hereby rules:
1. Article 5(2) and Articles 24 to 26 of Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) must be interpreted as meaning that the operator of an online marketplace, as controller, within the meaning of Article 4(7) of that regulation, of the personal data contained in advertisements published on its online marketplace, is required, before the publication of the advertisements and by means of appropriate technical and organisational measures,
– to identify the advertisements that contain sensitive data in terms of Article 9(1) of that regulation,
– to verify whether the user advertiser preparing to place such an advertisement is the person whose sensitive data appear in that advertisement and, if this is not the case,
– to refuse publication of that advertisement, unless that user advertiser can demonstrate that the data subject has given his or her explicit consent to the data in question being published on that online marketplace, within the meaning of Article 9(2)(a), or that one of the other exceptions provided for in Article 9(2)(b) to (j) is satisfied.
2. Article 32 of Regulation 2016/679 must be interpreted as meaning that the operator of an online marketplace, as controller, within the meaning of Article 4(7) of that regulation, of the personal data contained in advertisements published on its online marketplace, is required to implement appropriate technical and organisational security measures in order to prevent advertisements published there and containing sensitive data, in terms of Article 9(1) of that regulation, from being copied and unlawfully published on other websites.
3. Article 1(5)(b) of Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market (‘Directive on electronic commerce’) and Article 2(4) of Regulation 2016/679 must be interpreted as meaning that the operator of an online marketplace, as controller, within the meaning of Article 4(7) of Regulation 2016/679, of the personal data contained in advertisements published on its online marketplace, cannot rely, in respect of an infringement of the obligations arising from Article 5(2) and Articles 24 to 26 and 32 of that regulation, on Articles 12 to 15 of that directive, relating to the liability of intermediary providers.
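Read as a workflow, point 1 of the operative part amounts to a pre-publication gate. A rough sketch of that decision logic follows; every name and the stub bodies are invented for illustration, and the genuinely hard parts (detecting Article 9(1) data, verifying identity) are exactly what's left to the operator.

    # Rough sketch of the pre-publication gate described in point 1 of the ruling.
    # All names and the stub implementations are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class Advertisement:
        advertiser_id: str
        text: str

    def detect_sensitive_data(text: str) -> list[str]:
        """Return the data subjects whose Art. 9(1) data appear in the text (stub)."""
        return []

    def advertiser_is_subject(advertiser_id: str, subject: str) -> bool:
        """Check whether the advertiser is the person the sensitive data concerns (stub)."""
        return False

    def art9_2_exception_applies(subject: str, ad: Advertisement) -> bool:
        """Explicit consent (Art. 9(2)(a)) or another Art. 9(2)(b)-(j) exception (stub)."""
        return False

    def may_publish(ad: Advertisement) -> bool:
        """Refuse publication unless each data subject is the advertiser or an exception applies."""
        for subject in detect_sensitive_data(ad.text):
            if advertiser_is_subject(ad.advertiser_id, subject):
                continue
            if art9_2_exception_applies(subject, ad):
                continue
            return False
        return True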
This mostly seems to be about advertisers distributing content, not so much making it "effectively impossible to run a user-generated platform legally". Unless you think basic KYC for paying customers is "effectively impossible", perhaps.

> There’s nothing inherently in the law or the ruling that limits its conclusions to “advertisements.” The same underlying factors would apply to any third party content on any website that is subject to the GDPR.
False. A European court's conclusions apply specifically to the case it is ruling on, so they are inherently limited to the exact circumstances of that case, without the court needing to spell that out. They do not set precedent.
How do they think a hosting provider can check whether personal data is accurate? Maybe if privacy didn't exist and everybody could be scrutinized... but the ruling refers to the GDPR to justify this, and the GDPR is about _protecting_ privacy. So, which is it?
And for everything else... is the material sensitive or not? How can anyone know, in advance?
I suggest every web site host simply forward each and every input to an EU Court address and let them handle it. They're the ones suggesting that hosts should make sure that personal data about someone is "accurate", and they're the ones demanding that the data not be "sensitive", so they may as well be responsible for vetting the data.
But they're all crazy anyway, as they demand that a website must block anyone from copying the content... so how, at the same time, can you even have a website? A website that people can view?
If the ruling were about collecting data that isn't meant for display, i.e. what an online shop does (address, credit card number), then it would be understandable. But provisions for that already exist; instead, they use the GDPR as a tool to extend this to user-created content. It's not limited to ads, and ads do need something done about them. Just something totally different from this.
Could this lead to censorship as well? For example, couldn't you go to a website or community you don't like, post information that could be seen as "sensitive personal data", and then file an anonymous complaint so that they get into legal trouble or get shut down?