The key phrase is "non-consensual intimate image", commonly known as "revenge porn". It seems this includes fakes as well.
Edit: full text of draft legislation https://www.gov.uk/government/collections/crime-and-policing... ; still very much in the process of being amended.
I note that publishing NCII is already an offence in Scotland, although it doesn't have this kind of liability for platforms. Primarily used against ex-partners publishing real or fake revenge images.
Again, I’m not passing judgment on content moderation itself, but this is an extremely weak initiative.
You’d essentially have to police all VPN use, beyond even China’s levels, to catch the worst offenders here.
Can we also get a legal definition of "social media"? Is that really just as simple as "services which allow multi-directional communication"? Hate to break it to them, but the internet proper is, itself, a service which allows multi-directional communication. No matter how many walled gardens are created, the 1s and 0s will continue to flow unimpeded.
Laws are reactive. When abuses of the system happen, lawmakers need to find ways to minimize the damage. This is one of the reasons Google used to follow the "don't be evil" doctrine: it was a smart way to minimize regulation. The new big tech has thrown any appearance of morality out the window, and that creates a strong need to regulate their actions.
Bear in mind, this could have been used to stop the Epstein images of the former Prince Andrew from being viewed [1].
> Platforms that do not do so would potentially face fines of 10 percent of "qualifying worldwide income" or have their services blocked in the UK.
Why on earth would it be 10% of their worldwide income and not their UK-based income? These politicians think they have more power than they actually do.
> The amendment follows outrage over the Elon Musk-owned chatbot Grok's willingness to generate nude or sexualized images of people, mainly women and girls, which forced a climbdown earlier this year.
The AI didn't just randomly generate NSFW content; it did so at the request of the user. Remember, there was no interest in removing the CSAM content from Twitter prior to Musk buying it, and then those users all moved to Mastodon / BlueSky, where they now share that content.
> The government said: "Plans are currently being considered by Ofcom for these kinds of images to be treated with the same severity as child sexual abuse and terrorism content, digitally marking them so that any time someone tries to repost them, they will be automatically taken down."
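The "digital marking" described here is usually implemented as hash-matching: known prohibited images are fingerprinted, and every upload is checked against a shared blocklist of fingerprints. A minimal sketch in Python, using an exact SHA-256 digest purely for illustration (deployed systems such as PhotoDNA use perceptual hashes that survive resizing and re-encoding; the function names here are made up for the example):

```python
import hashlib

# Blocklist of fingerprints for known prohibited images.
# In production this would be a shared, industry-maintained hash database.
blocklist: set[str] = set()

def fingerprint(image_bytes: bytes) -> str:
    """Exact-match fingerprint; real systems use a perceptual hash instead."""
    return hashlib.sha256(image_bytes).hexdigest()

def register_prohibited(image_bytes: bytes) -> None:
    """Mark an image so future uploads of it are rejected."""
    blocklist.add(fingerprint(image_bytes))

def allow_upload(image_bytes: bytes) -> bool:
    """Return False for any upload matching a registered image."""
    return fingerprint(image_bytes) not in blocklist

# Once an image is registered, attempts to repost it are blocked.
known_bad = b"\x89PNG...bytes-of-a-known-image"
register_prohibited(known_bad)
assert allow_upload(known_bad) is False       # repost blocked
assert allow_upload(b"unrelated image") is True  # other content passes
```

Note that exact hashing like this is trivially evaded by flipping a single byte, which is precisely why the real deployments rely on perceptual hashing; this is a sketch of the matching flow only.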
Ofcom simply doesn't have this kind of power. 4chan are showing as much [2]. This is simply massive overreach by the UK government, and I would advise tech giants to stop complying.