I can see this is related to the sad and ongoing ‘purification’ of the internet, but still, I'm not going to get upset over better UX for taking down deepfakes or non-consensual explicit images, which do hurt people.
What could go wrong?
Why can't I control whether any kind of public image of me appears at all?
Microsoft, Google, Facebook, and other large tech companies have had image recognition models capable of detecting this kind of content at scale for years, long before large language models became popular. There’s really no excuse for hosting or indexing these images as publicly accessible assets when they clearly have the technical ability to identify and exclude explicit content automatically.
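For what it's worth, the building blocks for this have been available off the shelf for years. Here's a rough sketch of what automated screening could look like using Google Cloud Vision's SafeSearch detection; the threshold and the "skip indexing" decision are my own illustrative assumptions, not any company's actual pipeline:

```python
# Sketch: screen an image with Google Cloud Vision SafeSearch before indexing it.
# Assumes google-cloud-vision is installed and credentials are configured;
# the LIKELY threshold is an illustrative choice, not a recommendation.
from google.cloud import vision


def is_safe_to_index(image_bytes: bytes) -> bool:
    client = vision.ImageAnnotatorClient()
    image = vision.Image(content=image_bytes)
    annotation = client.safe_search_detection(image=image).safe_search_annotation

    # Likelihood values range from VERY_UNLIKELY to VERY_LIKELY.
    threshold = vision.Likelihood.LIKELY
    return annotation.adult < threshold and annotation.racy < threshold


if __name__ == "__main__":
    with open("photo.jpg", "rb") as f:
        print("index this image:", is_safe_to_index(f.read()))
```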
Instead of putting the burden on victims to report these images one by one, companies should be proactively preventing this material from appearing in search results at all. If the technology exists, and it clearly does, then the default approach should be prevention, not reactive cleanup.