Anonymous age verification isn't a technical problem to be solved; the technical side has already been solved. It's a societal problem: the companies and the politicians pushing for age verification don't want to support it.
If anyone implemented this privacy-preserving scheme, would all the laws just flip to say "actually, we really did mean a government ID tied to your post"?
The key issue, however, is trust. The underlying protocols may support zero-knowledge proofs, but as a user I'm unlikely to be able to inspect those underlying protocols. I need to be able to see exactly what information I'm allowing the Issuer to see. Otherwise a "correct" anonymous scheme is indistinguishable from a "bad" scheme in which the Issuer sees both my full ID and the details of the Resource I wish to access. Assuming a small set of centralized Issuers, they are in a position of great power if they can see exactly who is trying to access exactly what at all times. That's the question of trust: trust in the Issuer and in the implementation, not the underlying math.
The data is in a form that lets the phone pass challenges of the type "are you at least x years old" without giving out any other information.
And the user cannot share that data with other users because their phone will not let them.
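A minimal sketch of that device-side model (all names here are illustrative; the "secure element" is just a Python object, and an HMAC stands in for a real issuer signature): the signed credential never leaves the holder, and the only operation the device exposes is a yes/no predicate.

```python
import hashlib
import hmac
from datetime import date

ISSUER_KEY = b"issuer-secret"  # stand-in for the issuer's signing key


def issue_credential(birth_year: int) -> dict:
    """Issuer 'signs' the birth year (HMAC as a stand-in for a real signature)."""
    payload = str(birth_year).encode()
    return {
        "birth_year": birth_year,
        "sig": hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest(),
    }


class SecureElement:
    """Holds the credential; exposes only an age predicate, never the data."""

    def __init__(self, credential: dict):
        self._cred = credential  # not externally readable on a real device

    def at_least(self, years: int, today=None) -> bool:
        today = today or date.today()
        return today.year - self._cred["birth_year"] >= years


phone = SecureElement(issue_credential(2001))
print(phone.at_least(18, today=date(2025, 1, 1)))  # True
print(phone.at_least(30, today=date(2025, 1, 1)))  # False
```

The point of the sketch is the interface shape: the verifier's challenge gets a boolean back, and the birth year itself stays inside the device's trust boundary.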
From what I understand, the issuer signs a credential, and the user's device then generates a unique proof from that signature each time, preventing verifiers from colluding to track the original signature across services. It also seems to be designed with safeguards against the issuer itself.
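A rough sketch of the unlinkability property only (names are illustrative; hashing with a fresh nonce stands in for real randomized presentations in schemes like BBS+, and the verifier's actual check against the issuer's public key is omitted): each presentation mixes the signature with fresh randomness, so no two presentations can be correlated.

```python
import hashlib
import secrets

# Fixed value the issuer handed to this user once.
signature = b"issuer-signature-over-credential"


def present(sig: bytes, verifier_id: str) -> str:
    """Derive a one-time presentation; fresh randomness on every call."""
    nonce = secrets.token_bytes(16)
    return hashlib.sha256(sig + verifier_id.encode() + nonce).hexdigest()


p1 = present(signature, "video-site")
p2 = present(signature, "forum")
p3 = present(signature, "video-site")  # even the same verifier, twice

# All three differ, and none reveals the underlying signature,
# so colluding verifiers have nothing stable to join on.
print(len({p1, p2, p3}))  # 3
```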
Information based on credentials can be selectively disclosed: whether you're over 18, or whether an account balance is above a certain threshold, without disclosing the underlying data.
Obviously, if the services you use need literal PII, they can still tie activity to a real-world identity. But for services that only require age assurance, being able to prove you're over 18 without providing your actual age or other identifiers is better than the solutions actively in use.
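One common way to get that selective disclosure is the salted-hash approach used by SD-JWT-style credentials. A sketch under stated assumptions (names are illustrative, and an HMAC stands in for a real public-key signature, which is why the verifier here holds the same key): the issuer signs only digests of salted attributes, and the holder reveals the salt and value for just the attribute they choose.

```python
import hashlib
import hmac
import json
import secrets

ISSUER_KEY = b"issuer-secret"  # stand-in for a real signing keypair


def digest(salt: bytes, name: str, value) -> str:
    """Salted digest of one attribute; the salt hides the value from guessing."""
    return hashlib.sha256(salt + f"{name}={value}".encode()).hexdigest()


# Issuer: salt each attribute, then sign only the list of digests.
attrs = {"name": "Alice", "over_18": True, "balance_ok": True}
salts = {k: secrets.token_bytes(16) for k in attrs}
digests = sorted(digest(salts[k], k, v) for k, v in attrs.items())
signature = hmac.new(
    ISSUER_KEY, json.dumps(digests).encode(), hashlib.sha256
).hexdigest()

# Holder: disclose only "over_18"; "name" and "balance_ok" stay hidden.
disclosure = ("over_18", attrs["over_18"], salts["over_18"])

# Verifier: recompute the one digest and check it sits in the signed list.
name, value, salt = disclosure
ok = digest(salt, name, value) in digests and hmac.compare_digest(
    signature,
    hmac.new(ISSUER_KEY, json.dumps(digests).encode(), hashlib.sha256).hexdigest(),
)
print(ok)  # True
```

The verifier learns that some issuer-signed credential contains `over_18=True`, and nothing about the other attributes beyond their opaque digests.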
Just ban social media outright. Facebook, Twitter, Instagram, TikTok, dating apps, etc. They created this problem. They're destroying the fabric of our society. Sometimes the best solution is subtractive.
> These techniques are described in a great paper whose title I’ve stolen for this section.
First you have people running around claiming that systems are "anonymous" and "privacy preserving"... meaning that they're anonymous if you trust somebody you shouldn't be trusting. Just simple snake oil. Often offered by people who know perfectly well what they're doing, too.
Then somebody like Matt Green writes something like this, or some standards committee decides to try to do zero knowledge right, or whatever. But (a) people don't understand how it's different from the snake oil, and (b) almost nobody understands how hard it is to get it right. Information wants to be free, and even if you have a perfect privacy-preserving protocol, it doesn't work if you embed it in a workflow that turns around and leaks the information you're trying to hide. So there is an endless supply of further mistakes to argue against.
But all of THAT just distracts from the fact that age verification is a bad goal. Setting up a ubiquitous, actually functional age verification tool is just handing weapons to people like, say, [Ken Paxton, Attorney General of the great state of Texas](https://thehill.com/policy/healthcare/5762893-paxton-opinion...). A system like that is an attractive nuisance that should not exist. What people like that will do with it is far worse than any problem it could possibly solve.
... and that fact gets lost in all the other stuff...
Let me explain the simple solutions:
Don't let phone manufacturers lock the bootloader on phones. Let the device owner lock it themselves with a password if they want to. Someone will make a kid-friendly OS if there is market demand, and tech-savvy parents can install that and lock the bootloader.
What about the non-tech-savvy parents?
Don't restrict people from sideloading apps. Let the user set a password-based app installation lock if they want to. It should be a toggle in the phone's settings. Someone will make kid-friendly apps if there is demand. This lets average parents control what apps get installed or uninstalled on their kid's phone.
But what about apps or online services that adults also use?
Apps and online services can add a password-protected toggle in their user account settings that enables child mode. Let the user set the password and toggle it themselves. Parents can take their child's phone and toggle this.
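A sketch of that per-account toggle (illustrative only, not any particular service's implementation): store a salted hash of the parent-set password, and require the password to flip the flag.

```python
import hashlib
import secrets


class ChildModeToggle:
    """Password-protected child-mode flag for a user account."""

    def __init__(self, password: str):
        self._salt = secrets.token_bytes(16)
        self._hash = hashlib.pbkdf2_hmac(
            "sha256", password.encode(), self._salt, 100_000
        )
        self.child_mode = False

    def _check(self, password: str) -> bool:
        candidate = hashlib.pbkdf2_hmac(
            "sha256", password.encode(), self._salt, 100_000
        )
        return secrets.compare_digest(candidate, self._hash)

    def set_child_mode(self, enabled: bool, password: str) -> bool:
        if not self._check(password):
            return False  # wrong password: flag stays as it was
        self.child_mode = enabled
        return True


acct = ChildModeToggle("parents-only")
acct.set_child_mode(True, "parents-only")  # parent enables it
acct.set_child_mode(False, "kids-guess")   # rejected, stays enabled
print(acct.child_mode)  # True
```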
----
Notice how easy these things are to implement? All of these features could be implemented in less than a week. But instead of doing this, they want to implement much more complicated schemes where the gov and corps control all the toggles, and you control none. Why is it like that? Surely there are no ulterior motives, right?