Apologies if this isn’t the right place to ask this, but I thought actual developers with a deep understanding of how technology actually works would be the people to ask!
If you were tasked with setting up a safe and secure way to do this, how would you do it differently than what the UK government is proposing? How could it be done such that I wouldn’t have to worry about my privacy and the threat of government suppression? Is it even theoretically possible to accomplish such a task at such a scale?
Cheers!
EDIT: Just to be clear: I’m not in favour of age verification laws. But they’re on their way regardless. My question is purely about the implementation and technology of the thing, rather than the ethics or efficacy of it. Can this seemingly-inevitable privacy hellscape be done in a non-hellscapish way?
I agree with others here when they say that age-verification laws aren’t about children at all, and identification isn’t a side effect, it’s the raison d’être.
But if I were to earnestly try to solve the problem, I might look to the physical (non-online) world. In every part of the world I’ve been to, buying alcohol requires one thing: being of age. So if you very clearly look of age, you are allowed to buy it. If you look younger, you may be asked to provide ID proving you are old enough. While some vendors may take additional precautions such as scanning your ID, it is not a requirement and most do not. They simply look at your ID to verify, then allow the purchase.
One could buy a physical verification token, like one might buy a gift card currently, and the purchase requires the same verification as buying alcohol. Imagine you buy a plastic gift-card-like item branded Roblox and they verify you are of age, when you sign up for Roblox you enter in the details of the gift-card-like item. You are verified to be of age, and no-one has any other details.
I think this is the best possible solution, great write-up and explanation. A minor improvement would be to make the card some kind of OATH device that generates TOTP codes rather than a single ID number, so that you can reuse the same identifying token in multiple places with no way to connect them.
Edit: On second thought, I can’t think of a way to make that work without compromising privacy, and I can think of a few ways the original idea could potentially go wrong, too. Still, I think this is the closest possible solution.
Oh, mine is a terrible idea, but maybe one of the least bad. I like your idea of making it reusable somehow.
On second thought, I can’t think of a way to make that work, without compromising privacy
I’d say check out my top-level comment, and the link to the crypto Stack Exchange within it.
deleted by creator
identification isn’t a side effect, it’s the raison d’être.
In Australia, the law quite specifically says sites aren’t allowed to require ID as the method of age verification. It can be one option they provide, but it cannot be the only. Even a sort of sentiment analysis is permitted, and from everything I’ve heard that seems to be the method most have defaulted to. Social media sites don’t want to risk losing users by putting up barriers to them making accounts. People talking about politics and taxes are probably adults. People looking at Bluey videos are much more likely to be children. And it’s all based on information they already had used in ways a lot of them probably already did.
So at least here, I think the idea that it’s anything other than what they say it is is just an unfounded conspiracy theory. It may not be well-implemented, but it is genuinely well-intentioned. Or if it isn’t well-intentioned, the real intent is bad in a different way than you suggest: it’s about being seen to do something good and winning some PR for the government, without actually going to the effort of implementing good policy.
deleted by creator
That’d only work if legal sex acts were the worst thing a kid could find online. As someone who went spelunking online as a kid, I assure you they’re not.
And the Roblox issue is hardly that of exposure to normal human biology.
That said this stuff should be up to parents, and instead of verification requirements, we should have parental control requirements (as in, the tools for it should exist).
On a lot of devices, I couldn’t make them safe to hand to a kid without coding the tools myself.
deleted by creator
…
Parental control software existing is a terrible idea?
There is insufficient incentive for games like Roblox to provide effective controls for parents to manage their children’s accounts.
That should change.
I was raised by loving and trustworthy parents. It didn’t save me from curiosity leading me to things I was far, far too young to see. No amount of love and care can fix that.
You do your kids no favors by sheltering them from reality.
No. You let them learn to swim in the shallows to prepare them. Blind trust is like throwing them into the deep end and seeing if they figure it out before drowning.
Having a supportive, loving and trustworthy environment didn’t stop me from wanting to kill myself.
I figured out how to swim, eventually. But I should not have had to.
Are you seriously suggesting that nothing bad will happen to children as long as their parents just care enough?
deleted by creator
It is teaching your kids that you do not trust them, and it is indulging your own paranoia.
Only if you never lift the restrictions, and coddle instead of raise them.
A parent buying their kid their first beer is a show of trust, and of confidence in having prepared them. That they weren’t allowed before isn’t some paranoia about turning them into an alcoholic. It’s about preparation and maturity.
It is a display that shows that the parent feels their child is ready and responsible enough for the ability to decide themselves to be granted to them.
A parent enforcing a rule before that is not some violation.
Only, for the internet, the tools and know-how to do so are rare or non-existent.
My parents did this with tons of things as me and my siblings grew up. Each time trusting us with a new piece of responsibility, when they felt we were ready.
What kind of parent doesn’t decide their children’s bedtime until they can sanely maintain their sleep schedule themselves?
It was a big day when I was granted the ability to go to visit friends without asking permission first.
Parental control and guidance are essential for the stage-by-stage raising of a child into adulthood, both online and offline. Only, for the former, the tools and practices practically don’t exist.
Each parent is also on a deadline. When someone turns 18, all rules come off whether you’re ready or not.
As a parent, you should aim for your kid to be making as many decisions themselves as possible by then.
And did you seriously just try to use dismissing my trauma as a point, instead of a logical retort?
deleted by creator
None of your business.
I don’t need your approval that my past is “real enough” to be allowed to argue my point.
No, it’s not desirable, but it’s coming nonetheless. I was just curious if it’s even possible to do it in a way that doesn’t harm everyone.
hardcore christians disagree
Hardcore Christians are just pedophiles and trauma victims.
yep, everybody knows that already
This isn’t true.
Widespread internet access has only been around since the late 90s at the earliest…that’s when some kids started freely accessing adult content.
When I was a kid…(and I grew up unsupervised and poor with one working parent - I was free range)…porn mags were like the holy grail. I literally didn’t see one until I was about 14 and I found one in somebody’s forest fort. So think about that…not only could I not find a porn mag…but the person that had one had to go hiking to “use” it.
I mean…we also had homophobic molester gym teachers teaching us health class…
There’s got to be a workable happy medium between no access and no information - and everything always all the time to the max.
deleted by creator
You said teens. I didn’t know you meant 18. Even if it wasn’t your topic…surely kids having access to hardcore porn, fetish scenarios, etcetc before they have access to sex ed isn’t optimal.
Yeah…things are better, sex-ed wise, than they were in the 80s. Miles better. That’s a great thing - but as I said above, sex-ed can’t keep up with what children are being exposed to. We’re not talking about Playboys and R-rated movies here.
You didn’t really get my point, no. My point was that some children are being bombarded with sexual information from all angles, and it’s having unintended consequences. We essentially opened a new all-encompassing type of media and barely tried to regulate it.
The question isn’t whether having access to hardcore porn is harmful. The question is the relative degree of harm.
When a web service can reliably distinguish between adult and child, it can specifically target content to either. Netflix can provide age-appropriate content to its users. That’s great.
Groomers can specifically target members of their desired audience. That’s not so great. That’s bad. That’s really, really bad. That’s much worse than kids finding hardcore pornography. And that degree of targeting is only possible with widespread age verification laws.
There’s no question that certain types of adult content, not restricted to hardcore porn, are harmful…we know they are.
It’s not “we deal with groomers OR we deal with harmful adult content…OR we only regulate popular streaming sites.” We can do all of the above. We certainly don’t just throw up our hands and say “it’s not profitable to protect our children” (not what you’re saying, but rather what’s happening).
The way regulators are currently dealing with age-gating - say, in Australia - isn’t what we need to do. That certainly empowers groomers because there’s zero expertise or thought put into it: it’s an ISP-friendly virtue signal that attempts to preserve profits while making Boomers feel like something is happening.
I don’t have the answer…but I DO know there are a ton of answers that include actually attempting to study and regulate all addictive content, including adult content - ie content at the hosting level and requiring that providers and purveyors regulate their content with actual humans. We can never “win” the war if the status quo is automated moderation and profits above protection.
We can do all of the above.
No, we cannot. At a societal level, we can’t do any of it.
Protecting a child from content on the internet requires a massive invasion of the child’s privacy. That degree of privacy invasion should not be granted to society in general. It should not be granted to the operators of a pornography site. It should certainly not be granted to the groomers.
The only place where that degree of privacy invasion is reasonable and acceptable is between parent and child. If you want to protect the children, you give parents the tools to regulate content. You don’t provide those privacy-invading tools to the content providers and you certainly don’t expect them to take a parental role over your kids, let alone your neighbors and yourself.
Well, we can protect them as societies and villages and we do.
This notion that somehow groomers are neutralized if we abandon any attempt at protecting children at large is absurd…talk about throwing the baby out with the bath water. Imagine a world where we just ignore the source of the issue…the groomers would have a heyday. “Sorry kid…you should have had better parents.”
Putting it all on the parents just means that a small portion of rich and savvy parents will be able to “protect” their kids, usually with draconian practices that put kids far more at risk. Pardon me…but you don’t know what you’re talking about.
No, here in reality we should continue to institute and advocate for effective measures.
Very easy: create a law such that if minors are caught where they shouldn’t be, the parents and the minors are held responsible, because raising kids is the parents’ responsibility.
However, absolutely ZERO percent of the age verification laws are being put in place to protect kids. They are pushing them with the sole aim of invading your privacy and monitoring your activity, so any measure that doesn’t accomplish that misses the point.
No.
Basically, as soon as a web service knows your age, they can tailor their content specifically for you. That’s great when the service is Netflix and doesn’t want to suggest R-rated movies to pre-teens.
That’s not quite so great when the “service” is KidGroomer dot com.
Turns out that having machines automatically report the ages of their users is not such a good idea. Turns out that enabling groomers to identify children from adults is a fair bit worse than kids finding naked people on the internet.
We used to have a privacy-friendly solution that allowed parents to monitor their kids’ internet use.

You just had to put it in a shared area.
How about we just don’t.
Honestly I can’t think of any way you could verify age accurately without something identifying being provided.
You could try age based trivia, but anyone could Google an answer.
No.
The short answer is yes, it can.
I actually think the best method is to put the onus on parents to parent in the way they think best, while giving them effective tools with which to do it. Parental controls should be baked into the OS, and sites should be required to hook into these parental controls via an API. The system could even have the capability, optionally, to block based on a crowd-sourced list, so it can still be effective against non-compliant sites. There would be no privacy problems, because no private information is ever shared. There isn’t even a middleman who has to see any identification at any point.
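To make the idea concrete, here is a minimal sketch of what such an OS-level parental-controls hook might look like. The names (`ParentalControls`, `allows`) and the category scheme are invented for illustration; no such standard API exists today:

```python
# Hypothetical OS-level parental-controls check a site or browser could
# query. No private information ever leaves the device: the caller only
# gets a yes/no answer.
from dataclasses import dataclass, field

@dataclass
class ParentalControls:
    restricted: bool = False                          # set by the parent (admin-only)
    blocklist: set[str] = field(default_factory=set)  # optional crowd-sourced list

    def allows(self, site_category: str, domain: str) -> bool:
        """Answer a content query without revealing anything about the user."""
        if domain in self.blocklist:
            return False                              # covers non-compliant sites
        if self.restricted and site_category == "adult":
            return False
        return True

pc = ParentalControls(restricted=True, blocklist={"noncompliant.example"})
print(pc.allows("adult", "example.com"))             # False: device is restricted
print(pc.allows("general", "example.com"))           # True
print(pc.allows("general", "noncompliant.example"))  # False: on the blocklist
```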
However, if the goal is to have specific age verification that actually enforces age, it’s still possible. I know of two main ways.
Here’s the first: https://crypto.stackexchange.com/a/96283
It has the downside of requiring a physical device like a passport or some specific trusted long-running locally-kept identity store held by the user. But it’s otherwise very good.
Another option does not require anything extra be kept by the user, but does slightly compromise privacy. The Government will not be able to track each time the user tries to access age-gated content, or even know what sources of age-gated content are being accessed, but they will know how many different sites the user has requested access to. And sites requiring age verification will not get access to any information they didn’t already have other than the simple answer to the question “is this user old enough?” It works like this:
- The user creates or logs in to an account on the age-gated site.
- The site creates a token `T` that can uniquely identify that user.
- That token is then blinded: `B(T)`. Nobody who receives `B(T)` can learn anything about the user.
- The user takes the token to the government age verification service (AVS).
- The user presents the AVS with `B(T)` and whatever evidence is needed to verify age.
- The AVS checks if the person should be verified. If not, the flow ends here. If so, move on.
- The AVS signs the blinded token using a trusted AVS certificate, producing `S(B(T))`, and returns it to the user.
- The user returns the token to the site.
- The site unblinds the token and obtains `S(T)`. This allows them to see that it is the same token `T` representing the user, and to know that it was signed by the AVS, indicating that the user is of age.
- The site marks in their database that the user has been age verified. On future visits to that site, the user can just log in as normal, no need to re-verify.
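The blinding/unblinding step above can be sketched with textbook RSA blind signatures. This is purely illustrative, with toy parameters I picked for the example; a real AVS would use a vetted scheme (e.g. the RSA blind signatures standardised in RFC 9474) with proper key sizes and padding:

```python
# Toy sketch of the blind-signature flow: the AVS signs B(T) without ever
# learning T, and the site ends up with a valid AVS signature on T.
import secrets
from math import gcd

# --- AVS key pair (toy primes for illustration; real keys are 2048+ bits) ---
p, q = 1000003, 1000033
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))     # private signing exponent

# --- Site: create a token T uniquely identifying the user's account ---
T = 123456789

# --- User: blind the token with a random factor r ---
while True:
    r = secrets.randbelow(n)
    if r > 1 and gcd(r, n) == 1:
        break
B_T = (T * pow(r, e, n)) % n          # B(T): reveals nothing about T

# --- AVS: verify the user's age out-of-band, then sign the blinded token ---
S_B_T = pow(B_T, d, n)                # S(B(T))

# --- User: unblind, yielding a valid signature on T itself ---
S_T = (S_B_T * pow(r, -1, n)) % n     # S(T) = T^d mod n

# --- Site: check the signature against the AVS public key (e, n) ---
assert pow(S_T, e, n) == T            # verifies: the AVS vouched for this user
print("age verified")
```

The key property is that the AVS only ever sees `B(T)`, which is uniformly random from its point of view, so it cannot later recognise `T` or link the user to the site.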
All of the moving around of the token can be automated by the browser/app, if it’s designed to be able to do that. Unfortunately a typical OAuth-style redirect system probably would not work (someone with more knowledge please correct me), because it would expose to the AVS what site the token is being generated for via redirect URLs. So the behaviour would need to be created bespoke. Or a user could have a file downloaded and be asked to share it manually.
The AVS could also be private third parties rather than governments, if necessary. Since it probably relies on government ID, I think it’s better for the government to do it, but technologically there’s no problem with private companies doing it. They would still not gain any information about which sites you access. Only that a user with this ID card tried to access an age-gated site.
There’s also a potential exposure of information due to timing. If site X has a user begin the age verification flow at 8:01, the AVS receives a request at 8:02, and the site receives a return response with a signed token at 8:05, then the government can, with a subpoena (or the consent of site X), work out that the user who started at 8:01 and returned at 8:05 is probably the same person who began verifying themselves at 8:02. Or at least narrow it down considerably. Making the redirect process manual would give the user the option to delay that, if they wanted even more privacy.
The site would probably want to store the unblinded, signed token, as long-term proof that they have indeed verified the user’s age with the AVS. A subsequent subpoena would not give the Government any information they could not have obtained from a subpoena in an un-age-verified system, assuming the token does not include a timestamp.
Contracts only for those 18 and older… So no internet for anyone under 18, or else the parents have to sign the contracts and take responsibility for their upbringing, etc… Oh, damn…
I’m not sure what you mean. Nobody’s talking about contracts?
Exactly, that’s the problem. That’s where age verification comes into play (at least in Europe). It doesn’t matter whether it’s prepaid or not. If a minor has internet access, it’s automatically considered to have parental consent. No anonymous internet connections… no anonymous SIM cards… everything has already been verified with real data.

Do you want to register for internet service? You need to provide valid information and be of legal age.

Do you want a prepaid SIM card? You can only register for one with valid information and only if you are of legal age.

So we’ve had age verification in place for a long time.
Yes, kind of. In a similar way that we can currently authenticate with OpenID. Basically something like a passkey could be issued by your government that would let you prove your (pseudonymous) identity (and thus age range) through their API to a website.
This wouldn’t allow for anonymous browsing, since the website would have to identify you, but it could allow for pseudonymous browsing, since the website’s identification of you could be just an ID number that is specific to them. They already track you with cookies, so it wouldn’t be any worse than we have now, except that it’s more unnecessary bureaucracy.
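One simple way to get that pseudonymity is pairwise IDs: the credential holds a single secret, but each site is shown a different ID derived from it, so two sites can’t link their records. This is an invented sketch for illustration (real systems like eIDAS use more sophisticated cryptography):

```python
# Pairwise pseudonymous IDs: one master secret on the device, a different
# unlinkable ID per site.
import hashlib
import hmac

master_secret = b"issued-by-government-to-this-user"  # never leaves the device

def pairwise_id(site: str) -> str:
    """Derive a site-specific ID; without the secret, two sites cannot
    tell whether their IDs belong to the same person."""
    return hmac.new(master_secret, site.encode(), hashlib.sha256).hexdigest()

id_a = pairwise_id("site-a.example")
id_b = pairwise_id("site-b.example")
assert id_a != id_b                            # unlinkable across sites
assert id_a == pairwise_id("site-a.example")   # but stable for one site
```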
This just seems like a great way for your government to know every single thing you do online.
You could make it so that the government doesn’t know who’s requesting it.
What do you mean by government suppression? The government suppressing entities, or you as the authorizing individual?
The EU has eIDAS, and Germany has an existing working system. A certified publisher and you with your NFC phone can confirm your age is above X without disclosing any other information about your identity. It runs on a sophisticated cryptographic negotiation between the three parties. For you as an end user, the government obviously already knows of your existence beforehand and can serve as an authoritative entity. The two other parties can then verify their validity to each other through the mutually trusted entity without revealing unnecessary information to any of the parties. Practically, the requesting entity must be certified by the state to confirm their validity and the reasonable necessity of the data they plan to request, and the user uses their mobile phone’s NFC and an app to read their identity document, and gives explicit consent to the specific data sharing.
I’m not too familiar with the specifics of what the state can see in this system. It seemed plausible to me that they may not even see that you’re authenticating with a specific party, or what you’re sharing. Cryptography ftw.
Yes, that’s what the California and Colorado (not sure about the others) implementations accomplish. Bare minimum exposure of data to consumers who want to verify you, without any need to expose additional data to trusted third parties. The burden of trust is placed on the device owner.
By the way, people who create accounts without age verification will then have access to the planned children’s versions of social media… A pedophile’s dream come true 🤢🤢🤮🤮🤮
Age verification = digital epstein pedo playground
This is the precise question that Soatok discussed here: https://soatok.blog/2025/07/31/age-verification-doesnt-need-to-be-a-privacy-footgun/
Google recently published an open source library that proves a user’s age in a way that preserves privacy. This library is undergoing two independent security reviews, but should be production-ready in the near future.
If we’re going to force websites to implement some kind of age verification for adult content, we should demand the governments that pass these laws provide the zero-knowledge proof technologies to satisfy the law.
Absolute privacy? Not at all, the fact that I’m over 18 is personal information, you’ve all invaded my privacy a little bit by reading that. Absolute accuracy? Not at all, I have no idea how anyone would ever prove for sure someone’s age. Any potential solution is going to be about compromise. The real question is: how can we verify someone’s age well enough while preserving as much privacy as possible?
The best solution I’ve heard of, that hits a pretty good compromise, is giving the local device some indicator of the user’s age, and allow applications or websites to perform a limited resolution query of that value, along the lines of which of several age brackets does the user fall into. The birthday can optionally be provided when the device is configured; a parent can set up a device for their kid, setting whatever value they want for the kid’s age. A good implementation would make it quite difficult to extract or change that birthday value without admin rights, which the parents would keep.
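A minimal sketch of such a limited-resolution query, assuming a device-held birthday and bracket boundaries I made up for the example (the actual brackets would be set by the law or the platform):

```python
# The device stores a birthday (set by the parent/admin); apps may only
# ask which bracket the user falls into, never the birthday or exact age.
from datetime import date

AGE_BRACKETS = [(0, 12), (13, 15), (16, 17), (18, 200)]  # illustrative only

def age_bracket(birthday: date, today: date) -> tuple[int, int]:
    """Return only the bracket, not the exact age or birthday."""
    age = today.year - birthday.year - (
        (today.month, today.day) < (birthday.month, birthday.day))
    for low, high in AGE_BRACKETS:
        if low <= age <= high:
            return (low, high)
    return AGE_BRACKETS[-1]

print(age_bracket(date(2010, 6, 1), date(2025, 5, 1)))   # (13, 15)
print(age_bracket(date(2000, 1, 1), date(2025, 5, 1)))   # (18, 200)
```

The coarse answer is the privacy feature: a site learns "13-15", which is enough to gate content, but not enough to fingerprint the user by birthday.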
If this sounds a lot like the laws in the news from California and Colorado, that’s because it is. I think that they’re stupid laws, but they describe reasonably good features for software. That law-making effort should have been put towards banning the incredibly invasive and somehow also incredibly inaccurate use of AI image processing for age estimation.