- cross-posted to:
- technology@lemmy.zip
Lawmakers in Congress are moving quickly on the GUARD Act, an age-gating bill restricting minors’ access to a wide range of online tools, with a key vote expected this week. The proposal is framed as a response to alarming cases involving “AI companions” and vulnerable young users. But the text of the bill goes much further, and could require age gates even for search engines that use AI.
If enacted, the GUARD Act won’t just target a narrow category of risky chatbots. It would require companies to verify the age of every user — then block anyone under 18 from interacting with a huge range of online systems. It would block minors from everyday online tools, undermine parental guidance, and force adults to sacrifice their privacy. In the process, it would require services to implement speech-restricting and privacy-invasive age-verification systems for everyone—not just kids.
Under the GUARD Act’s broad definitions, a high school student could be barred from asking homework help tools questions about algebra problems. A teenager trying to return a product could be kicked out of a standard customer-service chat.

Okay, so I’ve read the full bill now, and I gotta say I don’t feel as conflicted about this anymore. The EFF’s article looks like it has a lot of bad takes in it; my (still not insignificant) doubts about this bill now come from the fact that I’m not a lawyer and so can’t foresee its consequences as well as one could, and the fact that a decent bill can still be implemented horribly by idiotic companies.
(I wrote so much here I ended up needing to break out the header markdown. Apologies in advance!)
## Chatbot definition
I don’t think the bill’s definition of chatbots is actually bad at all. Quoting directly:
(Collapsible) Bill quote regarding AI definitions
Notice the frequent use of the word “and” here, rather than “or.” Do I think there are no possible holes in this? No. And again, I’m no lawyer. But my main concern here would be restricting programs that aren’t LLMs, and this seems to do a good job of avoiding that.[1] The EFF is concerned this would restrict people from, say, cheating on homework. It would. I don’t care about that and I don’t think they should either, for reasons addressed in my comment above.
## Age verification
It’s not as bad as it sounded to me, but it’s still not acceptable. Quoting again:
(Collapsible) Bill quote regarding age verification measures
The reason I say this is “not as bad as it sounded” is primarily because it’s open-ended.[2] An actually acceptable, privacy-preserving age verification method would be legal here and is not actively prevented. But that’s about all the faith I can muster for it. This law could be good if we had age-gating tech that could actually be trusted, and indeed if this law passes it might become good if we were ever to develop such a thing.
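For what it’s worth, privacy-preserving verification schemes do exist on paper. A toy sketch of one family of them, blind signatures, is below; this uses deliberately insecure textbook RSA with tiny demo numbers (all values and names are mine, not anything from the bill), just to show the privacy property: the issuer checks your age once, signs a token it cannot read, and the service later verifies the token without learning who you are.

```python
# Toy sketch of an unlinkable "over 18" token via textbook RSA blind
# signatures. NOT secure (tiny key, no padding); illustration only.
import hashlib
import secrets
from math import gcd

# Issuer (e.g., whoever checks IDs) keypair -- demo-sized primes.
p, q = 1009, 1013
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))

def h(msg: bytes) -> int:
    """Hash a claim down to an integer mod n."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

# 1. User builds an "over 18" claim bound to a random nonce.
nonce = secrets.token_bytes(16)
m = h(b"over18:" + nonce)

# 2. User blinds it with random r, so the issuer never sees the token.
while True:
    r = secrets.randbelow(n - 2) + 2
    if gcd(r, n) == 1:
        break
blinded = (m * pow(r, e, n)) % n

# 3. Issuer verifies the user's age out-of-band, then signs blindly.
blind_sig = pow(blinded, d, n)

# 4. User unblinds; (nonce, sig) is the presentable token.
sig = (blind_sig * pow(r, -1, n)) % n

# 5. A service checks the signature is the issuer's, without any way
#    to link this presentation back to the issuance event.
assert pow(sig, e, n) == h(b"over18:" + nonce)
print("token verified")
```

Again, this is a blackboard sketch, not deployable tech; real proposals in this vein (anonymous credentials, blind-signed tokens) add padding, revocation, and rate limiting, and none of that is what’s actually being built by the companies this bill would regulate.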
But we don’t have that, and I do not trust for-profit corporations to ever make one, and in such a context this law runs the risk of causing serious issues. Namely, I would be concerned that – contrary to what the EFF states – companies would decide that the path of least resistance would involve continuing to use AI and implementing accounts and age verification for their services anyway. We’d move from having shitty AI chatbot customer support people shouldn’t use, to shitty AI chatbot customer support that is considered so important that the company mandates everyone get age-checked to view a support page.
It’s unlikely, since the tech the law mandates is extensive enough to be an expensive hurdle, one that really isn’t worth setting up for any company that doesn’t rely on AI for its core business. But since when has sense mattered in the so-called AI age?
## Privacy
There’s also the privacy issue of the age gating itself, which is as omnipresent as ever with these sorts of things. All the bill offers on that front is this:
(Collapsible) Bill quote regarding data security
5(E) here is great. I wouldn’t know if it’s foolproof, and it’s probably not, but it looks good. As for the rest? It seems very open-ended and light on definitions to me. Words like “reasonable” are great if you want to allow a broad range of methods for tackling an issue, but I don’t think that move is reasonable when it comes to PII security. With “industry-standard encryption protocols” being as rigorous as the security standards get, the bill may as well just say “try not to fuck up,” and the industry’s track record there is, uh, poor.
So yeah, all in all, way better than the EFF is putting it. But unfortunately the problems are bad enough that I’m not convinced this bill should pass. At least, not while the massive bad-faith age-gating push is currently strangling the internet. I hate AI, and it is absolutely hurting people, but if we’re to have this, then privacy-preserving (and secure) tech is a must and has to be created first.
“AI companion” uses this definition and then further narrows it to things like “human-like” and “is designed to encourage or facilitate the simulation of […] friendship” and such, so I’m not worried about that either. ↩︎
6(B) and 6(D) are notable as specific exclusions: “I am not a minor” buttons and “enter your birthdate” fields are explicitly disallowed as age verification methods, as is relying on the same device as a different, already-verified user. ↩︎