The issue is more a loss of contextual awareness. For example, why should a true crime documentary have to self-censor words like “rape”, “died”, or “sexual assault” when they are often relevant to the story being told? Why should a blogger writing about mental health have to self-censor the word “suicide”?
These are valid issues. There’s a reply in here with a screenshot of someone saying “elon musk can fucking die”. Context tells you immediately it’s not a targeted death threat; it’s an opinion. Yet the automated systems these platforms rely on cannot make that distinction.
4chan existed long before the attempts to sanitize the internet. Heck, I remember the biker forum I frequented having some nasty shit and attitudes on there. But despite their scummy viewpoints, these were people I could rely on when my motorbike shat itself.
Smaller communities police themselves better as well. Large-scale social media and other platforms just made it much harder to run the moderator model those forums had. The human touch is sorely lacking, and the automated processes that replaced it lack nuance and context. A modern form of the Scunthorpe problem, I guess.
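For anyone unfamiliar, the Scunthorpe problem is what you get when a filter matches raw substrings with no notion of word boundaries, let alone context. A toy sketch in Python (the blocklist and example strings are made up for illustration, not any real platform’s rules):

```python
# Toy demonstration of the Scunthorpe problem: a naive substring
# filter with no word-boundary or context awareness.
# The blocklist and examples are illustrative assumptions only.
BLOCKLIST = ["cunt", "ass"]

def naive_filter(text: str) -> bool:
    """Flag text if any blocked word appears anywhere as a substring."""
    lowered = text.lower()
    return any(bad in lowered for bad in BLOCKLIST)

print(naive_filter("Greetings from Scunthorpe!"))    # True -- false positive
print(naive_filter("I passed the class assessment")) # True -- false positive
print(naive_filter("Have a nice day"))               # False
```

Even adding word-boundary checks only gets you so far; deciding whether “die” is a threat or an opinion needs exactly the context the thread above is talking about.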
Fortunately DDG’s AI features are opt-out, and short of your cookie session expiring, the setting seems to stick.
Unlike a certain set of other “search” engines that are slowly turning into AI chatbot output, with zero ability to opt out beyond some hacky tricks to avoid it.