- cross-posted to:
- technology@lemmit.online
I’ll be the first to hop in on the AI hate train, but isn’t this just broadly true of all humans? We’re pretty notoriously awful at identifying our own gaps in knowledge and skill. I imagine that the constant confirmation from AI exacerbates the issue, but I don’t think it’s entirely AI’s fault that people are bad at recognizing their shortcomings.
Don't blame the calculator because you suck at math.
AI is still shart though.
That’s a much more succinct way to put it, well done!
The AI also inherits this tendency from the broad human tendency present in its training data.
So you get an overconfident human plus an overconfident AI, a feedback loop that ends up even more confidently full of BS than a human alone.
AI can routinely be confidently incorrect. People who don't realize this, and don't question outputs that align with their confirmation biases, end up especially misled.
This article is about how AI exacerbates those tendencies. And since there are so few ways to accurately measure the functionality of AI in general, those self-assessments are a significant portion of AI's value proposition.
Turns out talking to the bullshit machine like it's a person makes you bullshit.
You see, this is where I come out ahead, I know I’m a moron!
Hey, me too!
Me four!
So do I, sometimes, but that's just ADHD, hatred of banal competition, imposter syndrome, or simply "still learning the thing."
Yes, I hate corporate self-evaluations, how could you tell? Fuck it, I'll just put "I'm the absolute best person to ever walk the earth, mr bossman. Money me please" again, because fuck you for making me do this in the first place.
I’ll assess them: they are incompetent and talentless.
That’ll be $20.
I think these are the logic test questions (or similar).
Reasonably difficult! I can see why an AI assistant would sometimes help and sometimes really not.