Doing the Lord’s work in the Devil’s basement

  • That is almost certainly not the reason. What you’re describing is “model collapse”, a phenomenon that has only been triggered under extreme laboratory conditions, and only in small models. It may be possible in larger models such as OpenAI’s flagships, but it has never been observed there, or even shown to be feasible. In fact, there probably isn’t enough synthetic (AI-generated) data in the world to cause it. (If you’re curious what those laboratory setups actually look like, see the toy sketch at the end of this comment.)

    If I were to guess why hallucinations are on the rise, I’d say it’s more likely because the newer models are fine-tuned for “vibes”, “empathy”, “emotional quotient”, and other unquantifiables. That naturally exacerbates their tendency to bullshit.

    This is very apparent when you compare ChatGPT (fine-tuned to be a nice, agreeable chatbot) with Claude (fine-tuned to be a performant task executor). You almost never see hallucinations from Claude; it is perfectly able to just respond with “I don’t know”, whereas ChatGPT would spout five paragraphs of imaginary knowledge.
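
    A minimal toy sketch of what those model-collapse experiments look like (my own illustration, nothing from the actual papers or from real LLM training): fit a Gaussian to some data, then train each new “generation” only on samples drawn from the previous generation’s fit. The sample size and generation count below are arbitrary, picked just to make the effect visible.

```python
# Toy illustration of "model collapse" in a closed synthetic-data loop.
# Each generation is "trained" (a Gaussian is fitted) only on samples
# produced by the previous generation's fitted model.
import numpy as np

rng = np.random.default_rng(0)

n_samples = 50        # training set size per generation (arbitrary choice)
n_generations = 1000  # how many times we retrain on purely synthetic data

# Generation 0 trains on "real" data from a standard normal distribution.
data = rng.normal(loc=0.0, scale=1.0, size=n_samples)

for gen in range(1, n_generations + 1):
    mu, sigma = data.mean(), data.std(ddof=1)      # fit the toy "model"
    data = rng.normal(mu, sigma, size=n_samples)   # next generation sees only synthetic samples
    if gen % 200 == 0:
        print(f"generation {gen:4d}: fitted sigma = {sigma:.4f}")
```

    Run it and you’ll see the fitted sigma drift toward zero: each generation only ever sees samples from the previous generation’s slightly narrowed estimate, so the tails of the original distribution are gradually forgotten. That closed loop of training exclusively on your own outputs is what it takes to trigger collapse, which is nothing like how the frontier models are actually trained.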