“In short, the OpenAI paper inadvertently highlights an uncomfortable truth,” Xing concluded. “The business incentives driving consumer AI development remain fundamentally misaligned with reducing hallucinations.”

  • fox [comrade/them]@hexbear.net · 23 days ago

    Yeah, they can’t do that, because they’re unable to discern truth from falsehood; all that’s happening is matrix algebra on vectors representing how statistically likely an output is for any given input. Hallucinations are produced by the exact same process as useful answers.
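
    A toy sketch of the mechanism the commenter is describing, with made-up dimensions and random weights (names like `W_out` and `h` are illustrative, not from any real model): next-token prediction is a matrix multiply followed by a softmax and a sample, and a wrong token comes out of exactly the same step as a right one.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy sizes, assumed for illustration; a real model is vastly larger.
    vocab_size, hidden_dim = 8, 4

    # Hypothetical learned output-projection ("unembedding") matrix.
    W_out = rng.normal(size=(vocab_size, hidden_dim))

    # Stand-in for the hidden-state vector summarizing the prompt so far.
    h = rng.normal(size=hidden_dim)

    # The "matrix algebra on vectors": logits = W_out @ h, then a softmax
    # turns the logits into a probability distribution over the vocabulary.
    logits = W_out @ h
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()

    # Sampling draws from that single distribution. There is no separate
    # truth-checking pathway: a factually wrong token and a correct one are
    # produced by the same multiply-and-sample step, differing only in
    # how statistically likely each is given the input.
    next_token = rng.choice(vocab_size, p=probs)
    print(probs, next_token)
    ```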