• peoplebeproblems@midwest.social · 5 days ago

    Or that it can't produce results repeatably. That's something that bothers me: slight changes in the prompt and you get a wildly different result. Or, worse, you get the same bad output every time you prompt it.

    And then there are the security flaws.
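
    The prompt-sensitivity complaint comes down to how decoding works: at nonzero temperature the model samples each token from a probability distribution, so identical prompts can yield different completions, while a small prompt edit reshapes the whole distribution. A toy sketch of temperature sampling (the candidate tokens and scores here are made up for illustration, not taken from any real model):

```python
import math
import random

def softmax(logits, temperature):
    """Turn raw scores into a probability distribution, scaled by temperature."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample(tokens, logits, temperature, rng):
    """Draw one token according to the temperature-scaled distribution."""
    probs = softmax(logits, temperature)
    return rng.choices(tokens, weights=probs, k=1)[0]

# Hypothetical next-token candidates for some coding prompt.
tokens = ["sorted(xs)", "xs.sort()", "eval(input())"]
logits = [2.0, 1.8, 0.3]

rng = random.Random()
# Same "prompt" (same logits), sampled three times: results can differ.
runs = [sample(tokens, logits, temperature=1.0, rng=rng) for _ in range(3)]
```

    At temperature near zero the top-scoring token wins almost every time, which is also how the same bad completion can come back on every run; crank the temperature up and even an unchanged prompt stops being repeatable.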

    • shalafi@lemmy.world · 4 days ago

      You can use that to your advantage! Slight prompt changes can give you different ideas on how to proceed and some options to evaluate. But that's all they're good for, and while they can be solid at getting you past a block, I'm horrified to think anyone in the IT space believes an LLM can output safe, working code.