Or that it can produce results repeatably. That’s something that bothers me: slight changes in the prompt and you get a wildly different result. Or, worse, you get the same bad output every time you prompt it.
You can use that to your advantage! Slight prompt changes can give you different ideas on how to proceed and some options to evaluate. But that’s all they’re good for, and while they can be solid at getting you past a block, I’m horrified to think anyone in the IT space believes an LLM can output safe, working code.
And then there are the security flaws