• hypna@lemmy.world · 36 points · 10 days ago

    I’d be interested in some proper studies, but most of the devs I know, myself included, use it for reference at least. Haven’t met a vibe coder yet though.

      • hypna@lemmy.world · 6 points · 10 days ago

        Yeah, it’s not a miracle, but it’s probably useful. The most common way the LLM wasted my time was when I asked it how to do something that can’t be done. I would ask how to use library X to do operation Y, when in truth library X doesn’t support operation Y. Rather than suggesting I find a different library, it would just make up functions or parameters. When it works well, it’s faster than hunting down the docs or finding examples/tutorials.

    • skulblaka@sh.itjust.works · 28 points · 10 days ago

      In my left hand, I have a manfile, written by the very same people who wrote the tool or language that I’m trying to use. It is concise, contains true information, and won’t change if I look up the same thing again later.

      In my right hand, I have a pathological liar, who also kinda sorta read the manfile and then smooshed it together with 20 other manuals.

      I wonder which of these options is a more reliable reference tool for me? Hmm. It’s difficult to tell.

      • sturger@sh.itjust.works · 6 points · 9 days ago

        I’ve started using an AI driver for my car. And by “AI” I mean I use a bungee cord on the steering wheel to keep it straight. Straight is the correct answer 40% of the time, so it works out.
        Oh, and by “my car”, I mean the people that work for me. I insist that they use my bungee-cord idea to steer their cars if they want to work for me. There may be a few losses, but that’s ok. I can always fire the ones that die and hire more.
        I’m a genius.

      • 8uurg@lemmy.world · 2 points · 8 days ago

        It is concise, contains true information,

        In my experience that’s not necessarily guaranteed; documentation sometimes isn’t updated, and the information may be outdated or even missing entirely.

        Documentation is much more reliable, yes, but sadly not always true or complete.

        • skulblaka@sh.itjust.works · 2 points · 8 days ago

          Sure, and I’ve also had my share of cursing at poor documentation.

          If that’s the case then your AI is also going to struggle to give you usable information though.

          • 8uurg@lemmy.world · 1 point · 8 days ago

            My point was solely that human-written documentation is far less reliable than your comment made it out to be. Compared to an LLM it is reliable, but it is far from perfect.

            In my view, an (my?) AI is going to struggle whether or not the documentation is in order: those models already get confused by different versions of the same library having different interfaces and functions.

    • okwhateverdude@lemmy.world · 19 points · 10 days ago

      I mostly vibecode throwaway shit. I am not shipping this Python script that resizes and then embeds images into an .xls. Or the simple static html/css generator, because hosting a full-blown app is overkill when I just wanna show something to some non-tech colleagues. Stuff that would take half an hour to an hour to throw together now takes like 5-10 min. I wouldn’t trust it to do anything more complicated, because it fucks up all the time, leans too heavily on its training data instead of referencing docs, and is way too confident when it is wrong. Pro tip: berate the slop machines. They perform better and stop being so goddamn sycophantic when you do. I am a divine being of consciousness and considerable skill, and it is a slop machine: useful, but beneath me.
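For context, the "simple static html/css generator" kind of throwaway script really is only a few dozen lines of stdlib Python. A minimal sketch (every name here is made up for illustration, not the commenter's actual script):

```python
# Throwaway static page generator: turn (heading, body) pairs into one
# self-contained HTML file you can just email to non-tech colleagues.
import html

def render_page(title, sections):
    """Render a title plus a list of (heading, body) pairs as a single HTML page."""
    parts = [
        "<!doctype html>",
        f"<html><head><meta charset='utf-8'><title>{html.escape(title)}</title>",
        "<style>body{font-family:sans-serif;max-width:40em;margin:2em auto}</style>",
        "</head><body>",
        f"<h1>{html.escape(title)}</h1>",
    ]
    for heading, body in sections:
        # html.escape keeps stray < > & in the text from breaking the markup
        parts.append(f"<h2>{html.escape(heading)}</h2>")
        parts.append(f"<p>{html.escape(body)}</p>")
    parts.append("</body></html>")
    return "\n".join(parts)

page = render_page("Q3 notes", [("Status", "On track"), ("Risks", "None <yet>")])
# write it out with: open("notes.html", "w").write(page)
```

That is roughly the 5-10 minute tier of work being described: trivial to review in full, and nothing breaks if it's wrong.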

      • mos@lemmy.world · 8 points · 10 days ago

        That last line is hilarious. I’ll remember that. But also, the robots will remember this post when they take over.

      • MotoAsh@lemmy.world · 2 points · 8 days ago

        “…leans too heavily on its training data…” No, it IS its training data. Full stop. It doesn’t know the documentation as a separate entity. It doesn’t reason whatsoever about where to get its data from. It just shits out the closest approximation of an “acceptable” answer from the training data. Period. It doesn’t think. It doesn’t reason. It doesn’t decide where to pull an answer from. It just shits it out verbatim.

        I swear… so many people anthropomorphize “AI” that it’s ridiculous. It does not think and it does not reason. Ever. Thinking it does is projecting human attributes onto it, which is anthropomorphizing it, which is lying to yourself about it.

        • okwhateverdude@lemmy.world · 1 point · 8 days ago

          Ackually 🤓, Gemini Pro and other similar models are basically a loop over some metaprompts with tool usage, including search. It will actually reference/cite documentation if given explicit instructions. You’re right that the anthropomorphization is troubling. That said, the simulacrum presented DOES follow directions, and its behavior (meaning the complete system of LLM + looped prompts) can be interpreted as having some kind of agency. We’re on the same side, but you’re sorely misinformed, friend.
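The "loop over metaprompts with tool usage" being described can be sketched in a few lines. This is only an illustration of the control flow, with the model and the search tool replaced by stubs; real systems (Gemini etc.) differ in the details:

```python
# Minimal agent-loop sketch: the model may request a tool call, the loop
# runs the tool and feeds the result back, and the model then answers.
# Both fake_model and fake_search are hypothetical stubs for illustration.

def fake_model(messages):
    # Stub "LLM": if no tool result is in the transcript yet, request a
    # search; otherwise answer using the most recent tool result.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "search", "query": "library X docs"}
    return {"answer": "According to the docs: " + messages[-1]["content"]}

def fake_search(query):
    # Stub "search tool": a real one would hit an index or the web.
    return f"result for {query!r}"

def run_agent(user_prompt, max_steps=5):
    messages = [{"role": "user", "content": user_prompt}]
    for _ in range(max_steps):
        reply = fake_model(messages)
        if "tool" in reply:
            # Model asked for a tool: run it, append the result, loop again.
            messages.append({"role": "tool", "content": fake_search(reply["query"])})
        else:
            return reply["answer"]
    return "gave up"

answer = run_agent("How do I use library X?")
```

The point of the sketch is that the "agency" lives in this outer loop, not in any single model call: each call is still just text in, text out.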

          • MotoAsh@lemmy.world · 1 point · 8 days ago

            I’m not misinformed. You’re still trying to call a groomed LLM something that reasons when it literally is not doing that in any meaningful capacity.