…and I still don’t get it. I paid for a month of Pro to try it out, and it is consistently and confidently producing subtly broken junk. I had tried this before and gave up because it didn’t work well; I thought that maybe this time it would be far enough along to be useful.

The task was relatively simple, and it involved some 3d math. The solutions it generated were almost right every time, but critically broken in subtle ways, and any attempt to fix the problems would either introduce new bugs or reintroduce old ones.

I spent nearly the whole day yesterday going back and forth with it, and felt like I was in a mental fog. It wasn’t until I had a full night’s sleep and reviewed the chat log this morning that I realized how much I was going in circles. I tried prompting a bit more today, but stopped when it kept doing the same crap.

The worst part is that, throughout all of this, Claude was confidently responding. When I said there was a bug, it would “fix” the bug and provide a confident explanation of what was wrong… except it was clearly bullshit, because it didn’t work.

I still want to keep an open mind. Is anyone having success with these tools? Is there a special way to prompt it? Would I get better results during certain hours of the day?

For reference, I used Opus 4.6 Extended.

  • Blackmist@feddit.uk · ↑ 3 · 1 hour ago

    I think it’s mostly going to be useful for boilerplate generation, and effectiveness is going to vary wildly based on what language you’re using. JS or Python? It’ll probably do OK. Plenty of open source for it to “learn” from. Delphi? Forget it.

    Brief experimentation showed it liked to bullshit if it was wrong, rather than fix things.

  • JubilantJaguar@lemmy.world · ↑ 2 · 1 hour ago

    Recently I used it (some free-tier DuckAI model, not Claude) to write a Python script for pasting PNGs into PDFs (complete with Tk interface) while applying a whole bunch of custom transformations. Simple enough, but a total chore with all the back-and-forth of searching for relevant unfamiliar libraries and syntax checking and troubleshooting. Inevitably it would have taken me the whole afternoon by hand. With AI I knocked it out in 25 minutes. That was my epiphany moment.

    Since then I’ve noticed a general problem with AI coding. It almost always introduces too much complexity, which I then have to waste time untangling (and often just understanding) before I can proceed. Whereas if I had done it “my way” from the start I might have got there earlier. But I figure this problem is kinda on me.

  • x00z@lemmy.world · ↑ 3 · 2 hours ago

    The trick about vibe coding is that you confidently release the messed up code as something amazing by generating a professional looking readme to accompany it.

  • arthur@lemmy.zip · ↑ 3 · 2 hours ago

    I’m using (Gemini 3.1 pro in) Gemini cli to build a complex (personal) project to explore how to use these tools. My impression is that the code produced by LLMs is disposable/throwaway. We need to babysit the model and be very hands on to get good results.

  • Katherine 🪴@piefed.social · ↑ 9 · edited · 3 hours ago

    Don’t just use it as a drop-in replacement for a programmer; use it to automate menial tasks while applying “trust but verify” to every output it produces.

    A well-written CLAUDE.md, plus a prompt that restricts it from auto-committing, auto-pushing, and auto-editing without explicit verification, will keep everything in your control while still helping with menial maintenance tasks like repetitive sections or user tests.

    • Feyd@programming.dev · ↑ 3 · 3 hours ago

      verify with every output it produces.

      I agree that you can get quality output using these tools, but if you actually take the time to validate and fix everything they’ve output then you spend more time than if you’d just written it, rob yourself of experience, and melt glaciers for no reason in the process.

      prompt to restrict it from auto committing, auto pushing, and auto editing without explicit verification

      Anything in the prompt is a suggestion, not a restriction. You are correct you should restrict those actions, but it must be done outside of the chatbot layer. This is part of the problem with this stuff. People using it don’t understand what it is or how it works at all and are being ridiculously irresponsible.
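      With Claude Code specifically, one way to enforce this outside the prompt is its permissions settings. A hypothetical deny-list sketch (treat the file location and key names as assumptions to verify against the current docs):

```json
{
  "permissions": {
    "deny": [
      "Bash(git commit:*)",
      "Bash(git push:*)"
    ]
  }
}
```

      Unlike instructions in a prompt, a setting like this is enforced by the tool itself, so the model can’t talk its way around it.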

      repetitive sections

      Repetitive sections that are logic can and should be factored out for maintainability. Those that can’t be factored can be generated in any number of deterministic ways: a list of words can be expanded into whatever repetitive boilerplate you need with sed, awk, a Python script, etc., and you’ll know nothing was hallucinated because the process was deterministic in the first place.
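      As a sketch of that deterministic approach (the field names and getter template here are invented for illustration), a few lines of Python expand a word list into boilerplate with zero chance of hallucination:

```python
# Expand a list of field names into repetitive getter boilerplate.
# Deterministic: the output is fully determined by the template and the list.
fields = ["name", "email", "created_at"]  # hypothetical field list

template = '''def get_{field}(self):
    """Return the {field} attribute."""
    return self._{field}
'''

generated = "\n".join(template.format(field=f) for f in fields)
print(generated)
```

      The same idea works with sed or awk; the point is that a template plus a word list can never invent a method that wasn’t asked for.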

      user tests.

      Tests are just as important as the rest of the code and should be given the same amount of attention instead of being treated as fine as long as you check the box.

      • Katherine 🪴@piefed.social · ↑ 1 · 3 hours ago

        I agree it’s not perfect; I still only use it very sparingly. I was just suggesting it as an alternative to trusting everything it does out of the box.

  • thedeadwalking4242@lemmy.world · ↑ 1 · 2 hours ago

    I use it for tedious transformations or needle-in-a-haystack problems.

    They are better at searching for themes or concepts than they are at actual “thinking tasks”. My rule is that if something requires a lot of critical thinking, the LLM can’t do it.

    It’s definitely not all they say it is. I think LLMs will fundamentally always have these problems.

    I’ve actually had a much better time using it for inline completion as of late. It’s much better when the scope of the problem it needs to “solve” (the code it needs to find and compose to complete your line) is in the Goldilocks zone. And if the answer it gives is bad, I just keep typing.

    I really hate the way LLM vibe-coded slop is written and architected. To me it’s clear these things have extremely limited conceptual understanding. I’ve compared it to ripping out a human’s language center, giving it a keyboard, and asking it to program for you. It’s just not really what it’s good at.

  • Michal@programming.dev · ↑ 4 · 5 hours ago

    You can’t really use Claude Code raw. You have to give it detailed instructions, use Claude skills, observe results, and update prompts. It can be just as time-consuming, but rather than doing the productive work, you’re reviewing and correcting the AI. People who have success using AI have invested time in their setup and are continuously adjusting it.

    • KeenFlame@feddit.nu · ↑ 2 · 2 hours ago

      But all in all it’s much faster. That’s the reason it is not useless. Everyone complains that it takes so much time, when no, it is nowhere close to manual. It’s not a magic pill and you still need the know-how, but no, it is not “just as time-consuming”. You are more productive. But yes, it is also more boring.

  • ReallyCoolDude@lemmy.ml · ↑ 3 · 5 hours ago

    I read a lot of these posts that sadly leave out the basics: what were your prompts? What does “vibe coding” mean in this context? Did you create an initial setup and slowly build up, or did you leave everything to the agent’s understanding and just push approve or reject? There are multiple levels of quality that depend on the input. Did you run into context rot? Does “3d math” mean vector math, matrices, or what? Given that Claude has had serious problems since March at least, the way you use it is paramount.

    In our team we all use Claude with Copilot (sadly, that is a business directive), and while it is exceptional at finding small relationships between components and microservices, we had to build a long list of skills just to make it barely usable in a “Star Trek” way. The bottom line is that you must be extremely precise when asking. Prompt modeling counts a lot, and so does context building. For now, unit tests and data/mock refactors are working extremely well for me, when I define the test cases. My agents have gotten to the point where I can safely make small property additions with refactors across multiple repositories at once (i.e. I change the contract on microservice A, and microservices B, C, and D are automatically updated). This last part had to be built, though, with memory, engrams, and some fine-tuning.

    It is not always shit: otherwise nobody would use it. But it is not the revolutionary technology that will make humans obsolete either (as they are selling it).

  • sobchak@programming.dev · ↑ 15 · 8 hours ago

    The key is having it write tests and iterate by itself, and managing context in various ways. It only works on small projects in my experience. And it generates shit code that’s not worth manually working on, so it locks your project into being permanently dependent on AI. Being always dependent on AI, with AI eventually hitting a brick wall, means you’ll reach a point where you can’t really improve the project anymore. I.e. AI tools are nearly useless.

  • kunaltyagi@programming.dev · ↑ 10 · 9 hours ago

    Don’t jump right into coding.

    Take a feature you want, and use the plan feature to break it down. Give the plan a read. Make sure you have tests covering the files it says it’ll need to touch. If not, add tests (can use LLM for that as well).

    Then let the LLM work. Success rates for me are around 80% or higher for medium tasks (30 mins–1 hour for me without LLM, 15–30 mins with one, including code review)

    If a task is 5 minutes or so, it’s usually hit or miss (since planning would take longer). For tasks longer than an hour or so, it depends. Sometimes the code is full of simple idioms that the LLM can easily crush. Other times I need to actively break it down into digestible chunks.

  • onlinepersona@programming.dev · ↑ 8 · 10 hours ago

    It’s called “vibe” coding, not “correct” coding, for a reason.

    That’s why people are wrong so often: they feel like something is right, but don’t check. That’s how you get anti-vaxxers, manosphere people, MAGA, QAnon, Brexit, etc.

  • athatet@lemmy.zip · ↑ 23 · 15 hours ago

    The reason you kept going around in circles and reintroducing bugs you had already gotten rid of is that LLMs don’t remember things. Every time you send a message, the entire conversation is fed back to the model so it has all the parts. Eventually it runs out of room and starts cutting off the beginning of the conversation, and now the LLM can’t “remember” what you were even talking about.
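    A minimal sketch (not any particular vendor’s API) of why this happens: each turn resends the whole history, and a fixed context budget silently drops the oldest messages. The token counter and the 50-token limit here are made up for illustration:

```python
# Toy model of a chat context window. The budget and the whitespace
# "tokenizer" are crude stand-ins for a real model's tokenizer and limit.
CONTEXT_LIMIT = 50

def count_tokens(msg):
    return len(msg.split())  # stand-in for a real tokenizer

def build_prompt(history, limit=CONTEXT_LIMIT):
    """Keep only the most recent messages that fit in the budget."""
    kept, used = [], 0
    for msg in reversed(history):
        used += count_tokens(msg)
        if used > limit:
            break  # everything earlier than this silently disappears
        kept.append(msg)
    return list(reversed(kept))

history = [f"turn {i}: " + "word " * 10 for i in range(10)]
prompt = build_prompt(history)
# The earliest turns (e.g. the original bug report) are gone from `prompt`.
```

    Once your first bug report falls outside the window, the model is reasoning about a conversation that no longer contains it.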

    • KeenFlame@feddit.nu · ↑ 1 · 2 hours ago

      Kind of, but it really depends on the workflow. Simple 3d math shouldn’t produce a codebase big enough to be impacted by the context window.

    • Railcar8095@lemmy.world · ↑ 3 · 8 hours ago

      For that, you can ask it to update a documentation/status file on every change, and you can manually add the goal and/or future tasks to it.

      With that, I improved my success rate a lot, even when starting new sessions (add a note in the instructions file to use this file for reference, so you don’t have to remind it every time).
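      As an illustration, such a status file might look something like this (the filename, sections, and tasks are invented; adapt them to your project):

```markdown
# STATUS.md - running log the agent updates after every change

## Goal
Port the renderer's 3d math helpers to column-major matrices.

## Done
- Normalized quaternions before building rotation matrices

## Next / future tasks
- Re-check the projection matrix sign convention
- Add regression tests for the camera transform
```

      Because the file is re-read at the start of each session, it survives the context-window truncation that makes the chat itself forget.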