• Bytemeister@lemmy.world
    link
    fedilink
    English
    arrow-up
    1
    ·
    2 hours ago

    Current one I’m dealing with…

    Word will open uncommanded Copilot processes in the background, which immediately request location permissions every few minutes.

    Microsoft's workaround: just let Copilot know where you are, bro.

  • ZombiFrancis@sh.itjust.works
    link
    fedilink
    English
    arrow-up
    8
    ·
    9 hours ago

    I have to use Windows 11 and Teams for all my work. Teams is being used as the central file management system.

    The benefit these days is that if something doesn't work, I can shrug and say 'must be AI' or 'just Windows 11 things' and generally get tacit agreement.

  • 1rre@discuss.tchncs.de
    link
    fedilink
    English
    arrow-up
    2
    ·
    14 hours ago

    I’ve started using AI pretty heavily for writing code in languages I’m not as confident in (especially JS and SQL) after being skeptical for a while, as well as for code which can be described briefly but is tedious to write. I think the problem here is “by” - it would be better to say “with”.

    You don’t say that 90% of code was written by code completion plugins, because it takes someone to pick the right thing from the list, check the docs to see it’s right, etc.

    It’s the same for AI. I check the “thinking”/planning logs to make sure the logic is right - sometimes it is, sometimes it isn’t, at which point you can write a brief pseudocode sketch of what you want. Sometimes it starts on the right path and then goes off, at which point you can say “no, go back to this point”, and generally it works well.
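    As a made-up illustration, that kind of pseudocode brief might look something like this - the task, file names, and function are all hypothetical, not from any actual session:

```python
from datetime import datetime, timedelta

# The "brief" you might hand the model, written as comments:
#   take the user rows
#   for each user: if last_login is older than 90 days, mark as stale
#   return the stale users
# ...and roughly the kind of code you'd expect back:
def stale_users(rows, now, days=90):
    """Return rows whose ISO-formatted last_login is older than `days` days."""
    cutoff = now - timedelta(days=days)
    return [r for r in rows
            if datetime.fromisoformat(r["last_login"]) < cutoff]
```

    The point being: the brief pins down the logic, and reviewing the result against it is much quicker than reviewing code generated from a vague prompt.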

    I’d say this kind of code is maybe 30-50% of what I write; the other 50-70% is more technically complex and in a language I’m more experienced in. So I can’t fully believe the 30% figure: some people will waste time by not using AI where it could speed them up, while others will use it too much and waste time trying to implement things more complex than it’s capable of. The latter irks me especially, after having to spend 3½ hours yesterday reviewing a new hire’s MR - time they could’ve spent actually learning the libraries, or I could’ve spent implementing the whole ticket with some left over to teach them.

    • TonyTonyChopper@mander.xyz
      link
      fedilink
      English
      arrow-up
      8
      ·
      10 hours ago

      Large language models can’t think. The “thinking” it spits out to explain the other text it spits out is pure bullshit.

      • 1rre@discuss.tchncs.de
        link
        fedilink
        English
        arrow-up
        5
        ·
        10 hours ago

        Why do you think I said "thinking"/planning instead of just calling it thinking…

        The “thinking” stage is really just planning: it lists out the facts and then tries to find inconsistencies, patterns, solutions, etc. I think planning is a perfectly reasonable thing to call it, as it matches the distinction between planning and execution in other algorithms, like navigation.
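        The navigation analogy can be sketched in a few lines - a toy grid-free graph navigator (the graph and names are invented for illustration, nothing from an actual model), where the whole route is planned up front and then executed step by step:

```python
from collections import deque

# Toy map for the example: node -> reachable neighbours.
GRAPH = {"A": ["B"], "B": ["C", "D"], "C": [], "D": ["E"], "E": []}

def plan(start, goal):
    """Planning phase: BFS the whole route before moving at all."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in GRAPH[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])

def execute(path):
    """Execution phase: consume the finished plan step by step."""
    return " -> ".join(path)

print(execute(plan("A", "E")))  # A -> B -> D -> E
```

        The planner never takes a step; the executor never makes a decision. That separation is what the "planning" label is borrowing.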

        • AliasAKA@lemmy.world
          link
          fedilink
          English
          arrow-up
          6
          ·
          9 hours ago

          “Thinking” is just an arbitrary process to generate additional prompt tokens. The vendors have realized from their training data that people suck at writing prompts, and it’s clear their models lack causal or state models of anything; they’re simply good at substituting words into a context that is similar enough to the prompt they’re given. So the solution to sucky prompt writing - and a way to sell people on capability (think Full Self-Driving: it’s never been full self-driving, but it’s marketed that way to make people think it’s super capable) - is to have the model itself look up better templates within its training data that tend to produce better-looking and better-sounding answers.

          The thinking is not thinking. It’s fancier probabilistic lookup.

      • 1rre@discuss.tchncs.de
        link
        fedilink
        English
        arrow-up
        2
        ·
        11 hours ago

        That kind of matches my experience, but some of the negatives they bring up can be fixed by monitoring thinking mode. If it starts to make assumptions on your behalf, or goes down the wrong path, you can interrupt it and tell it to pursue the correct line without polluting the context.

  • gegil@sopuli.xyz
    link
    fedilink
    English
    arrow-up
    59
    ·
    edit-2
    1 day ago

    Windows 11 is not being developed by people. It is entirely undeveloped by ai.

  • Gork@sopuli.xyz
    link
    fedilink
    English
    arrow-up
    44
    ·
    1 day ago

    Sounds like all the hard work they did refactoring Windows 10 is gonna go to waste with the new AI vibe coding in Windows 11.

  • NONE@lemmy.world
    link
    fedilink
    English
    arrow-up
    31
    ·
    1 day ago

    The real punchline was finding out how absurdly long the image is while scrolling lol

  • pastel_de_airfryer@lemmy.eco.br
    link
    fedilink
    English
    arrow-up
    11
    ·
    1 day ago

    A few weeks ago, my laptop speakers stopped working out of nowhere on Windows 11. They work perfectly on CachyOS. My Windows 11 partition won’t live for much longer.

  • Rollade@lemmy.ml
    link
    fedilink
    English
    arrow-up
    24
    ·
    1 day ago

    Well, letting us put the taskbar on top or on the side is too hard, but breaking everything else with AI integration is quite easy lmao

  • ToastedPlanet@lemmy.blahaj.zone
    link
    fedilink
    English
    arrow-up
    5
    ·
    edit-2
    23 hours ago

    The last time I touched a Windows device was to make Windows 11 look like Windows 10, so the taskbar could be moved to the top. edit: typo

    • ToastedPlanet@lemmy.blahaj.zone
      link
      fedilink
      English
      arrow-up
      6
      ·
      23 hours ago

      Actually, more recently I was setting up an older (but still great) Canon printer with a new Windows 11 machine. I had to install the driver for Windows 7/8/8.1/10 because there is no working Windows 11 driver, and the 7/8/8.1/10 one still works. XD

  • maria [she/her]@lemmy.blahaj.zone
    link
    fedilink
    English
    arrow-up
    14
    ·
    1 day ago

    just 30%?..

    that should cause much less harm.

    are the devs there that lazy? do they just not review the code?
    I thought these companies in particular did code review and such…

    • Ephera@lemmy.ml
      link
      fedilink
      English
      arrow-up
      9
      ·
      19 hours ago

      I mean, for the bugs in the screenshot, even just 1% of bad code slipping through is more than plenty.

      And AI-generated code is extremely time-consuming and tricky to review, because you can’t assume there’s rhyme and reason to the changes, so I would be surprised if they actually put in the effort to review it properly.

    • megopie@lemmy.blahaj.zone
      link
      fedilink
      English
      arrow-up
      9
      ·
      edit-2
      24 hours ago

      I suspect it’s largely the result of failing internal organization. A detached-from-reality, ideologically motivated faction within corporate leadership has seized control of the company and fired anyone who told them they were being idiots or opposed their initiatives. People are probably getting promoted or hired into management based on their ability to tell leadership what they want to hear rather than their ability to actually run things. Everyone lower down has internalized that telling the higher-ups what’s going on will get them fired, and is only telling them what they want to hear. Resources and people got diverted away from projects management doesn’t care about (no potential to drive growth), on the assumption that the “productivity increase from AI” will make up the difference. Now everything is melting down, their core product is losing market share, and the new products intended to drive growth are failing to see meaningful adoption. Heads will probably roll, but they’re unlikely to belong to the people causing the problem.

      That’s what it looks like to me from the outside.

    • Catpuccino@lemmy.world
      link
      fedilink
      English
      arrow-up
      19
      ·
      1 day ago

      It’s a mix. Some devs are definitely lazy, but from what I’ve heard there is also a big push for devs to deploy faster and to actively use AI, or be punished. So there is incentive to just get code out to meet deadlines/expectations and move on to the next task. The amount of work put on an individual dev is rising with these accelerated expectations in mind, and getting another dev to review your code takes time from their own stack of tasks, so code review quality has fallen greatly. Not to mention the high likelihood that AI is also doing code reviews to make up for this, and we can only guess how many reviews get approved with “you’re absolutely right!”

    • gegil@sopuli.xyz
      link
      fedilink
      English
      arrow-up
      11
      ·
      edit-2
      1 day ago

      30% is last year’s news. Now Windows 11 is entirely developed by AI. How it works:

      1: AI hallucinates a new feature.

      2: AI agents generate enough code to meet investor demands.

      3: AI QA agents are gaslit into verifying the code.

      4: The result is a feature that doesn’t even work, plus ten breaking changes to core system features that worked completely fine but AI rewrote them anyway.