• Chana [none/use name]@hexbear.net · 22 points · 4 days ago

    Machine learning etc. could be great for medicine, but the profit motive means the best bullshitter will get these early contracts. The best bullshitter has an actual product that looks great in demos, but they cut corners, so its fuckups are difficult to audit and get swept under the rug when the few engineers who understand the problem raise the issue.

    With very carefully collected and annotated datasets and heuristic guard rails, this stuff could actually be great. But those things are slow and expensive.

  • vegeta1 [he/him]@hexbear.net · 25 points · edited · 4 days ago

    I could see it implemented in research (it helped with the covid vaccine, for one) and in some early-detection radiology, but something as sensitive as surgery, at this stage? thats-why-im-confused Humans can handle error in research data; with surgery, error is bad news.

    • VILenin [he/him]@hexbear.net · 25 points · 4 days ago

      The doctors watching as the janitors mop the blood off the OR floor after I die when the “AI” surgery robot misidentifies my heart as my appendix: just think of all the training data we’re going to get from this!

    • Chana [none/use name]@hexbear.net · 11 points · 4 days ago

      Even the radiology analysis applications have been largely bullshit. They don’t actually outperform humans, and much of the apparent performance comes from heavily biased datasets and poorly tuned black-box models. At its base, a “modern” model doing this kind of thing is learning patterns, and the pattern it learns may not be the thing you’re actually trying to recognize. It may instead notice that, say, an image was taken with contrast, and that statistically increases the chance that the image contains the issue of interest, because contrast imaging is done when other issues are already suspected. But it didn’t “see” the problem in the image; it learned the other thing, so now it thinks that most people getting contrast imaging have cancer or whatever.
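
      A minimal toy sketch of that kind of shortcut, in Python with scikit-learn. Everything here is invented for illustration (the contrast flag, the disease rates, the noise feature), not taken from any real radiology data:

```python
# Toy sketch of a confounded dataset: the label correlates with a
# "contrast used" flag, so a model can score well without ever
# learning anything about the image content itself.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hypothetical training data: contrast scans are ordered when disease is
# already suspected, so 80% of contrast scans are positive vs 10% of the rest.
contrast = rng.integers(0, 2, n)
disease = np.where(contrast == 1,
                   rng.random(n) < 0.8,
                   rng.random(n) < 0.1).astype(int)
noise = rng.normal(size=n)                 # stand-in for uninformative image content
X_train = np.column_stack([contrast, noise])

model = LogisticRegression(max_iter=1000).fit(X_train, disease)

# Evaluation data where the confound is broken: contrast no longer tracks disease.
contrast_eval = rng.integers(0, 2, n)
disease_eval = (rng.random(n) < 0.45).astype(int)
X_eval = np.column_stack([contrast_eval, rng.normal(size=n)])

print("accuracy on the confounded training data:", model.score(X_train, disease))
print("accuracy once the confound is removed:   ", model.score(X_eval, disease_eval))
```

      The point of the sketch is only that the high first number comes entirely from the contrast flag, which is why it collapses to roughly chance the moment that flag stops tracking the label.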

      • KuroXppi [they/them]@hexbear.net · 9 points · 4 days ago

        Similar example that I like to share with people (spoiler tag because of length):

        I remember reading about early models used by the US(?) military to identify camouflaged tanks. They posed their own tanks, took the photos, then trained the model on ‘photo with tank’ vs ‘photo without tank’, rewarding it when it correctly identified a tank.

        They ran the models and found that the ‘AI’ was able to identify images with a disguised tank essentially 100% of the time. It was astounding, so they ran more tests and then discovered that the ‘AI’ could identify images where the tank was completely obscured nearly 100% of the time, too.

        They celebrated, thinking their model was so advanced it had developed x-ray vision or something. So they ran more tests and discovered that no, the ‘AI’ wasn’t identifying the tank at all. What had happened was that there was a week between the days when they had taken the two photo sets.

        For argument’s sake, the day they took the ‘No tank’ photos it was sunny, and the day they took the ‘Camouflaged tank’ photos it was slightly overcast. The AI was picking up on the weather/lighting differences and identifying overcast days as ‘hidden tank’ 100% of the time. Basically, ‘AI’ takes the shortest inference path from the dataset to the reinforced outcome, which results in shortcuts that fool the human testers (there’s a rough toy sketch of this effect at the end of this comment).

        It’s a bit like how geoguessers like rainbolt can tell they’re in xyz province of Myanmar because of the lens grime on the Google van.
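
        A rough toy version of the tank/weather shortcut, again with everything invented for illustration (the 8×8 ‘photos’, the brightness numbers, the class sizes); it just shows how a model keyed on lighting falls apart when the lighting cue flips:

```python
# Toy version of the tank story: every "tank" photo is overcast (darker),
# every "no tank" photo is sunny (brighter), so mean brightness alone
# separates the classes -- until the weather no longer matches the labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def fake_photos(n, overcast):
    """8x8 grayscale 'photos' flattened to 64 pixels; overcast days are darker."""
    base = 0.3 if overcast else 0.7
    return rng.normal(loc=base, scale=0.05, size=(n, 64))

# Training set mirrors the story: tanks only photographed on the overcast day.
X_train = np.vstack([fake_photos(500, overcast=True), fake_photos(500, overcast=False)])
y_train = np.array([1] * 500 + [0] * 500)   # 1 = tank, 0 = no tank

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy when lighting matches the labels:", model.score(X_train, y_train))

# Swap the weather: tanks on the sunny day, empty fields on the overcast day.
X_swapped = np.vstack([fake_photos(500, overcast=False), fake_photos(500, overcast=True)])
print("accuracy when the lighting cue flips:     ", model.score(X_swapped, y_train))
```

        Same failure as the radiology case above: near-perfect numbers on the first print, near-zero on the second, because the only thing the model ever learned was the lighting.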


  • Awoo [she/her]@hexbear.net · 22 points · 4 days ago

    Identifying body parts inside the body seems like exactly the kind of thing AI should NOT be doing; they all look the same. It takes a lot of training and experience to be able to identify them correctly.

    • bloopything [none/use name]@hexbear.net · 7 points · 4 days ago

      As user Chana said, it makes some sense, just not under a for-profit system. It can be difficult to tell one red mass of flesh on a screen from another, so in theory computer vision could provide a “second pair of eyes” for verification. But that’s only if it were actually implemented correctly, with proper testing and training, which it won’t be until it costs hospitals more in lawsuits than it gets in kickbacks.

    • Owl [he/him]@hexbear.net · 6 points · 4 days ago

      It’s also exactly the kind of thing that, using the methodologies accepted by the “AI” industry, AI will easily beat human professionals at every time! (Clown industry with clown methodologies.)

  • DasRav [any, any]@hexbear.net · 15 points · 4 days ago

    Ah great, fresh and entirely preventable horrors. I can’t wait for getting a real doctor to even glance at your problems to become a double super premium service reserved for CEOs and their pedophile friends.