Mount Sinai has become a laboratory for AI, trying to shape the future of medicine. But some healthcare workers fear the technology comes at a cost.

WP gift article expires in 14 days.

https://archive.ph/xCcPd

  • They worry about the technology making wrong diagnoses,

    You know who I’ve seen make “wrong diagnoses” over and over again? Human fricken doctors. And not to me (a healthy, upper middle class white male professional) but to my wife (a disabled woman with a debilitating genetic disease from a shitty part of Texas). We had to fight for years and spend tons of money to get “official” diagnoses that we were able to make at home based on observation, Googling and knowledge of her family history. I’ve watched male neurologists talk to ME instead of her while staring at her boobs. I’ve watched ER doctors have her history and risks explained to them in excruciating detail, only to send her home (when it turns out she needs emergency surgery).

    revealing sensitive patient data

    Oh, 100%, this is gonna happen.

    becoming an excuse for insurance and hospital administrators to cut staff in the name of innovation and efficiency.

    Oh, 100%, this is ALSO gonna happen. My wife recently had to visit the ER twice, receive scary spinal surgery and stay over for 2 weeks. The NUMBER ONE THING I noticed was that this state-of-the-art hospital, in a small, wealthy, highly gentrified town, was DANGEROUSLY understaffed. The nurses and orderlies were stretched so thin they couldn’t even stop to breathe (and they were OFTEN cranky and RUSHING to do delicate tasks where they could easily make mistakes). This reckless profiteering is already a problem (one that probably needs more aggressive regulation; nothing else will work). If AI exposes it more and pushes it to a breaking point, maybe that could ultimately be a good thing.

  • Storksforlegs@beehaw.org · 2 years ago

    This looks like another instance where AI could be used to really make doctors’ and nurses’ lives easier and provide more and better care at lower cost - but in the hands of greedy corporate types it won’t go that way.

  • bl_r@beehaw.org · 2 years ago

    I’m not an expert in ML or cardiology, but I was able to create models that could detect heart arrhythmias with upwards of 90% accuracy, higher than a cardiologist, and do so much faster.

    Do I think AI can replace doctors? No. The amount of data needed to train a model is immense (granted, I only had access to public data sets), and detecting rarer conditions was not feasible. While AI will beat cardiologists in this one aspect, making predictions is not the only thing a cardiologist does.

    But I think positioning AI as a tool to assist in triage and to provide second opinions could be a massive boon for the industry.
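
    A minimal sketch of what a classifier like that might look like (this is not the commenter’s actual pipeline; it assumes the public MIT-BIH arrhythmia records fetched with the wfdb package and a scikit-learn random forest over raw beat windows):

```python
# Toy arrhythmia-beat classifier: MIT-BIH records via wfdb, fixed windows
# around each annotated beat, random forest on the raw samples.
import numpy as np
import wfdb
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

RECORDS = ["100", "106", "119", "208"]  # a few MIT-BIH records, chosen arbitrarily
HALF_WIN = 90                           # samples each side of the beat (~0.5 s at 360 Hz)

X, y = [], []
for rec in RECORDS:
    record = wfdb.rdrecord(rec, pn_dir="mitdb")    # waveform pulled from PhysioNet
    ann = wfdb.rdann(rec, "atr", pn_dir="mitdb")   # beat-level annotations
    signal = record.p_signal[:, 0]                 # first ECG lead
    for idx, sym in zip(ann.sample, ann.symbol):
        if sym not in ("N", "V"):                  # normal vs. premature ventricular beats only
            continue
        if idx - HALF_WIN < 0 or idx + HALF_WIN > len(signal):
            continue
        X.append(signal[idx - HALF_WIN:idx + HALF_WIN])  # raw window as the feature vector
        y.append(sym)

X, y = np.array(X), np.array(y)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("beat-level accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```

    Accuracy on beat data this imbalanced is a flattering metric, which is part of why a standalone model is no substitute for a cardiologist reading the whole picture.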

      • bl_r@beehaw.org · 2 years ago

        That is a good thing and a bad thing. Self-diagnosis will inevitably end in misdiagnosis.

        I think AI has the potential to increase the number of patients seen, and maybe even decrease cost, but in the enshittified American system I’m willing to bet it would not be close to the best outcome.

  • Scrubbles@poptalk.scrubbles.tech · 2 years ago

    AI can be an amazing tool in healthcare, as a double check. For example, assume a doctor thinks you have something. Right now you could have:

    • A simple, non-invasive diagnostic procedure that costs X and is 90% accurate.
    • A complex, invasive diagnostic procedure that costs 10X and is 98% accurate.

    The doctor will always suggest the first one and then see if you need the second one based on other factors. AI can be a great double-checker: you go in, do the simple test, then run your results and your inputs through a model. That gives you a second probability, which could help determine whether you should go in for the more invasive one.

    If it’s done that way it’ll be a great tool, but AI should never be used as the only check or to replace real, proven tests. At the end of the day it’s still saying “From the information I’ve trained on, the answer to the question of 2+2 is probably 4”; it does not do any actual calculations, only probabilities from trained data. So: great at double checking, but bad at being the single source.
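
    A rough sketch of that “second probability” idea, treating the model’s output as one more imperfect test result folded into a Bayesian update. Every number and threshold below is invented for illustration, and assuming the cheap test and the model err independently is a simplification:

```python
# Illustrative triage logic only: decide whether the pricier, more accurate
# test is worth ordering after a cheap test plus an AI "second opinion".

def posterior(prior, positive, sensitivity, specificity):
    """Bayes update of disease probability after one test result."""
    p_pos = sensitivity if positive else 1 - sensitivity        # P(result | disease)
    p_neg = (1 - specificity) if positive else specificity      # P(result | no disease)
    return p_pos * prior / (p_pos * prior + p_neg * (1 - prior))

prior = 0.10                    # doctor's pre-test suspicion (made up)
p = posterior(prior, positive=True, sensitivity=0.90, specificity=0.90)  # the "90% accurate" test
print(f"after cheap test:  {p:.2f}")

p = posterior(p, positive=True, sensitivity=0.85, specificity=0.85)      # hypothetical model check
print(f"after model check: {p:.2f}")

ESCALATE_AT = 0.30              # policy threshold, not a clinical recommendation
print("order the invasive test" if p >= ESCALATE_AT else "watch and wait")
```

    The point is the shape of the decision, not the numbers: the model never replaces the 98%-accurate test, it only helps decide when that test is worth its cost and risk.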

    • LallyLuckFarm@beehaw.org · 2 years ago

      There was an interview I saw with a cancer researcher working with AI to improve cancer detection in early imaging - they’ve fed thousands of CT and X-ray images to their model, and have then gone back through the data when patients have had biopsies to confirm. This sort of high-quality data and attentive follow-up has the potential to provide so much better screening for cancers and other conditions that patients could have additional months or years to address them.
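
      A toy version of that biopsy-confirmation step (all numbers invented; the real work means thousands of studies carefully linked to pathology records) might look like this:

```python
# Hypothetical layout: each row pairs the model's screening score for an
# imaging study with the later biopsy result for the same patient.
import numpy as np

scores = np.array([0.91, 0.12, 0.78, 0.05, 0.66, 0.40])  # model scores (made up)
biopsy = np.array([1, 0, 1, 0, 0, 1])                     # 1 = malignancy confirmed by biopsy

THRESHOLD = 0.5                  # screening cut-off, illustrative only
flagged = scores >= THRESHOLD

tp = np.sum(flagged & (biopsy == 1))   # cancers the model caught
fn = np.sum(~flagged & (biopsy == 1))  # cancers the model missed
tn = np.sum(~flagged & (biopsy == 0))  # healthy studies correctly passed
fp = np.sum(flagged & (biopsy == 0))   # healthy studies flagged anyway

print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))
```

      Tracking sensitivity and specificity against confirmed outcomes like this is what separates attentive follow-up from simply trusting the model.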