Users of OpenAI’s GPT-4 are complaining that the AI model is performing worse lately. Industry insiders say a redesign of GPT-4 could be to blame.

  • Muehe@lemmy.ml · 1 year ago

    How is a language-based approach that completely abstracts away actual knowledge, and just tries to sound “good enough”, useful in any way in a medical workflow?

    An LLM cross-referencing a list of symptoms against papers and books could be helpful, for example. There is so much medical literature available these days, and in so many languages, that no one person can hope to gain even a rough overview, much less keep up with all the new material coming out.
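    To make the idea concrete, here is a minimal sketch of that kind of cross-referencing. The corpus, document names, and scoring are all hypothetical toys; a real system would search actual medical databases and use embedding-based retrieval rather than keyword overlap.

    ```python
    # Toy sketch: rank a small local corpus of abstracts by how many of
    # the patient's symptoms each one mentions. Corpus contents and IDs
    # are made up for illustration.
    corpus = {
        "paper_a": "persistent cough fever night sweats weight loss",
        "paper_b": "joint pain morning stiffness fatigue swelling",
        "paper_c": "headache photophobia nausea visual aura",
    }

    def rank_by_symptom_overlap(symptoms, corpus):
        """Score each document by how many listed symptoms it mentions."""
        scores = {}
        for doc_id, text in corpus.items():
            words = set(text.split())
            scores[doc_id] = sum(1 for s in symptoms if s in words)
        # Highest overlap first
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    results = rank_by_symptom_overlap(["fever", "cough", "fatigue"], corpus)
    print(results[0])  # → ('paper_a', 2)
    ```

    The point is only the shape of the workflow: the model surfaces candidate literature for a clinician to read, it does not make the diagnosis.
    
    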

    Of course this should only be used to assist a trained medical professional, as all neural networks are prone to hallucinations. You should also double-check the results of NNs that interpret medical images; they may straight-up hallucinate, or pick up on correlation instead of causation (say, all the cancer images in your training set carrying a watermark from the same lab or equipment manufacturer).
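    The watermark failure mode can be shown with a deliberately silly toy: a "classifier" that keys entirely on a watermark feature that happened to co-occur with cancer in the training set. Everything here (the feature dicts, the rule) is a made-up illustration, not a real model.

    ```python
    # Toy illustration of correlation vs. causation: every cancer image in
    # the hypothetical training set carries the same lab's watermark, so a
    # shortcut learner can score perfectly without ever looking at the lesion.
    train = [
        ({"watermark": 1, "lesion": 1}, "cancer"),
        ({"watermark": 1, "lesion": 1}, "cancer"),
        ({"watermark": 0, "lesion": 0}, "healthy"),
        ({"watermark": 0, "lesion": 0}, "healthy"),
    ]

    def shortcut_classifier(image):
        # The "model" learned the watermark, not the lesion.
        return "cancer" if image["watermark"] else "healthy"

    # Perfect on the training data...
    assert all(shortcut_classifier(x) == y for x, y in train)

    # ...but it misses a clean image from another hospital.
    print(shortcut_classifier({"watermark": 0, "lesion": 1}))  # → healthy
    ```

    Training accuracy alone cannot catch this; you need held-out data from a different source, which is exactly why a human double-check remains necessary.
    
    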