Key Facts:

  • The AI system uses ten categories of social emotions to identify violations of social norms.

  • The system has been tested on two large datasets of short texts, validating its models.

  • This preliminary work, funded by DARPA, is seen as a significant step in improving cross-cultural language understanding and situational awareness.

  • starman2112@lemmy.world · 1 year ago

    I’m extremely skeptical of medical diagnosis AIs. If one can’t explain why it comes to a conclusion, how do we know it won’t just accidentally latch onto spurious correlations? One example I heard of recently was an AI that was extremely good at detecting TB… based on the age of the machine that took the x-ray, because places with older machines tend to be poorer, and poorer places tend to have more TB (there’s a toy sketch of this failure mode at the end of this comment).

    The only positive use I can think of is time saving measures. A researcher can feed a study to ChatGPT and have it write a rough first draft of the abstract. A Game Master could ask it for inspiration on the next few game sessions if they’re underprepared. An internet commenter could ask it for a third example of how it could save time.

    But for anything serious, until it can explain why it comes to the conclusions it comes to, and can understand when a human says “no, you’re doing it wrong,” I can’t see it being a real force for good.
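
    A minimal sketch of that failure mode, with entirely made-up numbers (the machine ages, prevalence curve, and feature names here are assumptions for illustration, not anything from the actual TB study):

    ```python
    # Toy illustration: if older x-ray machines sit in poorer regions where TB
    # is more common, scanner age alone can predict TB better than a weak
    # image-derived feature, so a model can "detect TB" without looking at the lungs.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 4000

    machine_age = rng.normal(15.0, 5.0, n)                    # hypothetical scanner age in years
    p_tb = 1.0 / (1.0 + np.exp(-(machine_age - 15.0) / 3.0))  # prevalence tracks machine age
    has_tb = rng.random(n) < p_tb

    # A noisy "real" signal extracted from the image itself.
    lesion_score = has_tb * 0.8 + rng.normal(0.0, 1.0, n)

    for name, feature in [("lesion_score", lesion_score), ("machine_age", machine_age)]:
        X_tr, X_te, y_tr, y_te = train_test_split(
            feature.reshape(-1, 1), has_tb, random_state=0
        )
        acc = LogisticRegression().fit(X_tr, y_tr).score(X_te, y_te)
        print(f"{name}: test accuracy {acc:.2f}")
    # machine_age alone scores at least as well as the image feature: the kind
    # of shortcut you only catch if you can probe what the model relies on.
    ```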

    • Priestofazathoth@lemmy.world · 1 year ago

      Ehh… at least we know we don’t understand how the AI reached its conclusion. When you study human cognition long enough, you discover that our beliefs about how we reach our conclusions are just stories the conscious mind makes up to justify them after the fact.

      “No, you’re doing it wrong” isn’t really a problem - it’s fundamental to most ML processes.
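
      A tiny sketch of what that means in practice (toy one-parameter model, all numbers made up): in ordinary supervised training, the loss is literally a "no, you're doing it wrong" signal, and each update nudges the model toward the correction.

      ```python
      # Toy one-parameter model trained by gradient descent on squared error.
      w = 0.0                      # model: predict y = w * x
      x, y_true = 2.0, 6.0         # one labelled example; the "right" answer is w = 3
      lr = 0.05                    # learning rate

      for step in range(100):
          y_pred = w * x
          error = y_pred - y_true  # "you're doing it wrong" by this much
          w -= lr * error * x      # gradient step on 0.5 * error**2
      print(round(w, 3))           # ~3.0: the correction has been absorbed
      ```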