Wondering whether modern LLMs like GPT-4, Claude Sonnet, and Llama 3 are closer to human intelligence or to next-word predictors. Also not sure if this graph is the right way to visualize it.

  • Nomecks@lemmy.ca · 6 hours ago

    I think the real differentiator is understanding. AI still has no understanding of the concepts it knows. If I show a human a few dogs, they will likely be able to pick out any other dog with 100% accuracy after understanding what a dog is. With AI, it's still just statistical models that can easily be fooled.

    • stupidcasey@lemmy.world · 1 hour ago

      This is entirely presumptive; we simply do not and cannot know how much they understand. It all boils down to: if it looks like a duck and quacks like a duck, is it a duck?

    • DavidDoesLemmy@aussie.zone · 2 hours ago

      I disagree here. Dog breeds are so diverse that there's no way you could show someone pictures of a few dogs and have them pick out every other dog while also ruling out other dog-like creatures. Especially not with 100 percent accuracy.

      • Match!!@pawb.social · 32 minutes ago

        For example, wolves, hyenas, and African wild dogs certainly won't ever reach 100% dog-or-not consensus within human groups.