Wondering whether modern LLMs like GPT-4, Claude Sonnet, and Llama 3 are closer to human intelligence or to next-word predictors. Also not sure whether this graph is the right way to visualize it.

  • webghost0101@sopuli.xyz · 12 hours ago

    This is true if you're describing a pure LLM, like GPT-3.

    However, systems like Claude, GPT-4o, and o1 are far more than a single LLM: they are a blend of tailored LLMs, other machine learning components, and some old-fashioned code weaving it all together.
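
    A minimal Python sketch of what I mean, with every class and function name made up for illustration (this is not any vendor's actual architecture):

    ```python
    # Hypothetical sketch: an LLM call wrapped by a tailored classifier and
    # plain code. The "weaving together" is ordinary control flow.

    from dataclasses import dataclass


    @dataclass
    class ModelOutput:
        text: str
        confidence: float


    class ChatModel:
        """Stand-in for a large conversational LLM."""

        def generate(self, prompt: str) -> ModelOutput:
            # A real system would call a trained model here.
            return ModelOutput(text=f"(draft answer to: {prompt})", confidence=0.8)


    class SafetyClassifier:
        """Stand-in for a smaller, tailored model that flags unwanted output."""

        def is_safe(self, text: str) -> bool:
            return "forbidden" not in text.lower()


    class ToolRunner:
        """Stand-in for old-fashioned code: calculators, search, databases."""

        def run(self, command: str) -> str:
            return f"(result of running {command!r})"


    def answer(prompt: str, chat: ChatModel, safety: SafetyClassifier,
               tools: ToolRunner) -> str:
        """Ordinary control flow, not a neural net, ties the pieces together."""
        draft = chat.generate(prompt)
        if not safety.is_safe(draft.text):
            return "Sorry, I can't help with that."
        if draft.text.startswith("RUN_TOOL:"):
            # The LLM asked for a tool; deterministic code actually executes it.
            return tools.run(draft.text.removeprefix("RUN_TOOL:"))
        return draft.text


    print(answer("What is 2 + 2?", ChatModel(), SafetyClassifier(), ToolRunner()))
    ```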

    OP does ask about “modern LLMs”, so technically you are right, but I believe they meant the more advanced “products”.

    Though I would not be able to actually answer OP's question; AI is hard to compare directly with a human.

    In most ways it's embarrassingly stupid; in others it has already surpassed us.

    • fartsparkles@sh.itjust.works · 10 hours ago

      None of which is intelligence, and all of which is geared towards predicting the next token.

      All of these models rely entirely on their training data and structure for inference and prediction. They appear intelligent, but they are not.
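
      To make “predicting the next token” concrete, here is a toy version of the loop every autoregressive LLM runs. The probability table is fake and stands in for the trained network; only the mechanics are the point:

      ```python
      # Toy autoregressive generation: look up a distribution over possible
      # next tokens, sample one, append it, repeat. A real LLM computes the
      # distribution with billions of learned parameters instead of a table.

      import random


      def toy_next_token_distribution(context: list[str]) -> dict[str, float]:
          # Fake probabilities keyed only on the last token, for illustration.
          table = {
              "the": {"cat": 0.6, "dog": 0.4},
              "cat": {"sat": 0.7, "ran": 0.3},
              "dog": {"ran": 0.8, "sat": 0.2},
              "sat": {"<end>": 1.0},
              "ran": {"<end>": 1.0},
          }
          return table.get(context[-1], {"<end>": 1.0})


      def generate(prompt: list[str], max_tokens: int = 10) -> list[str]:
          tokens = list(prompt)
          for _ in range(max_tokens):
              dist = toy_next_token_distribution(tokens)
              next_token = random.choices(list(dist), weights=list(dist.values()))[0]
              if next_token == "<end>":
                  break
              tokens.append(next_token)
          return tokens


      print(" ".join(generate(["the"])))  # e.g. "the cat sat"
      ```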

      • webghost0101@sopuli.xyz · 1 hour ago

        How is good old-fashioned code comparing outputs against a database of factual knowledge “predicting the next token” to you? Or reinforcement learning and token rewards baked into the models?
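
        Here is roughly the kind of check I mean, as a minimal sketch with a made-up fact table (not any production system): a plain dictionary lookup, not token prediction, gets the final say over the model's draft.

        ```python
        # Hypothetical post-processing step: compare the model's draft answer
        # against a store of known facts and override it on a mismatch.

        KNOWN_FACTS = {
            "boiling point of water at sea level": "100 °C",
            "capital of france": "Paris",
        }


        def fact_check(question: str, draft_answer: str) -> str:
            """Return the stored fact when it contradicts the model's draft."""
            key = question.strip().lower().rstrip("?")
            known = KNOWN_FACTS.get(key)
            if known is not None and known.lower() not in draft_answer.lower():
                # Deterministic lookup wins over whatever the model predicted.
                return known
            return draft_answer


        print(fact_check("Capital of France?", "The capital of France is Lyon."))
        # -> "Paris"
        ```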

        I can tell you have not actually worked with professional AI systems or looked at the research papers.

        Yes, none of it is “intelligent”, but I would counter that neither are human beings; we don't even know how to define intelligence.