• DamarcusArt
    6 days ago

    Haven’t read the article, but I’m guessing this new model enables them to do something computers could do 20 years ago, only far, far less efficiently.

    • To me it seems like they added a preprocessor that can choose to tokenize individual letters or route the request to other systems, rather than relying on the LLM alone. That makes it far better at logical reasoning and math from natural-language input. The achievement isn’t counting the R’s in “strawberry”; it’s working around the fundamental limitations of LLMs that make that task difficult.
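
    A minimal sketch of what such a routing preprocessor might look like: if the input matches a pattern the LLM handles poorly (like letter counting), answer it deterministically; otherwise fall through to the model. The regex, function names, and routing logic here are hypothetical illustrations, not the actual system described in the article.

    ```python
    import re

    def count_letter(word: str, letter: str) -> int:
        """Deterministic letter counting -- trivial in code, hard for a
        token-based LLM that never sees individual characters."""
        return word.lower().count(letter.lower())

    def preprocess(prompt: str):
        """Hypothetical router: intercept letter-counting questions and
        answer them directly; return None to fall through to the LLM."""
        m = re.search(
            r"how many (\w)'?s? (?:are )?in (?:the word )?['\"]?(\w+)['\"]?",
            prompt,
            re.IGNORECASE,
        )
        if m:
            letter, word = m.group(1), m.group(2)
            n = count_letter(word, letter)
            return f"There are {n} '{letter}'s in '{word}'."
        return None  # not a recognized pattern; hand off to the LLM

    print(preprocess("How many r's are in strawberry?"))
    # → There are 3 'r's in 'strawberry'.
    ```

    The point isn’t the regex itself; it’s that a cheap symbolic layer in front of the model can handle the exact cases where tokenization breaks character-level reasoning.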