• average650@lemmy.world
    1 year ago

That isn’t actually what’s important. What matters is the frequency of the tokens, which can be as small as single characters. The frequency of those is certainly not zero.

    LLMs absolutely can make up new words, word combinations, or sentences.

    That’s not to say chatgpt can actually give you good windows keys, but it isn’t a fundamental limitation of LLMs.
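A minimal sketch of the point above: subword tokenizers (BPE and similar) break an unseen word into smaller pieces that each do appear in the vocabulary, so a made-up word still has well-defined, nonzero-frequency tokens. The vocabulary and the greedy longest-match rule here are simplified stand-ins, not any real model's tokenizer.

```python
# Toy greedy longest-match tokenizer. A made-up word like "flumjangle"
# still decomposes into known subword tokens; the vocabulary below is
# illustrative only.
VOCAB = {"fl", "um", "j", "angle", "an", "gle",
         "a", "e", "f", "g", "l", "m", "n", "u"}

def tokenize(word: str) -> list[str]:
    """Greedily match the longest vocabulary entry at each position."""
    tokens = []
    i = 0
    while i < len(word):
        for j in range(len(word), i, -1):  # try the longest slice first
            if word[i:j] in VOCAB:
                tokens.append(word[i:j])
                i = j
                break
        else:
            raise ValueError(f"no token matches at position {i}")
    return tokens

print(tokenize("flumjangle"))  # → ['fl', 'um', 'j', 'angle']
```

Real tokenizers build their vocabularies from data rather than by hand, but the consequence is the same: the model predicts over token frequencies, not whole-word frequencies, so it is never restricted to words it has seen.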

    • Demonen@lemmy.ml
      1 year ago

      Okay, I’ll take your word for it.

      I’ve never ever, in many hours of playing with ChatGPT as a toy, had it make up a word. Hallucinate wildly, yes, but not stogulate a word out of nothing.

      I’d love to know more, though. How does it combine new words? Do you have any examples of words ChatGPT has made up? This is fascinating to me, as it means the model is much less chained to the training data than I thought.

      • gun@lemmy.ml
        1 year ago

        It can create new words; I just verified this. The first word it gave me: flumjangle. Google gives me 0 results. Maybe Google is missing something and it exists in some data out there, Idk.

        I’m not sure what is so impressive about this, though. Language models can string words together in unique ways; why would it be different for characters?

      • relevants@feddit.de
        1 year ago

        A lot of compound words are actually multiple tokens, so there’s nothing stopping the LLM from generating those tokens in a new order, thereby creating a new word.
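A quick sketch of that recombination idea, assuming (purely for illustration) that the compound pieces below are each single tokens; real tokenizers split words differently.

```python
# If compound words split into subword tokens, a model can emit those
# same tokens in pairings never seen in training. The token strings
# here are illustrative, not a real tokenizer's output.
known = ["rain" + "bow", "fire" + "fly"]   # orderings seen in training
novel = ["rain" + "fly", "fire" + "bow"]   # same tokens, new pairings

print(known)  # → ['rainbow', 'firefly']
print(novel)  # → ['rainfly', 'firebow']
```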