LLMs work by steering toward the most “average” response to any given input. That’s why everything written by an LLM looks similar and always has the same voice.
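For intuition, here’s a minimal sketch of that averaging effect, with made-up logits standing in for a real model’s next-token distribution (everything here is illustrative, not any actual model):

```python
# Illustrative only: hypothetical logits for three candidate outputs.
# Sampling from a softmax at a typical temperature concentrates almost
# all probability on the highest-scoring candidates, so separate runs
# keep landing on the same few "average" choices.
import math
import random
from collections import Counter

logits = {"password123": 5.0, "P@ssw0rd!": 4.5, "xQ9#kL2$": 1.0}

def sample(temperature: float = 0.7) -> str:
    weights = [math.exp(l / temperature) for l in logits.values()]
    return random.choices(list(logits), weights=weights, k=1)[0]

print(Counter(sample() for _ in range(1000)))
# roughly Counter({'password123': 670, 'P@ssw0rd!': 328, 'xQ9#kL2$': 2})
```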

Anyways, shockingly, the Machine That Generates the Average Output is bad at unique passwords.

Of the 50 passwords returned, only 30 were unique (20 were duplicates, 18 of them the exact same string), and the vast majority started and ended with the same characters.

Imagine that an LLM tries to fit its outputs into a bell curve of potential responses, with each character in the output pulled as close to the middle as possible (plus a small randomization factor so it’s not always the exact same string). A good password’s graph ought to be completely flat: any character is just as likely to be chosen as any other, at every position.
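A password manager does exactly that, as a quick sketch shows (standard-library Python; nothing here is specific to any particular manager):

```python
# A minimal sketch of the "flat graph": a CSPRNG-backed generator, which is
# what password managers use, picks every character with equal probability,
# so no position is biased toward the middle of any distribution.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 20) -> str:
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

passwords = [generate_password() for _ in range(50)]
print(len(set(passwords)))  # 50; duplicates are astronomically unlikely
```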

Use a password manager.

  • bobs_guns · 12 points · 22 hours ago

    Can’t believe these were marketed as zero knowledge. If a server knows the ciphertext, or even just the size of the ciphertext, that is not zero knowledge by definition.
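    For concreteness, here’s a minimal sketch of that length leak, using AES-GCM via the pyca/cryptography package (an assumption for illustration; nothing here claims to be the vault’s actual cipher):

    ```python
    # With AES-GCM, ciphertext length is plaintext length plus a fixed
    # 16-byte tag, so a server storing the ciphertext can read off the
    # exact size of the secret without ever touching the key.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=128)
    aesgcm = AESGCM(key)

    for secret in (b"hunter2", b"correct horse battery staple"):
        nonce = os.urandom(12)                    # unique per encryption
        ct = aesgcm.encrypt(nonce, secret, None)  # no associated data
        print(len(secret), len(ct))               # prints 7 23, then 28 44
    ```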

      • bobs_guns · 6 points · 20 hours ago

        And zero-knowledge proofs operate by playing a game that demonstrates you know a secret without revealing any information about it. If you transmit even the approximate length of the ciphertext, that is no longer zero knowledge.
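        For intuition about the game itself, here is a toy sketch of the Schnorr identification protocol (deliberately tiny parameters, purely illustrative; no real vault works with numbers this small):

        ```python
        # Toy Schnorr identification: prove knowledge of x (where y = g^x mod p)
        # without revealing x. Real deployments use 256-bit+ groups.
        import secrets

        p, q, g = 2039, 1019, 4   # p = 2q + 1; g generates the order-q subgroup

        x = secrets.randbelow(q)  # prover's secret
        y = pow(g, x, p)          # public key, known to the verifier

        r = secrets.randbelow(q)  # prover picks a random nonce...
        t = pow(g, r, p)          # ...and sends this commitment

        c = secrets.randbelow(q)  # verifier's random challenge

        s = (r + c * x) % q       # prover's response: r masks x completely

        # The check passes only if the prover really knew x, yet the
        # transcript (t, c, s) reveals nothing about x: that is the
        # "zero" in zero knowledge.
        assert pow(g, s, p) == (t * pow(y, c, p)) % p
        print("proof accepted")
        ```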