• JoeByeThen [he/him, they/them]@hexbear.net
    23 days ago

    ngl, I haven’t spent the time on it, but I do often wonder if they all hoovered up a bunch of haveibeenpwned-type breach datasets, and whether that, combined with all the social media data they scraped, could lead to a password guesser much more efficient than plain old brute-forcing.

    • ☆ Yσɠƚԋσʂ ☆OP
      23 days ago

      Nice! The idea that you can bias the model during training in such a way that it doesn’t focus on specific information is really fascinating.

      • KnilAdlez [none/use name]@hexbear.net
        23 days ago

        I’ve never really thought about it like that, but that’s an interesting way to think about it. The way I view differential privacy (and DP-SGD) is more about adding enough error to the output (or gradients) to guarantee someone could only be so confident in any information they extract. Just adding a mathematically proven amount of uncertainty to the output of the model.