- cross-posted to:
- technology@lemmy.ml
- lobsters@lemmy.bestiver.se
ngl, I haven't spent the time on it, but I do often wonder if they all hoovered up a bunch of Have I Been Pwned-type breach datasets. And if that, combined with all the social media data they scraped, could lead to a password guesser much more efficient than plain old brute-forcing.
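For a sense of why that beats blind brute force even without a model, here's a minimal sketch of frequency-ranked dictionary guessing. The corpus path and target hash are hypothetical placeholders; the speculated LLM twist would swap the simple frequency ranking for model-generated candidates conditioned on scraped data about the target.

```python
# Minimal sketch: rank guesses by how often they appear in a leaked-password
# corpus instead of enumerating every possible string.
# "leaked_passwords.txt" and the target hash are hypothetical placeholders.
import hashlib
from collections import Counter

def load_ranked_guesses(path: str) -> list[str]:
    """Order candidate passwords by empirical frequency in a breach dump."""
    with open(path, encoding="utf-8", errors="ignore") as f:
        counts = Counter(line.strip() for line in f if line.strip())
    return [pw for pw, _ in counts.most_common()]

def guess(target_sha256: str, candidates: list[str]) -> str | None:
    """Try the most common real-world passwords first."""
    for pw in candidates:
        if hashlib.sha256(pw.encode()).hexdigest() == target_sha256:
            return pw
    return None  # only fall back to exhaustive brute force after this fails
```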
honestly that’s quite possible
Yeah, 2FA makes it less worth exploring, but with GitHub and all the public S3 buckets out there, I'm sure there's some internal info that could be finessed out of the older models that would be useful for "bughunting." Who has the time tho. 😅
2FA is pretty much a must nowadays for anything that needs to be even remotely secure. Maybe you could use a model to interrogate another model. 🤣
If by "interrogate another model" you mean set up an automated spearphishing system, absolutely.

Oh hey, it’s my research (not this paper specifically)
Nice! The idea that you can bias the model during training in such a way that it doesn’t focus on specific information is really fascinating.
I've never really thought about it that way, but it's an interesting framing. The way I view differential privacy (and DP-SGD) is more about adding enough error to the output (or the gradients) to guarantee someone can only be so confident in any information they extract. Just adding a mathematically proven amount of uncertainty to the output of the model.
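For anyone curious, here's a minimal NumPy sketch of the DP-SGD step described above (clip each per-example gradient, add calibrated Gaussian noise, average); the `clip_norm` and `noise_multiplier` values are illustrative assumptions, not settings from any particular paper.

```python
# Minimal DP-SGD step sketch: bound each example's influence by clipping,
# then add Gaussian noise scaled to that bound (the sensitivity).
# clip_norm and noise_multiplier are illustrative, untuned assumptions.
import numpy as np

def dp_sgd_step(per_example_grads: np.ndarray,
                clip_norm: float = 1.0,
                noise_multiplier: float = 1.1,
                rng: np.random.Generator | None = None) -> np.ndarray:
    """per_example_grads has shape (batch_size, n_params)."""
    if rng is None:
        rng = np.random.default_rng()
    # 1. Clip each example's gradient so no single record has unbounded influence.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip_norm / (norms + 1e-12))
    # 2. Sum, then add Gaussian noise calibrated to the clipping bound.
    noisy_sum = clipped.sum(axis=0) + rng.normal(
        0.0, noise_multiplier * clip_norm, size=per_example_grads.shape[1])
    # 3. The averaged, noised gradient carries a provable cap on what any
    #    one training example can reveal.
    return noisy_sum / per_example_grads.shape[0]
```

That noise scale is what the (ε, δ) privacy accounting is built on, which is the "mathematically proven amount of uncertainty" part: the guarantee comes from the calibration, not from heuristics.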
I actually like your way better, but thanks. :)



