A little experiment demonstrating that a large language model like ChatGPT can not only write, but also read and judge. That, in turn, could lead to an enormous scaling-up of the number of communications that are meaningfully monitored, warns the ACLU, a civil liberties group.
The biggest ethical issues in AI/ML right now are primarily ones in which judgement is passed or facilitated via AI/ML. Judgement via surveillance is only one of many issues to be concerned about - judgement in healthcare, judgement about who gets access to resources, judgement in the legal system, and other forms of judgement are also extremely high-value, concerning targets of AI/ML. The use of these models to identify, categorize, or otherwise quantify anything is generally a bad idea because they are trained on fundamentally racist, sexist, homophobic, transphobic, ableist, ageist, and otherwise bigoted text which is a direct representation of our existing society on the internet.
The issue is technology is advancing faster than wisdom.
I think it’s quite a bit more complicated than that. The wisdom is there; I’ve been to a large number of AI/ML ethics talks in the last several years, including entire conferences, but the people putting on these conferences and the people actually creating and pushing these models don’t always overlap. Even when they do, people disagree on how these models should be implemented and on how much ethics really matters.
It’s usually more complicated than what a catchphrase could convey, but I think it’s pretty close.
Anyone can get access to pretty powerful ML with just a credit card. But it’s harder to get a handle on the ethical implications, the privacy implications, and the ways these models are inaccurate and biased. That requires caution and wisdom, which too few people have.
I know the basics in this area, probably more than the average person, but not enough to use ML safely and ethically in practical applications. So it’s probably too early to make powerful ML accessible to the general public, not without better safeguards built in.
This is not at all unique to AI. Reminds me of some of the samples from this track: https://www.youtube.com/watch?v=4Uu6mW3y5Iw (which is a sick beat), which I looked up and pasted here:
https://vocal.media/futurism/the-greada-treaty
Well stated, completely agreed.
@Gaywallet @Hirom ethics be damned, because money talks. As usual, the problem is not the technology or an inability to understand the potential issues per se, but how it all gets blatantly ignored in a get-there-first gold rush, real or imagined. We have to remember that the training of most LLMs is already very questionable from a copyright/authorship point of view, and companies try really hard to make everyone ignore that. Because the winner takes it all.