Nothing says “we care about kids” quite like using AI images of kids for your articles so you don’t have to pay them.
This is totes an actually super real picture, but I do feel bad for this kid. He’s lost a finger on one hand and regrown it on the other.
The wrong number of limbs seems like a very common AI mistake. Is that really hard to program/teach an AI system?
Yes, because we’re no longer at the point of “programming” by writing out instructions (in the context of AI models). We are legitimately moving towards training AI the way a child learns. These AI image generators are given millions upon millions of images with descriptions, then they learn the general features. So once training is done and you give it a command, it does its best to create something from what it learned. And right now these AIs don’t reliably get the number of hands or fingers right, mainly because very few training images actually show clear views of hands: most hands will be at an angle, sideways, or partly hidden. The model also has no concept of how many fingers a person is supposed to have. It’s actually an incredibly good image otherwise.
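To make that concrete, here’s a toy sketch of what “training” means here. This is my own illustration, not the code of any real generator, and every number and layer size in it is made up: the model is just shown caption/image pairs and its weights get nudged so its output pixels look more like the examples. Nowhere does anyone write a rule like “a hand has five fingers.”

```python
# Toy sketch of caption-conditioned training (all sizes and data invented).
import torch
import torch.nn as nn

CAPTION_DIM, IMAGE_PIXELS = 64, 32 * 32   # made-up toy dimensions

model = nn.Sequential(                     # hypothetical tiny "generator"
    nn.Linear(CAPTION_DIM, 256),
    nn.ReLU(),
    nn.Linear(256, IMAGE_PIXELS),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in dataset: random "caption embeddings" and random "images".
# A real system would use millions of real photo/description pairs.
captions = torch.randn(1000, CAPTION_DIM)
images = torch.rand(1000, IMAGE_PIXELS)

for epoch in range(5):
    prediction = model(captions)           # guess pixels from the caption
    loss = loss_fn(prediction, images)     # how far off was the guess?
    optimizer.zero_grad()
    loss.backward()                        # nudge weights toward the examples
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```

The model only ever learns “outputs like this tend to go with captions like that,” which is why anything rarely shown clearly in the data, like a full spread hand, comes out mangled.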
It looks far too realistic for anyone’s comfort. The fact that people are just shrugging their shoulders at a near-photorealistic AI image being used for blatant political propaganda is a deeply serious problem.
What’s going to happen when some smartass uses these AIs to generate pictures of a police murder of a Black man that never actually happened? Will anyone question the photo, or what they’re doing, when they rise up exactly when and where whoever made the deepfake wanted them to? Will they care about the truth then, or only when they’re getting run over by tanks because they were too stupid, emotional, and shortsighted to see they were falling into an obvious trap?
What goes next to a finger? Another finger. So they draw fingers next to fingers and don’t always stop at the right number.
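That’s roughly the failure mode. A quick toy illustration (the probability here is invented, and real generators are far more complicated than this): if a model only knows the local pattern “a finger usually sits next to another finger” and has no global rule that a hand stops at five, the count it lands on drifts.

```python
# Toy illustration of local continuation with no global count constraint
# (the probability is made up; real image models do not literally work this way).
import random
from collections import Counter

P_ANOTHER_FINGER = 0.78   # invented odds that "another finger" comes next

def draw_hand() -> int:
    fingers = 1
    while random.random() < P_ANOTHER_FINGER and fingers < 10:
        fingers += 1          # nothing here says "stop at five"
    return fingers

counts = Counter(draw_hand() for _ in range(10_000))
print(sorted(counts.items()))  # counts cluster around 4-6, not exactly 5 every time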
https://youtu.be/24yjRbBah3w?si=ZSc3WDWyK6gM2aLJ
Best comment in thread
this isn’t an article, it’s a forum post on the interweb