Wow, that’s a little too impressive. I’m guessing that image (or each of the individual images in it) was probably in its training set. There are open training sets of adversarial images, and these images may have come from them. Every time I’ve tried to use ChatGPT with images it has kinda failed (on electronic schematics, plant classification, images of complex math equations, etc.). I’m kind of surprised OpenAI doesn’t just offload some tasks to purpose-built models (an OCR model or a classifier like iNaturalist’s would’ve performed better in some of my tests).
This exact image (without the caption header, of course) was on one of the slides for a machine-learning-related course at my college, so it’s definitely out there somewhere and was likely part of the training sets used by OpenAI. The image in those slides also has a different watermark at the bottom left, so it’s fair to assume it’s made the rounds.
Contrary to this post, it was used as an example of a problem that machine learning can solve far better than any algorithm humans would come up with.
Confuse AI? Fuck, I’m confused…wait…OMG…am I AI!!!
Absolutely human. I even removed the context text and it didn’t even flinch.
“Similar color” yep seems right, “…and texture” wait what?
idk what kind of muffin they’re feeding this AI, but it sure is a hairy one
Cromch
You don’t know until you try.
Did people make you? If so, yes!