Honestly, this doesn’t sound like something they can’t just train the model to recognize and account for; at best it’s a short-term roadblock.
It’s not that easy: even if the neural network is trained to recognize poisoned images, you would still need to remove the poisoning perturbation from the image to categorize it properly. Without the original, non-poisoned image or human intervention, that’s going to be exceedingly hard.
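To make the distinction concrete, here’s a minimal sketch (every name in it is hypothetical): a detector lets you filter poisoned images out of a training set, but all that buys you is discarding them; nothing here reconstructs the clean original.

    # Assumes a hypothetical `detector` callable that returns the
    # probability that an image carries a poisoning perturbation.
    def filter_training_set(images, detector, threshold=0.5):
        kept = []
        for img in images:
            if detector(img) < threshold:
                kept.append(img)
            # A flagged image is simply dropped: without the original,
            # non-poisoned pixels there is no general way to subtract
            # the perturbation and reuse the image.
        return kept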
This is going to be an arms race, but luckily the AI has to find the few correct answers in a large pool of possibilities, whereas the poison only has to avoid producing the correct ones. That asymmetry, combined with the effort of retraining the models every time a new version of the poison appears, should keep the balance on the artists’ side, at least for a while.