Previous Lemmy.ml post: https://lemmy.ml/post/1015476 Original X post (at Nitter): https://nitter.net/xwang_lk/status/1734356472606130646
My impression is that game AI (and I mean in FPSes, not board games) was not considered serious AI in the computer-science sense. Most game AI, even to this day, is "cheating" in the sense that it is not end-to-end (i.e., it cannot operate from screen capture and instead reads engine-internal information), and it often needs additional advantages to hold its ground. For example, virtually all of these FPS game AIs are quite useless once you actually want to interface them with some form of robotics and do open-world exploration. So game AI sits somewhat apart from the public's obsession with the term AI, an obsession that suddenly turned nit-picky and goalpost-moving once AI became performant on end-to-end tasks.
The Wikipedia article on the AI effect (not super-polished) has many good references in which people discuss how this relates to anthropocentrism; people can also be very pushy with that view in the context of animal cognition.
Note that there is also a similar effect, not explicitly discussed in that article, where people like to depict ancient societies as dumber than they actually were (e.g., the now-discredited notion of the "Dark Ages").
The purpose of game AI is to make games fun, not to advance serious research, but it certainly is real AI. Making computers play chess was once a subject of much serious research, and AI opponents in video games are not fundamentally different from that.
As humans, we have an unfortunate tendency to aggrandize our own group and denigrate others. I see anthropocentrism as just one aspect of that, alongside nationalism, racism, and the like. This psychological goal could be achieved equally well by saying things like: "This is not real intelligence. It's just artificial, like game AI."
But I don’t see that take being made. I only see pseudo-smart assertions about how AI is just a marketing term.
I think anthropocentrism may have something to do with why the idea of "emergent abilities" (as step-changes in performance as parameters scale) is alluring. We like to believe that we are categorically different from animals; or at least, that is the traditional belief in many Western cultures. We now know, though, that the brain does the thinking, and that human and other mammalian brains differ only in degree, not in kind. If you believed in some categorical difference between animals and humans, you would expect to find step-changes of that sort. Personally, I would find it nice if I could believe that somewhere along that continuum between animal and human brains, something goes click and makes it okay to eat them.