• 1 Post
  • 14 Comments
Joined 11 months ago
Cake day: July 26th, 2023



  • The purpose of game AI is to make games fun, not to advance serious research, but it certainly is real AI. Making computers play chess was a subject of much serious research. AI opponents in video games are not fundamentally different from that.

    As humans, we have an unfortunate tendency to aggrandize our own group and denigrate others. I see anthropocentrism as just one aspect of that, besides nationalism, racism, and the like. That psychological goal could be achieved equally well by saying things like: “This is not real intelligence. It’s just artificial, like game AI.”

    But I don’t see that take being made. I only see pseudo-smart assertions about how AI is just a marketing term.


    I think anthropocentrism may have something to do with why the idea of “emergent abilities” (as step-changes in performance/parameters) is alluring. We like to believe that we are categorically different from animals; or at least, that is the traditional belief in many Western cultures. We now know, though, that the brain does the thinking, and that human and other mammal brains differ only in degree, not in kind. If you believe in some categorical difference between animals and humans, you would expect to find step-changes of that sort. Personally, I would find it nice if I could believe that, somewhere along the continuum between animal and human brains, something goes click and makes it okay to eat them.


  • You were never into video games, right? The reason I ask is that games use a lot of AI. One might see “AI” in the game settings, or, if the game has some editing tool/level builder/…, one might see it there. If one takes an interest, one might pick up on people talking about the AI of one game or another.

    I am always surprised when I hear people say that LLMs are too simple to be real AI, because most people who grew up in the last ~20 years will have interacted a lot with these much simpler game AIs (see the toy sketch below for a sense of how simple). I would have thought that this knowledge would diffuse to parents and peers.

    Non-rhetorical question: Any idea why that didn’t happen?
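
    To make “game AI” concrete: much of classic game AI is simple, hand-written decision logic, such as a finite state machine controlling an NPC. A toy sketch in Python (all names, states, and distances are made up, just to show the level of complexity involved):

    ```python
    # Toy finite-state-machine "AI" for a guard NPC, of the kind shipped in
    # countless games. Purely illustrative; states and thresholds are invented.
    class GuardAI:
        def __init__(self):
            self.state = "patrol"

        def update(self, distance_to_player: float) -> str:
            if self.state == "patrol" and distance_to_player < 10:
                self.state = "chase"
            elif self.state == "chase" and distance_to_player < 2:
                self.state = "attack"
            elif self.state in ("chase", "attack") and distance_to_player > 15:
                self.state = "patrol"
            return self.state

    guard = GuardAI()
    for d in (20, 8, 1.5, 30):
        print(f"player at {d:>4} m -> guard state: {guard.update(d)}")
    ```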


  • It’s not the definition in the paper. Here is the context:

    The idea of emergence was popularized by Nobel Prize-winning physicist P.W. Anderson’s “More Is Different”, which argues that as the complexity of a system increases, new properties may materialize that cannot be predicted even from a precise quantitative understanding of the system’s microscopic details.

    What this means is that we cannot, for example, predict chemistry from physics. Physics studies how atoms interact, which yields important insights for chemistry, but physics cannot be used to predict, say, the table of elements. Each level has its own laws, which must be derived empirically.

    LLMs obviously show emergence. Knowing the mathematical, technological, and algorithmic foundations tells you little about how to use (prompt, train, …) an AI model. Just like knowing cell biology will not help you interact with people, even though they are only colonies of cells working together.

    The paper talks specifically about “emergent abilities of LLMs”:

    The term “emergent abilities of LLMs” was recently and crisply defined as “abilities that are not present in smaller-scale models but are present in large-scale models; thus they cannot be predicted by simply extrapolating the performance improvements on smaller-scale models”

    The authors further clarify:

    In this paper, […] we specifically mean sharp and unpredictable changes in model outputs as a function of model scale on specific tasks.

    Bigger models perform better: an increase in the number of parameters correlates with an increase in performance on tests. It had been alleged that some abilities appear suddenly, for no apparent reason. These “emergent abilities of LLMs” are a very specific kind of emergence.
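
    To make that definition concrete, here is a toy illustration with entirely made-up numbers: on one task, accuracy improves smoothly as the parameter count grows; on another, it sits near chance and then jumps at some scale. That jump is what is meant by an “emergent ability.”

    ```python
    # Hypothetical numbers, only to illustrate the shape of the claim:
    # smooth scaling on one task vs. a sharp, unpredictable jump on another.
    model_sizes = [1e8, 1e9, 1e10, 1e11, 1e12]    # parameters
    smooth_task = [0.30, 0.42, 0.55, 0.68, 0.80]  # improves predictably with scale
    jumpy_task  = [0.02, 0.03, 0.04, 0.55, 0.78]  # near chance, then a sudden step

    for n, a, b in zip(model_sizes, smooth_task, jumpy_task):
        print(f"{n:.0e} params | smooth task: {a:.2f} | 'emergent' task: {b:.2f}")
    ```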


  • I think a big-picture view makes the problem clearer.

    Licensing material means that you must pay the owner of some intellectual property. If we expand copyright to require licensing for AI training, then the owners can demand more money for no additional work.

    Where does the wealth come from that flows to the owners? It comes from the people who work. There is nowhere else it could possibly come from.

    That has some implications.

    Research and development progress more slowly because we not only have to work on improving things, but also have to pay off property owners who contribute nothing. If you zoom in from the big-picture view, you find that this is where small devs and open source suffer: they have to pay, or create their own new datasets; extra work for no extra benefit.

    It also means that inequality increases: the extra cash flow directs more income to certain property owners.


  • It’s possible to run small AIs on gaming PCs. For Stable Diffusion and small LLMs (7B, maybe 13B), a GPU with 4 GB (or even 2 GB?) of VRAM is sufficient; see the sketch below. A high-end gaming PC can also be used to modify them (i.e. make LoRAs, etc.). Cloud computing is quite affordable, too.

    Stable Diffusion, which had such an impact, reportedly cost only 600k USD to train. It should be possible to make a new one for a fraction of that today. Training MPT-7B reportedly cost MosaicML about 200k USD. Far from hobbyist money, but not big business, either.
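
    As a sketch of what running a small LLM on a gaming GPU can look like (the model ID is only an example, and this assumes the Hugging Face transformers and bitsandbytes libraries plus a CUDA GPU), a ~7B model can be loaded in 4-bit so its weights fit into a few GB of VRAM:

    ```python
    # Sketch: load a ~7B LLM in 4-bit so it fits in consumer-GPU VRAM.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

    model_id = "mistralai/Mistral-7B-v0.1"  # example checkpoint, not a recommendation

    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,                     # roughly 4 GB of weights instead of ~14 GB in fp16
        bnb_4bit_compute_dtype=torch.float16,
    )

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        quantization_config=bnb_config,
        device_map="auto",                     # spill layers to CPU RAM if VRAM runs short
    )

    inputs = tokenizer("Small models on gaming PCs:", return_tensors="pt").to(model.device)
    print(tokenizer.decode(model.generate(**inputs, max_new_tokens=40)[0]))
    ```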


  • So acquiring and distributing pirated materials like college textbooks and otherwise expensive software is one example.

    That’s an interesting example, because in threads on AI lawsuits there are many calls for expanding intellectual property, without any consideration of the public benefit. It’s such an outright doubling down on all the pathological aspects of capitalism. It made me look into whether there are any equally concrete demands going the other way, and eventually to make this post.