• ☆ Yσɠƚԋσʂ ☆OP
    20 days ago

    I very much agree with all that. Basically, the current approach can do neat party tricks, but to do anything really useful you’d have to do things like build a world model and allow the system to learn adaptively on the fly. People are trying to skip really important steps and jump straight from having an inference engine to AGI, and that’s bound to fail. The analogy between LLMs and short-term memory is very apt: it’s an important component of the overall cognitive system, but it’s not the whole thing.

    Regarding the problems associated with this tech, I largely agree as well, although I’d argue that the root problem is capitalism as usual. Of course, since we do live under capitalism, and that’s not changing in the foreseeable future, this tech will continue to be problematic. It’s hard to say at the moment how much influence this stuff will really have in the end. We’re kind of in uncharted territory right now, and it’ll probably become clearer in a few years.

    I do think that, at the very least, this tech needs to be open. The worst-case scenario is that we end up with megacorps running closed models trained in opaque ways.