☆ Yσɠƚԋσʂ ☆ (OP) · 3 months ago

Using a combination of specialized systems is definitely a viable approach, but I think there's a more fundamental issue that needs to be addressed. The main difference between humans and AI when it comes to decision making is that you can ask a person why they made a certain choice in a given situation. That makes it possible to correct wrong decisions and guide them towards better ones. With AI it's not that simple, because there's no shared context or intuition for how to interact with the physical world. AI lacks the intuition about how the physical world behaves that humans develop by interacting with it from the day we're born, and that intuition forms the basis of understanding in the human sense. As a result, AI has no capacity for genuinely understanding the tasks it's accomplishing or for making informed decisions about them.

To ensure machines can operate safely in the physical world and interact effectively with humans, we'd need to follow a process similar to human child development. That means training through embodiment and constructing an internal world model that lets the AI develop an intuition for how objects behave in the physical realm, and only then teaching it language within that context. What we're doing with LLMs is completely backwards in my opinion: we feed them a huge amount of text, and they figure out relationships within that text, but none of it is anchored to the physical world in any way.
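
To make the ordering concrete, here's a rough sketch of what I mean in PyTorch. To be clear, this is just an illustration and not any existing system: the `WorldModel` and `LanguageHead` classes, their layer sizes, and the two-stage training idea are assumptions I'm making up for the example.

```python
import torch
import torch.nn as nn

class WorldModel(nn.Module):
    """Predicts the next observation from the current observation and an action."""
    def __init__(self, obs_dim, act_dim, latent_dim=64):
        super().__init__()
        self.encode = nn.Sequential(nn.Linear(obs_dim, latent_dim), nn.Tanh())
        self.dynamics = nn.Linear(latent_dim + act_dim, latent_dim)
        self.decode = nn.Linear(latent_dim, obs_dim)

    def forward(self, obs, action):
        z = self.encode(obs)                                   # latent state
        z_next = self.dynamics(torch.cat([z, action], dim=-1)) # predicted next latent
        return self.decode(z_next)                             # predicted next observation

class LanguageHead(nn.Module):
    """Maps the world model's latent state to token logits, grounding language in it."""
    def __init__(self, latent_dim, vocab_size):
        super().__init__()
        self.to_tokens = nn.Linear(latent_dim, vocab_size)

    def forward(self, latent):
        return self.to_tokens(latent)
```

The point is the staging: stage one trains `WorldModel` on (observation, action, next observation) transitions collected by an embodied agent, and only then does stage two train `LanguageHead` on utterances paired with the latent states the world model has already learned, so the language is anchored to something.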

The model needs to be trained to interact with the physical world through reinforcement, so that it builds an internal representation of the world similar to our own. That would give us a shared context we can use to communicate with the AI, and it would have an actual understanding of the physical world grounded in its own experience. It's hard to say whether current LLM approaches are flexible enough to support this sort of world model, so we'll have to wait and see what the ceiling for this stuff is. I do think we'll figure it out eventually, but we may need more insight into how the brain works before that happens.
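
And here's an equally rough sketch of what that interaction loop could look like, using gymnasium's CartPole as a stand-in for a real embodied environment and the hypothetical `WorldModel` class from the earlier sketch. The reinforcement part would come from a learned, reward-driven policy; I've left it as a random-action placeholder to keep the example short.

```python
import gymnasium as gym
import torch
import torch.nn.functional as F

env = gym.make("CartPole-v1")
obs_dim = env.observation_space.shape[0]   # 4 state variables for CartPole
act_dim = env.action_space.n               # 2 discrete actions
model = WorldModel(obs_dim, act_dim)       # hypothetical class from the sketch above
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

obs, _ = env.reset()
for step in range(10_000):
    action = env.action_space.sample()     # placeholder for a learned, reward-driven policy
    next_obs, reward, terminated, truncated, _ = env.step(action)

    # Learn the environment's dynamics: predict the next observation and
    # minimise the prediction error. This is the internal world representation.
    pred = model(torch.tensor(obs, dtype=torch.float32),
                 F.one_hot(torch.tensor(action), act_dim).float())
    loss = F.mse_loss(pred, torch.tensor(next_obs, dtype=torch.float32))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    obs = next_obs
    if terminated or truncated:
        obs, _ = env.reset()
```

Obviously a real embodied agent would be dealing with vision, proprioception, and a far richer action space than this toy, but the loop is the same shape: act, observe, and keep refining the internal model of how the world responds.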