• ☆ Yσɠƚԋσʂ ☆OP
    1 month ago

    I think we have to be careful with assumptions here. The human brain is incredibly complex, but it evolved organically under selection pressures that weren’t strictly selecting for intelligence. We shouldn’t assume that the full complexity of our brain is a prerequisite for intelligence. The underlying algorithm may be fairly simple, with the complexity we see being an emergent phenomenon of scaling it up to the size of our brain.

    We also know that animals with much smaller brains, like corvids, can exhibit impressive feats of reasoning, which strongly suggests that their brains are wired more efficiently than primate brains. I imagine part of the reason is that they need to fly, which creates additional selection pressure for efficient wiring packed into a smaller, lighter brain. Even insects like bees can perform fairly sophisticated cognitive tasks, such as mapping out their environment and communicating with one another. Perhaps that’s where we should really be focusing our studies: a bee brain has around a million neurons, which is a far more tractable problem to tackle than the human brain.
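    To put “more tractable” into rough numbers, here’s a back-of-envelope sketch in Python. The synapse count and bytes-per-weight figures are loose assumptions, only meant to show the orders of magnitude involved:

    ```python
    # Rough scale comparison: bee brain vs. human brain.
    # Assumed figures: ~1e6 neurons (bee), ~8.6e10 neurons (human),
    # ~1,000 synapses per neuron, 4 bytes (one float32 weight) per synapse.
    bee_neurons = 1_000_000
    human_neurons = 86_000_000_000
    synapses_per_neuron = 1_000
    bytes_per_synapse = 4

    def weight_storage_gb(neurons):
        """Naive memory needed just to store the synaptic weights."""
        return neurons * synapses_per_neuron * bytes_per_synapse / 1e9

    print(f"bee brain:   ~{weight_storage_gb(bee_neurons):.0f} GB of weights")
    print(f"human brain: ~{weight_storage_gb(human_neurons):,.0f} GB of weights")
    print(f"scale gap:   ~{human_neurons // bee_neurons:,}x more neurons")
    ```

    Even with generous assumptions, a bee-scale model fits in the memory of a single consumer GPU, while the human-scale one doesn’t fit on any single machine.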

    Another interesting thing to note is that human brains have massive amounts of redundancy. There’s a documented case of a man who was missing roughly 90% of his brain and still lived a largely normal life. So even when it comes to human-style intelligence, the scope of the problem looks significantly smaller than it might first appear.

    I’d argue that embodiment is the key to establishing a reinforcement loop, and that robotics will be the path toward creating genuine AI. An organism’s brain maintains homeostasis by constantly balancing internal body signals against signals from the external environment, making decisions that regulate its internal state. That continuous feedback loop lets the brain evaluate the usefulness of its actions, which is exactly what reinforcement learning needs. An embodied AI could use the same mechanism to learn about and interact with the world effectively. A robot builds an internal world model from its interactions with the environment, and that model acts as the basis for its decision making. Such a system develops underlying representations of the world that are fundamentally similar to our own, which would provide a basis for meaningful communication.
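    As a concrete illustration of that loop, here’s a minimal sketch of homeostasis-driven reinforcement learning. Everything in it (the actions, the “energy” variable, the numbers) is an invented toy, not an actual robotics API; the point is just that the reward signal comes from how well the agent keeps its internal state near a set point:

    ```python
    import random

    SET_POINT = 0.5                 # desired internal "energy" level
    ACTIONS = ["forage", "rest"]
    alpha, epsilon = 0.1, 0.2       # learning rate, exploration rate

    # Q-values keyed on a crude internal state: is energy below or above the set point?
    q = {(s, a): 0.0 for s in ("low", "high") for a in ACTIONS}

    def internal_state(energy):
        return "low" if energy < SET_POINT else "high"

    energy = 0.5
    for step in range(5000):
        state = internal_state(energy)

        # epsilon-greedy action selection
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])

        # toy "embodiment": actions change the internal state
        if action == "forage":
            energy += random.uniform(0.0, 0.1) - 0.03  # find food, burn some energy
        else:
            energy -= 0.02                             # resting slowly drains energy
        energy = min(max(energy, 0.0), 1.0)

        # reward = how well homeostasis is maintained (closer to the set point is better)
        reward = -abs(energy - SET_POINT)

        # simple incremental value update for the state/action pair just taken
        q[(state, action)] += alpha * (reward - q[(state, action)])

    print(q)   # foraging should look better when energy is low, resting when it's high
    ```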

    • Monk3brain3 [any, he/him]@hexbear.net
      1 month ago

      You make a lot of good points. The one thing I’d add is that embodied AI is an interesting direction, though I’m a bit of a sceptic because robots and the other hardware that AI is being developed on lack the biological plasticity we see in living creatures. That might end up pushing AI development toward incorporating biological systems (with all the ethical issues that go with that).

      • ☆ Yσɠƚԋσʂ ☆OP
        1 month ago

        That’s something we’ll have to see to know for sure, but personally I don’t think a biological substrate is fundamental to the patterns of our thoughts. Neural networks in a computer have a similar kind of plasticity, because the connection weights are continually adjusted through training. They’re less efficient than biological networks, but there are already analog chips being made that express neuron potentials directly in hardware. It’s also worth noting that we won’t necessarily create intelligence like our own. This might be the closest we’ll get to meeting aliens. :)
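        To make the plasticity analogy concrete, here’s a tiny sketch of an artificial neuron “rewiring” itself: its connection weights shift in response to experience (training data), which is the in-silico counterpart of synapses strengthening and weakening. It’s purely a conceptual toy, not a claim that this matches biological plasticity:

        ```python
        import numpy as np

        # A single artificial neuron learning the OR function.
        rng = np.random.default_rng(0)
        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
        y = np.array([0.0, 1.0, 1.0, 1.0])

        w = rng.normal(size=2)   # connection weights: the "synapses"
        b = 0.0
        lr = 0.5

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        for epoch in range(2000):
            pred = sigmoid(X @ w + b)
            err = pred - y
            # the "plasticity": each weight shifts in proportion to the error it contributed to
            w -= lr * X.T @ err / len(X)
            b -= lr * err.mean()

        print(np.round(sigmoid(X @ w + b), 2))   # approaches [0, 1, 1, 1]
        ```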

        I suspect that the next decade will be very interesting to watch.