• GrouchyGrouse [he/him]@hexbear.net · 40 points · 2 months ago

    I got this mental image of a bunch of guys trying to invent flight before the Wright Brothers. They’ve got this wingless prototype that shoots off some giant ramp. No matter how big the ramp, it never achieves flight. It goes up and comes back down. And these scientists are just chain-smoking, pounding black coffee by the pot, pulling all-nighters, trying to come up with a bigger ramp. They bulldoze the whole fucking planet to make the ramp. Now we’re Planet Ramp. The fucking prototype still won’t fly.

  • SwitchyandWitchy [she/her]@hexbear.net (mod) · 24 points · 2 months ago

    I understand that here, and in many of the irl circles I roll in, this statement is taken for granted, but it’s still really nice to have research that backs it up too.

  • Soot [any]@hexbear.net · 17 points · 2 months ago

    Cutting-edge research proves that AI isn’t as smart as humans… well, yeah, my 3-year-old knows that.

    • ☆ Yσɠƚԋσʂ ☆ (OP) · 23 points · 2 months ago

      You don’t need to fully understand a mechanism to replicate its function. We frequently treat systems as black boxes and focus entirely on the output. Also, consider that nature has zero ‘understanding’ of intelligence, yet it managed to produce the human brain through blind mutation shaped strictly by selection pressures. Clearly, comprehension is not a prerequisite for creation. We can mimic the process using genetic algorithms and biologically inspired neural networks.

      In fact, we often gain understanding through the attempt to replicate. For instance, reverse engineering these structures is exactly how we learned that language isn’t the basis of intelligence in the first place. We don’t need a perfect theory of mind to build a system that works. All this shows is that the LLM approach has limits and isn’t going to lead to any sort of general intelligence on its own.
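      To make the ‘blind mutation plus selection’ point concrete, here’s a minimal neuroevolution sketch: a throwaway genetic algorithm that evolves the weights of a tiny network to fit XOR, with no gradients and no understanding of the solution. All the details (network size, population, mutation scale) are arbitrary choices for the illustration, not a description of any real system.

      ```python
      # Evolve a tiny 2-2-1 network to approximate XOR using only random
      # mutation and selection; no gradient descent, no model of the problem.
      import numpy as np

      rng = np.random.default_rng(0)
      X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
      y = np.array([0, 1, 1, 0], dtype=float)

      def forward(params, x):
          w1, b1, w2, b2 = params
          h = np.tanh(x @ w1 + b1)                  # hidden layer
          return 1 / (1 + np.exp(-(h @ w2 + b2)))   # output in (0, 1)

      def fitness(params):
          return -np.mean((forward(params, X).ravel() - y) ** 2)  # higher is better

      def random_params():
          return [rng.normal(size=(2, 2)), rng.normal(size=2),
                  rng.normal(size=(2, 1)), rng.normal(size=1)]

      def mutate(params, scale=0.3):
          return [p + rng.normal(scale=scale, size=p.shape) for p in params]

      population = [random_params() for _ in range(50)]
      for generation in range(300):
          population.sort(key=fitness, reverse=True)
          survivors = population[:10]                        # selection pressure
          children = [mutate(survivors[rng.integers(10)])    # blind mutation
                      for _ in range(40)]
          population = survivors + children

      best = max(population, key=fitness)
      print(np.round(forward(best, X).ravel(), 2))  # usually close to [0, 1, 1, 0]
      ```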

      • Monk3brain3 [any, he/him]@hexbear.net · 8 points · 2 months ago

        Yeah, I agree with you. A better way to make my point would be to say that trying to replicate something as insanely complex as intelligence will require a much more thorough understanding of how it works. Nature took billions of years to pull it off, and only one species reached a high level of intelligence (from our perspective, at least).

        • GrouchyGrouse [he/him]@hexbear.net · 13 points · 2 months ago

          The whole thing reeks of “cart before the horse” and always has. It bleeds into every facet of the project, right down to demanding energy outputs we don’t have yet.

        • ☆ Yσɠƚԋσʂ ☆ (OP) · 2 points · 2 months ago

          I think we have to be careful with assumptions here. The human brain is incredibly complex, but it evolved organically to do what it does under selection pressures that weren’t strictly selecting for intelligence. We shouldn’t assume that the complexity of our brain is a prerequisite. The underlying algorithm may be fairly simple, and the complexity we see may just be an emergent phenomenon from scaling it up to the size of our brain.

          We also know that animals with much smaller brains, like corvids, can exhibit impressive feats of reasoning. That strongly suggests their brains are wired more efficiently than primate brains. I imagine part of the reason is that they need to fly, which creates additional selection pressure for efficient wiring and smaller brains. Even insects like bees can perform fairly complex cognitive tasks, like mapping out their environment and communicating with one another. And perhaps that’s where we should really be focusing our studies: a bee brain has around a million neurons, and that’s a far more tractable problem to tackle than the human brain.

          Another interesting thing to note is that human brains have massive amounts of redundancy. There’s a case of a guy who was effectively missing 90% of his brain and was living a normal life. So, even when it comes to human-style intelligence, it looks like the scope of the problem is significantly smaller than it might first appear.

          I’d argue that embodiment is the key feature in establishing a reinforcement loop, and that robotics will be the path toward creating genuine AI. An organism’s brain maintains homeostasis by constantly balancing internal body signals against signals from the external environment, making decisions to regulate its internal state. It’s a continuous feedback loop that allows the brain to evaluate the usefulness of its actions, which facilitates reinforcement learning. An embodied AI could use the same mechanism to learn about and interact with the world effectively. A robot builds an internal world model from its interactions with the environment, and that model acts as the basis for its decision making. Such a system develops underlying representations of the world that are fundamentally similar to our own, and that would provide a basis for meaningful communication.
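          To make the homeostasis-as-reward idea concrete, here’s a minimal sketch of the loop being described: an agent acts, its internal state changes, and the reward is simply how close that state stays to a set point. The environment, the single ‘energy’ variable, and the action effects are all invented for the illustration.

          ```python
          # Sketch of an embodied reinforcement loop driven by homeostasis.
          # The "body" has one internal variable (energy) that drains over time;
          # reward is how close energy stays to a set point, so the agent learns
          # which actions keep its internal state regulated. Details are made up.
          import random

          SET_POINT = 0.7
          ACTIONS = ["forage", "rest", "explore"]
          EFFECT = {"forage": +0.15, "rest": +0.05, "explore": -0.10}  # hypothetical

          values = {a: 0.0 for a in ACTIONS}   # crude learned model of action value
          energy = SET_POINT

          def reward(level):
              # Homeostatic reward: best when the internal state sits at the set point.
              return -abs(level - SET_POINT)

          for step in range(1000):
              # Mostly exploit what the model has learned, occasionally explore.
              if random.random() < 0.1:
                  action = random.choice(ACTIONS)
              else:
                  action = max(values, key=values.get)

              # Body + environment dynamics: metabolic drain plus the action's effect.
              energy = min(1.0, max(0.0, energy - 0.05 + EFFECT[action]))

              # Evaluate the outcome against the internal state and update the model.
              r = reward(energy)
              values[action] += 0.1 * (r - values[action])

          print(values)  # actions that keep energy near the set point score highest
          ```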

          • Monk3brain3 [any, he/him]@hexbear.net · 1 point · 2 months ago

            You make a lot of good points that I think are all valid. Embodied AI is an interesting idea. The only thing I’m a bit of a sceptic on is that robots and the other hardware on which AI is being developed lack the biological plasticity we have in living creatures. That might lead to the incorporation of biological systems in AI development (and all the ethical issues that go with that).

            • ☆ Yσɠƚԋσʂ ☆ (OP) · 2 points · 2 months ago

              That’s something we’ll have to see to know for sure, but personally I don’t think the biological substrate is fundamental to the patterns of our thoughts. Neural networks within a computer have a similar kind of plasticity, because the connections within the network are adjusted through training. They’re less efficient than biological networks, but there are already analog chips being made that express neuron potentials in hardware. It’s worth noting that we won’t necessarily create intelligence like our own either. This might be the closest we’ll get to meeting aliens. :)

              I suspect that the next decade will be very interesting to watch.
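              Purely as an illustration of the ‘plasticity through training’ point above, here’s a toy delta-rule update: the artificial analogue of a synapse changing strength is just a weight being nudged whenever the output misses its target. The data, learning rate, and network size are arbitrary.

              ```python
              # A single unit whose "synapses" (weights) adapt with experience.
              import numpy as np

              rng = np.random.default_rng(1)
              w = rng.normal(size=3)            # three connections, initially random

              # Hypothetical experience: the unit should respond to the first input only.
              inputs = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 1.0]])
              targets = np.array([1.0, 0.0])

              for epoch in range(200):
                  for x, t in zip(inputs, targets):
                      y = 1 / (1 + np.exp(-w @ x))   # current response
                      w += 0.5 * (t - y) * x         # connections shift toward the target

              print(np.round(w, 2))  # the first weight grows; the others are pushed down
              ```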

    • BodyBySisyphus [he/him]@hexbear.net · 22 points · 2 months ago

      Because LLMs aren’t useful enough to be profitable, and the investments companies are making in infrastructure only make sense if they represent a viable stepping stone toward AGI. If LLMs are a dead end, a lot of money may be about to go up in smoke.

      The other problem is that they are mainly good at creating the illusion that they work well, and the main barrier to implementation, the tendency to hallucinate, may not be fixable.

      • Kefla [she/her, they/them]@hexbear.net · 23 points · 2 months ago

        Of course it isn’t fixable, and I’ve been saying this since like 2021. Hallucination isn’t a bug that mars their otherwise stellar performance; hallucination is the only thing they do. Since nothing they generate is founded on any sort of internal logic, everything they generate is hallucination, even the parts that coincidentally line up with reality.
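        For what it’s worth, the mechanical reason this framing holds up: a language model only ever samples the next token from a distribution conditioned on the text so far; there’s no separate step where a claim gets checked against anything. A toy sketch of that loop, where the ‘model’ is just a stand-in returning made-up probabilities:

        ```python
        # Toy autoregressive generation loop. The "model" below is a stand-in
        # that returns arbitrary next-token probabilities; the structural point
        # is that generation is sample, append, repeat, with no step that
        # grounds the output in facts. True and false text come out the same way.
        import random

        VOCAB = ["the", "capital", "of", "atlantis", "is", "poseidonia", "."]

        def fake_next_token_probs(context):
            # Stand-in for a trained model: some distribution over the vocabulary.
            weights = [random.random() for _ in VOCAB]
            total = sum(weights)
            return [w / total for w in weights]

        def generate(prompt, max_tokens=10):
            tokens = prompt.split()
            for _ in range(max_tokens):
                probs = fake_next_token_probs(tokens)
                next_token = random.choices(VOCAB, weights=probs, k=1)[0]
                tokens.append(next_token)   # append and continue; nothing is verified
                if next_token == ".":
                    break
            return " ".join(tokens)

        print(generate("the capital of atlantis is"))
        ```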