• 0 Posts
  • 160 Comments
Joined 1 year ago
Cake day: July 1st, 2023




  • I mean, that would be terrible.

    Microgravity is horrible for the human body. Bones lose density, the heart enlarges, and all muscles atrophy. It takes months or years to recover from prolonged microgravity. They were only supposed to be up there for like a week, and at that duration none of that is an issue.

    I’m sure if they really wanted to come down, NASA would put them on a different capsule and get them down, but that just means 2 other people, who’ve been up there longer than them, have to stay in microgravity for even longer than they planned.



  • Zron@lemmy.world to Science Memes@mander.xyz · Here kitty kitty · 2 months ago

    Too bad it peaked 2000 years ago.

    I know it’s kind of a meme, but Diogenes was really onto something. Don’t keep what you do not need; how can someone be respected as a person if they depend on servants; a wealthy ruler is no different from a slave once they’ve died; etc.


  • Zron@lemmy.world to Science Memes@mander.xyz · Mushroom Guides · 2 months ago

    He also wandered into the Alaskan wilderness with basically just a sack of rice and a .22 LR rifle.

    He was a couple miles from safety the entire time, but he hadn’t bought a map, so he believed he was stranded when the river rose and cut off the main trail. In fact there was another trail, with a raised cable crossing over the river, a few miles upstream.

    He was totally unprepared and essentially just committed extended suicide. The fact that he remembered some basic tips from a Boy Scout handbook doesn’t mean he was an expert. Kid was an idiot who got in way over his head.





  • I didn’t bring up Chinese rooms because it doesn’t matter.

    We know how chatGPT works on the inside. It’s not a Chinese room. Attributing intent or understanding is anthropomorphizing a machine.

    You can make a basic robot that turns on its wheels when a light sensor detects a certain amount of light. The robot will look like it’s fleeing when you shine a light at it, but it has no capacity to know what light is or why it should flee it. Its behavior will be nearly identical to a cockroach’s, but it has no reason for acting like a cockroach.

    A cockroach can adapt its behavior based on its environment; the hypothetical robot cannot.

    ChatGPT is much like this robot: it has no capacity to adapt in real time or to learn.
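    The robot above is easy to sketch in code. This is a made-up toy, not any real robot firmware; the sensor values and threshold are invented for illustration:

    ```python
    # A minimal sketch of the fixed stimulus-response robot described above.
    # One hard-wired rule, no memory, no model of what light is, and no
    # mechanism that could ever change the rule itself.
    LIGHT_THRESHOLD = 0.5  # hypothetical sensor reading that triggers "fleeing"

    def robot_step(light_level: float) -> str:
        # The robot's entire "behavior" lives in this one comparison.
        if light_level > LIGHT_THRESHOLD:
            return "drive away"
        return "stay"

    print(robot_step(0.9))  # drive away
    print(robot_step(0.1))  # stay
    print(robot_step(0.9))  # drive away -- the same response, forever
    ```

    No matter how many times you shine the light, nothing in that loop can update itself, which is exactly the gap between looking like a cockroach and being one.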


  • You’re the one who made this philosophical.

    I don’t need to know the details of engine timing, displacement, and mechanical linkages to look at a Honda Civic and say “that’s a car; people use them to get from one place to another. They can be expensive to maintain and fuel, but in my country they’re basically required due to poor urban planning and no public transportation.”

    ChatGPT doesn’t know any of that about the car. All it “knows” is that when humans talked about cars, they brought up things like wheels, motors or engines, and transporting people. So when it generates its reply, those words are picked because they strongly associate with the word car in its training data.

    All ChatGPT is, is really fancy predictive text. You feed it an input and it generates an output that sounds like something a human would write based on the prompt. It has no awareness of the topics it’s talking about. It has no capacity to think or ponder the questions you ask it. It’s a fancy lightbulb: instead of light, it outputs words. You flick the switch, words come out, you walk away, and it just sits there waiting for the next person to flick the switch.
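    You can see the “word association, not understanding” idea in a toy next-word predictor. The tiny corpus here is invented, and real models are incomparably larger and more sophisticated, but the core move, picking words by how strongly they co-occurred in training text, is the same:

    ```python
    # Toy "fancy predictive text": choose the next word purely by how often
    # it followed the previous word in the training corpus. The corpus is
    # made up for illustration.
    from collections import Counter, defaultdict

    corpus = ("a car has wheels and an engine . "
              "a car transports people . "
              "people drive a car to work .").split()

    # Count, for each word, what came right after it.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def predict(word: str) -> str:
        # Return the word that most often followed `word` -- no meaning,
        # just association strength.
        return follows[word].most_common(1)[0][0]

    print(predict("a"))    # car -- because "car" followed "a" most often
    print(predict("car"))  # whichever word happened to follow "car" most
    ```

    Nothing in there knows what a car is; it only knows which tokens tend to sit next to each other, which is the point.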







  • At this point we’re not even sure if fully autonomous vehicles are possible.

    Yes, that one guy has been saying it’ll be ready next year for the past 10 years, but no self-driving company has been able to get an autonomous car from point A to point B in all the road conditions that a competent human can manage.

    Even aircraft autopilot is not as autonomous as what people want out of self driving cars. Pilots are still required to be at their seats the entire flight in case something unexpected happens. And there are a lot more unexpected things on a road than in the middle of the sky. Even discounting human drivers being in the way, a self driving car needs to be able to recognize everything a human can and react to it better than a human would. I’m not sure that’s possible, even with “AI”. The human brain is insanely good at pattern matching, and it took millions of years of trial and error evolution to luck our way into that. How can someone guarantee an AI is going to be better?