• 0 Posts
  • 26 Comments
Joined 4 months ago
Cake day: March 3rd, 2024

  • Microsoft dominated with IE back in the day for the same reason Chrome and Safari are the dominant choices. People don’t tend to change the default if it works okay enough. Firefox dropped heavily years ago as the market was saturated with other new choices already installed on mobile and Chromebooks, but recent numbers are about the same as they have been for a while. Maybe even still growing, as all the numbers I find are percentages, and there’s no doubt we’ve had an explosion of device use.





  • That was news to me. The wiki for Torosaurus is a deep rabbit hole, and since fossilization is rare and we have to go with what little we can find, this will probably be a long debate either way.

    With that being said, whatever the name, three-horn dino is best dino! And it's weird that the meme wouldn't have one on there, since it has always been a well-known dinosaur in movies and toys.







  • Wherever humans draw the line. The meme uses the assumption that there is a clear change from earlier species to later descendants, when in reality it is a continuous change of many characteristics each time an individual reproduces and spreads their genetics. It's the flaw of the missing link argument.


  • The caveat of finding “better” methods is that it excuses continuing or expanding the things we do that are the core problems: rapid growth, consumption, and a throwaway society. And like you said, those methods have their own issues that might become problematic as they grow. Not to say that we shouldn't try to improve what we can, just that being better than the worst way of doing things isn't all that great either.

    The word “sustainable” in the title is one of those greenwashing terms used to sell a product and keep business as usual, as the report shows.


  • Same here. When I stop at a stop sign and there's a car behind me, I routinely take bets in my head on whether the next car just rolls through it. Most often they do. Running red lights is another one: if you're waiting at a red light for it to change, always wait a split second before going and glance both ways. Don't assume that because your light is green there isn't someone trying to beat the red, or just running an obvious red because in their head they're more important than everyone else. I've gone through many a yellow light thinking I really cut it close, only to notice one or even more people followed me through the intersection. Boy, do they get upset if you actually stop for that yellow.




  • If anything, I think the development of actual AGI will come first and give us insight into why some organic mass can do what it does. I've seen many AI experts say that one reason they got into the field was to try to figure out the human brain indirectly. I've also seen one person (I can't recall the name) say we already have a rudimentary form of AGI now: corporations.


  • Rhaedas@fedia.io to Programmer Humor@programming.dev · “prompt engineering” (edited, 3 months ago)

    LLMs are just very complex and intricate mirrors of ourselves, because they pull from our past ramblings to produce the best response to a prompt. They only feel intelligent because we can't see the inner workings the way we could see the IF/THEN statements of ELIZA, and yet many people were still convinced ELIZA was talking to them. Humans are wired to anthropomorphize, often to a fault.
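    A toy illustration in Python of how shallow that kind of rule matching was (hypothetical patterns in the spirit of ELIZA, not the original script):

    ```python
    import random
    import re

    # A few ELIZA-style IF/THEN rules: if the input matches a pattern,
    # then echo part of it back inside a canned template.
    RULES = [
        (r"\bi need (.+)", ["Why do you need {0}?", "Would {0} really help you?"]),
        (r"\bi am (.+)", ["How long have you been {0}?", "Why do you say you are {0}?"]),
        (r"\bbecause (.+)", ["Is that the real reason?", "Does {0} explain anything else?"]),
    ]
    FALLBACKS = ["Please tell me more.", "How does that make you feel?"]

    def respond(text: str) -> str:
        # Try each rule in order; fall back to a generic prompt if none match.
        for pattern, templates in RULES:
            match = re.search(pattern, text, re.IGNORECASE)
            if match:
                fragment = match.group(1).rstrip(".!?")
                return random.choice(templates).format(fragment)
        return random.choice(FALLBACKS)

    if __name__ == "__main__":
        print(respond("I am worried that the AI understands me."))
        # e.g. "Why do you say you are worried that the AI understands me?"
    ```

    No model of anything, just pattern matching, and people still read understanding into it.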

    I say that while also believing we may yet develop actual AGI of some sort, which will probably use LLMs as a database to pull from. What is concerning is that even though LLMs are not “thinking” themselves, the way we've dived in head first, ignoring the dangers of misuse and the many flaws they have, says a lot about how we'll ignore problems in AI development, such as the misalignment problem, which has basically been shelved by AI companies in favor of profits and being first.

    HAL from 2001/2010 was a great lesson - it’s not the AI…the humans were the monsters all along.