I asked GPT-4 with the exact same wording and it answered:
“N/A”
Fascinating that people with stutters can be helped by practicing speaking with speech jammers.
It makes me think of how ADHD medication makes people without ADHD more distractible, while it helps people who have ADHD focus.
John Wayne Gacy is really unhappy with this feature.
I think that’s their point: That maybe, as long as a candidate is mentally fit, then voters ought to be able to continue voting for them if they feel like the candidate is still worth voting for.
Honestly, if there were some kind of magic bullet that couldn’t be exploited for simply banning candidates who are mentally unfit (i.e. losing their marbles) from holding office, I think a lot of people would find that preferable to an age limit.
That doesn’t address issues like politicians who are too technologically illiterate to do things like open PDF files, though.
Old-school AI systems from way back in the day called Expert Systems were just a crapload of IF statements. There’s never been a concrete agreed-upon definition of AI because there’s never been an agreed-upon definition of the word Intelligence.
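The “crapload of IF statements” description can be sketched in a few lines. This is a toy illustration with made-up rules and symptom names, not any real expert system:

```python
# Toy "expert system": knowledge is encoded as hand-written rules,
# and inference is just checking them in order. Rules are invented
# purely for illustration.
def diagnose(symptoms):
    """Return a made-up diagnosis for a set of symptom strings."""
    if "fever" in symptoms and "cough" in symptoms:
        return "possible flu"
    if "sneezing" in symptoms and "itchy eyes" in symptoms:
        return "possible allergies"
    if "fever" in symptoms:
        return "possible infection"
    return "no rule matched"

print(diagnose({"fever", "cough"}))  # -> possible flu
```

Real systems like MYCIN had thousands of such rules plus a chaining engine, but the basic idea really was this: conditions in, canned conclusions out.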
They’re saying that politicians like AOC, Katie Porter, Sanders, etc. are high quality public servants, and that high quality public servants should be able to be elected as long as they have cognitive function.
On one hand, in a hypothetical and ideal scenario, that would be nice to have for us voters.
On the other hand, even if an elected official does great work and has a great track record, should they be able to just serve indefinitely until their brain gives out? There’d be a lot of potential problems, such as entrenched and corruptible political operators, even ones who started out good, preventing “fresh blood” from entering politics. It’d be neat to see a study comparing countries and political systems that have age barriers and term limits vs. those that don’t.
I wish I could upvote this twice
The free version gets things wrong a bunch. It’s impressive how good GPT-4 is. Human brains are still a million times better in almost every way (they cost a few dollars of energy to operate per day, for example), but it’s really hard to believe how capable the state of the art in LLMs is until you’ve tried it.
You’re right about one thing though. Humans are able to know things, and to know when we don’t know things. Current LLMs (transformer-based architecture) simply can’t do that yet.
I think the fundamental question is, as the Fediverse gets more popular, then how will servers get paid for? Here are some possibilities I see for how Fediverse hosting could work at scale:
I hope we come up with some process or plan for avoiding the pitfalls and forging an honest and community-integrating way forward.
I haven’t heard of cognitive schema assimilation. That sounds interesting. It sounds like it might fall prey to challenges we’ve had with symbolic AI in the past though.
The threat is a new sustainable community that’s sheltered from advertising that people could leave Factbook/Instagram/whatever and go to.
So then why was Meta trying to get Threads to be on the Fediverse? Of course they’re aware of any potential threats, no matter how small.
I wonder if planting 73 different kinds of ferns would have this benefit, or if they have to be very different kinds of plants.
When one side is suspiciously quiet or supportive about legislation that’s against their publicly stated goals, it’s because they secretly want it too.
I feel like I remember them being there since January of this year, which is when I started playing with ChatGPT, but I could be mistaken.
Recent papers have shown that LLMs build internal world models, but for a topic as niche and complicated as cancer treatment, a chatbot based on GPT-3.5 would be woefully ill-equipped to do any kind of proper reasoning.
It’s possible to build intelligent AI
What does intelligent AI that we can currently build look like?
I mean, on the ChatGPT site there’s literally a disclaimer along the bottom saying it’s able to say things that aren’t true…
IQ is mostly a pretty arbitrary and pointless metric because things like attitude, process, and creativity matter a lot more for getting results, but it can still help to diagnose learning disabilities and it has a solid statistical underpinning. The only thing it strongly correlates with is chess ability.
I was kind of with you until saying they’re “being a fucking idiot.”
Encouraging someone to help out? Great.
Browbeating someone for voicing a viewpoint or experience that a lot of users share? We can do better than that.