None of it is even AI. Predicting desired text output isn’t intelligence.
You hold artificial intelligence to the standards of general artificial intelligence, which doesn’t even exist yet. Even dumb decision trees are considered AI. You have to lower your expectations. Calling the best AIs we have dumb is unhelpful at best.
We never called if statements AI until the last year or so. It’s all marketing buzzwords. It has to be more than just “it makes a decision” to be AI, or else rivers would be AI because they “make a decision” on which path to take to the ocean based on which dirt is in the way.
Yeah, and highlighting that difference is what is important right now.
This is the first AI to masquerade as general artificial intelligence and people are getting confused.
This current thing doesn’t have or need rights or ethics. It can’t produce new intellectual property. It’s not going to save Timmy when he falls into the well. We’re going to need a new Timmy before all this is over.
At this point I just interpret AI to mean “we have lots of SELECT statements and INNER JOINs.”
There are also threshold functions and gradient calculations.
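Both of those fit in a few lines. Here’s a toy sketch (plain Python; the data points, learning rate, and epoch count are all invented for illustration) that fits one weight with a gradient calculation and then makes its decision with a threshold:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Invented toy problem: learn y = (x > 2) from four labeled points.
data = [(0.0, 0), (1.0, 0), (3.0, 1), (4.0, 1)]
w, b, lr = 0.0, 0.0, 0.5

for _ in range(2000):
    for x, y in data:
        p = sigmoid(w * x + b)
        # Gradient calculation: derivative of 0.5 * (p - y)^2
        # with respect to w and b, via the sigmoid.
        g = (p - y) * p * (1.0 - p)
        w -= lr * g * x
        b -= lr * g

# Threshold function: cut the learned score at 0.5 to decide.
print([int(sigmoid(w * x + b) >= 0.5) for x, _ in data])  # [0, 0, 1, 1]
```

It’s a single artificial neuron, nowhere near general intelligence, but it is literally thresholds plus gradients.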
Rightly so, as decision trees are also considered AI, and they’re very dumb in comparison to LLMs. People have way too high expectations for AI.
Pick a number from 1 to 2^63 − 1 ≈ 9 × 10^18 at random. See, AI is easy /s
`echo $RANDOM`: the OG AI
I do agree, but on the other hand…
What does your brain do while reading and writing, if not predict patterns in text that seem correct and relevant based on the data you have seen in the past?
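That “predict patterns from past data” framing is easy to make concrete. A toy bigram counter (nothing like a real transformer; the training sentence here is invented) already does a crude version of it:

```python
from collections import Counter, defaultdict

# Invented "past data"; a real LLM trains on trillions of words.
text = "the cat sat on the mat and the dog sat on the rug".split()

# Count which word follows which.
followers = defaultdict(Counter)
for prev, nxt in zip(text, text[1:]):
    followers[prev][nxt] += 1

def predict_next(word):
    # Guess the most frequently observed successor of `word`.
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("sat"))  # 'on'  (seen twice after 'sat')
print(predict_next("the"))  # 'cat' (a four-way tie; first seen wins)
```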
I’ve seen this argument so many times, and it makes zero sense to me. I don’t think by predicting the next word; I think by imagining things both physical and metaphysical, basically running a world simulation in my head. I don’t think “I just said ‘predicting’, what’s the next likely word to come after it?” That’s not even remotely similar to how I think at all.
Inject personal biases :)
AI is whatever machines can’t do yet.
Playing chess was the sign of AI, until a computer beat Kasparov; then it suddenly wasn’t AI anymore. Then it was Go, then classifying images, then having a conversation, but as each of these was achieved, it stopped being AI and became “machine learning” or “a model”.
Machine learning is still AI. Specifically, it’s a subset of AI.
Language is a method for encoding human thought. Mastery of language is mastery of human thought. The problem is, predictive text heuristics don’t have mastery of language, and they cannot predict desired output.
Sorry, but you oversimplify so much here that it hurts. Language can express and communicate human thought, sure, but human thought is more than language. Human thought includes emotions, experiences, abstract concepts, etc. that go beyond what can be expressed through language alone. LLMs are excellent at generating text, often more skilled than the average person, but training data and algorithms limit LLMs. They can miss nuances of context, tone, or intent. TL;DR: Understanding language doesn’t imply understanding human thought.
I’d love to know how you even came to your conclusion.
Many languages lack words for certain concepts. For example, English lacks a word for the joy you feel at another’s pain. You have to go to Germany in order to name Schadenfreude. However, English is perfectly capable of describing what schadenfreude is. I sometimes become nonverbal due to my autism. In the moment, there is no way I could possibly describe what I am feeling. But that is a limitation of my temporarily panicked mind, not a limitation of language itself. Sufficiently gifted writers and poets have described things once thought indescribable. I believe language can describe anything with a book long enough and a writer skilled enough.
I’m not a native English speaker so I might be wrong, but doesn’t “sadism” also cover “schadenfreude”?
No, schadenfreude is an emotion; sadism is a personality trait.
I thought this was an insightful comment. Language is a kind of ‘view’ (in the model-view-controller sense) of intelligence. It signifies a thought or meme. But language is imprecise and flawed. It’s a poor representation, since it can be misinterpreted or distorted. I wonder if language-based AIs are inherently flawed, too.
Edit: grammar, ironically
Language-based AIs will always carry the biases of the language they speak. I am certain a properly trained bilingual AI would be smarter than a monolingual AI of the same skill level.
“Mastery of language is mastery of human thought.” is easy to prove false.
The current batch of AIs is an excellent data point. These things are very good at language, and they still can’t even count.
The average celebrity provides evidence that it is false. People who excel at science often suck at talking, and vice-versa.
We didn’t talk our way to the moon.
Even when these LLMs master language, it’s not evidence that they’re doing any actual thinking, yet.
I think the current batch of AIs and the Kardashians are bad at using language
Always remember that it will only get better, never worse.
They said “computers will never do x” and now x is assumed.
There’s a difference between “this is AI that could be better!” and “this could one day turn into AI.”
Everyone is calling their algorithms AI because it’s a buzzword that trends well.
Shit as dumb as decision trees is considered AI. As long as there’s an if-statement somewhere in the app, they can slap the label AI on it, and it’s technically correct.
That’s not technically correct unless the thresholds in those if statements are updated based on information learned from the data.
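In other words, the if-statement only earns the “machine learning” label when its threshold is fit from data. A minimal sketch of that idea is a one-split decision stump (the points and labels below are made up for illustration):

```python
def fit_threshold(xs, ys):
    # Try a cut midway between each adjacent pair of values and keep
    # the one that classifies the most training examples correctly.
    best_t, best_acc = None, -1.0
    vals = sorted(set(xs))
    for lo, hi in zip(vals, vals[1:]):
        t = (lo + hi) / 2.0
        acc = sum((x > t) == bool(y) for x, y in zip(xs, ys)) / len(xs)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# Invented training data: small values labeled 0, large labeled 1.
xs = [1.0, 2.0, 3.0, 6.0, 7.0, 8.0]
ys = [0, 0, 0, 1, 1, 1]

t = fit_threshold(xs, ys)
print(t)        # 4.5: learned from the data, not hand-written
print(5.0 > t)  # True: the familiar if-condition, with a trained threshold
```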
It usually also gets worse while it gets better.
But I take your point. This stuff will continue to advance.
But the important argument today isn’t over what it can be, it’s an attempt to clarify for confused people.
While the current LLMs are an important and exciting step, they’re also largely just a math trick, and they are not a sign that thinking machines are almost here.
Some people are being fooled into thinking general artificial intelligence has already arrived.
If we give these unthinking LLMs human rights today, we expand corporate control over us all.
These LLMs can’t yet take a useful ethical stand, and so we need to not rely on them that way, if we don’t want things to go really badly.
Depends on your definition of AI, and everyone’s definition is different.