Xavienth

  • 3 Posts
  • 941 Comments
Joined 5 years ago
Cake day: July 18th, 2020

  • One, I said they are no more commonplace than they were ten years ago.

    Two, I never said LLMs will go away. In fact, I said they have their uses. But, and I will say this again in stronger terms: they are stupid, rote memorizers. Their fundamental flaw is that they cannot apply intelligent, rational thought to novel problems. Using them in situations that require rational thought is a mistake. This is an architectural flaw, not a problem of data. Large language models predict text; they cannot think. They can give an illusion of thought by aping a large body of text that itself demonstrates thought processes, but the moment a problem strays from the existing high-quality data, the facade crumbles, the output turns to nonsense, and it becomes clear that there never was any thought in the first place. And now that we’ve scraped all the text there is, the set of problems whose solutions LLMs can imitate has reached its greatest extent. GPT will never lead to a rational agent, no matter how much OpenAI and co say it will.


  • Smartphones reached their current saturation about ten years ago, and perhaps not coincidentally, that’s when they stopped improving. Can you honestly say that since 2015, cell phones in developed countries have gotten more common? At a time when people were already giving them to 10-year-olds? Can you even say they’ve become more useful, when you could already browse social media, check the weather, apply for jobs, write documents, and order food to your door with them?



  • We’ve hit a wall in terms of progress with this technology. We’ve literally vacuumed up all the training data there is. What is left is improvements in efficiency (see DeepSeek).

    LLMs are cool, they have their uses, but they have fundamental flaws as rational agents, and will never be fit for this purpose.