Lugh@futurology.today to Futurology@futurology.today · 7 months ago

Evidence is growing that LLMs will never be the route to AGI. They are consuming exponentially increasing energy, to deliver only linear improvements in performance. (arxiv.org)
CubitOom@infosec.pub · 7 months ago
I wonder where the line is drawn between an emergent behavior and a hallucination.

If someone expects factual information and gets a hallucination, they will think the LLM is dumb or unhelpful.

But if someone is encouraging hallucinations and wants fiction, they might see it as emergent behavior.

In humans, what is the difference between an original thought and a hallucination?