Is there no risk of the LLM hallucinating cases or laws that don’t exist?
GPT-4 is dramatically less likely to hallucinate than GPT-3.5, and we're still early on the improvement curve.

Is there a risk? Yes. But humans make the same kind of mistakes too, and all AI has to do is be better than humans at this, a milestone that's already within sight.