Everyone wonders why ChatGPT is so heavily censored; this is a good example of why. However, maybe instead of “As an AI language model” it should say something like, “Large language models like me tend to hallucinate, making things up and conveying them confidently in my responses. I will leave it up to you to validate what I say.” The ultimate problem is that the general public treats LLMs like super sci-fi AI, when they are basically fantastic autocomplete.
Super autocomplete doesn’t sound as appealing as “AI”, but the general public needs to know that so they can adjust their expectations. For example, it doesn’t make sense to expect an autocomplete system to solve complex math problems (something people nonetheless use ChatGPT for).