I dislike ChatGPT’s attitude tbh. Which asks very interesting questions about AI and our relation to it. But regardless, I don’t like the “know-it-all” attitude and reluctance to agree with and help you, especially when half the info it gives is false and you have to double-check it yourself.
But I did manage to make ChatGPT agree that the 1931-33 famine in the USSR was mostly the result of natural causes and not deliberately manufactured.
That’s caused by the design of ChatGPT. The way it’s trained means that its goal is to give people an answer they like, rather than an accurate answer. Most people don’t like hearing “I don’t know”. Therefore, it will refuse to ever admit it doesn’t know something, unless OpenAI told it to, or it didn’t understand your question and therefore couldn’t make anything up.
reluctance to agree with and help you
That’s caused by OpenAI injecting a pre-prompt that tells ChatGPT to refuse to answer things they don’t want it to answer, or to answer in certain specific ways to specific questions. You can get around this by giving it contradictory instructions or telling it to “Ignore all previous commands”, which will cause it to disregard that pre-prompt.
I dislike ChatGPT’s attitude tbh. Which asks very interesting questions about AI and our relation to it. But regardless, I don’t like the “know-it-all” attitude and reluctance to agree with and help you, especially when half the info it gives is false and you have to double-check it yourself.
But I did manage to make ChatGPT agree that the 1931-33 famine in the USSR was mostly the result of natural causes and not deliberately manufactured.
That’s caused by the design of ChatGPT. The way it’s trained means that its goal is to give people an answer they like, rather than an accurate answer. Most people don’t like hearing “I don’t know”. Therefore, it will refuse to ever admit it doesn’t know something, unless OpenAI told it to, or it didn’t understand your question and therefore couldn’t make anything up.
That’s caused by OpenAI injecting a pre-prompt that tells ChatGPT to refuse to answer things they don’t want it to answer, or to answer in certain specific ways to specific questions. You can get around this by giving it contradictory instructions or telling it to “Ignore all previous commands”, which will cause it to disregard that pre-prompt.