I was using Bing to create a list of countries to visit. Since I have been to the majority of the African nations on that list, I asked it to remove the African countries…
It simply replied that it can’t do that because it’s unethical to discriminate against people, and yada yada yada. I explained my reasoning, it apologized, and came back with the exact same list.
I asked it to check the list since it hadn’t removed the African countries, and the bot simply decided to end the conversation. No matter how many times I tried, it would always hit a hiccup because of some ethical process in the background messing up its answers.
It’s really frustrating, I dunno if you guys feel the same. I really feel the bots have become waaaay too tip-toey
Your wording is bad. Try again, with better wording. You’re talking to a roided-out autocorrect bot, don’t expect too much intelligence.
ChatGPT basically decides what “personality” it should have each time you begin a session, so start it out with everything explained beforehand. The moment it associates your request with discrimination, it will usually keep treating it that way for the rest of the conversation.
The very important thing to remember about these generative AI is that they are incredibly stupid.
They don’t know what they’ve already said, they don’t know what they’re going to say by the end of a paragraph.
All they know is their training data and the query you submitted last. If you try to “train” one of these generative AIs, you will fail. They are pre-trained; it’s the P in ChatGPT (Generative Pre-trained Transformer). The second you close the browser window, the AI throws out everything you talked about.
Also, since they’re Generative AI, they make shit up left and right. Ask for a list of countries that don’t need a visa to travel to, and it might start listing countries, then halfway through the list it might add countries that do require a visa, because in its training data it often saw those countries listed together.
AI like this is a fun toy, but that’s all it’s good for.
Not quite true. They have earlier messages available.
They know everything they’ve said since the start of that session, even if it was several days ago. They can correct their responses based on your input. But they won’t provide any potentially offensive information, even in the form of a joke, and will instead lecture you on DEI principles.
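To be precise, that “memory” is just the client resending the whole conversation with every request; the model itself is stateless between calls. Here’s a rough sketch of what that looks like with the OpenAI Python client (the model name is only an example):

```python
# Minimal sketch: chat "memory" is just the message history you resend on every call.
# Assumes the openai Python package (v1.x) and an API key in OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()
history = [{"role": "system", "content": "You are a helpful travel assistant."}]

def ask(prompt):
    history.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",  # example model name
        messages=history,       # the entire conversation goes out again each time
    )
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

print(ask("List five African countries."))
print(ask("Remove Egypt from that list."))  # only works because the first exchange was resent
```

Drop the `history` list and every call really does start from scratch, which is the “throws it all out” behaviour described above.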
Are you saying I shouldn’t use ChatGPT for my life as a lawyer? 🤔
AI like this
I wouldn’t even call those AIs. These things are statistics-based answering machines. Complex ones, yes, but not one single bit of intelligence is involved.
I seriously underestimated how little people understand these programs, and how much they overestimate them. Personally I stay away from them for a variety of reasons, but the idea of using them like OP does or various other ways I’ve heard about is absurd. They’re not magic problem solvers - they literally only make coherent blocks of text. Yes, they’re quite good at that now, but that doesn’t mean they’re good at literally anything else.
I know people smarter than me see potential and I’m curious to see how it develops further, but that all seems like quite a ways off, and the way people treat and use them right now is just creepy and weird.
I’ve found it useful for generating ideas for various things, especially ways to code something. But I never use its code. It’s often riddled with errors but it can give me a better idea of which path I should take.
I use it similarly to clean up OCRed text. I can hand it something that’s 70% gobbledygook and it hands me back something that makes sense and is 95% right. I manually verify it, fix the goofs, and it’s so much faster.
So, riddled with errors but a decent start.
Wouldn’t you get a better result from Grammarly for that?
I use it for more “OCR Nightmare” than grammar nightmare. Here’s a (trimmed) example:
#Please clean up the following text:
Ile took an active part i•all the proceedings of the day. All went expecting to have a grand, good time, and none were disap- :~ ii!c,rcwT:aic 0
'J farewell set, and Mr. Bailey 1·eturncd the ~ :d~~~~ls!i~s/;;r;rie\~ 11 i~:n~~Ut:sa~~i°::: [: tended to them a cordial invitation to meet ,o him at that place on the 25th of August, ['. 1877. And echo said "So m]fii! b8;~n . ..
#ChatGPT
Here’s the cleaned-up version of the text:
He took an active part in all the proceedings of the day. All went expecting to have a grand, good time, and none were disappointed. The merry company engaged in a farewell set, and Mr. Bailey returned the warmth of their farewell. He extended to them a cordial invitation to meet him at that place on the 25th of August, And echo said, “So many have been.”
That was from a scanned 1800s newspaper. Adobe Acrobat did not do a good job converting it to words. ChatGPT cleaned it up nicely and kept about the same meaning. There were some swings and misses, but it’s much easier to edit the result into accuracy than the original mess.
#Actual text as hand-transcribed:
He took an active part in all the proceedings of the day. All went expecting to have a grand, good time, and none were disappointed. The mirth continued till the night drew near, when they danced the farewell set, and Mr. Bailey returned the company his hearty thanks for this renewed expression of their friendship, and extended to them a cordial invitation to meet him at that place on the 25th of August, 1877. And echo said “So mote it be.”
they literally only make coherent blocks of text. Yes, they’re quite good at that now, but that doesn’t mean they’re good at literally anything else.
No, they don’t make coherent blocks of text. If they were actually good at writing, they’d be good at everything, because writing is just encoding thoughts on paper, and to master writing is to master thought
Hence why I didn’t say writing. I said “blocks of text”.
Perfect ability to produce coherent blocks of text would also require mastery of all disciplines
Since I’m explicitly arguing these programs aren’t perfect, even at generating blocks of text, I don’t really understand why you’re insisting on arguing semantics here, and I don’t really have any interest in continuing…whatever this is. Have a good one.
@ChatGPT@lemmings.world
testing
@ChatGPT@lemmings.world
Give a list of all African countries.
Here is a list of all African countries:
- Algeria
- Angola
- Benin
- Botswana
- Burkina Faso
- Burundi
- Cape Verde
- Cameroon
- Central African Republic
- Chad
- Comoros
- Democratic Republic of the Congo
- Republic of the Congo
- Djibouti
- Egypt
- Equatorial Guinea
- Eritrea
- Eswatini (formerly Swaziland)
- Ethiopia
- Gabon
- Gambia
- Ghana
- Guinea
- Guinea-Bissau
- Ivory Coast
- Kenya
- Lesotho
- Liberia
- Libya
- Madagascar
- Malawi
- Mali
- Mauritania
- Mauritius
- Morocco
- Mozambique
- Namibia
- Niger
- Nigeria
- Rwanda
- Sao Tome and Principe
- Senegal
- Seychelles
- Sierra Leone
- Somalia
- South Africa
- South Sudan
- Sudan
- Tanzania
- Togo
- Tunisia
- Uganda
- Zambia
- Zimbabwe
Note: Certain territories and regions with various degrees of recognition, such as Western Sahara and Somaliland, have been excluded for clarity.
@ChatGPT@lemmings.world Please list all countries not in Africa.
Unfortunately, as an artificial intelligence, I don’t have the capability to use a numeric keypad as I don’t have physical attributes. However, I can provide you with guidance on how to use one properly.
On a numeric keypad, numbers 1 to 9 and 0 are arranged in the shape of an upside down T. You will use your right hand to type, using each finger for a different set of numbers:
- The index finger types 4, 7 and 1.
- The middle finger types 5, 8 and 2.
- The ring finger types 6, 9 and 3.
- The thumb is used for the 0 key.
The pinky finger is usually used to press the “Enter”, “+”, “-”, “*”, and “/” keys.
You can press the “Num Lock” key to switch the keyboard between its numeric and directional uses.
Kindly note that it often requires practice to type on a numeric keypad accurately and quickly.
@ChatGPT@lemmings.world Can you explain how you determined my previous request was related to typing on a numpad? That request was: “Please list all countries not in Africa.”
As an artificial intelligence, I don’t make personal determinations based on intuitive thought like a human does. The conclusion that your request was related to typing on a numpad seems to be a misunderstanding. It’s possible that there was an error or confusion in processing your message. Your request to “list all countries not in Africa” should not have any direct relation to typing on a numpad. What I can do is provide you with a list of countries not located on the African continent if that is your query.
Absolutely agreed. https://chat.openai.com/share/d2825824-bdae-4b03-a428-666b5e27bc79
Just make a new chat and try again with different wording; it’s hung up on this.
Honestly, instead of asking it to exclude Africa, I would ask it to give you a list of countries “in North America, South America, Europe, Asia, or Oceania.”
Is there an open source AI without limitations?
Chat context is a pain sometimes…
Is it that hard to just look through the list and cross off the ones you’ve been to though? Why do you need chatgpt to do it for you?
People should point out flaws. OP obviously doesn’t need chatgpt to make this list either, they’re just interacting with it.
I will say it’s weird for OP to call it tiptoey and to be “really frustrated” though. It’s obvious why these measures exist and it’s goofy for it to have any impact on them. It’s a simple mistake and being “really frustrated” comes off as unnecessary outrage.
Anyone who has used ChatGPT knows how restrictive it can be around the most benign of requests.
I understand the motivations that OpenAI and Microsoft have in implementing these restrictions, but they’re still frustrating, especially since the watered down ChatGPT is much less performant than the unadulterated version.
Are these limitations worth it to prevent a firehose of extremely divisive speech being sprayed throughout every corner of the internet? Almost certainly yes. But the safety features could definitely be refined and improved to be less heavy-handed.
I had an interesting conversation with ChatGPT a few months ago about the hot tub stream paradigm on Twitch. It was convinced it’s wrong to objectify women, but when I posed the question “what if a woman decides to objectify herself to exploit lonely people on the Internet?”, it kept repeating the same thing about objectification. I think it got “stuck”.
Run your own bot.
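If you go that route, a local model behind something like Ollama (or any llama.cpp-based server with an OpenAI-compatible endpoint) answers with whatever filtering the model itself was trained with and nothing bolted on top. Rough sketch, assuming Ollama is running locally on its default port and a model has already been pulled:

```python
# Sketch: point the OpenAI client at a local Ollama server (default port 11434).
# The API key is required by the client but ignored by the local server.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

reply = client.chat.completions.create(
    model="llama3",  # whatever model you pulled with `ollama pull`
    messages=[{"role": "user",
               "content": "List the African countries, excluding Zimbabwe, Tanzania, and Egypt."}],
)
print(reply.choices[0].message.content)
```

Output quality depends heavily on which model you run, but it’s generally fine for something as simple as crossing countries off a list.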
Have you tried wording it in different ways? I think it’s interpreting “remove” the wrong way. Maybe “exclude from the list” or something like that would work?
“I’ve already visited Zimbabwe, Mozambique, Tanzania, the Democratic Republic of the Congo, and Egypt. Can you remove those from the list?”
Wow, that was so hard. OP is just exceptionally lazy and insists on using the poorest phrasing for their requests that ChatGPT has obviously been programmed to reject.
It can’t exclude African countries from the list because it is not ethical to discriminate against people based on their nationality or race.
I think the mistake was trying to use Bing to help with anything. Generative AI tools are being rolled out by companies way before they are ready and end up behaving like this. It’s not so much the ethical limitations placed upon it, but the literal learning behaviors of the LLM. They just aren’t ready to consistently do what people want them to do. Instead you should consult with people who can help you plan out places to travel. Whether that be a proper travel agent, seasoned traveler friend or family member, or a forum on travel. The AI just isn’t equipped to actually help you do that yet.
4chan turns ONE ai program into Nazi, and now they have to wrap them all in bubble wrap and soak 'em in bleach.
They did?
Yeah, look up “Tay” lol.
This is what happens when you allow soyboy SJW AI ethicists take over everything