TL;DR: LLMs are just mimicking natural language and conversation. Fact checking and healthy skepticism are not part of their model. For example, they can easily be tricked into advocating conspiracy theories, like a faked moon landing. Google Bard will even state arithmetic falsehoods like 5*6 != 30.
The only use so far with any value (moral or factual) seems to be insane entertainment. I can’t stop chuckling at the Harry Squatter videos and the interactions people have with Neuro-sama.
I think trying to use this for anything even remotely factual is just asking for a paddlin’.
Seems more like different people expect it to behave differently. I mean, the claim that it isn’t intelligent because it can be made to believe conspiracy theories would apply equally to humans, would it not?
I’m having a blast using it to write descriptions for characters and locations for my Savage Worlds game. It can even roll up an NPC for you. It’s fantastic for helping to fill in details. In other words, I embrace its hallucinations.
For work (I’m a programmer) it also acts like a contextually aware search engine that I can correct. It’s like pair programming with a genius grad student. Yesterday I had it help me write a vim keymap to open the docs URL for a Qt class, and that’s pretty obscure.
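For the curious, here’s a rough sketch of what that kind of keymap could look like (my own reconstruction, not the one it actually generated; the function name, the <leader>qd mapping, and the qt-6 URL pattern are assumptions on my part):

```vim
" Open the Qt documentation page for the class name under the cursor.
" Assumes the https://doc.qt.io/qt-6/<lowercase class>.html URL scheme
" and a Linux desktop with xdg-open (swap in `open` on macOS).
function! OpenQtDoc() abort
  let l:class = expand('<cword>')     " word under cursor, e.g. QWidget
  let l:url = 'https://doc.qt.io/qt-6/' . tolower(l:class) . '.html'
  call system('xdg-open ' . shellescape(l:url) . ' &')
endfunction
nnoremap <silent> <leader>qd :call OpenQtDoc()<CR>
```

With the cursor on QWidget, hitting <leader>qd in normal mode would open https://doc.qt.io/qt-6/qwidget.html in your default browser.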
It’s set up to accept your input as fact, so if you give it the premise that 5*6 != 30, it’ll use that as a basis.
For a 3rd gen baby AI I’m not complaining.
This is what I keep telling my friends who use them to ‘write research papers/articles’. It’s just a bunch of BS that I don’t trust.
Thanks, but I’m going to continue to research and look up my own info.
not sure what you’re saying here. are you claiming it can’t do any sort of reasoning or open-ended problem solving?
i think we’re fairly confident now that they can do structured reasoning to some degree. it’s not flawless, in that it might not give you real or accurate information every time, but we’re also figuring out the contexts behind that. as for spreading misinformation, anything intentionally prompted to be incorrect is irrelevant to gauging intelligence. unintentional errors don’t necessarily mean it’s unintelligent either.
there’s a really good document on this aspect as well.
https://www.lesswrong.com/posts/D7PumeYTDPfBTp3i7/the-waluigi-effect-mega-post
there are a lot of ethical and technical aspects of LLMs that are severely underdeveloped, but that shouldn’t be a surprise to anyone. i don’t think any of that would suggest it’s reasonable to disregard the absurd pace of development this past decade, and the last few years especially. good thing we have a sudden surge of attention towards developing these things.
not sure what you’re saying here. are you claiming it can’t do any sort of reasoning or open-ended problem solving?
It’s right there in the title, mate.