• AzPsycho@lemmy.world · 1 year ago

    I experimented by asking it to write a SQL query for a platform that has its entire database schema available online. The data I asked for was impossible to get without exporting some of the data from those tables into temp tables via subqueries and then running a comparative omission analysis.

    Instead of doing that, it just made up fake tables and wrote a query claiming the data was in those fake tables.
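
    To give a sense of the shape of the query (the platform and its schema aren't named here, so the table and column names below are invented for illustration): staging the relevant subsets into temp tables and then anti-joining them to surface what's missing looks roughly like this:

    ```sql
    -- Hypothetical sketch only: "orders" and "shipments" stand in for the
    -- platform's real tables, which the comment doesn't name.

    -- Stage the relevant subsets into temp tables.
    CREATE TEMPORARY TABLE expected_items AS
    SELECT item_id
    FROM orders
    WHERE created_at >= '2023-01-01';

    CREATE TEMPORARY TABLE shipped_items AS
    SELECT item_id
    FROM shipments
    WHERE shipped_at >= '2023-01-01';

    -- Omission analysis: rows present in one set but missing from the other.
    SELECT e.item_id
    FROM expected_items e
    LEFT JOIN shipped_items s ON s.item_id = e.item_id
    WHERE s.item_id IS NULL;
    ```

    A correct answer has to reference tables that actually exist in the published schema; the fabricated version fails at exactly that step.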

    • SJ_Zero@lemmy.fbxl.net · 1 year ago

      I asked it to write a review of Beowulf in the style of Beowulf. It wrote something rhyming, which is not the style of Beowulf. I said "rewrite this so it doesn't rhyme" and it gave me something rhyming again. I tried several times in several different ways, including reasoning with it, and it just kept kicking out a rhyming poem.

    • 50gp@kbin.social · 1 year ago

      It's good to remember that many of these chatbot AIs want to give an answer to the prompt instead of saying "sorry, that's not possible," and will then generate complete garbage as a result.

    • fearout@kbin.social · 1 year ago

      Out of curiosity, are you using 3.5 or 4? I've found that GPT-4 is pretty good at these tasks, while 3.5 is almost useless. A thing that often helps is to ask it "is your answer correct?" That seems to make it find the errors and fix them.