From https://twitter.com/llm_sec/status/1667573374426701824

  1. People ask LLMs to write code
  2. LLMs recommend imports that don’t actually exist
  3. Attackers work out what these imports’ names are, and create & upload them with malicious payloads
  4. People using LLM-written code then auto-add malware themselves
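The attack chain above can be sketched as a vetting step: before installing anything an LLM suggested, check whether each name actually resolves locally or is registered upstream. This is a minimal sketch assuming a Python/PyPI ecosystem (the thread doesn't name one); the package names in the demo are made up, and mere existence on PyPI proves nothing, since the attacker in step 3 has already uploaded the package.

```python
import importlib.util
import urllib.request


def module_resolves_locally(name):
    """True if `name` is importable from the current environment."""
    return importlib.util.find_spec(name) is not None


def exists_on_pypi(name):
    """True if a distribution called `name` is registered on PyPI.
    NOTE: existence alone does not prove legitimacy -- an attacker
    may already have squatted the hallucinated name (step 3 above)."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except OSError:  # HTTP 404, DNS failure, no network, etc.
        return False


# Vet every import an LLM suggested before running `pip install`
# (both names below are illustrative, not from the original post):
suggested = ["requests", "totally-made-up-llm-package"]
for name in suggested:
    if module_resolves_locally(name):
        print(f"{name}: already installed")
    elif exists_on_pypi(name):
        print(f"{name}: on PyPI -- check maintainer and history before installing")
    else:
        print(f"{name}: not on PyPI -- possibly hallucinated")
```

The point of the local check first is that a hallucinated import often shadows nothing at all, which is exactly the gap the squatter fills.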
  • smstnitc@lemmy2.addictmud.org · 1 year ago

    Considering the Legal Eagle video I just watched about the lawyer getting into trouble because ChatGPT cited non-existent legal cases for him that nobody verified really existed, you shouldn’t ever trust it at face value. Use it to aid your research, sure. But don’t blindly present its findings to a judge, ha.

    • alr@programming.dev · 1 year ago

      That was just crazy. Instead of owning up to it, they doubled down by having GPT invent fake rulings which they could claim to have cited. Then they wrapped up the whole shebang by revealing that the attorney who filed the stupid thing wasn’t the person who “wrote” it and hadn’t even read it, and therefore supposedly shouldn’t be held accountable for its contents.