• xavier666@lemm.ee
      Hi! Copilot has detected informal language in your response. Are you stressed by any chance? I have scheduled a priority meeting with your allocated HR during your lunch break to sort things out. Please let me know if you need anything else. Happy coding!
      
    • M137@lemmy.world

      And it’ll tell you it can’t respond to that because of its rules (censorship) and then say that using glue is a good way to off yourself.

  • peto (he/him)@lemm.ee

    Hey, you know that thing you use? What if it had a button on it that opened an AI prompt?

    Well my mum says it’s a really smart idea from her special little innovator.

  • onlooker@lemmy.ml

    “It has a gradient so you know it’s AI.” <- Uh, what does this mean?

    • dexa_scantron@lemmy.world

      I thought it meant that all the icons/interfaces for AI seem to have a graphical gradient between colors, usually cool colors like blue/purple/pink. (Like the face in the meme)

      • monsterpiece42@reddthat.com

        Yes, this is the correct answer. The words in the meme are addressed to a hypothetical end user, so they wouldn’t reference the underlying technology the way the other reply suggests.

        • watersnipje@lemmy.blahaj.zone

          No. The meme isn’t talking about gradient descent, the optimization technique you learn about in beginner-level machine learning courses. It’s about the color gradient in all the AI logos.

    • maniclucky@lemmy.world

      Gradient descent is a common algorithm in machine learning (what currently gets marketed as AI is largely built on machine-learning techniques). It uses math to measure how wrong the model’s answer is, and in which direction, then adjusts the model to be a little less wrong using that information, over and over.
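
      Here’s a minimal sketch of that loop in Python; the toy loss function, learning rate, and starting value are made up purely for illustration:

        # Toy example: nudge a single parameter until a made-up "wrongness" function is minimized.
        def loss(w):
            # How wrong the answer is for a given parameter value (lowest at w = 3).
            return (w - 3.0) ** 2

        def gradient(w):
            # Derivative of the loss: which direction, and how steeply, the error grows.
            return 2.0 * (w - 3.0)

        w = 10.0             # initial guess
        learning_rate = 0.1
        for _ in range(100):
            w -= learning_rate * gradient(w)  # step in the "less wrong" direction

        print(w)  # converges toward 3.0, the least-wrong value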

      • xthexder@l.sw0.com

        The way you phrased that perfectly illustrates the current problem AI has: in a problem space as large as natural language, there is a nearly infinite number of ways it can be wrong. So no matter how much data we feed it, there will always be some “brand new sentence” someone asks that breaks it and causes a wrong answer.

        • maniclucky@lemmy.world

          Absolutely. It’s why asking it for facts is inherently bad. It can’t retain information; it’s trained to give output shaped like an answer. It’s pretty good at things that don’t have a specific answer (I’ll never write another cover letter, thank blob).

          Now, if someone were to have the good sense to add some kind of lookup that injects correct information between the prompt and the output (roughly the sketch below), we’d be cooking with gas. But curating that information is really labor-intensive for humans, and all the tech bros are trying to avoid that.
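
          For what it’s worth, that lookup idea is roughly what gets called retrieval-augmented generation. A hypothetical sketch of the shape of it; the fact table, helper names, and prompt format here are invented for illustration:

            # Hypothetical sketch: inject looked-up facts between the user's prompt and the model call.
            FACTS = {
                "glue": "Glue is not food-safe and should never go anywhere near a recipe.",
            }

            def retrieve(question):
                # Stand-in for a real lookup: a search index, database, or curated knowledge base.
                return " ".join(fact for key, fact in FACTS.items() if key in question.lower())

            def call_llm(prompt):
                # Placeholder for whatever model API is actually in use.
                return f"[model output for a {len(prompt)}-character prompt]"

            def answer(question):
                context = retrieve(question)
                # The model is told to answer from the retrieved facts, not just its training data.
                prompt = f"Answer using only this verified information:\n{context}\n\nQuestion: {question}"
                return call_llm(prompt)

            print(answer("Should I use glue to keep cheese on my pizza?"))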

    • IninewCrow@lemmy.ca

      What are you talking about, asking questions? It’s AI … it’s all we need to know

    • mkwt@lemmy.world

      “Gradient descent” is a jargon term for one kind of training method.

      • PM_ME_VINTAGE_30S [he/him]@lemmy.sdf.org

        “Gradient descent” ≈ on a “hilly” (mathematical) surface, try to find the lowest point by repeatedly stepping downhill from an initial guess. The “gradient” is basically the steepness: the rate at which the thing you’re trying to optimize changes as you move through “space”. It tells you, mathematically, which direction to go to reach the bottom. “Descent” means “go downhill toward the minimum”.

        I’m glossing over a lot of details, particularly what a “surface” actually means in the high-dimensional spaces that AI uses, but a lot of problems in mathematical optimization are solved like this. One of the steps in training an AI agent is an optimization, which often does use a gradient descent algorithm (a tiny sketch below). That said, not every process that uses gradient descent is necessarily AI or even machine learning; I’m actually taking a course this semester where a bunch of my professor’s research is in optimization algorithms that don’t use gradient descent at all!
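
        A tiny sketch, assuming a made-up two-dimensional “surface” f(x, y) = (x - 1)^2 + (y + 2)^2:

          # The gradient of f points uphill, so we repeatedly step the opposite way.
          def grad(x, y):
              return (2 * (x - 1), 2 * (y + 2))

          x, y = 5.0, 5.0      # initial guess somewhere on the hill
          step = 0.1
          for _ in range(200):
              gx, gy = grad(x, y)
              x, y = x - step * gx, y - step * gy   # descend toward the bottom

          print(round(x, 3), round(y, 3))  # approaches (1, -2), the lowest point of this surface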

        • mbtrhcs@feddit.org

          This is a decent explanation of gradient descent but I’m pretty sure the meme is referencing the color gradients often used to highlight when something is AI generated haha

  • Destide@feddit.uk

    I hope this comment finds you well,

    This meme perfectly captures the desperate plea of tech companies trying to get users to embrace their AI features. It’s like they’re saying, “We promise it’s worth it—just look at that gradient!” 😅

    I am an person

    • Rolando@lemmy.world

      I’m sorry, but I don’t feel comfortable writing a reply to this comment because the only possible intelligent replies involve profanity or hate speech. Would you prefer a nice cookie recipe instead?

  • Got_Bent@lemmy.world

    Fucking Adobe PDF is becoming damn near unusable because of this. Frustrating because I absolutely have to use it all day every day.

      • ThePJN@sopuli.xyz

        The ability to filter comments actively as you mark them off as completed is magnificent.

        You mark a comment, it hides itself. Neat and tidy, fantastic.

        Why doesn’t Adobe do this, you ask? Who the fuck knows. Especially since you used to be able to do this in Acrobat.

        Why? Were people complaining it was too helpful?

        • Track_Shovel@slrpnk.net

          I’m still learning it. It has a ton of capabilities, but I haven’t gotten to them yet. Its OCR is kind of meh, even at the highest setting.

  • merthyr1831@lemmy.ml

    Please bro please let me generate a few garbled sentences for you please bro I fucking love to say stuff like “delve” please

  • ArchRecord@lemm.ee

    I have never once found an “AI” feature integrated by a corporation useful.

    I have only ever found “AI” useful when it’s unobtrusive, and something I chose to use manually. Sometimes an LLM is useful to use, but I don’t need it shilled to me inside a search bar or in a support chat that won’t solve my problem until I bypass the LLM.

    • LouNeko@lemmy.world

      I find customer support chatbots useful: they tend to ask the right questions before connecting me to an actual human, so I don’t have to explain myself over and over. They also categorize your problem, so you won’t be forwarded three times before you finally reach the right department. They’re essentially the “press 1 to…, press 2 to…” shtick from a service call, except the customer support person has access to your chat history.

      • ArchRecord@lemm.ee

        I find those kinds of chatbots useful, but those aren’t the ones I encounter 90% of the time. Most of the time, it’s a chatbot that summarizes the help articles I just read, giving faulty interpretations of the source material, that then goes on to never direct me to a real person unless I tell it multiple times that the articles it’s paraphrasing aren’t helping. (and sometimes, they have no live support at all, and only an LLM + support articles)

    • Wogi@lemmy.world

      I have occasionally found the Google search AI handy in pointing me in the right direction, like when I can’t remember or don’t know a particular term for something, it’s decent at giving me the term I’m actually searching for. Can’t trust it for shit as it’s intended to be used though.

      • ArchRecord@lemm.ee

        Oh yeah, it’s definitely useful for that!

        Since LLMs are essentially just very complicated probabilistic links between words, they seem to be extremely good at picking the exact word or phrase that even a thesaurus couldn’t get me.
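
        As a toy illustration (the words and probabilities here are completely made up), the model is basically ranking candidate continuations by how likely they are in context:

          # Made-up next-word probabilities for "a word for missing a place you've never ___"
          candidates = {"visited": 0.62, "seen": 0.21, "been": 0.14, "eaten": 0.03}
          best = max(candidates, key=candidates.get)
          print(best)  # "visited", the statistically likeliest continuation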

      • Xanvial@lemmy.world

        ChatGPT itself is usually useful. I usually ask it to explain something new as a starting point before searching it myself.

      • ArchRecord@lemm.ee

        I primarily end up using LLMs through DuckDuckGo’s private frontend alongside a search, so if my current search doesn’t yield the correct answer to my question (i.e. I ask for something but those keywords only ever turn up search results on a different, but similar topic) then I go to the LLM and ask a more refined question, that otherwise doesn’t produce any relevant results in a traditional keyword search.

        I also use integrated LLMs to format and distill my offhand notes, (and reformat arbitrary text based on specific criteria repeatedly for structured notes,) learn programming syntax more at my own pace and in my own way, and just generally get answers on more well-known topics a lot faster than I would scrolling past 5 pages of SEO-“optimized” garbage just designed to fill time for the ads to load before actually giving me a good answer.

    • Diurnambule@jlai.lu

      At work I use the summary function in Edge to generate code, since all other LLMs are blocked. It’s really helpful for burping out program templates when you tell it your grandmother is dying.

  • NigelFrobisher@aussie.zone

    I don’t even use LLMs to generate code because all we ever do anymore is migrate the horde of microservices with one or two endpoints that was going to fix software development forever three years ago to the latest hype hosting and devops platform that will somehow balance out the maintenance cost of having all those services this time for real.

  • Sam_Bass@lemmy.ml

    If the amount of money spent equalled the amount of utility in the stuff, it would be more popular than it is.

  • fubarx@lemmy.ml

    I actually like it when these code helpers guess from one line what the rest should be and suggest it. It’s even more fun when it keeps guessing and the suggestions get progressively more whacky. Then they just start making completely unrelated shit up.

    Once you say no, it goes back to the beginning and meekly repeats the very first suggestion, like a scolded puppy.

  • Robert Ian Hawdon@feddit.uk

    Kagi’s AI summariser is pretty good. It cites its sources and, by default, it only kicks in when you search with a ? on the end.

    To be fair, I’m pretty impressed with Kagi Search overall. But that’s a topic for a different thread, I think.

  • Roopappy@lemmy.ml

    I like how on Amazon, the “Rufus” thing always pops up over the stuff I’m trying to read.

    “How can I turn off rufus” didn’t come up with anything except how to turn it off in the app, not on the website.

    I had to use uBlock Origin to select and block it.