• Hot Potato@lemmy.world · 4 months ago

    For the people who didn’t read the article, here’s a TL;DR: when you open a Google Doc, a Gemini sidebar appears so you can ask questions about the document. In this case, it summarized the document without the user asking.

    The article title makes it seem like they are using your files to train AI, but no proof of that exists (yet).

    • GolfNovemberUniform@lemmy.ml · 4 months ago

      At the very least, the data is sent to Gemini servers. That alone could be illegal, but I’m not sure. What I’m more sure about is that they do use the data to train the models.

      • poVoq@slrpnk.net · 4 months ago

        Since it is Google Docs, the data is already on Google’s servers. But yeah, it doesn’t exactly instill confidence in the confidentiality of documents on Google Docs.

    • sunzu@kbin.run · 4 months ago

      Thank you for the service!

      I see your point re: training, but isn’t the entire point of getting peasants to use their models to train them more?

      • eRac@lemmings.world · 4 months ago

        Generative AI doesn’t get any training while in use. The explosion in public AI offerings falls into three categories:

        1. Saves the company labor by replacing support staff
        2. Used to entice users by offering features competitors lack (or as catch-up after competitors have added them for this reason)
        3. Because AI is the current hot thing that gets investors excited

        To make a good model you need two things:

        1. Clean data that is tagged in a way that allows you to grade model performance
        2. Lots of it

        User data might meet need 2, but it fails at need 1. Running random data through neural networks to make it more exploitable (more accurate interest extraction, etc.) makes sense, but training on that data doesn’t.
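
        A minimal sketch of that distinction (toy Python; the names and data are made up for illustration, not any real training pipeline): grading a model requires examples with known labels, and a pile of raw user documents has no labels to grade against.

        ```python
        # Toy illustration: evaluation needs tagged data; raw user data has no tags.

        labeled_data = [
            ("add glue so the cheese sticks", "bad_advice"),   # clean, tagged: gradable
            ("preheat the oven to 220 C",     "good_advice"),
        ]

        raw_user_docs = [
            "q3 planning meeting notes",   # plenty of volume (meets need 2)...
            "asdfgh test test delete me",  # ...but untagged and noisy (fails need 1)
        ]

        def grade(model, tagged):
            """Accuracy only makes sense when every example has a known label."""
            return sum(model(text) == label for text, label in tagged) / len(tagged)

        naive_model = lambda text: "bad_advice"   # stand-in "model"
        print(grade(naive_model, labeled_data))   # 0.5 -- gradable against labels
        # grade(naive_model, raw_user_docs)       # ValueError: there are no labels
        ```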

        This is clearly demonstrated by Google’s search AI, which learned lots of useful info from Reddit but also learned absurd lies with the same weight. Not just overtuned-for-confidence lies, but straight-up glue-the-cheese-on lies.

        • sunzu@kbin.run · 4 months ago

          Thank you for explaining this.

          OK, so what is ChatGPT’s angle here, providing these services for “free”?

          What do they get out of it? Or is this just a Google-type play to get you in the door, then data mine?

          • eRac@lemmings.world · 4 months ago

            They have two avenues to make money:

            1. Sell commercial services such as customer support bots. They get customers thanks to the massive buzz their free services generated.
            2. Milk investors, which is the real way to make money.