In my view, this is exactly the right approach. LLMs aren’t going anywhere; these tools are here to stay. The only question is how they will be developed going forward, and who controls them. Boycotting AI is a naive idea that mostly serves as a way to signal group membership.

Saying “I hate AI and I’m not going to use it” is trendy and makes people feel like they’re doing something meaningful, but it’s just another version of trying to vote the problem away. It doesn’t work. The real solution is to roll up our sleeves and build a version of this technology that’s open, transparent, and community driven.

  • chgxvjh [he/him, comrade/them]@hexbear.net
    link
    fedilink
    English
    arrow-up
    12
    ·
    edit-2
    11 days ago

    Over the last 5 years, Mozilla has absorbed and abandoned multiple fairly novel machine learning projects, only to copy whatever big tech is doing in a form that won’t run usefully on most people’s devices for years to come.

    Now AI is becoming the new intermediary. It’s what I’ve started calling “Layer 8” — the agentic layer that mediates between you and everything else on the internet.

    This is a god awful vision. People shouldn’t be consuming all their information through a stochastic model based on unquestioned consent.

    The rest really doesn’t have anything to do with a web browser either.

    And we’re still waiting for the other shoe to drop: we haven’t yet learned how Mozilla plans to make money in the future, both to fund browser development and all of this.

    • ☆ Yσɠƚԋσʂ ☆OP
      link
      fedilink
      English
      arrow-up
      4
      arrow-down
      1
      ·
      11 days ago

      I don’t know why people feel the need to proclaim that in every single post about LLMs. Clearly it’s not a subject that concerns you, so why not just move on and comment on things you care about? These kinds of comments only add toxicity and nothing of value.

      • hello_hello [comrade/them]@hexbear.net
        link
        fedilink
        English
        arrow-up
        8
        ·
        11 days ago

        When a corporate blog opens with “The future of intelligence is being set right now” and ends with “The future of intelligence is being set now. The question is whether you’ll own it, or rent it.”, I feel as though I’m being threatened into action when I’ve never used this technology for anything requiring intelligence.

        Woke AI is never going to exist, no matter how much the bourgeois executives at Google’s antitrust body-shield headquarters want to wish-cast themselves back into relevance. Mozilla drops projects like Servo, graveyards products like Pocket, tries fooling people with “privacy preserving attribution”, and then turns around and has the gall to fucking threaten me with how I’m not going to be in their brave new world. Maybe I lose my patience after a while.

        Mozilla is missing the forest for the trees: LLMs are just a way for Silicon Valley to financialize access to information. The technology itself doesn’t matter, only how it gives these corporations an excuse to move money around.

  • dat_math [they/them]@hexbear.net
    link
    fedilink
    English
    arrow-up
    6
    ·
    edit-2
    11 days ago

    I like a lot of what this says but I have trepidations about the final tipping point in this figure:

    Is mozilla acknowledging communism is a necessary condition for the fulfillment of the mozilla manifesto?

      • dat_math [they/them]@hexbear.net
        link
        fedilink
        English
        arrow-up
        3
        ·
        11 days ago

        I don’t think the mechanics of coordinating the distributed computing is the barrier so much as the economics involved in getting an extremely large scale distributed compute economy running. Is the proposal essentially a ratio system to measure balance/imbalance of use of the network?
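        A ratio system like the one private trackers use could be sketched as a simple ledger of compute contributed versus compute consumed. This is purely a hypothetical illustration (the class, threshold, and accounting in compute-hours are made up, not any real network’s design):

```python
from collections import defaultdict

class RatioLedger:
    """Track compute contributed vs consumed per user,
    like a private tracker's share ratio."""

    def __init__(self, min_ratio=0.5):
        self.contributed = defaultdict(float)  # compute-hours donated to the network
        self.consumed = defaultdict(float)     # compute-hours used from the network
        self.min_ratio = min_ratio             # assumed policy threshold

    def record_contribution(self, user, hours):
        self.contributed[user] += hours

    def record_usage(self, user, hours):
        self.consumed[user] += hours

    def ratio(self, user):
        used = self.consumed[user]
        # Users who have consumed nothing are unconstrained.
        return float("inf") if used == 0 else self.contributed[user] / used

    def may_submit_work(self, user):
        """Gate new jobs on maintaining a minimum share ratio."""
        return self.ratio(user) >= self.min_ratio
```

        A user who donates two compute-hours and consumes one has a ratio of 2.0 and can keep submitting jobs; let their consumption outpace contribution past the threshold and the network stops accepting their work.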

        • ☆ Yσɠƚԋσʂ ☆OP
          link
          fedilink
          English
          arrow-up
          3
          ·
          11 days ago

          Think of it like SETI@home or BitTorrent, but for compute. Most computers aren’t working at 100% capacity at all times. So, if you had a network where people ran a background service that shared a bit of their computing power, you could have a huge distributed computing network.

          There are two aspects to this: one is training models, which is the really expensive and compute-intensive process, and the other is running them (inference). The cost of running models continues to drop, and you can already run models locally that needed a whole data centre a couple of years ago. I expect inference costs will keep falling; there are already many published papers outlining how that can be done, and a lot of these approaches are complementary. People just haven’t got around to implementing them yet.
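          The background-service idea can be sketched in a few lines. This is a hypothetical illustration, not any real project’s code: it uses a Unix load-average heuristic for idleness (the threshold is an assumption) and only processes shared tasks while the machine looks idle.

```python
import os
import time

LOAD_THRESHOLD = 0.5  # assumed cutoff: contribute only when 1-minute load is below this

def machine_is_idle() -> bool:
    """Heuristic idleness check based on the Unix load average."""
    try:
        one_minute, _, _ = os.getloadavg()  # raises OSError where unsupported
    except OSError:
        return False
    return one_minute < LOAD_THRESHOLD

def contribute(task_queue, is_idle=machine_is_idle):
    """Drain shared-compute tasks, but only while the machine is idle.

    Each task is a (function, argument) pair; a real network would
    fetch tasks from peers instead of a local list.
    """
    results = []
    while task_queue:
        if not is_idle():
            time.sleep(1)  # back off while the owner is using the machine
            continue
        fn, arg = task_queue.pop(0)
        results.append(fn(arg))
    return results
```

          A real client would also need rate limiting and a way to fetch and return work over the network, but the core loop really is this small.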

          • dat_math [they/them]@hexbear.net
            link
            fedilink
            English
            arrow-up
            2
            ·
            11 days ago

            Think of it like SETI@home or BitTorrent, but for compute. Most computers aren’t working at 100% capacity at all times. So, if you had a network where people ran a background service that shared a bit of their computing power, you could have a huge distributed computing network.

            Yeah, I skimmed the petals repo. Again, I don’t think there’s a significant mechanical barrier to overcome, but I do see an economic one. Letting my gpu sit idly when I’m not actively using it is essentially free, but to convince me to run it for a project like petals I’d have to earn something valuable enough to offset my cost for power for the gpu and cooling my house. Is the only value returned by joining the petals network the capacity to run my own distributed training/inference on the same network? Would usage be balanced by some kind of ratio system similar to private tracker groups or have others proposed a kind of cryptocurrency? Aside: how does the network verify that the results of distributed computation are genuine and that a user isn’t taking advantage of the network (or is this not possible because it would corrupt a user’s own results as well?)?

            Sorry I have a lot of questions and not enough time to read the petals paper linked on the repo until tomorrow. If the answers are “read the damn paper”: bean

            • ☆ Yσɠƚԋσʂ ☆OP
              link
              fedilink
              English
              arrow-up
              3
              ·
              11 days ago

              I mean, we already do this for stuff like torrents; I don’t think it would be that different. The whole thing works through scale: you amortize the work across many machines, so there’s no big cost for any individual. And the valuable return would presumably be an app that does useful things. So, you use the app, and while you use it, you’re also contributing some resources. From my reading, that’s what Mozilla is proposing: they want to add features to the browser that improve the experience, and maybe while you use the browser you also do a bit of computing right through it to help the training network.

              There are plenty of algorithms for things like load balancing, and there’s even homomorphic encryption, which allows computing on encrypted data. So, you could send something out to be processed without people even knowing what it is they’re processing. The usual way to verify a result is genuine is to give the same task to at least two nodes and compare their outputs. These are all solved problems with existing implementations in wide use.

                • ☆ Yσɠƚԋσʂ ☆OP
                  link
                  fedilink
                  English
                  arrow-up
                  2
                  ·
                  11 days ago

                  We’ll see what they actually do, of course; I’m just showing that there is absolutely a viable path towards something like this. Actually getting a discussion going is the first step, and I think it’s a very good thing that Mozilla is doing something constructive in this space. This is far more productive than people just whinging about how much they hate AI.