It’s less spicy than the usual r/StableDiffusion slop, but it’s just too cringe not to post.

  • UlyssesT [he/him]@hexbear.net · 5 points · 7 days ago

    A match made in bazinga hell: the condescending corporate-art aesthetics of Reddit’s favorite “if 99% of humanity dies because of capitalism that’s still a win for Team Humanity” corpo-propaganda videos mixed with the latest hype fixation of the same crowd of computer touchers: planet-burning treat printer shit.

    Kurzgesagt’s “win for team humanity” prediction could very well be fulfilled by 99% of humanity being wiped out and 1% languishing in the same planetary conditions that killed the other 99%, in part because of unchecked expansion of this LLM shit. :elmofire:

    • ExotiqueMatter · 2 points · 5 days ago

      “We just have to sit back until EnTRepReNeURs solve the climate catastrophe by the magic of green capitalism, and if that doesn’t work out we’ll just have to start over civilization after 99% of the population dies.”

    • KobaCumTribute [she/her]@hexbear.net (OP) · 5 points · 6 days ago

      unchecked expansion of this LLM shit

      Just to be clear, LLMs are something different, even if both sorts of models use tensor math. This is some hobbyist with awful taste (like 95+% of Stable Diffusion/Flux hobbyists) making a Flux LORA on consumer hardware with typical gaming power consumption - he’s a dork doing something cringe, but it’s about as innocuous as if he’d instead spent the time playing some GPU intensive game with the settings cranked up. It’s the giant chatbot training datacenters that are burning massive amounts of energy in the hopes that naive text prediction will somehow become smart if you just make it big enough and train it long enough, and then trying to deploy those dogshit chatbots everywhere they possibly can even though they still suck horribly.

      • UlyssesT [he/him]@hexbear.net · 3 points · edited · 6 days ago

        It blurs together for me, but if it isn’t wasting that much energy and dumping that much carbon, it’s fine as a hobby gimmick, I suppose.

        It’s still a problem on the corpo scale that is stoking and profiting from the hype wave.

        • KobaCumTribute [she/her]@hexbear.net (OP) · 4 points · 6 days ago

          it’s fine as a hobby gimmick, I suppose.

          I do want to clarify that my position is still that 90+% of Stable Diffusion/Flux hobbyists should be :pikmin-carry-l: :wojak-nooo: :pikmin-carry-r: :barbara-pit: because the scene is overrun with chuds, grifters, pedophiles, and people who are just too cringe. But the tech itself is cool and relatively innocuous at a hobbyist scale.

          It’s still a problem on the corpo scale that is stoking and profiting from the hype wave.

          Yeah, I think it’s basically a problem of scale and induced demand. One image generated with a hobbyist setup takes a few seconds to a few minutes depending on the model and GPU, is pretty comparable in energy usage to using that machine for more mundane rendering tasks for the same amount of time, and is probably not meaningfully distinct from how much energy the hours of making it with more traditional digital methods would use. The problem is that having such a fast and convenient way of producing images induces demand for more of them, so it’s not just one image, it’s dozens or hundreds or thousands of them, all but one or two of which will get thrown out. At the corporate scale it’s technically more efficient per image, but it scales even further and tries to draw in more people, so now it’s many millions upon millions of images, and because it’s an uncontrollable remote server instance instead of the comparatively sophisticated tools a local machine can run, every single one of those images is useless noise.

          I hate the corporate shit so much. It’s just all bad, all the time, at a huge cost, with no redeeming qualities whatsoever. With the hobbyist stuff there’s at least something interesting and potentially useful to it among all the bad.
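The scale-and-induced-demand argument above is easy to sanity-check with arithmetic. A back-of-envelope sketch; every number in it is an assumed, illustrative figure, not a measurement:

```python
# Back-of-envelope: energy for one hobbyist image vs. an hour of gaming
# on the same GPU. All numbers are assumptions for illustration only.
GPU_WATTS = 350          # assumed draw of a consumer GPU under full load
SECONDS_PER_IMAGE = 30   # assumed generation time for one image

wh_per_image = GPU_WATTS * SECONDS_PER_IMAGE / 3600   # watt-hours per image
wh_per_gaming_hour = GPU_WATTS * 1.0                  # one hour at full tilt

print(f"{wh_per_image:.1f} Wh per image")              # 2.9 Wh per image
print(f"{wh_per_gaming_hour:.0f} Wh per gaming hour")  # 350 Wh per gaming hour

# Induced demand is the catch: one image is nothing, but 500 throwaway
# generations add up to about four hours of gaming.
print(f"{500 * wh_per_image / wh_per_gaming_hour:.1f} gaming-hours for 500 images")
```

Under these assumed figures, a single generation really is negligible next to ordinary GPU use; it's the volume that moves the needle.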

          • UlyssesT [he/him]@hexbear.net · 2 points · 6 days ago

            But the tech itself is cool and relatively innocuous at a hobbyist scale.

            The tech is a set of tools, which unfortunately are being used primarily by tools right now.

            I can totally see a less fucked up society using this technology like pretty much anything else in art: as a means to help produce art, not a single button press to churn out slop.

            • KobaCumTribute [she/her]@hexbear.net (OP) · 3 points · 6 days ago

              Yeah, like just looking at this in a vacuum the tech as it stands now could probably let a team of animators eschew the need to go and contract out other studios to do a bunch of extra grunt work like hand interpolating between keyframes, etc. In a better system that would be amazing because it would mean that artists could produce things without the need to subordinate so many others to their vision and without needing the sorts of institutional backing necessary to get all those extra hands involved, and that artists wouldn’t get stuck doing thankless grunt work for someone else like they do now.

              But instead it’s used as a glorified gacha pull system for the worst people alive just hitting the treat button over and over, and when it does see corporate animation use it’ll be used to cut costs and pad exec salaries and investor profits instead of being used to pay artists better or allow artist-led projects to become more viable and prevalent. And that’s without getting into Hollywood’s interest in using it to make even shittier post production CGI effects for their ever worsening slop.

              • Belly_Beanis [he/him]@hexbear.net · 3 points · 6 days ago

                Tweening right now is already finicky and it’d be nice to have tools to make it better. What I’ve mostly seen is iterations of the entire image linked together: instead of rendering just a hand or mouth moving, the software generates an entirely new image similar to the previous frame. That’s an incredibly inefficient way of doing animation.

                I’ve wanted to do something like upload everything I’ve ever drawn and then train an AI to replicate my own technique. But the ethics behind setting a car on fire to save me 30~60 minutes of work isn’t something I’m interested in. Not to mention all the issues with copyright. Immediately my work will be paywalled. I won’t see a dime and the other user will be paying for something I’d give them for free.

                Whole thing is a fuck.

                • KobaCumTribute [she/her]@hexbear.net (OP) · 1 point · 6 days ago

                  I’ve wanted to do something like upload everything I’ve ever drawn and then train an AI to replicate my own technique. But the ethics behind setting a car on fire to save me 30~60 minutes of work isn’t something I’m interested in. Not to mention all the issues with copyright. Immediately my work will be paywalled. I won’t see a dime and the other user will be paying for something I’d give them for free.

                  What you’d want to do there is pick an open source model like SDXL or Flux and then train a LORA for it, which depending on your hardware you might be able to do locally in a couple of hours. There are also sites you can pay to do the training for you for a dollar or so, like civitai; you’d get the safetensor file back and could either make it free to download there or keep it private and distribute it however you like. With a small enough dataset it’s not that long or energy intensive a process, and you’d retain control of it yourself.
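For readers wondering what “training a LORA” actually involves: LoRA (Low-Rank Adaptation) leaves the base model’s weights frozen and learns only a small low-rank correction per layer, which is why it fits on consumer hardware and small datasets. A minimal NumPy sketch of the core idea; the class and all numbers here are illustrative, not any real library’s API:

```python
import numpy as np

class LoRALinear:
    """A frozen linear layer plus a trainable low-rank update:
    y = x @ (W + (alpha/r) * A @ B), where only A and B get trained."""

    def __init__(self, W, r=4, alpha=8, rng=None):
        rng = rng or np.random.default_rng(0)
        d_in, d_out = W.shape
        self.W = W                                    # frozen base weight
        self.A = rng.normal(0, 0.01, size=(d_in, r))  # trainable, tiny random init
        self.B = np.zeros((r, d_out))                 # trainable, zero init so the
                                                      # update starts as a no-op
        self.scale = alpha / r

    def __call__(self, x):
        # Base output plus the scaled low-rank correction.
        return x @ self.W + self.scale * (x @ self.A) @ self.B

W = np.eye(3)                       # stand-in for a pretrained weight matrix
layer = LoRALinear(W)
x = np.array([[1.0, 2.0, 3.0]])
# B is zero-initialized, so before any training the layer exactly
# matches the base model:
print(np.allclose(layer(x), x @ W))  # True
```

The trainable parameters per layer are `d_in*r + r*d_out` instead of `d_in*d_out`, which is why the resulting safetensor file is megabytes rather than gigabytes.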

                  You would have to tag your images yourself in ways that the machine can process, though, and I don’t know anything about that. Some models want keyword salad and others want natural language descriptions, and I couldn’t tell you what the best practices for either are.

                  That’s not to actively encourage going and doing that, of course, I’m just saying it’s more accessible and efficient at a hobbyist scale these days than you’d think.

              • UlyssesT [he/him]@hexbear.net · 2 points · edited · 6 days ago

                It doesn’t help that the tech’s loudest stans on this site run somewhere between smug tech inevitabilists (you know, the kind that also said we’d all be wearing VR headsets for corporate meetings by now and that we’d all be switching to cryptocurrency for our daily needs), who get lost somewhere in the is/ought fallacy because they treat this shit as inevitable no matter how harmful or undesirable it actually is, and the full blown “just like in the treats” fantasists, who think a sapient and (paradoxically) unconditionally loving mommy bangmaid is just around the corner and that the waifu of the future just needs a slightly bigger database and a few more destroyed forests and dried up lakes.

  • Belly_Beanis [he/him]@hexbear.net · 4 points · 7 days ago

    Holy shit you weren’t kidding. This is like…what I could do in MS Paint when I was in middle school and was making my own Pokémon knock-offs. I don’t mean that as an insult to Kurzgesagt, but rather the style is easy to replicate.

    Imagine being a grown-ass adult and needing an AI to do the paintbucket fill tool for you.

    • UlyssesT [he/him]@hexbear.net · 2 points · 7 days ago

      Imagine being a grown-ass adult and needing an AI to do the paintbucket fill tool for you.

      Energy expenditure and floods of carbon waste don’t matter as long as the end-user bazinga likes the convenience of the resulting treats.

      The only effort they put into them seems to be stanning for their existence, and maybe they’ll phase that out soon and just rely on even more LLMs to do that part for them too, at the low low price of a few additional acres of forests burned and lakes turned to dust. :elmofire: