The majority of U.S. adults don’t believe the benefits of artificial intelligence outweigh the risks, according to a new Mitre-Harris Poll released Tuesday.

  • ShadowRam@kbin.social

    The majority of U.S. adults don’t understand the technology well enough to make an informed decision on the matter.

    • Moobythegoldensock@lemm.ee

      If you look at the poll, the concerns raised are all valid. AI will most likely be used to automate cyberattacks, identity theft, and to spread misinformation. I think the benefits of the technology outweigh the risks, but these issues are very real possibilities.

    • meseek #2982@lemmy.ca

      Informed or not, they aren’t wrong. If there is an iota of a chance that something can be misused, it will be. Human nature. AI will be used against everyone. Its potential for good is equally as strong as its potential for evil.

      But imagine this. You get laid off. At that moment, bots are contacting your bank, LinkedIn, and most of the financial lenders about the incident. Your credit is flagged as your income has dropped significantly. Your bank seizes the opportunity and jacks up your mortgage rates. Lenders are also making use of the opportunity to seize back their merchandise as you’ll likely not be able to make payments and they know it.

      Just one likely incident when big brother knows all and can connect the dots using raw compute power.

      Having every little secret parcelled over the internet because we live in the digital age is not something humanity needs.

      I’m actually stunned that even here, among the tech nerds, you all still don’t realize how much digital espionage is being done on the daily. AI will only serve to help those in power grow bigger.

      • treadful@lemmy.zip

        But imagine this. You get laid off. At that moment, bots are contacting your bank, LinkedIn, and most of the financial lenders about the incident. Your credit is flagged as your income has dropped significantly. Your bank seizes the opportunity and jacks up your mortgage rates. Lenders are also making use of the opportunity to seize back their merchandise as you’ll likely not be able to make payments and they know it.

        None of this requires “AI.” At most AI is a tool to make this more efficient. But then you’re arguing about a tool and not the problem behavior of people.

      • aidan@lemmy.world

        AI is not bots, most of that would be easier to do with traditional code rather than a deep learning model. But the reality is there is no incentive for these entities to cooperate with each other.

    • cybersandwich@lemmy.world

      But our elected officials like McConnell, Feinstein, Sanders, Romney, Manchin, Blumenthal, and Markey have us covered.

      They are up to speed on the times and know exactly what our generations challenges are. I trust them to put forward meaningful legislation that captures a nuanced understanding that will protect the interests of the American people while positioning the US as a world leader on these matters.

    • ZzyzxRoad@lemm.ee

      Seeing technology consistently putting people out of work is enough for people to see it as a problem. You shouldn’t need to be an expert in it to have an opinion when it’s being used to threaten your source of income. Teachers have to do more work and put in more time now because ChatGPT has affected education at every level. Educators already get paid dick to work insane hours of skilled labor, and students have enough on their plates without having to spend extra time in the classroom. It’s especially unfair when every student has to pay for the actions of the few dishonest ones. Pretty ironic how it’s set us back technologically, to the point where we can’t use the tech that’s been created and implemented to make our lives easier. We’re back to sitting at our desks with a pencil and paper for an extra hour a week.

      There are already AI “books” being sold to unknowing customers on Amazon. How long will it really be until researchers are competing with it? Students won’t be able to recognize the difference between real and fake academic articles. They’ll spread incorrect information after stealing pieces of real studies without the authors’ permission, then mash them together into some bullshit that sounds legitimate. You know there will be AP articles (written by AI) with headlines like “new study says xyz!” and people will just believe that shit.

      When the government can do its job and create fail safes like UBI to keep people’s lives/livelihoods from being ruined by AI and other tech, then people might be more open to it. But the lemmy narrative that overtakes every single post about AI, that says the average person is too dumb to be allowed to have an opinion, is not only, well, fucking dumb, but also tone deaf and willfully ignorant.

      Especially when this discussion can easily go the other way, by pointing out that tech bros are too dumb to understand the socioeconomic repercussions of AI.

    • bob_wiley@lemmy.world

      Those who do know it have a strong bias toward new tech, which blinds them to reality or any possible negatives. We’ve seen this countless times in tech. Like when NFTs were going to change the world: you couldn’t tell those guys otherwise without being branded out of touch or someone who doesn’t understand the tech.

      • Echo Dot@feddit.uk

        Wasn’t it the ones who didn’t understand NFTs who were the fan boys? Everyone who knew what they were said they were bloody stupid from the get-go.

      • ShadowRam@kbin.social

        I mean, NFTs are a ridiculous comparison, because those who understood that tech were exactly the ones who said it was ridiculous.

        • bob_wiley@lemmy.world

          I have to believe the crypto bros understood it; they were just blinded by dollar signs… like many of those involved in AI right now.

    • archon@sh.itjust.works

      You can make an observation that something is dangerous without intimate knowledge of its internal mechanisms.

      • ShadowRam@kbin.social

        Sure you can, but that doesn’t change the fact that you’re ignorant of whether it’s dangerous or not.

        And these people are making ‘observations’ without knowledge of even the external mechanisms.

        • archon@sh.itjust.works

          I’m sure I can name many examples of things I observed to be dangerous, where the observation turned out correct. But sure, claim unilateral ignorance and dismiss anyone who doesn’t agree with your view.

  • GreenBottles@lemmy.world

    Most adult Americans don’t know the difference between a PC tower and a monitor, or a modem and a PC, or an Ethernet cable and a USB cable.

  • Uncle_Iroh@lemmy.world

    Most U.S. adults also don’t understand what AI is in the slightest. Why should the opinions of people who are not in the slightest educated on the matter carry any weight lol.

      • GigglyBobble@kbin.social

        You need to understand to correctly classify the danger though.

        Otherwise you make stupid decisions, such as quitting nuclear energy in favor of coal because of an incident like Fukushima, even though that incident caused just a single casualty due to radiation.

      • StereoTrespasser@lemmy.world

        I’m over here asking chatGPT for help with a pandas dataframe and loving every minute of it. At what point am I going to feel the effects of nuclear warfare?
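For what it’s worth, the kind of dataframe task being described usually looks something like this (a hypothetical example; the data and column names are invented):

```python
import pandas as pd

# Hypothetical example of a routine dataframe question one might ask an LLM:
# group orders by region and compute the mean order value.
df = pd.DataFrame({
    "region": ["east", "west", "east", "west"],
    "order_value": [100.0, 250.0, 300.0, 50.0],
})
mean_by_region = df.groupby("region")["order_value"].mean()
print(mean_by_region["east"])  # 200.0
```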

        • walrusintraining@lemmy.world

          I’m confused how this is relevant. Just pointing out this is a bad take, not saying nukes are the same as AI. chatGPT isn’t the only AI out there btw. For example NYC just allowed the police to use AI to profile potential criminals… you think that’s a good thing?

              • Jerkface@lemmy.world

                The take is “let’s not forget to hold people accountable for the shitty things they do.” AI is not a killing machine. Guns aren’t particularly productive.

      • WhyIDie@kbin.social

        you also don’t have to understand how 5g works to know it spreads covid /s

        point is, I don’t see how your analogy works beyond the limited scope of only things that result in an immediate loss of life

        • walrusintraining@lemmy.world

          I don’t need to know the ins and outs of how the nazi regime operated to know it was bad for humanity. I don’t need to know how a vaccine works to know it’s probably good for me to get. I don’t need to know the ins and outs of personal data collection and exploitation to know it’s probably not good for society. There are lots of examples.

          • WhyIDie@kbin.social

            Okay, I’ll concede, my scope was also pretty limited. I still stand by not trusting the public to decide the best use of AI, when most people think what we have now is anything more than supercharged statistics.
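The “supercharged statistics” framing can be made concrete with a toy next-word predictor (an illustrative sketch only; real LLMs learn representations at vastly larger scale, but the underlying task of predicting the next token from statistics of prior text is the same shape):

```python
from collections import Counter, defaultdict

# Toy bigram model: predict the next word purely from observed frequencies.
corpus = "the cat sat on the mat the cat ate".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict(word):
    # Return the most frequent continuation observed after `word`.
    return counts[word].most_common(1)[0][0]

print(predict("the"))  # "cat" (observed twice, vs "mat" once)
```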

          • linearchaos@lemmy.world

            I can certainly grant that “you” don’t need to know, but there are a lot of differing opinions on even the things you’re talking about among the people in this very community.

            I would say that the royal we does need to know, because a lot of people hold opinions on facts that don’t line up with the actual facts. Sure, not you, not me, but a hell of a lot of people.

            • walrusintraining@lemmy.world

              I don’t disagree that people are stupid, but the majority of people got/supported the vaccine. Majority is sometimes a good indicator, that’s how democracy works. Again, it’s not perfect, but it’s not useless either.

      • Uncle_Iroh@lemmy.world

        You chose an analogy with the most limited scope possible, but sure, I’ll go with it. To understand exactly how dangerous an atomic bomb is without just looking up Hiroshima, you need at least some knowledge of the subject; you’d also have to understand all the nuances. The thing about AI is that most people haven’t a clue what it is, how it works, or what it can do. They just listen to the shit their Telegram-loving uncle spewed at the family gathering. A lot of people think AI is fucking sentient lmao.

        • walrusintraining@lemmy.world

          I don’t think most people think ai is sentient. In my experience the people that think that are the ones who think they’re the most educated saying stuff like “neural networks are basically the same as a human brain.”

          • Uncle_Iroh@lemmy.world

            You may not think so, yet a software engineer from Google, Blake Lemoine, thought LaMDA was sentient, and he took a lot of idiots down with him when he went public with those claims. Not to mention the movies that have been made on the premise of sentient AI.

            Your anecdotal experience and your feelings don’t in the slightest change the reality that there are tons of people who think AI is sentient and will somehow start some fucking robo revolution.

    • kitonthenet@kbin.social

      Because they live in the same society as you, and they get to decide who goes to jail as much as you do

    • gravitas_deficiency@sh.itjust.works

      You can not know the nuanced details of something and still be (rightly) sketched out by it.

      I know a decent amount about the technical implementation details, and that makes me trust its use in (what I perceive as) inappropriate contexts way less than the average layperson.

    • Franzia@lemmy.blahaj.zone

      Well, and being a snob about it doesn’t help. If all the average Joe knows about AI is what Google or OpenAI pushed to corporate media, that shouldn’t be where the conversation ends.

      • Uncle_Iroh@lemmy.world

        The average Joe can have their thoughts on it all they want, but their opinions on the matter aren’t really valid or of any importance. AI is best left to the people who have a deep knowledge of the subject, just as nuclear fusion is best left to scientists studying the field. I’m not going to tell average Joe the mechanic that I think the engine he just rebuilt might blow up, because I have no fucking clue about it. Sure, I have some very basic knowledge; that’s pretty much where it ends, too.

  • Endorkend@kbin.social

    The problem is that there is no real discussion about what to do with AI.

    It’s being allowed to be developed without much of any restrictions and that’s what’s dangerous about it.

    Like how some places are starting to use AI to profile the public Minority Report style.

    • pavnilschanda@lemmy.world

      Yep. It’s either “embrace the future, adapt or die” or “let’s put the technological genie back in the bottle”. No actual nuance.

      • PopOfAfrica@lemmy.world

        The problem is capitalism puts us in this position. Nobody is abstractly upset that the jobs we hate can now be automated.

        What is upsetting is that we won’t be able to eat because of it.

    • RememberTheApollo_@lemmy.world

      Depends on who you talk to. If you’re a business that can replace human labor with AI, you’re probably discussing it pretty hard.

      What restrictions should it have? And how would you implement them? Because it would certainly end up as “you can’t make X with AI, unless of course you’re a big business that can profit off of it.”

  • Dasnap@lemmy.world

    The past decade has done an excellent job of making people cynical about any new technology. I find that looking at what crypto bros are currently interested in is a good canary for what I should be suspicious of.

    • iopq@lemmy.world

      The vaccine saved millions of lives, yet people will be cynical despite reality

      • Dasnap@lemmy.world

        I feel like anti-vaccine groups have been around for a good chunk of time, but they certainly seemed to get a boost from the internet.

        • rayyyy@kbin.social

          It doesn’t help that big corporations promoted 100% hydrogenated Crisco as a healthy alternative to lard, or the drug thalidomide that caused horrible birth defects.
          People need to be informed, but many are swamped with the task of making a living and constant social “guidance”.

        • huginn@feddit.it

          If more of your family and friends are dying why would you avoid the ounce of prevention? That doesn’t make sense

          • GigglyBobble@kbin.social

            They wouldn’t attribute it to the virus but something like 5G radiation. And yes, it doesn’t make sense.

    • raktheundead@fedia.io

      It’s also worth noting that the same VCs who backed cryptocurrency have pivoted to generative AI. It’s all part of the same grift, just with different clothes.

      • WldFyre@lemm.ee

        Most major companies didn’t touch crypto with a 10ft pole, but they’ve leapt at the chance to use AI tech. I don’t think it’s the same grift at all personally.

        • raktheundead@fedia.io

          A lot of companies investigated cryptocurrency obliquely; “blockchain” was the hype word in tech for several years. And several of those companies had a serious sunk-cost fallacy going when they perpetuated their blockchain projects, despite blockchain at best being a case of Worse Is Better, where a solution that sucks but exists can beat a perfect option that doesn’t.

    • Fermion@feddit.nl

      I am really disappointed that crypto became synonymous with speculative “investing.” The core blockchain technology seems like it could be useful for enhancing privacy online. However, the majority of groups loudly advertising that they use crypto are exploitative money grabs.

    • kitonthenet@kbin.social

      It doesn’t hurt that the same companies that did all the things that made people cynical about technologies are the ones perpetrating this round of BS

  • orca@orcas.enjoying.yachts

    I work with AI and don’t necessarily see it as “dangerous”. CEOs and other greed-chasing assholes are the real danger. They’re going to do everything they can to keep filling human roles with AI so that they can maximize profits. That’s the real danger. That and AI writing eventually permeating and enshittifying everything.

    A hammer isn’t dangerous on its own, but becomes a weapon in the hands of a psychopath.

    • q47tx@lemmy.world

      Exactly. AI should remain a tool for the human to use, not something to replace the human.

        • Eccitaze@yiffit.net

          And if the odds of that happening are literally zero, what then? If the only feasible outcome of immediate, widespread AI adoption is an empty suit using the heel of their $750 Allen Edmonds shoe to grind the face of humanity even further into the mud, should we still plow on full steam ahead?

          The single biggest lesson humanity has failed to learn despite getting repeatedly smacked in the face since the industrial revolution is that sometimes new technologies and ideas aren’t worth the cost despite the benefits. Factories came and covered vast swaths of land in soot and ash, turned pristine rivers and lakes into flaming rivers of toxic sludge, and poisoned the earth. Cars choked the skies with smog, poisoned an entire generation with lead, and bulldozed entire neighborhoods and parks so that they could be paved over for parking lots and clogged freeways. Single use plastics choke the life out of our oceans, clog our waterways with garbage, and microplastics have infused themselves into our very biology, with health implications that will endure for generations. Social media killed the last remaining vestiges of polite discourse, opened the floodgates on misinformation, and gave a safe space for conspiracy theories and neonazis to fester. And through it all, we continue to march relentlessly towards a climate catastrophe that can no longer be prevented, with the only remaining variable being where the impact will lie on the spectrum from “life will suck for literally everyone, some worse than others” to “humanity will fall victim to its own self-created mass extinction event.”

          With multiple generations coming to the realization that all the vaunted progress of mankind will directly make their lives worse, an obvious trend line of humanity plowing ahead with the hot new thing and ignoring the consequences even after they become obvious and detrimental to society as a whole, and the many, instantly-obvious negative impacts AI can have, is it any wonder that so many are standing up and saying “No?”

    • Mjpasta@kbin.social

      So, because of greed and endless profit-seeking, we should expect all corporations to replace everything that can be replaced with AI…?

      • orca@orcas.enjoying.yachts

        I mean, they’re already doing it. Not in every role because not every one of them can be filled by AI, but it’s happening.

  • Queen HawlSera@lemm.ee

    At first I was all on board with artificial intelligence, in spite of being told how dangerous it was. Now I feel the technology has no practical application aside from providing a way to get a lot of sloppy, half-assed, and heavily plagiarized work done, because anything is better than paying people an honest wage for honest work.

    • nandeEbisu@lemmy.world

      AI is such a huge term. Google Lens is great: when I’m travelling I can take a picture of text and it will automatically get translated. Both of those are aided by machine learning models.

      Generative text and image models have proven to have more adverse effects on society.

      I think we’re at a point where we should start normalizing more specific terminology. It’s like saying you hate machines when you mean you hate cars, or refrigerators, or air conditioners. It’s too broad a term to be used most of the time.

      • CoderKat@lemm.ee

        Yeah, I think LLMs and AI art have overdominated the discourse to the degree that some people think they’re the only form of AI that exists, ignoring things like text translation, the autocompletion of your phone keyboard, Photoshop intelligent eraser, etc.

        Some forms of AI are of debatable value (especially in their current form). But there are other types of AI that most people consider highly useful, and I think we just forget about them because the controversial types are more memorable.

        • nandeEbisu@lemmy.world

          AI is a tool, its value is dependent on whatever the application is. Transformer architectures can be used for generating text or music, but they were also originally developed for text translation which people have fewer qualms with.

        • SnipingNinja@slrpnk.net

          ignoring things like text translation, the autocompletion of your phone keyboard, Photoshop intelligent eraser, etc.

          AFAIK two of those are generative AI based or as you said LLMs and AI art

        • nandeEbisu@lemmy.world

          It’s not a matter of slang; it’s that the term refers to too broad a thing. You don’t need to go as deep as the type of model; something like “AI image generation” or “generative language models” is what you would refer to. We’ll hopefully start converging on shorthand from there for specific things.

        • kicksystem@lemmy.world

          I’d like people to make a distinction between AI and machine learning, and between machine learning and neural networks (the word “deep” is redundant nowadays). Then they could have some sense of the different popular types of neural nets: GANs, CNNs, Transformers, stable diffusion. It might be nice if people knew what supervised, unsupervised, and reinforcement learning are. Lastly, people should have some sense of the difference between AI and AGI, and of what is not yet possible.

        • nandeEbisu@lemmy.world

          I’m kind of surprised people are more concerned with the output quality of ChatGPT, and not with where its training set is sourced from, as with image models.

          Language models are still at a stage where they aren’t really a product by themselves; they need to be cajoled into becoming a good product, like looking up context via a traditional search and feeding it to the model, or guiding it toward solving problems. That’s more of a traditional software problem that leverages large language models.

          Even the amount of engineering to go from a text-prediction model trained on a bunch of articles to something that infers it should put an answer after a question is a lot of work.
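The “looking up context via a traditional search” pattern can be sketched roughly like this (a minimal illustration; the search heuristic and prompt format are invented for the example, and a real system would send the assembled prompt to an actual model):

```python
# Retrieve context with a conventional keyword search, then prepend it to the
# prompt before calling the language model (the model call itself is omitted).
def keyword_search(query, documents):
    # Naive "traditional search": pick the document sharing the most words.
    q = set(query.lower().split())
    return max(documents, key=lambda d: len(q & set(d.lower().split())))

def build_prompt(query, documents):
    context = keyword_search(query, documents)
    return f"Context: {context}\n\nQuestion: {query}\nAnswer:"

docs = [
    "The capital of France is Paris.",
    "Pandas is a Python data analysis library.",
]
print(build_prompt("What is the capital of France?", docs))
```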

    • Franzia@lemmy.blahaj.zone

      This is basically how I feel about it. Capital is ruining the value this tech could have. But I don’t think it’s dangerous and I think the open source community will do awesome stuff with it, quietly, over time.

      Edit: where AI can be used to scan faces or identify where people are, yeah that’s a unique new danger that this tech can bring.

      • Alenalda@lemmy.world

        I’ve been watching a lot of GeoGuessr lately, and the number of people who can pinpoint a location given just a picture is staggering. Even for remote locations.

    • Chickenstalker@lemmy.world

      Dude. Drones and sexbots. Killing people and fucking (sexo) people have always been at the forefront of new tech. If you think AI is only for teh funni maymays, you’re in for a rude awakening.

      • mriormro@lemmy.world

        you think AI is only for teh funni maymays

        When did they state this? I’ve seen it used exactly as they have described. My inbox is littered with terribly written ai emails, I’m seeing graphics that are clearly ai generated being delivered as ‘final and complete’, and that’s not to mention the homogeneous output of it all. It’s turning into nothing but noise.

  • DarkGamer@kbin.social

    “Can’t we just make other humans from lower socioeconomic classes toil their whole lives, instead?”

    The real risk of AI/automation is if we fail to adapt our society to it. It could free us from toil forever but we need to make sure the benefits of an automated society are spread somewhat evenly and not just among the robot-owning classes. Otherwise, consumers won’t be able to afford that which the robots produce, markets will dry up, and global capitalism will stop functioning.

  • gmtom@lemmy.world
    1 year ago

    Most US adults couldn’t tell you what LLM stands for, never mind how stable diffusion works. So there’s not much point in asking them, as they won’t understand the benefits and the risks.

  • vzq@lemmy.blahaj.zone

    The problem is that I’m pretty sure that whatever benefits AI brings, they are not going to trickle down to people like me. After all, all AI investments are coming from the digital land lords and are designed to keep their rent seeking companies in the saddle for at least another generation.

    However, the drawbacks certainly are headed my way.

    So even if I’m optimistic about the possible uses of AI, I’m not optimistic about this particular strand of the future we’re headed toward.

  • Echo Dot@feddit.uk

    The general public don’t understand what they’re talking about, so it’s not worth asking them.

    What is the point of surveys like this? We don’t operate on direct democracy, so there’s literally no value in these things except to stir the pot.

  • balloflearning@midwest.social

    Generally, people are wary of disruptive technology. While this technology has potential to displace a plethora of jobs for the sake of increased productivity, companies won’t be able to move product if unemployment skyrockets.

    Regardless of what people think, the Pandora’s box of AI is opened and now the only way forward is to adapt.

  • hamid@lemmy.world

    I ask ChatGPT for really specific things, like creating template language and writing short PowerShell scripts I could write myself but don’t have the time for or don’t care about. It’s useful, but not revolutionary or risky for me.

  • peopleproblems@lemmy.world

    A majority of U.S. adults don’t believe jack shit about the benefits of most things.

    I’m more angry that I can’t use a copilot at work yet.