The majority of U.S. adults don’t believe the benefits of artificial intelligence outweigh the risks, according to a new Mitre-Harris Poll released Tuesday.

  • ShadowRam@kbin.social · 1 year ago

    The majority of U.S. adults don’t understand the technology well enough to make an informed decision on the matter.

    • Moobythegoldensock@lemm.ee · 1 year ago

      If you look at the poll, the concerns raised are all valid. AI will most likely be used to automate cyberattacks and identity theft, and to spread misinformation. I think the benefits of the technology outweigh the risks, but these issues are very real possibilities.

    • meseek #2982@lemmy.ca · 1 year ago

      Informed or not, they aren’t wrong. If there is even an iota of a chance that something can be misused, it will be. Human nature. AI will be used against everyone. Its potential for good is equally as strong as its potential for evil.

      But imagine this. You get laid off. At that moment, bots are contacting your bank, LinkedIn, and most of the financial lenders about the incident. Your credit is flagged as your income has dropped significantly. Your bank seizes the opportunity and jacks up your mortgage rates. Lenders are also making use of the opportunity to seize back their merchandise as you’ll likely not be able to make payments and they know it.

      Just one likely incident when big brother knows all and can connect the dots using raw compute power.

      Having every little secret parcelled out across the internet because we live in the digital age is not something humanity needs.

      I’m actually stunned that even here, among the tech nerds, you all still don’t realize how much digital espionage is being done on the daily. AI will only serve to help those in power grow bigger.

      • treadful@lemmy.zip · 1 year ago

        But imagine this. You get laid off. At that moment, bots are contacting your bank, LinkedIn, and most of the financial lenders about the incident. Your credit is flagged as your income has dropped significantly. Your bank seizes the opportunity and jacks up your mortgage rates. Lenders are also making use of the opportunity to seize back their merchandise as you’ll likely not be able to make payments and they know it.

        None of this requires “AI.” At most AI is a tool to make this more efficient. But then you’re arguing about a tool and not the problem behavior of people.

      • aidan@lemmy.world · 1 year ago

        AI is not bots; most of that would be easier to do with traditional code than with a deep learning model. But the reality is that there is no incentive for these entities to cooperate with each other.

    • cybersandwich@lemmy.world · 1 year ago

      But our elected officials like McConnell, Feinstein, Sanders, Romney, Manchin, Blumenthal, and Markey have us covered.

      They are up to speed on the times and know exactly what our generation’s challenges are. I trust them to put forward meaningful legislation that captures a nuanced understanding that will protect the interests of the American people while positioning the US as a world leader on these matters.

    • ZzyzxRoad@lemm.ee · 1 year ago

      Seeing technology consistently putting people out of work is enough for people to see it as a problem. You shouldn’t need to be an expert in it to have an opinion when it’s being used to threaten your source of income.

      Teachers have to do more work and put in more time now because ChatGPT has affected education at every level. Educators already get paid dick to work insane hours of skilled labor, and students have enough on their plates without having to spend extra time in the classroom. It’s especially unfair when every student has to pay for the actions of the few dishonest ones. Pretty ironic how it’s set us back technologically, to the point where we can’t use the tech that’s been created and implemented to make our lives easier. We’re back to sitting at our desks with a pencil and paper for an extra hour a week.

      There are already AI “books” being sold to unknowing customers on Amazon. How long will it really be until researchers are competing with it? Students won’t be able to recognize the difference between real and fake academic articles. They’ll spread incorrect information after stealing pieces of real studies without the authors’ permission, then mash them together into some bullshit that sounds legitimate. You know there will be AP articles (written by AI) with headlines like “new study says xyz!” and people will just believe that shit.

      When the government can do its job and create failsafes like UBI to keep people’s lives and livelihoods from being ruined by AI and other tech, then people might be more open to it. But the Lemmy narrative that overtakes every single post about AI, that says the average person is too dumb to be allowed to have an opinion, is not only, well, fucking dumb, but also tone deaf and willfully ignorant.

      Especially when this discussion can easily go the other way, by pointing out that tech bros are too dumb to understand the socioeconomic repercussions of AI.

    • bob_wiley@lemmy.world · 1 year ago

      Those who do know it have a strong bias toward new tech, which blinds them to reality or any possible negatives. We’ve seen this countless times in tech. Like when NFTs were going to change the world: you couldn’t tell those guys otherwise without being branded as out of touch or as someone who doesn’t understand the tech.

      • ShadowRam@kbin.social · 1 year ago

        I mean, NFTs are a ridiculous comparison, because those who understood that tech were exactly the ones who said it was ridiculous.

        • bob_wiley@lemmy.world · 1 year ago

          I have to believe the crypto bros understood it; they were just blinded by dollar signs… like many of those involved in AI right now.

      • Echo Dot@feddit.uk · 1 year ago

        Wasn’t it the ones who didn’t understand NFTs who were the fan boys? Everyone who knew what they were said they were bloody stupid from the get-go.

    • archon@sh.itjust.works · 1 year ago

      You can make an observation that something is dangerous without intimate knowledge of its internal mechanisms.

      • ShadowRam@kbin.social · 1 year ago

        Sure you can, but that doesn’t change the fact that you’re ignorant of whether it’s dangerous or not.

        And these people are making ‘observations’ without knowledge of even the external mechanisms.

        • archon@sh.itjust.works · 1 year ago

          I’m sure I can name many examples of things I observed to be dangerous where the observation turned out to be correct. But sure, claim unilateral ignorance and dismiss anyone who doesn’t agree with your view.