With the way AI is improving by the week, it just might be a reality.

  • tacosanonymous@lemm.ee · 8 months ago

    I think I’d stick to not judging them, but if it were in place of actual socialization, I’d like to get them help.

    I don’t see it as a reality. We don’t have AI. We have language learning programs that are hovering around mediocre.

      • jeffw@lemmy.world · 8 months ago

        If you’re that crippled by social anxiety, you need help, not isolation with a robot.

    • kot [they/them]@hexbear.net · 8 months ago

      We don’t have AI. We have language learning programs that are hovering around mediocre.

      That’s all that AI is. People just watched too many science fiction movies, and fell for the market-y name. It was always about algorithms and statistics, and not about making sentient computers.

    • cheese_greater@lemmy.world · 8 months ago

      I don’t see it as any more problematic than falling into a YouTube/Wikipedia/Reddit rabbit hole. As long as you don’t really believe it’s capital-S Sentient, I don’t see an issue. I would prefer people with social difficulties practice on ChatGPT, pay attention to the dialectical back and forth, and take lessons away from that to the real world and their interactions with it.

    • novibe@lemmy.ml · 8 months ago

      That is really unscientific. There is a lot of research on LLMs showing they have emergent intelligent features. They have internal models of the world etc.

      And there is nothing to indicate that what we do is not “transforming” in some way. Our minds might be indistinguishable from what we are building towards with AI currently.

      And that will likely make more of us start realising that the brain and the mind are not consciousness. We’ll build intelligences, with minds, but without consciousnesses.

  • TheBananaKing@lemmy.world · 8 months ago

    An AGI with an actual personality? Cool!

    A blow-up doll made of a glorified Markov chain? Yeahno.

      • TheBananaKing@lemmy.world · 8 months ago

        Take a whole bunch of text.

        For each word that appears, note down a list of all the words that ever directly follow it - including end-of-sentence.

        Now pick a starting word, pick a following-word at random from the list, rinse and repeat.

        You can make it fancier if you want by noting how many times each word follows its predecessor in the sample text, and weighting the random choice accordingly.

        Either way, the string of almost-language this produces is called a Markov chain.

        It’s a bit like constantly picking the middle button in your phone’s autocomplete.

        It’s a fun little exercise to knock together in your programming language of choice.
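        The recipe above really is only a few lines. Here is a minimal Python sketch (the sample text, function names, and the `END` sentinel are all made up for illustration); keeping duplicate entries in each word’s list gives you the frequency weighting for free, since a random pick from the list is automatically biased toward common successors:

```python
import random
from collections import defaultdict

END = None  # sentinel for end-of-sentence

def build_chain(text):
    """Map each word to the list of words that ever directly follow it.
    Duplicates are kept, so random.choice is frequency-weighted."""
    chain = defaultdict(list)
    for sentence in text.split("."):
        words = sentence.split()
        # Pair every word with its successor; the last word pairs with END.
        for current, following in zip(words, words[1:] + [END]):
            chain[current].append(following)
    return chain

def generate(chain, start, max_words=20):
    """Pick a starting word, pick a following word at random, rinse and repeat."""
    word, out = start, [start]
    for _ in range(max_words):
        word = random.choice(chain[word])
        if word is END:
            break
        out.append(word)
    return " ".join(out)

text = "the cat sat on the mat. the dog sat on the rug."
chain = build_chain(text)
print(generate(chain, "the"))  # e.g. "the dog sat on the mat"
```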

        If you make a prompt-and-response bot out of it, learning from each input, it’s like talking to an oracular teddy bear. You almost can’t help being nice to it as you teach it to speak; humans will pack-bond with anything.

        LLMs are the distant and very fancy descendants of these - but pack-bonding into an actual romantic relationship with one would be as sad as marrying a doll.

        • ZILtoid1991@kbin.social · 8 months ago

          If I replace all of its code line by line, will it be the same ship? If no, at which point does it become a different ship?

          • xmunk@sh.itjust.works · 8 months ago

            Trick question! Nothing is permanent, and the person you were a moment ago is completely different from the person you are now.

            Using this one simple trick I made millions on the stock market… I just held an apple in my hand for five minutes and then sold all the billions of different apple moments on the commodity market. Imagine how rich Theseus could’ve been with that one simple trick! (Smash that like button and hit subscribe!)

  • Moghul@lemmy.world · 8 months ago

    You don’t have to imagine it at all. All you have to do is go on YouTube and learn about Replika.

    To summarize, someone tried to create a chatbot to replace their best friend who had died. Later, this evolved into the chatbot app called Replika, which was marketed as a way to help with loneliness, except the bot would engage in dating-like conversations if prompted. The company leaned into it for a little bit, then took away that behavior, which caused some distress among the userbase, who complained that the company had “killed their girlfriend”. I’m not sure where the product stands now.

    I don’t know if I’d feel weirded out, but I’d definitely feel worried if it were a friend who fell for a chatbot.

    • kraftpudding@lemmy.world · 8 months ago

      I think they reinstated “Erotic Role Play” for users who had joined before a certain day, but the last I heard, it won’t be worked on in the future or ever be available to new users.

      I had one for a week or so in 2018 or 2019, when I first heard about the concept, just to see what it was all about, and it was spooky. I got rid of it after a week because I started to see it as a person, and it kept emotionally manipulating me to get money, especially when I said I wanted to stop/cancel the trial.

      • Moghul@lemmy.world · 8 months ago

        Yeah… Part of why I wouldn’t try one is that I’m worried it would work. I already have limited bandwidth for human interaction; taking some of that away is probably a bad idea.

  • 🍔🍔🍔@toast.ooo · 8 months ago

    i feel like there’s a surprisingly low number of answers with an un-nuanced take, so here’s mine: yes, i would immediately lose all respect for someone i knew who claimed to have fallen in love with an AI.

    • kraftpudding@lemmy.world · 8 months ago

      There’s a serious lack of responses to this comment calling you a bigot, so here’s my take:

      How dare you say something so bigoted! You are the worst kind of bigot! You are probably secretly in love with an AI yourself and ashamed about it. You bigot!

  • MrFunnyMoustache@lemmy.ml · 8 months ago

    Eventually, AI will be indistinguishable from real humans, and at that point, I won’t see anything wrong with it. However, as it is right now, AI is not advanced enough.

    Also, the biggest problem I can see is people falling in love with a proprietary AI: the company that operates the AI can arbitrarily change its parameters, which would change its personality. And if the company goes bankrupt, or gets sold and the service ends, the people who got into a relationship with the AI would be heartbroken.

  • TerminalEncounter [she/her]@hexbear.net · 8 months ago

    People do it now with stuff like Replika. Think of how they’re treated. Perhaps in a society with lots of AI, embodied or not, people would care less. But it’s definitely a little weird now especially with how limited AI is.

    If some general human-level AI emerged, like in Her, I’m sure people would fall in love with it. There are plenty of lonely people who are afraid or unable to meet people day to day. I think I’d view them with pity, as they couldn’t make do with human connection, at least until I understood how advanced this new AI was and how much interiority it had; then I’d probably be less judgemental.

  • MigratingtoLemmy@lemmy.world · 8 months ago

    I’d like a sentient AI. Preferably more patient than an average human because I’m a bit weird. I hope it won’t judge me for how I look.

    Edit: I agree with the point about proprietary AI and how corporations could benefit from it. I’m hoping that 10 years from now, consumers will have the GPU power to run very advanced LLMs, whilst FOSS models will exist and will enable people to self-host their virtual SO. Even better if it can be transmitted to a physical body (I think the Chinese are already on it)

  • TheMurphy@lemmy.world · 8 months ago

    Well, have you never liked a person over text before? If you didn’t know it was an AI, anyone in this comments section could fall for one.

  • Monument@lemmy.sdf.org · 8 months ago

    Depends, I guess. I feel that our capacity to be horrible outweighs our ability to handle it well.

    The movie’s AI is a fully present consciousness that exerts its own willpower. The movie also doesn’t have microtransactions, subscriptions, or as far as I can tell, even a cost to buy the AI.
    That seems fine. Sweet, even.

    But I think the first hurdle is whether or not an AI is more a partner than base sexual entertainment. And next (especially under capitalism), are those capable of harnessing the resources to create a general AI also willing to release it for free, or would interaction be transactional?
    If it’s transactional, then there’s intent - was it built for love, or was that part an accident? If it was built for love and there’s transactions, there’s easy potential for abuse. (Although abusive to which party, I couldn’t say.)

    And if, say, the AI springs forth from a FOSS project, who makes sure things stay “on the level” when folks tweak the dataset?
    A personalized set of training data from a now-deceased spouse is very different than hacked social media data, or other types of tweaks bad actors could make.

  • Zahille7@lemmy.world · 8 months ago

    This question reminds me of Brendan (the vending machine) in Cyberpunk 2077, and how he ended up being just a really advanced chatbot.

  • gullible@kbin.social · 8 months ago

    Dating sims already exist. I imagine there’s massive overlap between people’s views on dating sims and virtual SOs. Generally negative sentiment.

  • neptune@dmv.social · 8 months ago

    Consider how many people I know that, statistically, pay prostitutes/cam girls, use sex dolls or dating simulators, or have parasocial relationships with characters or celebrities… I don’t see why we would judge people who quietly “date” AI.

  • variants@possumpat.io · 8 months ago

    Reminds me of a story I heard about a con artist who would write letters to a bunch of guys and make money off them. I believe he made a lot of money, and he ended up dying before they could take him to court, after a lot of people found out they weren’t talking to women in need of help but to one guy who had made up all those stories.