I read the article but haven’t checked out the platform yet. Thought it might be useful for my fellow autistic people.

  • ZILtoid1991@kbin.social

    AI has bias issues. While humans can be aware of their biases and course-correct, AI can’t, and that’s before you even get to all the biased data it was trained on.
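
    To make that concrete, here’s a toy sketch (all data made up): a “model” that simply learns hire rates from biased historical decisions will reproduce that bias verbatim, because nothing in the training loop knows the history was unfair.

    ```python
    # A minimal sketch (hypothetical data) of bias inherited from training data.
    from collections import defaultdict

    # Made-up historical hiring decisions: (group, hired)
    history = [("A", True)] * 80 + [("A", False)] * 20 \
            + [("B", True)] * 40 + [("B", False)] * 60

    counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
    for group, hired in history:
        counts[group][0] += hired
        counts[group][1] += 1

    # The "model": predicted hire probability = observed historical rate.
    for group, (hired, total) in sorted(counts.items()):
        print(f"group {group}: learned hire probability = {hired / total:.0%}")
    # group A: 80%, group B: 40% -- the model mirrors the biased history.
    ```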

    • Haui@discuss.tchncs.deOP

      Ok, I understand. As someone who has worked with AI and in hiring in the past, I feel like a (specifically ND-focused) AI can’t do a worse job than traditional recruiting (which is also increasingly done with AI). But I might be wrong. Then again, so could you. Have a good one. :)

      • Pirky@lemmy.world

        Another thing to add: it can be difficult for an AI to “unlearn” things. So if it has learned a bias it shouldn’t have, getting rid of that bias is particularly hard.
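
        A toy illustration of that (all data synthetic; a tiny hand-rolled logistic regression stands in for the model): after heavy training on biased history, a short round of corrective fine-tuning barely moves the learned bias.

        ```python
        # A sketch of why "unlearning" is hard: bias absorbed from many
        # samples is not undone by a few corrective ones.
        import math, random

        random.seed(0)

        def sigmoid(z):
            return 1.0 / (1.0 + math.exp(-z))

        def train(samples, w, b, lr=0.1, epochs=5):
            # Plain SGD on logistic loss; x is a single group-indicator feature.
            for _ in range(epochs):
                for x, y in samples:
                    p = sigmoid(w * x + b)
                    w -= lr * (p - y) * x
                    b -= lr * (p - y)
            return w, b

        # Biased history: the x=1 group was hired 20% of the time, x=0 80%.
        biased = [(0, 1)] * 800 + [(0, 0)] * 200 \
               + [(1, 1)] * 200 + [(1, 0)] * 800
        random.shuffle(biased)
        w, b = train(biased, 0.0, 0.0)
        print(f"after biased training:   P(hire | x=1) ~ {sigmoid(w + b):.2f}")

        # "Unlearning" attempt: a short fine-tune on fair 50/50 examples.
        fair = [(1, 1), (1, 0)] * 5
        w, b = train(fair, w, b, epochs=1)
        print(f"after corrective update: P(hire | x=1) ~ {sigmoid(w + b):.2f}")
        # The probability creeps up from ~0.2 but stays well below 0.5:
        # 10 corrective samples don't undo what 2000 biased ones taught.
        ```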

      • sky@codesink.io

        It absolutely can do a worse job, and be more biased. Not to mention Sam Altman is backing it? Yeesh. I’m good.

        • Haui@discuss.tchncs.deOP

          Can you somehow prove that? I don’t see how “absolutely” reinforces your claim. If conventional hiring weren’t a bag of dicks, hiring companies (which are shit as well) wouldn’t make billions in revenue.

          But I don’t recognize Altman. The name sounds familiar. I might need to check him out.

          • 520@kbin.social

            AI can absolutely screw up these things as bad or worse than any other program.

            AI sucks at nuances it isn’t explicitly trained on. That’s how you get AIs at eating disorder charities recommending things like 500 calorie daily deficits (this actually happened).

            AI might be able to get a technically accurate translation, but can’t always tell what’s culturally offensive or colloquially given a new meaning.

            For example, in Spanish “soy” means “I am”, and “caliente” means “hot”. What do you think “Soy caliente” means?

            Well, if you got “I am hot”, Google Translate will actually agree with you… but that’s not what it means at all. What it actually means is “I am horny”.
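
            A toy sketch of the underlying problem (the dictionaries are illustrative stand-ins, not a real translation system): translating word by word can’t see meanings that only exist at the phrase level.

            ```python
            # Toy word-by-word "translator" vs. a phrase table for idioms.
            word_for_word = {"soy": "I am", "caliente": "hot",
                             "tengo": "I have", "calor": "heat"}

            # Idioms carry meanings the individual words don't reveal.
            idioms = {
                "soy caliente": "I am horny",   # NOT the literal "I am hot"
                "tengo calor": "I am hot",      # literally "I have heat"
            }

            def translate(phrase: str) -> str:
                key = phrase.lower()
                if key in idioms:               # idioms match as whole units
                    return idioms[key]
                # Fallback: naive literal gloss, word by word.
                return " ".join(word_for_word.get(w, w) for w in key.split())

            print(translate("Soy caliente"))    # -> I am horny
            print(translate("Tengo calor"))     # -> I am hot
            ```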

            • Haui@discuss.tchncs.deOP

              Yeah, I get it. Pretty rough around the edges, no doubt. I still don’t think this makes “AI-powered” or “AI-assisted” recruiting worse than conventional recruiting. That’s all I’m saying. It’s also a buzzword that gets used for a lot more than it’s worth, btw.

              • 520@kbin.social

                The quality of conventional recruiters can vary wildly. I’ve dealt with both actual piece-of-shit recruiters (the kind that try outright guilt-tripping and manipulation) and some amazing ones.

                • Haui@discuss.tchncs.deOP

                  Sure, that matches my experience, but the argument I was making is that it’s not going to be worse if you train an AI specifically to target ND folks. It’s probably going to be worse than the good recruiters and better than the worst.

                  • 520@kbin.social

                    You do realise that’s going to be a metric fuckton harder than targeting neurotypicals, right? Like, bordering on impossible.

                    The clue is in the D of ND. To put it another way, let’s forget the entire spectrum of ND for a second and focus on ASD.

                    You not only need to train your AI on every possible interaction quirk an ASD person can have, such as trigger phrases to avoid and jobs they absolutely will not be able to do; you also need it to be adaptable enough to be useful to high-functioning ASDers who can mask, to low-functioning people who may not be able to leave the house but can maybe do some light computing work, and to everyone in between. And you need it to be able to detect which one it is dealing with.

                    That’s an impossible task, because the exact combination of issues, quirks, triggers, etc. is often very rare, if not completely unique.

                    But surely the AI can learn what the quirks of an individual are, right? Nope. AI learning relies on large datasets to do its work, and those datasets will not exist for anything but the most common issues and quirks. The most an AI can do is avoid a given topic when asked.

                    Now extrapolate that to the entire ND community. Good luck.
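
                    A rough sketch of why those datasets can’t exist (every number here is made up for illustration): if quirk frequencies follow a long-tailed distribution, almost no quirk clears the sample count needed to learn it, no matter how big the overall dataset is.

                    ```python
                    # Toy long-tail model: quirk k occurs with Zipf-like weight 1/k.
                    N_PEOPLE = 1_000_000      # hypothetical population with labeled data
                    N_QUIRKS = 10_000         # hypothetical distinct quirks
                    MIN_EXAMPLES = 1_000      # assumed samples needed to learn one quirk

                    weights = [1 / k for k in range(1, N_QUIRKS + 1)]
                    total = sum(weights)
                    counts = [N_PEOPLE * w / total for w in weights]

                    learnable = sum(1 for c in counts if c >= MIN_EXAMPLES)
                    print(f"learnable quirks: {learnable} of {N_QUIRKS}")
                    # ~102 of 10,000: only the head of the distribution has enough
                    # data; the long tail -- the "completely unique" cases -- never will.
                    ```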

            • yetAnotherUser@feddit.de

              While it’s not Google Translate, a more advanced translation service does get this right.

              AI is surprisingly advanced, and there’s a lot more to translation than you might think. But you’re right: AI absolutely sucks at nuances it isn’t trained on. That’s pretty much the reason ChatGPT and other “general purpose” AIs will always perform (much) worse than specialized ones.

          • PsychedSy@sh.itjust.works

            I don’t know if there’s a great way to compare AI against worthless recruiters, so finding something objective might be difficult. AI is going to pick up on the systemic biases that exist in reality, and I’m not sure you can sanitize the data enough to avoid that.
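
            A toy sketch of the sanitizing problem (synthetic data; the features are illustrative stand-ins): even if you drop the protected attribute entirely, a correlated proxy feature lets the bias through.

            ```python
            # A sketch (synthetic data) of why dropping the protected attribute
            # doesn't sanitize a dataset: a correlated proxy reconstructs it.
            import random

            random.seed(1)

            rows = []
            for _ in range(10_000):
                protected = random.random() < 0.5
                # The proxy agrees with the protected attribute 90% of the time
                # (think zip code, school name, or an employment-gap flag).
                proxy = protected if random.random() < 0.9 else not protected
                # Historical labels are biased against the protected group.
                hired = random.random() < (0.3 if protected else 0.7)
                rows.append((proxy, hired))

            # A "sanitized" model sees only the proxy, never the attribute...
            stats = {True: [0, 0], False: [0, 0]}   # proxy -> [hired, total]
            for proxy, hired in rows:
                stats[proxy][0] += hired
                stats[proxy][1] += 1

            for proxy, (h, n) in stats.items():
                print(f"proxy={proxy}: hire rate ~ {h / n:.0%}")
            # ~34% vs ~66%: the split along group lines survives sanitization.
            ```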

            • Haui@discuss.tchncs.deOP

              I agree that this is unfortunate. I think what I’m trying to say is that we notice these flaws in AI, while recruiting in most companies is trash and most people familiar with AI have no idea how bad recruiting actually is.

          • Black AOC

            > But I don’t recognize Altman.

            OpenAI’s founder/CEO… So yeah, I’ll be taking two or three large steps back from this idea.

    • inspxtr@lemmy.world

      I think the bias issues will always be there, but they’re usually worse, detected later (or not at all), and exacerbated when the people working on the original problem don’t experience those issues themselves. E.g.: if most people working on facial recognition are white and male.

      While I do have my reservations about AI technologies, I think this is a worthwhile effort: the people encountering these issues work to identify and address them, and in this case they actually lead the effort rather than just consulting on it.

      They can lead the effort on collecting new data, adopt new ways of looking at data, and define metrics and objectives in a way that better fits the target audience. Based on the article, I think they are doing this.
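
      A sketch of what defining such metrics could look like (the data and the demographic_parity_gap helper are hypothetical, not from the article): one of the simplest fairness metrics is the gap in selection rates between groups.

      ```python
      # A minimal sketch of one fairness metric: the demographic-parity gap,
      # i.e. the spread in selection rates across groups. Data is made up.

      def demographic_parity_gap(decisions):
          """decisions: iterable of (group, selected) pairs."""
          stats = {}                              # group -> (selected, total)
          for group, selected in decisions:
              sel, total = stats.get(group, (0, 0))
              stats[group] = (sel + selected, total + 1)
          rates = {g: sel / total for g, (sel, total) in stats.items()}
          return max(rates.values()) - min(rates.values())

      decisions = [("NT", True)] * 60 + [("NT", False)] * 40 \
                + [("ND", True)] * 30 + [("ND", False)] * 70
      print(f"selection-rate gap: {demographic_parity_gap(decisions):.0%}")
      # 30% -- a gap this size would be a red flag worth auditing.
      ```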

      • 520@kbin.social

        How would a bias towards NDs work?

        ND is a wiiiiide spectrum of conditions, and even within those conditions, you have subsets of quirks that are rare if not unique to a person.

        How would an AI know how to tailor its operating methods and communication?

          • 520@kbin.social

            > By being trained on how that wide range of NDs communicate, what their symptoms are, how medical professionals diagnose them, etc.

            That’s the problem. The standard for NDs in terms of how they communicate can be literally anything that isn’t typical of an NT. Same with symptoms, and even medical professionals can often fuck up diagnoses.

            > NDs tend to recognize other NDs; if we can do it, an AI sure as hell can.

            There are plenty of NDs that are very good at masking. To the point where no one would be able to tell just by looking at them.

            And an AI doesn’t have the same datasets you do. You can look at their body language, listen to their voice, etc. Any privacy-respecting AI will have to go from written language alone. And have fun adapting your model for other languages!