Was this AI trained on an unbalanced data set? (Only black folks?) Or has it only been used to identify photos of black people? I have so many questions: some technical, some about media sensationalism.

  • DavidGarcia@feddit.nl · 11 months ago

    Putting any other issues aside for a moment (I’m not saying they aren’t also true): cameras need light to capture photos, and the more light they get, the better the image quality. Just look at astronomy: we don’t find the dark asteroids/planets/stars first, we find the brightest ones, and we know more about them than about bodies with lower albedo. So it is literally, physically harder to collect information about anything dark, and that includes black people. Compare a person with a skin albedo of 0.2 to one with 0.6: you get a third of the reflected light, and thus a third of the information, in the same exposure time, all else being equal.
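
    As a rough sanity check of that arithmetic, here’s a minimal sketch (made-up photon counts, shot-noise-limited imaging assumed; nothing here is a real camera model):

    ```python
    import math

    # Hypothetical illustration: photons per pixel reaching each face during
    # one exposure, before reflection. Not a measured value.
    INCIDENT_PHOTONS = 10_000

    for albedo in (0.2, 0.6):
        signal = albedo * INCIDENT_PHOTONS  # photons reflected toward the sensor
        snr = math.sqrt(signal)             # shot-noise-limited SNR = sqrt(signal)
        print(f"albedo {albedo}: {signal:.0f} photons, SNR ~ {snr:.0f}")

    # 0.6 vs 0.2 albedo gives 3x the raw signal, matching the "3x less
    # information" estimate (though SNR only improves by sqrt(3) ~ 1.7x).
    ```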

    Also consider that cameras have a limited dynamic range, and white skin is often much closer in brightness to the objects around us than black skin is. So the facial features of a black person can fall outside the camera’s dynamic range and simply be lost.
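
    A minimal sketch of that quantization effect (all numbers are made-up illustrations; real cameras apply tone curves, so treat this as a linear-sensor worst case):

    ```python
    import numpy as np

    # Hypothetical linear scene: exposure is set so a bright background fills
    # the 8-bit range, while the dark facial detail sits far below it.
    face_detail = np.linspace(0.005, 0.05, 50)   # 50 distinct shades on the face

    codes = np.round(face_detail * 255).astype(int)  # quantize to 8-bit code values
    print(f"{len(np.unique(codes))} distinct code values survive out of 50 shades")
    ```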

    The real issue with these AIs is that they aren’t well calibrated: the output confidence should mirror how often predictions are correct, so among 100 predictions made at 0.3 confidence, about 30 should be correct. Then any prediction below 90% confidence or so should be illegal for the police to use, or something like that. Basically, the model should tell you when it doesn’t have enough information, and the police should act appropriately on that.
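
    That property has a standard name, calibration, and a standard measurement, expected calibration error (ECE). A minimal sketch on toy data (random confidences, not any real system’s output):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy model: 10,000 confidences, with correctness drawn so that a
    # prediction made at confidence p is correct with probability p.
    conf = rng.uniform(0, 1, 10_000)
    correct = rng.uniform(0, 1, 10_000) < conf

    # ECE: within each confidence bin, accuracy should match mean confidence.
    bins = np.linspace(0, 1, 11)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (conf >= lo) & (conf < hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    print(f"ECE ~ {ece:.3f} (0.0 would be perfect calibration)")
    ```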

    I mean, really, facial recognition should be illegal for the police to use at all, but that’s beside the point.

  • DessertStorms@kbin.social · 11 months ago

    It’s amazing how hard some people will work to deny that demonstrable biases, shaped by the society we live in, exist in and massively impact science and technology, as if they themselves were above such things, all while demonstrating their own biases.

    • hh93@lemm.ee · 11 months ago

      I always wonder whether the people who push back so hard against the idea of systemic/structural racism genuinely believe they’re being oppressed when someone tries to address it, or whether they’re fully aware of the advantages they have just because they were born with the “right” skin colour in the right neighbourhoods, and oppose it for purely egoistic reasons because they don’t want to lose that advantage.

  • mohKohn@kbin.social · 11 months ago

    Twelve people. We’re talking about twelve people, so any conclusions are suspect. That being said, facial recognition struggling with black faces due to insufficient training data is an extremely common problem, so it’d be unsurprising.
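
    To put a number on how little 12 cases can tell you, here’s a sketch of a 95% Wilson confidence interval at n = 12 (the 10-of-12 split is hypothetical, not from the article):

    ```python
    import math

    def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
        """95% Wilson score interval for a binomial proportion."""
        p = successes / n
        denom = 1 + z**2 / n
        center = (p + z**2 / (2 * n)) / denom
        half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
        return center - half, center + half

    # Hypothetical: 10 of 12 cases share some property. The plausible
    # underlying rate still spans a huge range at this sample size.
    lo, hi = wilson_interval(10, 12)
    print(f"95% CI: {lo:.2f} to {hi:.2f}")   # roughly 0.55 to 0.95
    ```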

    • lntl@lemmy.ml (OP) · 11 months ago

      That’s exactly my point about media sensationalism. It’s really not a large sample; far more people have been arrested and imprisoned by the justice system without any AI involvement.

  • Yoruio@lemmy.ca · 11 months ago

    > Was this AI trained on an unbalanced dataset (only black folks?)

    It’s probably the opposite: the AI was likely trained on a dataset of mostly white people, and is thus more easily able to distinguish between white faces.

    It’s a problem in ML that has been seen before, especially for companies based in the US, where it’s simply easier to collect large numbers of images of white people than of people with other skin colors.

    It’s really not dissimilar to how humans work, either: people are generally better at distinguishing between two members of a race they grew up around, and you’ll make more mistakes when trying to identify people of races you’re less familiar with.

    The problem is when the police use these tools as an authoritative matching algorithm.
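
    One way that failure mode is usually audited is by comparing false match rates across groups; a toy sketch (all scores and the threshold are invented, not any vendor’s numbers):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy similarity scores for pairs of *different* people, by group.
    # Invented distributions: the underrepresented group gets noisier
    # embeddings, so impostor pairs score higher on average.
    impostor_scores = {
        "well_represented": rng.normal(0.30, 0.10, 5_000),
        "underrepresented": rng.normal(0.45, 0.10, 5_000),
    }
    THRESHOLD = 0.60  # hypothetical "it's a match" cutoff

    for group, scores in impostor_scores.items():
        fmr = float(np.mean(scores >= THRESHOLD))  # innocent people flagged as matches
        print(f"{group}: false match rate = {fmr:.2%}")
    ```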

    • gramathy@lemmy.ml · 11 months ago

      Also makes me wonder whether our standard digital color spaces, being bad at representing darker shades, contribute as well.
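
      That’s checkable in a few lines; a sketch using the standard sRGB transfer function (the luminance bands are arbitrary stand-ins for darker and lighter skin under the same lighting):

      ```python
      def srgb_encode(linear: float) -> int:
          """Linear luminance in [0, 1] -> 8-bit sRGB code value."""
          if linear <= 0.0031308:
              v = 12.92 * linear
          else:
              v = 1.055 * linear ** (1 / 2.4) - 0.055
          return round(255 * v)

      def codes_in_band(lo: float, hi: float) -> int:
          """Distinct 8-bit codes available across a linear-luminance band."""
          return srgb_encode(hi) - srgb_encode(lo)

      # Arbitrary illustrative bands (linear luminance under equal lighting).
      print("darker band  0.05-0.15:", codes_in_band(0.05, 0.15), "code values")
      print("lighter band 0.30-0.60:", codes_in_band(0.30, 0.60), "code values")
      ```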

    • lntl@lemmy.ml (OP) · 11 months ago

      I thought they would have trained it on mugshots. Either way, it should never be used to make direct arrests. I feel like its best use would be something like an anonymous tip line that leads to an investigation.

      • Yoruio@lemmy.ca · 11 months ago

        Using mugshots to train AI without consent feels illegal. Plus, they wouldn’t even make a very good training set, as the AI would only learn to identify perfectly front-facing photos shot in ideal lighting conditions.