  • Ok, I could have been clearer. Blame the long covid.

    You should be wearing N95/KF94 masks whenever you are outside of that closed room where you're alone, especially when you can't keep a 6' gap from others, washing your hands and using 70% ethanol hand sanitiser, and not touching all the things all the time. These are effective and backed by science. Additional measures such as blocking the gap under your door and buying air purifiers for your closed apartment, where you remain alone, are worthless additions to that routine.


  • I wear a mask when I leave the apartment, and use my pinky to open doors where possible. Elevator buttons get pushed with my keys. I haven’t caught it again in 2+ years doing this. The times I caught it were from my office.

    Covid is spread via water droplets and touch, so additional measures are pretty worthless beyond keeping your door shut.


  • References weren’t paywalled, so I assume this is the paper in question:

    Hofmann, V., Kalluri, P.R., Jurafsky, D. et al. AI generates covertly racist decisions about people based on their dialect. Nature (2024).

    Abstract

    Hundreds of millions of people now interact with language models, with uses ranging from help with writing [1,2] to informing hiring decisions [3]. However, these language models are known to perpetuate systematic racial prejudices, making their judgements biased in problematic ways about groups such as African Americans [4–7]. Although previous research has focused on overt racism in language models, social scientists have argued that racism with a more subtle character has developed over time, particularly in the United States after the civil rights movement [8,9]. It is unknown whether this covert racism manifests in language models. Here, we demonstrate that language models embody covert racism in the form of dialect prejudice, exhibiting raciolinguistic stereotypes about speakers of African American English (AAE) that are more negative than any human stereotypes about African Americans ever experimentally recorded. By contrast, the language models' overt stereotypes about African Americans are more positive. Dialect prejudice has the potential for harmful consequences: language models are more likely to suggest that speakers of AAE be assigned less-prestigious jobs, be convicted of crimes and be sentenced to death. Finally, we show that current practices of alleviating racial bias in language models, such as human preference alignment, exacerbate the discrepancy between covert and overt stereotypes, by superficially obscuring the racism that language models maintain on a deeper level. Our findings have far-reaching implications for the fair and safe use of language technology.