• AutoTL;DR@lemmings.world · 8 months ago

    This is the best summary I could come up with:


    The products it assesses are rated across several AI principles, including trust, kids’ safety, privacy, transparency, accountability, learning, fairness, social connections, and benefits to people and society.
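If it helps to picture the rubric, here is a minimal sketch of how a per-product rating across those principles could be represented. The principle list comes from the summary above, but the score scale, field names, and the unweighted average are illustrative assumptions, not Common Sense Media's actual methodology.

```python
from dataclasses import dataclass, field

# Principles named in the summary; everything else here is assumed for illustration.
PRINCIPLES = [
    "trust",
    "kids_safety",
    "privacy",
    "transparency",
    "accountability",
    "learning",
    "fairness",
    "social_connections",
    "benefits_to_society",
]

@dataclass
class ProductRating:
    product_name: str
    # Assumed 0-100 score per principle; the real rubric may use a different scale.
    scores: dict[str, int] = field(default_factory=dict)

    def overall(self) -> float:
        """Unweighted mean across rated principles (an assumption, not the real method)."""
        return sum(self.scores.values()) / len(self.scores) if self.scores else 0.0

# Usage example with made-up numbers.
rating = ProductRating("ExampleGenAI", {p: 50 for p in PRINCIPLES})
print(f"{rating.product_name}: {rating.overall():.1f}")
```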

    “All generative AI, by virtue of the fact that the models are trained on massive amounts of internet data, host a wide variety of cultural, racial, socioeconomic, historical, and gender biases – and that is exactly what we found in our evaluations,” she said.

    Other generative AI models like DALL-E and Stable Diffusion had similar risks, including a tendency toward objectification and sexualization of women and girls and a reinforcement of gender stereotypes, among other concerns.

    That issue came to a head this week, as 404Media called out Civitai’s capabilities, but the debate over who bears responsibility (the community aggregators or the AI models themselves) continued on sites like Hacker News in the aftermath.

    “Consumers must have access to a clear nutrition label for AI products that could compromise the safety and privacy of all Americans—but especially children and teens,” said James P. Steyer, founder and CEO of Common Sense Media, in a statement.

    “If the government fails to ‘childproof’ AI, tech companies will take advantage of this unregulated, freewheeling atmosphere at the expense of our data privacy, well-being, and democracy at large,” he added.


    The original article contains 1,010 words; the summary contains 215 words. Saved 79%. I’m a bot and I’m open source!
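As a quick sanity check (not from the article, just recomputing the bot's own figures), the "Saved 79%" number follows directly from the two word counts:

```python
# Recompute the compression figure from the word counts quoted above.
original_words, summary_words = 1010, 215
saved = 1 - summary_words / original_words
print(f"Saved {saved:.0%}")  # prints "Saved 79%"
```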