• Emotional_Series7814@kbin.cafe
    10 months ago

    “We believe that users should have a say in how their attention is directed, and developers should be free to experiment with new ways of presenting information,” Bluesky’s chief executive, Jay Graber, told me in an email message.

    Of course, there are also challenges to algorithmic choice. When the Stanford political science professor Francis Fukuyama led a working group that in 2020 proposed that outside entities offer algorithmic choice, critics chimed in with many concerns.

    Robert Faris and Joan Donovan, then of Harvard’s Shorenstein Center, wrote that they were worried that Fukuyama’s proposal could let platforms off the hook for their failures to remove harmful content. Nathalie Maréchal, Ramesh Srinivasan and Dipayan Ghosh argued that his approach would do nothing to change some tech platforms’ underlying business model, which incentivizes the creation of toxic and manipulative content.

    Mr. Fukuyama agreed that his solution might not help reduce toxic content and polarization. “I deplore the toxicity of political discourse in the United States and other democracies today, but I am not willing to try solving the problem by discarding the right to free expression,” he wrote in response to the critics.

    When she ran the ethics team at Twitter, Rumman Chowdhury developed prototypes for offering users algorithmic choice. But her research revealed that many users found it difficult to envision having control of their feed. “The paradigm of social media that we have is not one in which people understand having agency,” said Ms. Chowdhury, whose Twitter team was let go when Mr. Musk took over. She went on to found the nonprofit Humane Intelligence.

    But just because people don’t know they want it doesn’t mean that algorithmic choice is not important. I didn’t know I wanted an iPhone until I saw one.

    And with another national election looming and disinformation circulating wildly, I believe that asking people to choose disinformation, rather than accept it passively, would make a difference. If users had to pick an antivaccine news feed, and see that there are other feeds to choose from, the existence of that choice would itself be educational.

    Algorithms make our choices invisible. Making those choices visible is an important step in building a healthy information ecosystem.