First, a disclaimer: I work somewhere that is relevant to this topic[1], so I want to be extra clear that I am only communicating my personal views.

Les seems to be (maybe! he can reply if I’m wrongly interpreting) thinking about the sorts of responsibilities We The Public have assigned to entities like social media companies without them really, uh, rising well to meet the challenge. I have been thinking a lot recently about Parler and particularly about how misunderstandings about “digital space” imply very problematic things because they’re not tied to how the actual internet works.

So when I’ve been thinking about this kind of thing recently, I’ve been having very similar ideas to Les on this part:

It goes something like this - freedom of speech does not imply a right to amplification.

The former is your unfettered ability to speak using your own capacity. The latter is others relaying, repeating, augmenting your speech.

I believe the former is an individual right - balanced by the right of others’ expression.

The latter is not a right - because it would essentially demand others be enslaved in service to your speech.

The comparisons are clear. You’ve always had a right to go shout on a sidewalk. And when, say (to pick a company not carefully at all), Twilio drops Parler, that’s fine, because you’ve never had a right to force a publisher to carry your screed on Algerian mind control tomatoes.

And yet.

And yet.

Put differently, I don’t think you get to be preternaturally loud without the help & consent of others. And I think maybe there should be accountability for providing that help & consent.

I think this runs into conflict with notions of common carriage and safe harbor. But I’m not sure these are unalloyed goods. We’re building huge, largely unsupervised event spaces that have become chaotic attractive nuisances. They’re like empty swimming pools in vacant rental properties - but with scant accountability for the landlord when a kid falls in and cracks their skull.

I think this is a fair analogy, but not necessarily a complete analogy. I’ve written out and deleted about five different ideas about why at this point, so I’m going to just give you one for now and it may not be that well-worded.

As easy as it is to say that private internet companies are enacting private choices just like an absentee landlord on their own land, there is an aspect here where this doesn’t quite match.

You know, there is a concept of the public data network. I’m told the term kind of died once we got to the internet, but I can’t help thinking it’s still a meaningful concept. The internet was publicly funded at various times in its development, of course, and more than with other comparable research, there’s something public about it that we have to acknowledge. The internet is better for being an everyone network. It doesn’t have to be an unalloyed good for there to be some aspect of the good that is tied to its access being public, and that the public benefits from.

We therefore have some real interest in making sure that all children are at least free to traipse about on unfenced properties, in a sense, which doesn’t quite match the metaphor.

I want there to be some people who do have responsibilities to provide networked computer services with equal availability for all. That work is nobler for its being equally accessed, even if that does mean some awful people benefit from it. Awful people benefit from water treatment facilities too, or phone lines to let them call their awful loved ones. I’m at peace with that. I want a gay kid in a podunk town to get the same big gay internet the rest of us make great even if their local authorities aren’t keen on the idea.

At the same time, we’re going about it in the exact wrong way when we can see columnists at the national level bemoaning that the U.S. President has been silenced because his Twitter account was suspended. If he wants to hire his own people to hook up his own computers to the internet, he has enough money to do it, and enough people to hire from.

(…well, Parler was apparently one giant WordPress install, so maybe the tech community, they’re not sending their best… but you don’t need startup energy or BigCo talent to serve out a text file of whatever he would have been tweeting, which answers the important freedom of speech question here.)
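
(For a sense of scale, here is a minimal sketch of what “serving out a text file” takes; the filename and port are made up for illustration, but once a machine is hooked up to the internet, this is more or less the whole job:)

```python
# Minimal sketch: publishing a plain text file of statements over HTTP.
# Hypothetical setup: a file named statements.txt sits next to this script,
# on a machine you have hooked up to the internet yourself.
from http.server import HTTPServer, SimpleHTTPRequestHandler

# SimpleHTTPRequestHandler serves files straight out of the current
# directory, so http://<your-ip>:8000/statements.txt is now "published".
if __name__ == "__main__":
    HTTPServer(("", 8000), SimpleHTTPRequestHandler).serve_forever()
```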

Anyway, I’ve been typing enough out here that I have about as much saved in abortive paragraphs in another file, so I’ll stop for now. Suffice to say that this is really important stuff, and I think more tech people should be talking about it publicly because we’re in the position of understanding the power the industry does and doesn’t have.


[1]: I have literally zero internal knowledge about my employer’s relevant involvement or decisions. The internal knowledge about other stuff that I do have from working there is not at all referenced in any of this, so I am merely Jane Q. Public, cloud-knowledgeable techperson.

  • ufra@lemmy.ml · 4 years ago

    A for effort!

    I don’t currently have the energy budget to decompose all the information presented in the main post but it looks fairly reasonable.

Is the basic gist of the first linked post about spaces that it’s not entirely clear who is responsible for community-generated content, as opposed to supply-side information?

If so, I share an unease about what level of responsibility a server operator should have for crowd-sourced content. I fret about what expense an operator should be required to incur, and what legal ramifications exist for small businesses that want to run a service without shouldering a regulatory burden designed for the deep pockets of trillion-dollar cos.

On the other hand, if that service is actively using the content to repackage and profit from value-adds, it should take some level of responsibility for expending the resources to reasonably ascertain the content’s lack of harm. How much, I don’t know.

    • zkikiz@lemmy.ml · 4 years ago

      I think that, interestingly, it seems that decent human moderation doesn’t scale. The big companies seem to just be auto-banning any mention of bad words while allowing actual fascist behavior to continue unopposed (because fascists with more than two brain cells don’t use the words that describe them. Abusers don’t say “I’m going to abuse you!”)
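
      To make that concrete, here’s a toy sketch of the keyword approach; the wordlist and the example posts are invented, not taken from any real platform’s filter:

      ```python
      # Toy keyword filter of the kind described above. The wordlist and
      # the example posts are invented purely for illustration.
      BANNED_WORDS = {"nazi", "fascist"}

      def naive_filter(post: str) -> bool:
          """Return True if the post would be auto-removed."""
          return any(w.strip(".,!?").lower() in BANNED_WORDS for w in post.split())

      # It misfires in both directions: discussing fascism gets flagged...
      print(naive_filter("Teaching a unit on Nazi propaganda this week."))   # True
      # ...while coded organizing, using none of the trigger words, passes.
      print(naive_filter("Patriots, you know where to meet. Bring flags."))  # False
      ```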

      The three most workable moderation strategies appear to be Reddit’s, Mastodon’s, and email/spam/abuse scoring. FB/Twitter’s model seems very flawed even if technically compliant.

      Of course more laws on the topic will just lead to regulatory capture and monopolies. But I think we as internet denizens can see the right way forward.

      • roastpotatothief@lemmy.ml · 4 years ago

        I guess both exist, carriers and publishers. Maybe there needs to be a legal distinction - so a service can declare itself one or the other, and then follow the appropriate rules.

        What would really happen if people on a “carrier” started anonymously posting bad stuff - like blackmailing, doxing, threatening? What model would prevent that?

        • Maybe a quorum of users could ban the offending user (a toy sketch of this follows the list). To stop him immediately joining again, you could have a waiting list to join the service.
        • Maybe you block content from strangers by default. You just follow the people you know.
        • Maybe there’s no problem to solve. These bad things are already crimes and IMO they are very rare. Communities and police already have ways of dealing with them.
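
        A toy sketch of the quorum idea from the first bullet, with an invented threshold and data model:

        ```python
        # Toy sketch of the quorum-ban idea: an offender is banned once enough
        # distinct users vote. The threshold of 5 is an invented example.
        from collections import defaultdict

        BAN_QUORUM = 5
        ban_votes = defaultdict(set)  # offender -> set of voters

        def vote_to_ban(voter: str, offender: str) -> bool:
            """Record one user's ban vote; True means the quorum is reached."""
            ban_votes[offender].add(voter)
            return len(ban_votes[offender]) >= BAN_QUORUM
        ```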

        Imagine this scenario: “A president wants to suppress a scandal (he is abusing his office to steal from orphaned children), so he orders the arrest of the journalist who is trying to expose him. He uses a carrier communication channel to communicate and enforce this arrest warrant.” This is an extreme example - any robust solution should be able to deal with that scenario.

        BTW the banning of bad words is IMO simply the only censorship which (today) can easily be fully automated. The services which today have forbidden words will have forbidden ideas as soon as the technology exists to automate that.

        • roastpotatothief@lemmy.ml · 4 years ago

          …But TBH this post touches on a few topics - all of them difficult, none of which can ever have clean answers. It’s confusing because it’s a jumble of ideas.

          But they are all important and interesting. And what impressed me about this post was that there is so little groupthink here on Lemmy. People with opposite opinions happily debate each other.

          So I suggest you make an individual post about each topic. If you can figure out each answer, maybe you can tie them all together.

          • ability to communicate freely and effectively - a journalist should not be denied his platform - a local pub should have the same ability to advertise an upcoming gig as a club owned by the mayor has
          • ability to prevent/punish harmful speech/actions - like publishing the addresses of civil rights activists along with bomb-making instructions.
          • preventing accidental harm - like children seeing sword-swallowing tutorials
          • the value of the commons - having conversations with strangers and exploring ideas, somewhere your employer can’t eavesdrop.
          • the right to do wrong - to be gay in the 1950s or to be a Nazi today - it’s impossible to know what is truly bad, and what people in the future will realise is really fine. Therefore we must tolerate things we consider bad.
          • the internet (and effective communication) as a public service - a right. We can punish bad people for crimes, but we still don’t deny them human rights.
      • ufra@lemmy.ml · 4 years ago

        I think that, interestingly, it seems that decent human moderation doesn’t scale.

        This is actually interesting to think about in terms of what limitations of so-called AI and machine learning really exist. Moreover, what would happen if a company like twtr launched a Manhattan Project to apply human moderation? (It seems FB already tried this in some capacity, but perhaps not to the standards of a nuclear project, more like Mechanical Turk.) Apparently, building a successful AI/ML system would dwarf the cost of employing thousands of humans and all their management. What the limitations truly are says a good amount about prospects for AGI, imo.
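
        As a back-of-envelope sketch of the human-moderation side, with every number invented purely for illustration:

        ```python
        # Back-of-envelope: how many human moderators a big platform might need.
        # Every number below is an assumption for illustration, not a real figure.
        posts_per_day = 500_000_000      # assumed daily post volume
        flagged_share = 0.01             # assume 1% of posts need human review
        reviews_per_mod_day = 1_000      # assumed per-moderator daily throughput
        mods_needed = posts_per_day * flagged_share / reviews_per_mod_day
        print(f"{mods_needed:,.0f} moderators")  # -> 5,000 moderators
        ```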

  • zkikiz@lemmy.ml · 4 years ago

    I like this approach to thinking about the problem and the only thing that I would add is that we need to get our terms correct and maybe create some new definitions as far as what sorts of online “spaces” or “carriers” or “publishers” are responsible for what.

    // BEGIN supporting context most of us already know //

    We all know that the power of the internet lies in its ability, like a sidewalk or water pipe or street or public park, to allow anybody with a voice to connect up and say something as long as they’re not actively harming the carrier itself. (You can usually be a loud Nazi on the sidewalk without the government arresting you, but if you block the sidewalk or start damaging it with a sledgehammer, that’s when they step in.)

    But there’s a limit when you get to the idea of “publishers” that make you louder than everyone else. I don’t have a god-given right to unlimited megaphones that I can use on any sidewalk at any hour of the day: that bumps up against others’ right to be secure in their homes and not have their ears assaulted.

    // END supporting context most of us already know //

    And as the post says, there’s a sort of middle-ground “space” that is maybe publicly accessible, maybe privately owned, but at some point someone’s liable for bad/unsafe/dangerous things that happen there: you can’t, for example, build a skate park, put up a sign saying “this is a free-for-all,” and escape liability when negligent construction injures its users. If it becomes a haven for predatory people and drug dealers to evade responsibility, at some point the police will be knocking on the owner’s door, because this is a problem that needs to be solved: the world isn’t actually a “sovereign citizen”/freeman/cultist/libertarian’s utopia where you can hide a smuggling operation behind a “no trespassing” sign and expect that no person or government will be able to stop you. A skate park that’s doing double duty as a drug dealer’s headquarters is called a “front operation,” and people can see straight through it.

    I think it’s real important at a technical level to distinguish tools and networks that are just used to connect to other things, like your email app or OS or IRC chat app or BitTorrent program, versus tools and networks that are primarily used for a service owned and operated by one entity, like FB Messenger, the eBay app, and the Twitter app. In the first case it’s impossible for an email app programmer to be held responsible for the emails that flow through it: once it’s downloaded, it’s out of their hands. But in the second case the app is necessarily part of a service that is provided and monitored by (usually) the same company. Of course they can be expected to have some responsibility.

    Even if megaphones and spray paint can be used as a nuisance, it shouldn’t be illegal to buy them, even if some additional anti-abuse steps are taken. But if a store starts selling literal bomb-making kits, you bet they and anyone buying them need to be held accountable. Intent and context matter. A Lemmy or Mastodon app shouldn’t be held to the same content moderation standards as a Parler or Reddit app.