In recent months, we’ve been getting more blogspam accounts, and the administrators have been discussing behind the scenes how to deal with them. Blogspam is against the rules of this Lemmy instance and is treated the same as any other spam: offending posts will be removed and the blogspammer banned. I thought I’d share my thought process for moderating stuff like this.

Blogspam is kind of a controversial topic and has a lot of grey areas. It basically involves accounts that seem to exist specifically to post links to a particular website, usually with the intent of generating ad revenue. Herein lies the grey area: simply posting links to your own website or a website you like isn’t spam, nor is it against the rules to post sites that have ads, nor is it against the rules for an organization to have an official account on Lemmy, so it becomes a problem of where to draw the line. You can also run into cases where it’s hard to tell whether someone is intentionally spamming or just enthusiastic about the content of a site.

That said, here are my general criteria for what is considered blogspam, with some wiggle room on a case-by-case basis:

  • Does the user only post links to one or a few sites? Do they have any other activity, such as commenting or moderating communities?

  • How often does the user post? For example, it might not be reasonable to consider an account to be blogspamming if they only post a few articles a month, even if they only post one site.

  • Does the user post the same link repeatedly? Do they post to communities where it would be off topic?

  • Is the user trying to manipulate the search feature in Lemmy? For example, by including a large number of keywords in their title or post body?

  • Are the links posted “clickbait” or otherwise designed to mislead the reader?

  • Is the site trying to extract data or payment from readers? Examples include forcing users to sign up or pay for a membership before letting them read the article.

  • Is the site itself well-known and reputable or obscure and suspicious?

  • Does the site have an “inordinate” number of ads? Are the ads intrusive (autoplaying video ads versus simple sponsor mentions, for example)?

  • Is there evidence that the user is somehow affiliated with the site? Examples include sponsored links or having the username be the same as the site name.

  • Is there evidence that the user is a bot?

Not all of these have to be satisfied for it to be blogspam, and it’s usually up to the administrators to make a rational decision on whether to intervene.

Note that these criteria apply to sites that are generally benign but are being posted in a way that might count as spam. If the site contains malware, engages in phishing, is blatantly “fake news”, is a scam, or is otherwise malicious, that alone is reason enough for it to be removed and the poster potentially banned, and it would constitute a much more serious violation of our rules.

I’m open to feedback on this, feel free to discuss in the comments!

  • AgreeableLandscape@lemmy.ml (OP) · 11 points · edited · 4 years ago

    I’ve gotten some concerns about the “fake news” part of this post. Let me just clarify that, personally, I’ll only remove things that are without a doubt false, and maliciously so: things like “Trump won the 2020 election”, “Climate change isn’t real” or “Vaccines cause autism”. Or things from “news” sites that have been proven to be puppets of organizations like the CIA. Stuff that “might be wrong” is generally left to up/down votes.

    Though keep in mind that I’m talking about removing things at the site level here; moderators of individual communities are generally free to remove, at their own discretion, stuff the admins don’t have a problem with.

    @ufrafecy@lemmy.ml, @keo@lemmy.ml

    • Anachron@lemmy.ml · 7 points · 4 years ago

      I just want to chime in and say thank you for handling this topic so well! This is exactly how it should be done in my opinion.

    • [object Object]@lemmy.ml · +4 / -4 · 4 years ago

      Replacing “blatantly fake news” with “without a doubt false” doesn’t solve the problem at all. It still relies on your view of things. Why should you in particular be the arbiter of Truth?

      Even some of the topics you listed are worthy of discussion in my opinion and should not be outright banned by instance admins on such ambiguous terms, which could be stretched to infinity. Most of them aren’t even proven to be false; rather, a large amount of evidence has been gathered against them. Who’s to say that new evidence won’t change that tomorrow? Or that the evidence we already have isn’t intentionally misleading to benefit someone? Or just a statistical error?

      Previously, common knowledge was that homosexuals were pedophiles, and there was actual scientific evidence supporting that. Imagine labeling any counterargument to that as “without a doubt false”.

      I understand the desire to shield people from trolls and misinformation, and I want that too, but ambiguous rules that rely on a personal world view are a terrible way to go about it.

      • Bloodaxe@lemmy.ml · 8 points · 4 years ago

        Well, someone’s gotta be an admin at the end of the day. If this is not adequate, perhaps hosting your own instance is a good alternative?

        • CutieCactus@lemmy.ml · 3 points · 3 years ago

          Exactly - this is the whole point of federation. If you have a problem with an instance, just create your own.

  • [object Object]@lemmy.ml · +8 / -1 · 4 years ago

    Seems mostly reasonable. Not sure about banning for “fake news” though. It could be abused to silence critical voices, and in the end it might produce echo chambers.

    Maybe it’s a good idea to add “repeated ban evasion” to the list. It could be checked through IP addresses or fingerprinting, though doing that without sacrificing privacy would be difficult. Account age might also indicate that something fishy is going on.

    As a solution, could such people just be downvoted to hell? Self-moderation is a nice benefit of Reddit-like voting. Their effectiveness would plummet if nobody saw them, so they would eventually stop, right? To prevent spam, they could additionally be rate-limited if they only receive downvotes. Also, what about user reports? Maybe those should also be taken into consideration (their relative quantity and validity). Restricting new accounts (until a certain amount of karma has been reached, for example) might also help against bots.
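
    A rough sketch of the kind of throttling heuristic I have in mind (all names and thresholds here are made up; nothing like this exists in Lemmy today):

    ```rust
    // Hypothetical per-account posting limit based on age, karma, downvotes and reports.
    struct AccountStats {
        age_days: u32,
        karma: i64,
        recent_downvote_ratio: f64, // downvotes / total votes on the account's last posts
        open_reports: u32,
    }

    fn posts_allowed_per_day(s: &AccountStats) -> u32 {
        if s.age_days < 2 || s.karma < 0 {
            return 2; // brand-new or negative-karma accounts get a tight limit
        }
        if s.recent_downvote_ratio > 0.8 || s.open_reports > 3 {
            return 1; // accounts that mostly collect downvotes or reports get throttled
        }
        20 // established accounts are effectively unrestricted
    }

    fn main() {
        let newbie = AccountStats { age_days: 0, karma: 0, recent_downvote_ratio: 0.0, open_reports: 0 };
        println!("new account: {} posts/day", posts_allowed_per_day(&newbie));
    }
    ```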

    • nutomic@lemmy.ml (mod) · 13 points · 4 years ago

      I think it would be good to have a simple rule: you need to make at least 5 comments before you can create a post. It could count comments overall or per community, and the threshold could be configurable.
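
      Roughly something like this (just a sketch with invented names, not Lemmy’s actual data model):

      ```rust
      // Hypothetical gate: a post may only be created after enough comments.
      struct PostGate {
          min_comments: u32,   // e.g. 5, configurable per instance or per community
          per_community: bool, // count only comments in the target community?
      }

      fn can_create_post(gate: &PostGate, comments_total: u32, comments_in_community: u32) -> bool {
          let relevant = if gate.per_community { comments_in_community } else { comments_total };
          relevant >= gate.min_comments
      }

      fn main() {
          let gate = PostGate { min_comments: 5, per_community: false };
          assert!(!can_create_post(&gate, 3, 0)); // only 3 comments so far: blocked
          assert!(can_create_post(&gate, 7, 2));  // 7 comments overall: allowed
          println!("ok");
      }
      ```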

        • k_o_t@lemmy.ml · 2 points · 4 years ago

          sounds good, although I think requiring 5 votes on the 5 necessary comments is a bit too high (comments naturally get much less traction than posts), but if this gets fine-tuned based on user feedback, it should be fine

          what are your thoughts on adding an optional time delay (the way reddit doesn’t allow you to post until your account is X hours old, or something similar)?

          • nutomic@lemmy.ml (mod) · 1 point · 4 years ago

            That also makes sense. I didn’t want to make the issue more complicated, but you could mention that in a comment there.

      • Ephera@lemmy.ml · 2 points · 4 years ago

        I mean, I’ve seen (what I deemed to be) blogspam accounts that had created their own community, so there’s apparently quite a lot of work those blogspammers are willing to go through to set things up.
        Of course, if the hurdle can be raised without impacting normal users, that would likely still help.

        Maybe the comment-requirement could also kick in after a few posts, so that new users/accounts can immediately create a post, but if they’ve created five posts without commenting once, then they get stopped from creating another post.
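
        Something like this, roughly (made-up names, not the real schema): new accounts can post right away, but once they’ve made a handful of posts without ever commenting, further posts are blocked.

        ```rust
        // Hypothetical rule: a few "free" posts, then commenting is required.
        const FREE_POSTS: u32 = 5;

        fn can_create_post(posts_made: u32, comments_made: u32) -> bool {
            comments_made > 0 || posts_made < FREE_POSTS
        }

        fn main() {
            assert!(can_create_post(0, 0));  // brand-new account may post immediately
            assert!(!can_create_post(5, 0)); // five posts, zero comments: blocked
            assert!(can_create_post(5, 1));  // a single comment lifts the block
            println!("ok");
        }
        ```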

      • k_o_t@lemmy.ml · 1 point · edited · 4 years ago

        better yet, don’t make this rule public and change it from time to time 😉

        • nutomic@lemmy.ml (mod) · 2 points · 4 years ago

          I don’t think that is the right way to go for an open source project; it will just lead to frustration for new users. It’s better to make the details public; then everyone can participate in the discussion of whether the rules make sense.

      • iortega@lemmy.eus · 0 points · 4 years ago

        So they could just make 5 random comments before posting? That seems to me like a pretty easy measure to evade.

    • GrassrootsReview@lemmy.ml · 2 points · 4 years ago

      I remember an article claiming that the reason Reddit had it relatively easy in dealing with disruptive groups, compared to other social media systems, is that they have a ban evasion rule. So when a banned subreddit’s users created a new sub, it could be banned again before causing problems.

  • CAP_NAME_NOW_UPVOTE@lemmy.ml · +10 / -3 · 4 years ago

    Removing blogspam increases quality, no question. I keep meaning to write up better feedback for Lemmy based on what I’ve learned over the years, but blogspam is a hot topic of mine, so I’ve listed some thoughts here.

    One addition to your list: insistent self-linking to the site itself at many points throughout the story, with very few sources outside of themselves.

    Blogspam usually copies or rewrites source content, which is usually linked somewhere in the story. A big problem with Linux news is rewrites of mailing list posts with added opinion.

    On top of ads, referral links are common, especially on gaming and hardware blogs.

    Finally, and this is hard to describe, a lot of blogspam sources have a cult following. If they take action to harass people because their content was removed, they should be banned entirely. I’ve had two website owners get their little cultists to harass me because their content was removed.

    • sirsquid@lemmy.ml · +5 / -1 · edited · 3 years ago

      You tried to get the full-time Project Manager of Godot Engine banned from their own subreddit, because they had the audacity to very politely talk to you about the ban on me from /r/linux. You’re fucking power mad. You should not be listened to, ever.

      Admins, if you listen to CAP, you will ruin Lemmy.

      The name alone is such a ridiculous scream for attention it’s absurd.

  • sirsquid@lemmy.ml · 6 points · edited · 3 years ago

    Well, that definition of “blogspam” would basically ban me and GamingOnLinux.

    Frankly, I really think it’s a gross term that often discriminates against websites doing good work, and I’m not even talking about myself here. The /r/linux community on Reddit is notorious for this, banning multiple sites putting out good news, and I see CAP_NAME is here as a moderator of the Linux community, which is sad to see. CAP is a power-hungry tool who claims harassment whenever people don’t agree with his way of thinking.

    The problem should not be banning “blogspam”; the issue should be looking at what the sites, and the people running/posting them, actually bring to Lemmy. Think about the posts themselves. Do they generate discussion? Get regular upvotes? Does the majority enjoy the content? Lemmy itself is small; being hostile to people who might actually help bring traffic is not going to do it any favours right now. I’ve been constantly advertising Lemmy to multiple thousands of people through GOL social accounts and our website.

    Are the people/websites posting about something other people aren’t? Often yes. Are those news websites the initial source most people get the info from? Usually also yes. People love to claim otherwise (hello CAP_NAME), but the majority do not follow hundreds of mailing lists and RSS feeds to source the info like the news sites do.

    Honestly, if the route we go down here is to start shouting “blogspam” and turn into another Reddit with far fewer people but the same hostile rules, then I’m out and I won’t look back.

    I love the idea of Lemmy, so please think on all this very carefully.

  • ufra@lemmy.ml · 6 points · 4 years ago

    Good to see a lot of thought went into this; most of those criteria look right.

    A couple comments:

    > Is the site itself well-known and reputable or obscure and suspicious?

    I don’t think well-known and reputable sites should be exempt if they fit the other patterns.

    For example, if a fedora enthusiast creates an account that does nothing but post to fedoramagazine.org, they should face the same consequence, especially if they don’t participate in the community otherwise.

    > nor is it against the rules for an organization to have an official account on Lemmy

    Same as above: organisations should be held to the same rules as any other user.

    For me, a grey area would be if someone like logrocket got someone to join the community as an active user and post logrocket articles, as well as contributing to the community with posts to other sites, comments on other posts, etc. Not ideal, but hard to say they are breaking the rules.

    > is blatantly “fake news”

    Not a fan of this one, because people’s ideas of what counts as fake news vary widely and you are stepping onto a slippery slope. I understand the intent, and I agree, but maybe there is a less editorial way to frame it.

    Good work.

  • dumpsterlid@lemmy.ml · 5 points · 4 years ago

    I think in general, if something is

    1. extremely low effort
    2. playing on lazy stereotypes or conspiracies without bringing anything to the conversation

    Then treating it as spam and removing it isn’t a bad idea.

    When it comes down to it, moderation is always going to be about the grey areas.

    That is why it needs to be done by humans, and it’s also why there need to be many communities with different moderators, so that no single moderation policy/team has too much power and cultural blind spots are limited in their impact.

    Ultimately I have seen very little evidence that communities don’t need strong moderation.

  • quiteStraightEdge@lemmy.ml · 4 points · 4 years ago

    I’m really new to Lemmy, but I’m really not into having someone with too much power. If someone’s activity attracts downvotes and reports, their posts and comments should be much more hidden. Only if their activity is bombarded by the community should an administrator step in to judge the situation and apply a harder penalty to the user, like limiting comments per day or even deleting the account if the downvotes really go through the roof. Basing judgement on one person’s point of view isn’t the way to go (usually; it depends on context). If people didn’t like something, then an outsider (in this situation, an admin) should step in to give a second opinion; if they agree with the community, then harsher penalties are applied. If not, people can still downvote the hell out of something and have a real impact that way.

  • GrassrootsReview@lemmy.ml · 1 point · 4 years ago

    Is the main site-wide problem that these posts turn up in the search results? Otherwise, are they only seen by people subscribed to the communities they are posted in?

    If yes, an option for (low volume) spammers could be to only exclude them from search results, and otherwise let the moderators of the communities and the downvoters deal with them. As you already write, many cases are grey areas, so maybe in such cases more subtle mechanisms like this are enough.

    As removal from the search results alone is less disruptive, one could moderate more posts this way.

  • tronk@lemmy.ml · 1 point · 4 years ago

    Depending on what level you’re tackling this issue at, I might be guilty of having done this.

    The thing is, I recently created “DankMusicFragments”, and I posted a bunch of song fragments that I like there. That meant my community was ‘trending’, which is true in that there was a lot of activity, but it also feels a bit cheap. So I stopped posting.

    I may post here and there in the future, but I actually didn’t like that the community had so much ‘activity’ just so it would appear in the trending section…

    Just a consideration regarding blogspam: I’d like to be able to post lots of my favorite song fragments, in case other people like them and so that users see that the community is alive. But at the same time, I don’t want to annoy y’all with a community or posts that y’all aren’t really that interested in… Is there a way of making both of those desires (posting a lot of my favorite stuff and not annoying y’all) viable simultaneously?