• litchralee@sh.itjust.works · 5 months ago

    Obligatory link to Statistics Done Wrong: The Woefully Complete Guide, a book on how statistics can be, and has been, abused in subtle and insidious ways, sometimes recklessly. See especially the chapters on the consequences of underpowered statistics and on comparing statistical significance between studies.

    I’m no expert on statistics, but I know enough to say that repeated experiments should not yield wildly different results unless: 1) the phenomenon under observation is so subtle that the results are getting lost in noise, 2) the experiments were performed incorrectly, or 3) the results aren’t actually wildly divergent after all.
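
    To illustrate point 1, here's a minimal simulation (my own sketch, not from the book): the same subtle true effect, honestly re-measured with small samples, produces estimates all over the place.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    true_effect = 0.2   # a subtle standardized effect (assumed for illustration)
    n = 25              # a small, underpowered sample per group

    estimates = []
    for _ in range(10):  # ten honest replications of the same experiment
        control = rng.normal(0.0, 1.0, n)
        treated = rng.normal(true_effect, 1.0, n)
        estimates.append(treated.mean() - control.mean())

    print([f"{e:+.2f}" for e in estimates])
    # Estimates typically scatter from below zero to 2-3x the true effect:
    # identical experiments, "wildly different" results, purely from noise.
    ```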

    • ArcticPrincess@lemmy.ml · 5 months ago
      1. The whole point of statistics is to extract subtle signals from noise; if you’re getting wildly different results, the problem is that you’re underpowered.
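
      For a sense of scale, here's a back-of-the-envelope power calculation (a sketch using statsmodels; the numbers are illustrative, not taken from the linked studies):

      ```python
      from statsmodels.stats.power import TTestIndPower

      # Sample size per group needed to detect a small effect (Cohen's d = 0.2)
      # with the conventional 80% power at alpha = 0.05.
      n_per_group = TTestIndPower().solve_power(effect_size=0.2, alpha=0.05, power=0.8)
      print(f"{n_per_group:.0f}")  # ~393 per group -- far more than many studies run
      ```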

      Thanks for taking the time to post these links. Just letting you know your efforts have benefited at least one person, who’s gonna enjoy reading this.

  • mindbleach@sh.itjust.works · 5 months ago

    Just eyeballing the linked image… it looks like most of them agree?

    The bias almost certainly exists, according to nearly all of the analyses here. They just disagree on its magnitude, and for the most part they don’t disagree by much.

  • Daefsdeda@sh.itjust.works · 5 months ago

    I really found this out while writing my essay. If I had wanted to, I could have interpreted the data slightly differently and arrived at totally different results.
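
    A toy example of that analytic freedom (hypothetical data, nothing to do with my essay): two equally defensible cleaning choices applied to the same sample can flip the verdict.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    a = rng.normal(0.0, 1.0, 30)
    b = rng.normal(0.4, 1.0, 30)

    # Choice 1: analyze the data exactly as collected.
    p_all = stats.ttest_ind(a, b).pvalue

    # Choice 2: first drop "outliers" beyond 2 standard deviations -- also defensible.
    def trim(x):
        return x[np.abs(x - x.mean()) < 2 * x.std()]

    p_trimmed = stats.ttest_ind(trim(a), trim(b)).pvalue

    print(p_all, p_trimmed)
    # Depending on the draw, the two p-values can land on opposite sides of 0.05,
    # so the "result" hinges on a choice made after seeing the data.
    ```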

  • BearOfaTime@lemm.ee · 5 months ago (edited)

    Scientists who fiddle around like this — just about all of them do, Simonsohn told me — aren’t usually committing fraud, nor are they intending to. They’re just falling prey to natural human biases that lead them to tip the scales and set up studies to produce false-positive results.

    Since publishing novel results can garner a scientist rewards such as tenure and jobs, there’s ample incentive to p-hack.

    I mean, really: it claims they aren’t committing fraud, yet the very next paragraph lays out their motivation… to commit fraud.

    Never mind the numerous cases of published papers turning out to be bunk, and that something like 80% of published science isn’t reproducible… reproducibility being part of what publishing is supposed to enable.
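
    For what it’s worth, the p-hacking the article describes doesn’t require outright fabrication. A quick simulation (my own sketch, illustrative numbers) shows how merely measuring many outcomes and reporting the “significant” one inflates false positives:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n_sims, n, n_outcomes = 2000, 30, 10
    false_positives = 0

    for _ in range(n_sims):
        # A null world: the treatment does nothing on any of 10 measured outcomes.
        control = rng.normal(0.0, 1.0, (n_outcomes, n))
        treated = rng.normal(0.0, 1.0, (n_outcomes, n))
        pvals = [stats.ttest_ind(c, t).pvalue for c, t in zip(control, treated)]
        # The "hack": report whichever outcome happened to cross p < 0.05.
        if min(pvals) < 0.05:
            false_positives += 1

    print(false_positives / n_sims)  # ~0.40 rather than the nominal 0.05
    ```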

  • Zagorath@aussie.zone · 5 months ago

    Why have 4 of the studies seemingly not used error bars at all‽ Like I get that different analyses will arrive at different results, but they should always have error bars, right?
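
    For anyone wondering what those error bars would encode: typically a 95% confidence interval around the point estimate, along the lines of this sketch (hypothetical numbers, not taken from the linked image):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    effects = rng.normal(0.1, 0.3, 40)   # hypothetical per-observation bias estimates

    mean = effects.mean()
    sem = stats.sem(effects)             # standard error of the mean
    lo, hi = stats.t.interval(0.95, df=len(effects) - 1, loc=mean, scale=sem)
    print(f"{mean:.3f}  [95% CI: {lo:.3f}, {hi:.3f}]")  # the error bar, in numbers
    ```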