See title

  • modulus@lemmy.ml · 10 months ago

    To look at this issue we have to consider what Popper was trying to do with falsificationism, and the current of thought he was embedded in (logical positivism). The raison d'être of a notion like falsificationism is the so-called problem of demarcation: how can we distinguish science from non-science? And, more particularly, how can we distinguish science from things that merely look like or claim to be science (pseudoscience)?

    Falsificationism has the virtue of giving a simple answer to the problem of demarcation. Science happens when a theory offers hypotheses that can be subjected to empirical tests and, when the data disconfirm them, those hypotheses are abandoned. The problem is that this is not how science works, and it is also not how science ought to work.

    An example: Newtonian physics. This is an especially good example because the Vienna Circle were clear that, if anything deserved to be called science, it was physics. So a demarcation criterion that tells us Newtonian mechanics is pseudoscience is very much off.

    Welcome to stellar parallax. Parallax is the apparent shift in the position of an object (in this case a star) due to the movement of the observer (in this case, the Earth). Newtonian mechanics and the heliocentric model predicted stellar parallax, but until the 1830s it had been impossible to detect:

    Stellar parallax is so small that it was unobservable until the 19th century, and its apparent absence was used as a scientific argument against heliocentrism during the early modern age. It is clear from Euclid’s geometry that the effect would be undetectable if the stars were far enough away, but for various reasons, such gigantic distances involved seemed entirely implausible: it was one of Tycho Brahe’s principal objections to Copernican heliocentrism that for it to be compatible with the lack of observable stellar parallax, there would have to be an enormous and unlikely void between the orbit of Saturn and the eighth sphere (the fixed stars).

    So here we have a hypothesis: stars will appear to shift due to the movement of the Earth. We have an observation, in fact lots of observations over about two centuries: stars appear fixed. And yet neither the Copernican hypothesis nor Newtonian mechanics was abandoned (nor should they have been!).
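    To get a sense of just how small the predicted effect is, here is a rough back-of-the-envelope sketch. The small-angle formula p ≈ a/d is standard geometry; the stellar distances and the instrument-precision figures in the comments are only illustrative:

    ```python
    AU_M = 1.496e11           # 1 astronomical unit in metres (Earth-Sun distance)
    LY_M = 9.461e15           # 1 light-year in metres
    ARCSEC_PER_RAD = 206_265  # arcseconds per radian

    def parallax_arcsec(distance_ly: float) -> float:
        """Annual parallax angle, small-angle approximation p ≈ a/d."""
        return (AU_M / (distance_ly * LY_M)) * ARCSEC_PER_RAD

    # Tycho-era naked-eye astrometry resolved roughly an arcminute (60");
    # sub-arcsecond precision only arrived in the 19th century.
    for d_ly in (4.4, 11.4, 100.0):   # illustrative stellar distances
        print(f"{d_ly:7.1f} ly -> parallax ≈ {parallax_arcsec(d_ly):.3f}\"")
    ```

    Even for the nearest stars the annual shift is well under an arcsecond, far below what pre-19th-century instruments could resolve.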

    So does that mean science should be immune to disconfirmation, that we learn nothing from data? Obviously not. But we need a somewhat more sophisticated view than falsificationism. A clue is given to us in Quine's essay "Two Dogmas of Empiricism". In it, Quine points out that when we speak of confirmation (this was earlier than the notion of falsification) we shouldn't think of a single predicate or hypothesis in isolation, but of the whole system of beliefs it belongs to (which is why this view is usually called confirmation holism).

    The point here is that a single datum shouldn't make us throw away an entire theory that otherwise predicts a lot of other data. In fact, a priori, we don't necessarily know what's going on. Stellar parallax went unobserved because the stars were much further away than was commonly believed, while the instruments, the experimental technique and the underlying theory were broadly correct.

    Both general relativity and quantum mechanics are just as false as Newtonian mechanics, in a certain sense: QM fails to predict gravitational lensing, and GR fails to predict interference patterns in the double-slit experiment.

    Yet, wisely, we don’t throw them away.

    Sometimes, when we have a very strong theory and a datum that contradicts it, what we should throw away is the datum. Example: if we get measurements that suggest speeds greater than light, there is probably a measurement error somewhere.
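    A toy sketch of how little it takes to produce such a datum. The numbers below are illustrative, though they are roughly the scale of the 2011 OPERA "faster-than-light neutrino" result, which was eventually traced to equipment faults in the timing chain:

    ```python
    C = 2.998e8  # speed of light in m/s

    def apparent_speed(baseline_m: float, timing_offset_s: float) -> float:
        """Apparent speed when the measured flight time is short by timing_offset_s."""
        true_time = baseline_m / C
        return baseline_m / (true_time - timing_offset_s)

    # Illustrative numbers: a 730 km baseline and a 60 ns systematic clock offset.
    v = apparent_speed(730e3, 60e-9)
    print(f"apparent v/c = {v / C:.6f}")   # ≈ 1.000025: 'superluminal' by ~25 parts per million
    ```

    A sub-microsecond systematic error is enough to make a perfectly mundane particle look superluminal, which is why the prior in favour of relativity is so strong.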

    It’s better to look at this issue through the lens of Lakatos’ notion of a research programme, which has certain core commitments plus auxiliary or compensatory hypotheses. In Lakatos’ terms, a programme is progressive if it still makes non-trivial, novel predictions, and becomes degenerating if the auxiliary hypotheses grow too numerous and too difficult to sustain in the face of new data.

    All models are (probably) wrong, but some are more useful than others. Science requires us to ask whether our theory generates accurate predictions in advance, without our having to invent an endless number of special cases every time we make new observations. If several theories exist, they probably all have holes, and they should be judged by whether they are still useful for predicting and understanding new things.
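    As a loose statistical analogy (reading "special cases" as something like overfitting, which is an interpretation rather than anything argued above): a model patched with enough knobs to fit every old observation exactly tends to be worse at predicting genuinely new ones than a simpler model.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Noisy observations of a relationship that is, unknown to us, roughly linear.
    x_train = np.linspace(0.0, 1.0, 10)
    y_train = 2.0 * x_train + rng.normal(0.0, 0.1, x_train.size)
    x_new = np.linspace(1.1, 1.5, 5)                 # genuinely new observations
    y_new = 2.0 * x_new + rng.normal(0.0, 0.1, x_new.size)

    for degree in (1, 9):   # a simple model vs. one with a knob for every old data point
        coeffs = np.polyfit(x_train, y_train, degree)
        rmse = np.sqrt(np.mean((np.polyval(coeffs, x_new) - y_new) ** 2))
        print(f"degree {degree}: error on new data = {rmse:.2f}")
    ```

    The high-degree fit matches the old observations perfectly and fails badly on the new ones, which is the statistical face of a degenerating programme.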