Time to ditch falsifiability?


[Photo: Karl Popper]

SelfAwarePatterns made me aware of this essay by theoretical physicist Sean Carroll, who expressed the opinion that some theories can still be called scientific even though they are claimed to be unfalsifiable:

Modern physics stretches into realms far removed from everyday experience, and sometimes the connection to experiment becomes tenuous at best. String theory and other approaches to quantum gravity involve phenomena that are likely to manifest themselves only at energies enormously higher than anything we have access to here on Earth. The cosmological multiverse and the many-worlds interpretation of quantum mechanics posit other realms that are impossible for us to access directly. Some scientists, leaning on Popper, have suggested that these theories are non-scientific because they are not falsifiable.

The truth is the opposite. Whether or not we can observe them directly, the entities involved in these theories are either real or they are not. Refusing to contemplate their possible existence on the grounds of some a priori principle, even though they might play a crucial role in how the world works, is as non-scientific as it gets.

The falsifiability criterion gestures toward something true and important about science, but it is a blunt instrument in a situation that calls for subtlety and precision. It is better to emphasize two more central features of good scientific theories: they are definite, and they are empirical. By “definite” we simply mean that they say something clear and unambiguous about how reality functions. String theory says that, in certain regions of parameter space, ordinary particles behave as loops or segments of one-dimensional strings. The relevant parameter space might be inaccessible to us, but it is part of the theory that cannot be avoided. In the cosmological multiverse, regions unlike our own are unambiguously there, even if we can’t reach them. This is what distinguishes these theories from the approaches Popper was trying to classify as non-scientific. (Popper himself understood that theories should be falsifiable “in principle,” but that modifier is often forgotten in contemporary discussions.)

[Photo: Sean Carroll]

Carroll suggests replacing the requirement of falsifiability for a scientific theory with two requirements: being “definite” and being “empirical”.  IMO, it’s essentially the same as falsifiability.  A few points:

  1. “Unfalsifiable with today’s technology” and “unfalsifiable in principle” are two different matters.  E.g., we may be able to observe some effect of the multiverse on our universe in the future.
  2. “Unfalsified” and “unfalsifiable” are different matters.  Just because a theory has not been proven false does not mean that it cannot be proven false.
  3. “True” and “useful” are different matters.  Scientific models and theories are created to explain empirical data and make useful predictions.  If a theory does not do that, it can still be scientific, but it is rejected as useless (e.g. aether theories).  The concepts of the electron or the radio wave are not “true” or “false”.  The fact that we cannot “see” them does not seem to embarrass any scientist.  We imagine electrons to be particles, and that helps to explain empirical data.  But electrons are not “particles” per se.  We imagine electrons to be waves, and that also helps to explain empirical data, but electrons are not exactly like the waves on the surface of an ocean, for example.  As long as these models and visualizations help explain empirical data, they are empirical and falsifiable: if they fail to explain empirical data, they will be falsified.
  4. The question “can a theory be proven false?” is ambiguous.  It can mean two different things: (1) “Is the theory likely to be false?” or (2) “Can the theory be proven false, in principle?”  These questions are not to be confused.  E.g. evolution appears to be falsifiable because, if we found human remains predating dinosaur fossils, evolution would be proven false, but it’s unlikely we ever will.  “The universe appeared from nothing” appears to be an unfalsifiable theory because I cannot imagine what evidence of “nothing” might look like, even in theory.  I don’t even know what “nothing” is.  I don’t even know if I can say “nothing is” or “was”.

Regarding high energies, we don’t necessarily need to “have access” to them on Earth (e.g. by building huge particle colliders).  We can observe phenomena happening at these high energies in space.

The multiverse is just a concept or a model that is supposed to explain certain empirical data.  It’s entirely possible to use this model to predict phenomena that we can observe.  If these predictions prove to be incorrect, we can say that we have falsified the multiverse as a useful scientific theory.  I think the falsifiability principle still stands.  What do you think?

See also: “Evolution and Philosophy: Is Evolution Science?”

Update 1/21/2014: This video explains the different concepts of “multiverses”, mentioning that there *can* be experimental evidence of them, in principle.  So, these hypotheses are not unfalsifiable.

 


4 thoughts on “Time to ditch falsifiability?”

  1. It seems that you and I think in very similar ways. Your definition of falsifiability makes it acceptable to me:
    A model has to be usable to make predictions; if these predictions are reliable, we can use the model to guide our choices. Newtonian physics can’t be applied to every scale & domain, but at the time and size scales at which we happen to live our lives it is still a useful model for making practical predictions, and we use it all the time.
    However, the consequence of this line of thought is that falsified theories (and hence any falsifiable theory) can be solidly scientific, so we need another, more nuanced tool to guide us in the difficult task of distinguishing science from pseudoscience. This happens because every theory can be seen as a model, and every model, by definition, is a synthesised representation of reality. As such, no model can claim 100% reliability, so one can predict that all theories may one day be falsified. This applies to all models that can generate predictions, making them theoretically falsifiable.

    You point out that a theory that is not falsifiable today (in practice) does not need to be unfalsifiable in principle, and you are right. I could add that a budding model that can’t yet generate predictions may still be a scientific effort, or rather, pre-scientific, and in my own definitions (http://wp.me/p3NcXb-3x), strictly philosophical.

    All this leads to some conclusions:
    – devising a new theory can be seen as scientific if and when one is trying (not necessarily succeeding) to build a model that can generate predictions.
    – once a model is established, using it can be seen as scientific if and when one is using it to gain more insight, so as to refine the model and improve its predictive powers.
    – different endeavours will come with different intrinsic limits to their reliability:
    + a religious model of the afterlife makes predictions with exactly 0 reliability (in the sense that we can’t possibly verify them), making the model completely worthless as a scientific effort.
    + the currently accepted models of applied physics generate very accurate predictions, approaching reliability = 1 and still improving (as they should: there is no limit to the precision they may achieve, except that they will never quite reach exactly 100% accuracy).
    + evolutionary theory can make plenty of reliable predictions, but it can’t foresee the consequences of the next (useful) random mutation, so there is an intrinsic limit to what it can achieve. That doesn’t mean it is not improving; in fact, it is.

    What is interesting is that for all the examples above it is possible to use two criteria: do they make predictions? And are they generating more and more reliable predictions? If they do, they are currently scientific.
    Now consider classic psychoanalysis: my classification says that when it was born, and for some considerable time after that, it was without doubt scientific (it was improving our foresight). However, now it is reactive: it tries to fit new data into the existing model so as to justify its validity. This can be done, but it generates little or no new/improved predictive power. Hence, its current validity as a science is dubious at best.

    I don’t claim to have solved the demarcation problem once and for all, though. I’m just saying that my own model generates good & reliable predictions: in fact, better predictions than a simple “falsifiability” criterion. Applying the model to itself, however, allows one to predict that someday someone will create a new, improved system to evaluate the validity of science, making my own effort outdated.

    See my series of posts on science epistemology, which starts here: http://wp.me/p3NcXb-2a
    and my conclusion on Popper, Kuhn and Lakatos: http://wp.me/p3NcXb-2z
    Comments will be highly appreciated.
