What happens, however, when events in the world do not conform to the predictions (implicit or explicit) of your model? Imagine, for example, that when you let something slip out of your hand, it floats instead of falling. Do you question your eyesight or your model? Or do you ask whether you are in strange conditions where the model “does not apply”? Note that this, essentially, is what scientists should do when they first meet surprising phenomena (where surprising means relative to model-based expectations).
Surprising results can have three causes: (1) the method used to obtain the result was flawed (in the example just given, perhaps there is something wrong with your eyesight?); (2) the model really is incorrect (left by themselves, objects do float instead of fall); and (3) there are specific circumstances – perhaps not previously encountered – where the model does not apply (perhaps you observed the object while traveling in a space vehicle where gravity has no effect?).
When a theory fails, what should one do? First, it is appropriate to question the methodology that yielded the erroneous prediction. Clearly, one should discount results produced by inappropriate methods. However, what if the methodology is appropriate and, in addition, several replications confirm the original results? If this is the case, it seems almost trivial to state that the model should be amended – either rejected as incorrect or specified to be more limited than originally thought. However, the history of science is replete with examples where this does not happen. Indeed, some time ago Kuhn (1962) brilliantly described the difficulty of replacing obsolete scientific paradigms.
As the evidence reviewed above indicates, people – both in science and everyday life – are slow to accept evidence that challenges their beliefs, particularly when they have a stake in those beliefs. At one level, I see this as the inevitable consequence of a dilemma that has to be managed continuously by all living systems: the simultaneous need to adapt to change and yet maintain continuity and stability across time. Moreover, adapting to perceived change can involve two kinds of errors (i.e., adapting when one should not, and not adapting when one should), and the costs of these errors are not necessarily symmetric. Thus, without trying to rationalize what might seem to be dysfunctional behavior, it is legitimate to ask what conditions favor the adoption of new ideas that challenge the status quo and what, if anything, scientists can do to improve present practice.
From a descriptive viewpoint, economic incentives play an important role. For example, from the forecasting case study above, it is clear that practitioners in industry accept the implications of the time-series competitions even though theoretical statisticians might not share their enthusiasm. For scientists and others not facing direct economic incentives, preserving reputation seems to be the greatest concern. The paradox, however, is that scientists who acknowledge that their theories are mistaken should – in principle – enhance their long-term reputations as scientists. Instead, the short-term concern to preserve the status quo seems to loom larger.
Some twenty years ago, Hofstee (1984) suggested that scientists engage in a system of reputational bets. That is, scientists with contradictory theories can jointly define how different outcomes of a future experiment should be interpreted (i.e., which theory is supported by the evidence). In Hofstee’s scheme, the scientists assess probability distributions over the outcomes (thereby indicating “how much” of their reputational capital they are prepared to bet) and a third, independent scientist runs the experiment. The outcomes of the experiment then affect the scientists’ reputational capital or “ratings.” However, I know of no cases where this system has actually been used.
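To make the mechanics concrete, the sketch below shows one way such a reputational bet might be settled. The Brier scoring rule, the linear transfer of reputational capital, and the names and stake in the example are illustrative assumptions on my part; Hofstee’s proposal does not prescribe a particular scoring or settlement rule.

```python
# Illustrative sketch of settling a reputational bet between two scientists.
# The Brier scoring rule and the linear reputation transfer are assumptions
# made for this example; Hofstee (1984) does not prescribe either.

def brier_score(forecast, observed_outcome):
    """Quadratic (Brier) score of a probability forecast; lower is better."""
    return sum(
        (p - (1.0 if outcome == observed_outcome else 0.0)) ** 2
        for outcome, p in forecast.items()
    )

def settle_bet(reputations, forecasts, observed_outcome, stake=1.0):
    """Shift reputational capital toward the scientist whose forecast scored
    better on the observed outcome, in proportion to the agreed stake."""
    scores = {name: brier_score(f, observed_outcome)
              for name, f in forecasts.items()}
    (name_a, score_a), (name_b, score_b) = scores.items()
    transfer = stake * (score_b - score_a) / 2.0  # positive if A scored better
    updated = dict(reputations)
    updated[name_a] += transfer
    updated[name_b] -= transfer
    return updated

# Two scientists with contradictory theories state probability forecasts for
# the outcomes of an experiment run by an independent third party.
forecasts = {
    "Scientist A": {"effect": 0.8, "no_effect": 0.2},
    "Scientist B": {"effect": 0.3, "no_effect": 0.7},
}
print(settle_bet({"Scientist A": 10.0, "Scientist B": 10.0}, forecasts, "effect"))
# Scientist A, whose forecast fit the observed outcome better, gains roughly
# 0.45 units of reputational capital at Scientist B's expense.
```

The Brier score is a proper scoring rule, so under this kind of settlement each scientist’s best course is to report the probabilities they actually believe.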
A similar scheme involves a proposal labeled “adversarial collaboration.” Here again, the disagreeing parties agree on what experiments should be run. An independent third party then runs the experiment, and all three publish the results jointly. Unfortunately, it is not clear that this procedure resolves disputes. The protagonists may still disagree about the results (see, e.g., Mellers, Hertwig, & Kahneman, 2001).
Possibly one way to think about the situation is to use the analogy of the marketplace for ideas where, in the presence of efficiency, ideas that are currently “best” are adopted quickly. However, like real markets in economics and finance, the market for scientific ideas is not necessarily efficient. There are many situations where the market is “thin” and not all traders (i.e., scientists) have access to information. There are speculative “bubbles” or fashions, as some theories become extremely popular for a time and then fade away (consider what happened to many learning models in psychology or applications of chaos theory in the social sciences).
Finally, the previous paragraph should not be read as suggesting a pessimistic cynicism. Each generation does see scientific progress, and the accessibility of information has increased exponentially in recent years. The road to enlightenment, however, is bumpy.