Quotes

"Dialogue is mutual search for a new reality, not debate to win with stronger arguments. In a dialogue propositions are pointers toward a common new reality; not against each other to win a verbal battle, but complementing each other in an effort to accommodate legitimate goals of all parties, inspired by theories and values, and constructive-creative-concrete enough to become a causa finalis". Galtuung


"I use the concept of affect as away of talking about a margin of manouverability, the 'where we might be able to go' and 'what we might be able to do' in every present situation. I guess 'affect' is a word I use for 'hope': Massumi


"A discourse is a system of words, actions, rules, beliefs, and institutions that share common values. Particular discourses sustain particular worldviews. We might even think of a discourse as a worldview in action. Discourses tend to be invisible--taken for granted as part of the fabric of reality."Fairclough


Emergence is “the principle that entities exhibit properties which are meaningful only when attributed to the whole, not to its parts.” Checkland


"What the designer cares about is whether the user perceives that some action is possible (or in the case of perceived non-affordances, not possible)." Norman




Thursday 12 April 2012

Scientific evidence & Mental Models - Hogarth R.



What happens, however, when events in the world do not conform to the predictions (implicit or explicit) of your model? Imagine, for example, that when you let something slip out of your hand, it floats instead of falling. Do you question your eyesight or your model? Or do you ask whether you are in strange conditions where the model “does not apply”? Note that this, essentially, is what scientists should do when they first meet surprising phenomena (where surprising means relative to model-based expectations). 



Surprising results can have three causes: (1) the method used to obtain the result was flawed (in the example just given, perhaps there is something wrong with your eyesight?); (2) the model really is incorrect (left by themselves, objects do float instead of fall); and (3) there are specific circumstances – perhaps not previously encountered – where the model does not apply (perhaps you observed the object while traveling in a space vehicle where gravity has no effect?). 




When a theory fails, what should one do? First, it is appropriate to question the methodology that yielded the erroneous prediction. Clearly, one should discount results produced by inappropriate methods. However, what if the methodology is appropriate and, in addition, several replications confirm the original results? If this is the case, it seems almost trivial to state that the model should be amended – either rejected as incorrect or specified to be more limited than originally thought. However, the history of science is replete with examples where this does not happen. Indeed, some time ago Kuhn (1962) brilliantly described the difficulty of replacing obsolete scientific paradigms.



Concluding comments

As the evidence reviewed above indicates, people – both in science and everyday life – are slow to accept evidence that challenges their beliefs, particularly when they have a stake in those beliefs. At one level, I see this as the inevitable consequence of a dilemma that has to be managed continuously by all living systems: the simultaneous need to adapt to change and yet maintain continuity and stability across time. Moreover, adapting to perceived change can involve two kinds of errors (i.e., adapting when one should not, and not adapting when one should), and the costs of these errors are not necessarily symmetric. Thus, without trying to rationalize what might seem to be dysfunctional behavior, it is legitimate to ask what conditions favor the adoption of new ideas that challenge the status quo and what, if anything, scientists can do to improve present practice.
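
A minimal sketch of the asymmetric-cost point, using hypothetical numbers and a simple threshold rule that are my own illustration rather than anything in Hogarth's text:

```python
# Illustrative sketch of the asymmetric-error dilemma described above.
# All numbers are hypothetical; the point is only that the rational
# threshold for adapting depends on the relative costs of the two errors.

def should_adapt(p_change, cost_false_adapt, cost_missed_change):
    """Adapt when the expected cost of standing still exceeds
    the expected cost of adapting unnecessarily."""
    expected_cost_adapt = (1 - p_change) * cost_false_adapt
    expected_cost_stay = p_change * cost_missed_change
    return expected_cost_stay > expected_cost_adapt

# If missing a real change is ten times as costly as adapting needlessly,
# it pays to adapt even when the perceived change is quite unlikely.
print(should_adapt(p_change=0.2, cost_false_adapt=1.0, cost_missed_change=10.0))  # True
print(should_adapt(p_change=0.2, cost_false_adapt=10.0, cost_missed_change=1.0))  # False
```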


From a descriptive viewpoint, economic incentives play an important role. For example, from the forecasting case study above, it is clear that practitioners in industry accept the implications of the time-series competitions even though theoretical statisticians might not share their enthusiasm. For scientists and others not faced with direct economic incentives, preserving reputation seems to be the greatest concern. The paradox, however, is that scientists who acknowledge that their theories are mistaken should – in principle – enhance their long-term reputations as scientists. Instead, there seems to be a larger short-term concern to preserve the status quo.


Some twenty years ago, Hofstee (1984) suggested that scientists engage in a system of reputational bets. That is, scientists with contradictory theories can jointly define how different outcomes of a future experiment should be interpreted (i.e., which theory is supported by the evidence). In Hofstee’s scheme, the scientists assess probability distributions over the outcomes (thereby indicating “how much” of their reputational capital they are prepared to bet) and a third, independent scientist runs the experiment. The outcomes of the experiment then impact the scientists’ reputational capitals or “ratings.” However, I know of no cases where this system has actually been put into practice.
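
A rough sketch of how such reputational bets could be settled, assuming a logarithmic scoring rule and invented probabilities; Hofstee's actual scheme is not spelled out in this excerpt, so the details below are illustrative only:

```python
import math

# Hypothetical sketch of Hofstee-style reputational betting.
# Each scientist states a probability distribution over the possible
# experimental outcomes; a proper scoring rule (here, the log score)
# adjusts their reputational capital once the outcome is observed.

def log_score(probabilities, observed_outcome):
    """Log score for the probability assigned to the observed outcome."""
    return math.log(probabilities[observed_outcome])

def settle_bets(ratings, forecasts, observed_outcome):
    """Update each scientist's reputational rating by their log score."""
    return {
        name: rating + log_score(forecasts[name], observed_outcome)
        for name, rating in ratings.items()
    }

ratings = {"theorist_A": 10.0, "theorist_B": 10.0}
forecasts = {
    "theorist_A": {"effect": 0.8, "no_effect": 0.2},  # A's theory predicts an effect
    "theorist_B": {"effect": 0.3, "no_effect": 0.7},  # B's theory predicts none
}
print(settle_bets(ratings, forecasts, observed_outcome="effect"))
```

With this scoring rule, the scientist who assigned more probability to the outcome that actually occurred loses less reputational capital, which is the gist of betting one's reputation on a prediction.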



A similar scheme involves a proposal labeled “adversarial collaboration.” Here again, the disagreeing parties agree on what experiments should be run. An independent third party then runs the experiment, and all three publish the results jointly. Unfortunately, it is not clear that this procedure resolves disputes: the protagonists may still disagree about the results (see, e.g., Mellers, Hertwig, & Kahneman, 2001).



Possibly one way to think about the situation is to use the analogy of the market place for ideas where, in the presence of efficiency, ideas that are currently “best” are adopted quickly. However, like real markets in economics and finance, the market for scientific ideas is not necessarily efficient. There are many situations where the market is “thin” and not all traders (i.e., scientists) have access to information. There are speculative “bubbles” or fashions as some theories become extremely popular for a time and then fade away (consider what happened to many learning models in psychology or applications of chaos theory in the social sciences). 


In the final analysis, the market for scientific ideas can only become efficient in a long-run sense. Unfortunately, as implied in a famous statement by Lord Keynes, our lives do not extend that far.

Finally, it is important not to read the previous paragraph as pessimistic cynicism. Each generation does see scientific progress, and the accessibility of information has increased exponentially in recent years. The road to enlightenment, however, is bumpy.





Abstract:


It is well accepted that people resist evidence that contradicts their beliefs. Moreover, despite their training, many scientists reject results that are inconsistent with their theories. This phenomenon is discussed in relation to the field of judgment and decision making by describing four case studies. These concern findings that “clinical” judgment is less predictive than actuarial models; simple methods have proven superior to more “theoretically correct” methods in time series forecasting; equal weighting of variables is often more accurate than using differential weights; and decisions can sometimes be improved by discarding relevant information. All findings relate to the apparently difficult-to-accept idea that simple models can predict complex phenomena better than complex ones. It is true that there is a scientific market place for ideas. However, like its economic counterpart, it is subject to inefficiencies (e.g., thinness, asymmetric information, and speculative bubbles). Unfortunately, the market is only “correct” in the long run. The road to enlightenment is bumpy.
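
As a hedged illustration of the equal-weighting finding mentioned in the abstract (the synthetic data, sample sizes, and noise level below are my own assumptions, not the paper's), unit weights can match or beat regression-fitted weights out of sample when the fitting sample is small and noisy:

```python
import numpy as np

# Illustrative sketch (not from the paper): compare out-of-sample accuracy of
# fitted regression weights vs. simple equal weights on synthetic data where
# all predictors genuinely matter but the fitting sample is small and noisy.

rng = np.random.default_rng(0)
n_train, n_test, k = 20, 1000, 5
true_w = np.ones(k)                       # every predictor is equally relevant

def make_data(n):
    X = rng.normal(size=(n, k))
    y = X @ true_w + rng.normal(scale=3.0, size=n)   # substantial noise
    return X, y

X_tr, y_tr = make_data(n_train)
X_te, y_te = make_data(n_test)

fitted_w, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)   # "differential" weights
equal_w = np.ones(k)                                       # unit weights

mse = lambda w: np.mean((X_te @ w - y_te) ** 2)
print(f"fitted weights MSE: {mse(fitted_w):.2f}")
print(f"equal  weights MSE: {mse(equal_w):.2f}")
```

With only twenty training observations and substantial noise, the fitted weights tend to overfit, so the equal-weight composite usually predicts the held-out cases at least as well.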



Keywords:

Decision making, judgment, forecasting, linear models, heuristics



JEL codes:

D81, M10



Area of Research:

Behavioral and Experimental Economics



Published in:

In P. M. Todd, G. Gigerenzer, & The ABC Research Group (Eds.), Ecological rationality: Intelligence in the world. Oxford: Oxford University Press, 2006.
