In response to their own expectations, mental health professionals often ask questions that can only confirm their a priori hypotheses. At PsychLaw.net we know that fundamental considerations of scientific logic, however, dictate that mental health professionals engage in a process known as “proof by disproof.”[1] In other words, a scientific hypothesis is tentatively accepted if, and only if, it cannot be disproven. Scientific experiments are therefore designed to disprove hypotheses. Similarly, physicians typically reach their diagnostic conclusions by attempting to rule out alternative explanations for a condition.
Most of us do not pursue strategies of “proof by disproof” without formal training. Why? Because making judgments premised on “proof by disproof” is counterintuitive. People typically prefer to test hypotheses by seeking information that confirms or rules in, rather than disconfirms or rules out, the hypothesis under consideration. Assume, for instance, you are confronted with four cards. Each card has a letter on one side and a number on the other. The four cards facing up before you show: A, B, 2, and 3. You are asked to test the following hypothesis: “All cards with a vowel on one side have an even number on the other.” In particular, you are asked to select the two cards to turn over that are most relevant to testing this hypothesis[2].
Most people respond by turning over cards “A” and “2.” Card A is relevant to testing the hypothesis: if the number on the other side of Card A is not an even number, the hypothesis can be rejected. Turning over card “2,” however, is irrelevant to testing the hypothesis. Remember, the hypothesis was: “All cards with a vowel on one side have an even number on the other.” The hypothesis does not require that all cards with an even number on one side have a vowel on the other side. Therefore, finding a consonant on the other side of card “2” would not disprove the hypothesis. Nonetheless, participants in this experiment frequently turned over card “2” in a misdirected attempt at ruling in the hypothesis.
Very few people participating in this experiment turned over the one most important card, the card with a “3.” Turning over the “3” card amounts to testing the hypothesis via disproof. Think about it: if the “3” card has a vowel on the other side, the hypothesis can be rejected, or ruled out, in a single turn. Stuck in similar thinking patterns, many mental health professionals seek evidence attempting to rule in their hypotheses, while neglecting to consider that attempting to rule out hypotheses is a more logically efficient procedure.
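To make the card problem concrete, here is a minimal sketch in Python of the selection logic. The code and its helper names (is_vowel, can_falsify) are purely illustrative and are not drawn from the cited studies; it simply checks, for each visible face, whether any hidden face could disprove the rule “all cards with a vowel on one side have an even number on the other.”

    # Illustrative sketch of the Wason selection task described above.
    # Only the visible faces matter for deciding which cards are worth turning.

    def is_vowel(face):
        return str(face).upper() in "AEIOU"

    def can_falsify(visible):
        """A turned card can disprove the rule only if its visible face is a vowel
        (the hidden number might be odd) or an odd number (the hidden letter
        might be a vowel)."""
        if is_vowel(visible):
            return True
        if isinstance(visible, int) and visible % 2 == 1:
            return True
        return False

    cards = ["A", "B", 2, 3]  # the four visible faces in the example

    for visible in cards:
        if can_falsify(visible):
            print(f"Card {visible!r}: worth turning over (a hidden face could disprove the rule)")
        else:
            print(f"Card {visible!r}: irrelevant (no hidden face could disprove the rule)")

Run as written, the sketch flags only the “A” and “3” cards as capable of disproving the rule, which matches the logic described above.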
In a 1978 study, college students interviewed a young woman to determine if she was an extrovert. In attempting to identify extroversion, these students frequently resorted to one-sided questions (e.g., “What would you do if you wanted to liven things up at a party?”). A question such as this is clearly biased toward ruling in extroversion. Practically everyone, even withdrawn introverts, has attended a few parties and can at least discuss how to “liven things up.” Any response to this question, therefore, can be interpreted as indicating that the person is an extrovert. At PsychLaw.net we understand that related research has demonstrated that experienced mental health professionals also prefer biased, one-sided questions when assessing extroversion[3].
During their interviews, mental health professionals often question patients in a manner that biases the information they obtain[4]. Assumptions about a patient’s drinking, marriage, or anger, for example, increase the frequency of questions directed at those topics, and asking enough questions allows mental health professionals to find the answers they are seeking[5]. Interviewers expecting to find evidence of alcohol abuse resort to the following kinds of questions:
(1) Do you sometimes think that maybe you drink too much?
(2) Is it possible there are experiences you cannot remember because of your drinking?
(3) Are there other people who might say you drink too much?
(4) Might they avoid telling you because they anticipate your negative reaction?
These kinds of questions are so vague and ill-defined that basic considerations of logic dictate affirmative answers to them. When you are asked whether “maybe” you drink too much, the qualifier “maybe” makes it difficult to say no. Similarly, if you cannot recall any memory lapses related to your drinking, a mental health professional can conclude that what you do not recall confirms your presumed memory lapses.
At PsychLaw.net we understand that the profound difficulties this rule-in bias creates for the legal system have been studied by a variety of legal commentators. Perhaps the most erudite of these commentators is David Faigman. According to Faigman, social science evidence should not be presented to jurors unless it rests on a scientific theory that has been empirically tested:
“[f]alsifiability or testability represents the line of demarcation between science and pseudo-science, and the strength of particular scientific statements depends on the extent to which they have been tested appropriately.”[6]
Faigman advocates for threshold admissibility determinations of “scientific” validity, calling them as essential for “soft” evidence as for “hard” evidence. He insists that judges must undertake threshold screening of the methodology on which the social science evidence rests. For Faigman, a restrictive test prohibits statements from experts that “reflect personal values rather than scientific observation,” and guards against “experts … [who] nullify legal rules themselves, by confusing jurors, or … call upon the jury to nullify a legal rule on the basis of policy considerations that the rule does not reflect.” He specifically advises that courts look for the signposts of scientific methodology before allowing an expert to render an opinion based on the social sciences, and counsels that courts require the proposed expert to provide “a cogent explanation of the methods and analyses that produced the scientific opinion.”
Footnotes
[1]. Schuck, P.H. (1993). Multi-Culturalism Redux: Science, Law and Politics, 11 Yale Law & Policy Review 1, 16 (“Scientists subscribe to and are actuated by rigorous standards of empirical investigation and proof; to deviate from these standards is to be deemed professionally incompetent, or worse.”)
[2]. Wason, P.C. (1966). Reasoning. In B.M. Foss (Ed.), New horizons in psychology. Harmondsworth: Penguin.
[3]. Dallas, M.E. & Baron, R.S. (1985). Do psychotherapists use a confirmatory strategy during interviewing? Journal of Social and Clinical Psychology, 3, 106-122.
[4]. Snyder, M. & Thomsen, C.J. (1988). Interactions between therapists and clients: Hypothesis testing and behavioral confirmation. In D.C. Turk & P. Salovey (Eds.), Reasoning, influence, and judgment in clinical psychology. New York: Free Press.
[5]. Arkes, H.R. (1981). Impediments to accurate clinical judgment and possible ways to minimize their impact. Journal of Consulting and Clinical Psychology, 49, 323-330.
[6]. Faigman, D.L. (1989). To Have and Have Not: Assessing the Value of Social Science to the Law as Science and Policy, 38 Emory L.J. 1005. See also Faigman, D.L. (1992). Struggling to Stop the Flood of Unreliable Expert Testimony, 76 Minn. L. Rev. 877.