# Category Archive: .Meehl, Paul E.

## Weak statistical tests

The distinction between the strong and the weak use of significance tests is logical or epistemological; it is not a statistical issue. The weak use of significance tests asks merely whether the observations are attributable to “chance” (i.e., no relation exists) when a weak theory can only predict some sort of relation, but not what or how much. The strong use of significance tests asks whether observations differ significantly from the numerical values that a strong theory predicts, and it leads to the fourth figure of the syllogism—*p* ⊃ q, ~q, infer ~p—which is formally valid, the logician’s *modus tollens* (“destroying mode”). Psychologists should work hard to formulate theories that, even if somewhat weak, permit derivation of numerical point values or narrow ranges, yielding the possibility of *modus tollens* refutations. [422]

## The problem is epistemology, not statistics

Significance tests have a role to play in social science research but their current widespread use in appraising theories is often harmful. The reason for this lies not in the mathematics but in social scientists’ poor understanding of the logical relation between theory and fact, that is, a methodological or epistemological unclarity. Theories entail observations, not conversely. Although a theory’s success in deriving a fact tends to corroborate it, this corroboration is weak unless the fact has a very low prior probability and there are few possible alternative theories. The fact of a nonzero difference or correlation, such as we infer by refuting the null hypothesis, does not have such a low probability because in social science everything correlates with almost everything else, theory aside. In the “strong” use of significance tests, the theory predicts a numerical point value, or narrow range, so the hypothesis test subjects the theory to a grave risk of being falsified if it is objectively incorrect. In general, setting up a confidence interval is preferable, being more informative and entailing null hypothesis refutation if a difference falls outside the interval. Significance tests are usually more defensible in technological contexts (e.g., evaluating an intervention) than for theory appraisal. [393]
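The contrast between the weak use (refuting a zero-difference null) and the strong use (testing a theoretically predicted point value) can be sketched in a short simulation. All numbers here are hypothetical illustrations, not Meehl’s own: the true effect, the predicted value, and the sample size are invented for the sketch.

```python
import random
import statistics

random.seed(1)

def sample_mean_ci(data, z=1.96):
    """95% confidence interval for the mean (normal approximation)."""
    m = statistics.mean(data)
    se = statistics.stdev(data) / len(data) ** 0.5
    return m - z * se, m + z * se

# Suppose the true effect is a small nonzero difference, as Meehl argues
# is nearly always the case in social science ("everything correlates
# with almost everything else").
true_mean = 0.3
data = [random.gauss(true_mean, 1.0) for _ in range(5000)]
lo, hi = sample_mean_ci(data)

# Weak use: refute H0: mu = 0. With enough data this almost always
# succeeds, so surviving the test tells us little about any theory.
weak_pass = not (lo <= 0.0 <= hi)

# Strong use: the theory predicts the point value mu = 0.8 (hypothetical).
# The same confidence interval now puts the theory at grave risk.
strong_pass = lo <= 0.8 <= hi

print(weak_pass, strong_pass)
```

Note that the confidence interval does double duty, as the passage suggests: it refutes the null hypothesis (the weak use) and simultaneously refutes the point prediction that falls outside it (the strong use).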

## Inductive psychology vs deductive physics

Contrast this bizarre state of affairs with the state of affairs in physics. While there are of course a few exceptions, the usual situation in the experimental testing of a physical theory at least involves the prediction of a *form* of function (with parameters to be fitted); or, more commonly, the prediction of a quantitative magnitude (point-value). Improvements in the accuracy of determining this experimental function-form or point-value, whether by better instrumentation for control and making observations, or by the gathering of a larger number of measurements, have the effect of *narrowing* the band of tolerance about the theoretically predicted value. What does this mean in terms of the significance-testing model? It means: *In physics, that which corresponds, in the logical structure of statistical inference, to the old-fashioned point-null hypothesis H_{0} is the value which flows as a consequence of the substantive theory T;* so that an increase in what the statistician would call “power” or “precision” has the methodological effect of stiffening the experimental test, of setting up a more difficult observational hurdle for the theory T to surmount. Hence, in physics the effect of improving precision or power is that of *decreasing* the prior probability of a successful experimental outcome if the theory lacks verisimilitude, that is, precisely the reverse of the situation obtaining in the social sciences.

As techniques of control and measurement improve or the number of observations increases, the methodological effect in physics is that a successful passing of the hurdle will mean a greater increment in corroboration of the substantive theory; whereas in psychology, comparable improvements at the experimental level result in an empirical test which can provide only a progressively weaker corroboration of the substantive theory.
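This asymmetry can be made concrete with a small Monte Carlo sketch. The specific numbers (a true mean of 0.30, a slightly wrong point prediction of 0.35, the two sample sizes) are invented for illustration; the point is only the direction of the effect as precision increases.

```python
import math
import random
import statistics

random.seed(2)

def passes_point_test(n, true_mu=0.30, predicted_mu=0.35):
    """Physics-style test: the predicted point value must fall inside
    the 95% confidence interval around the observed mean."""
    m = statistics.mean(random.gauss(true_mu, 1.0) for _ in range(n))
    return abs(m - predicted_mu) <= 1.96 / math.sqrt(n)

def passes_directional_test(n, true_mu=0.30):
    """Psychology-style test: one-sided z-test of H0: mu = 0 at alpha = .05."""
    m = statistics.mean(random.gauss(true_mu, 1.0) for _ in range(n))
    return m > 1.645 / math.sqrt(n)

# Estimate each pass rate at low and high precision (small and large n).
rates = {}
for n in (100, 10_000):
    rates[n] = (
        sum(passes_point_test(n) for _ in range(200)) / 200,
        sum(passes_directional_test(n) for _ in range(200)) / 200,
    )
    print(n, rates[n])
```

As n grows, the slightly-false point prediction passes less and less often (the hurdle stiffens), while the merely directional prediction passes more and more often (the hurdle collapses), which is the reversal the passage describes.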

In physics, the substantive theory predicts a point-value, and when physicists employ “significance tests,” their mode of employment is to compare the theoretically predicted value x_{0} with the observed mean x̄, asking whether they differ (in either direction!) by more than the “probable error” of determination of the latter. Hence H_{0}: *μ* = x_{0} functions as a point-null hypothesis. As the probable error shrinks, values of x̄ consistent with x_{0} (and hence, compatible with its implicans T) must lie within a narrow range. In the limit (zero probable error, corresponding to “perfect power” in the significance test) any non-zero difference (x̄ – x_{0}) provides a *modus tollens* refutation of T. If the theory has negligible verisimilitude, the logical probability of its surviving such a test is negligible. Whereas in psychology, the result of perfect power (i.e., certain detection of any non-zero difference in the predicted direction) is to yield a prior probability *p* = ½ of getting experimental results compatible with T, because perfect power would mean guaranteed detection of whatever difference exists; and a difference [quasi] always exists, being in the “theoretically expected direction” half the time if our substantive theories were all of negligible verisimilitude (two-urn model). [112-3]
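The two-urn model mentioned at the end can be sketched directly: a theory with zero verisimilitude in effect guesses the direction of the (always nonzero) difference at random, and with perfect power any such difference is certain to be detected. A minimal simulation, with all mechanics assumed for illustration:

```python
import random

random.seed(0)

# Two-urn model: the true difference is always nonzero, and a worthless
# theory predicts its direction at random. With "perfect power," any
# nonzero difference is detected with certainty, so the theory is
# "confirmed" exactly when its guessed direction matches the true one.
trials = 100_000
hits = 0
for _ in range(trials):
    true_direction = random.choice([-1, 1])   # sign of the real difference
    predicted = random.choice([-1, 1])        # theory of zero verisimilitude
    if predicted == true_direction:           # perfect power: always detected
        hits += 1

print(hits / trials)  # approaches 1/2 as trials grow
```

The pass rate converges on ½, which is the prior probability of “confirmation” the passage derives: at maximal power, the weak directional test confirms a worthless theory half the time.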

## Methodological confirmation bias

Inadequate appreciation of the extreme weakness of the test to which a substantive theory T is subjected by merely predicting a directional statistical difference d > 0 is then compounded by a truly remarkable failure to recognize the logical asymmetry between, on the one hand, (formally invalid) “confirmation” of a theory via affirming the consequent in an argument of form: [T ⊃ H_{1}, H_{1}, infer T], and on the other hand the deductively tight *refutation* of the theory *modus tollens* by a falsified prediction, the logical form being: [T ⊃ H_{1}, ~H_{1}, infer ~T].

While my own philosophical predilections are somewhat Popperian, I daresay any reader will agree that no full-fledged Popperian philosophy of science is presupposed in what I have just said. The destruction of a theory *modus tollens* is, after all, a matter of deductive logic; whereas the “confirmation” of a theory by its making successful predictions involves a much weaker kind of inference. This much would be conceded by even the most anti-Popperian “inductivist.” The writing of behavior scientists often reads as though they assumed—what it is hard to believe anyone would explicitly assert if challenged—that successful and unsuccessful predictions are practically on all fours in arguing for and against a substantive theory. [112]
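The logical asymmetry itself can be verified mechanically by exhausting the truth table: *modus tollens* is valid (the conclusion holds in every row where the premises hold), while affirming the consequent is not. A small sketch:

```python
from itertools import product

def implies(p, q):
    """Material conditional p -> q."""
    return (not p) or q

def valid(premises, conclusion):
    """An argument form is valid iff the conclusion is true in every
    truth assignment (t, h) in which all premises are true."""
    return all(conclusion(t, h)
               for t, h in product([True, False], repeat=2)
               if all(p(t, h) for p in premises))

# Modus tollens: T -> H, ~H, therefore ~T.
modus_tollens = valid([lambda t, h: implies(t, h), lambda t, h: not h],
                      lambda t, h: not t)

# Affirming the consequent: T -> H, H, therefore T.
affirming = valid([lambda t, h: implies(t, h), lambda t, h: h],
                  lambda t, h: t)

print(modus_tollens, affirming)  # True False
```

The counterexample row for affirming the consequent is T false, H true: the prediction succeeds even though the theory is false, which is exactly why successful predictions give only weak support while a falsified prediction refutes deductively.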

## The soft corroboration of psychology

Isn’t the social scientist’s use of the null hypothesis simply the application of Popperian (or Bayesian) thinking in contexts in which probability plays such a big role? No, it is not. One reason it is not is that the usual use of null hypothesis testing in soft psychology as a means of “corroborating” substantive theories does not subject the theory to grave risk of refutation *modus tollens*, but only to a rather feeble danger. The kinds of theories and the kinds of theoretical risks to which we put them in soft psychology when we use significance testing as our method are *not* like testing Meehl’s theory of weather by seeing how well it forecasts the number of inches it will rain on certain days. Instead, they are depressingly close to testing the theory by seeing whether it rains in April at all, or rains several days in April, or rains in April more than in May. [821-2]
