Significance tests have a role to play in social science research but their current widespread use in appraising theories is often harmful. The reason for this lies not in the mathematics but in social scientists’ poor understanding of the logical relation between theory and fact, that is, a methodological or epistemological unclarity. Theories entail observations, not conversely. Although a theory’s success in deriving a fact tends to corroborate it, this corroboration is weak unless the fact has a very low prior probability and there are few possible alternative theories. The fact of a nonzero difference or correlation, such as we infer by refuting the null hypothesis, does not have such a low probability because in social science everything correlates with almost everything else, theory aside. In the “strong” use of significance tests, the theory predicts a numerical point value, or narrow range, so the hypothesis test subjects the theory to a grave risk of being falsified if it is objectively incorrect. In general, setting up a confidence interval is preferable, being more informative and entailing null hypothesis refutation if a difference falls outside the interval. Significance tests are usually more defensible in technological contexts (e.g., evaluating an intervention) than for theory appraisal. 
Contrast this bizarre state of affairs with the state of affairs in physics. While there are of course a few exceptions, the usual situation in the experimental testing of a physical theory at least involves the prediction of a form of function (with parameters to be fitted); or, more commonly, the prediction of a quantitative magnitude (point-value). Improvements in the accuracy of determining this experimental function-form or point-value, whether by better instrumentation for control and making observations, or by the gathering of a larger number of measurements, have the effect of narrowing the band of tolerance about the theoretically predicted value. What does this mean in terms of the significance-testing model? It means: In physics, that which corresponds, in the logical structure of statistical inference, to the old-fashioned point-null hypothesis H0 is the value which flows as a consequence of the substantive theory T; so that an increase in what the statistician would call “power” or “precision” has the methodological effect of stiffening the experimental test, of setting up a more difficult observational hurdle for the theory T to surmount. Hence, in physics the effect of improving precision or power is that of decreasing the prior probability of a successful experimental outcome if the theory lacks verisimilitude, that is, precisely the reverse of the situation obtaining in the social sciences.
As techniques of control and measurement improve or the number of observations increases, the methodological effect in physics is that a successful passing of the hurdle will mean a greater increment in corroboration of the substantive theory; whereas in psychology, comparable improvements at the experimental level result in an empirical test which can provide only a progressively weaker corroboration of the substantive theory.
In physics, the substantive theory predicts a point-value, and when physicists employ “significance tests,” their mode of employment is to compare the theoretically predicted value x0 with the observed mean x̄, asking whether they differ (in either direction!) by more than the “probable error” of determination of the latter. Hence H0: μ = x0 functions as a point-null hypothesis.
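Meehl's asymmetry is easy to exhibit numerically. The following sketch (my illustration, not Meehl's; all function names and parameter values are invented for the example) compares a directional null-hypothesis test with a point-prediction test as sample size grows, under the assumption that the theory's predicted point value is slightly wrong but the true effect happens to lie in the predicted direction:

```python
# A toy simulation (mine, not Meehl's) contrasting the two kinds of test.
import math
import random
import statistics

random.seed(0)

def directional_test_passes(true_mean, n, z_crit=1.645):
    """'Soft' test: one-sided refutation of H0: mu <= 0, since the theory predicts only d > 0."""
    sample = [random.gauss(true_mean, 1.0) for _ in range(n)]
    z = statistics.mean(sample) * math.sqrt(n)  # known sigma = 1
    return z > z_crit

def point_test_passes(true_mean, predicted, n, z_crit=1.96):
    """'Strong' test: the observed mean must fall within ~2 standard errors of the predicted point value."""
    sample = [random.gauss(true_mean, 1.0) for _ in range(n)]
    return abs(statistics.mean(sample) - predicted) < z_crit / math.sqrt(n)

# Suppose the theory is wrong: it predicts a mean of exactly 0.0, but the true
# mean is 0.1 -- a small effect that is nonetheless in the predicted direction.
for n in (25, 400, 2500):
    dir_rate = sum(directional_test_passes(0.1, n) for _ in range(1000)) / 1000
    pt_rate = sum(point_test_passes(0.1, 0.0, n) for _ in range(1000)) / 1000
    print(f"n={n:5d}  directional pass rate={dir_rate:.2f}  point pass rate={pt_rate:.2f}")
```

As n grows, the directional test becomes almost certain to "corroborate" the false theory, while the point-prediction hurdle becomes almost impossible for it to clear: exactly the reversal of risk Meehl describes.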
Inadequate appreciation of the extreme weakness of the test to which a substantive theory T is subjected by merely predicting a directional statistical difference d > 0 is then compounded by a truly remarkable failure to recognize the logical asymmetry between, on the one hand, (formally invalid) “confirmation” of a theory via affirming the consequent in an argument of form: [T ⊃ H1, H1, infer T], and on the other hand the deductively tight refutation of the theory modus tollens by a falsified prediction, the logical form being: [T ⊃ H1, ~H1, infer ~T].
While my own philosophical predilections are somewhat Popperian, I daresay any reader will agree that no full-fledged Popperian philosophy of science is presupposed in what I have just said. The destruction of a theory modus tollens is, after all, a matter of deductive logic; whereas the “confirmation” of a theory by its successful predictions involves a much weaker kind of inference. This much would be conceded by even the most anti-Popperian “inductivist.” The writing of behavior scientists often reads as though they assumed—what it is hard to believe anyone would explicitly assert if challenged—that successful and unsuccessful predictions are practically on all fours in arguing for and against a substantive theory.
Isn’t the social scientist’s use of the null hypothesis simply the application of Popperian (or Bayesian) thinking in contexts in which probability plays such a big role? No, it is not. One reason it is not is that the usual use of null hypothesis testing in soft psychology as a means of “corroborating” substantive theories does not subject the theory to grave risk of refutation modus tollens, but only to a rather feeble danger. The kinds of theories and the kinds of theoretical risks to which we put them in soft psychology when we use significance testing as our method are not like testing Meehl’s theory of weather by seeing how well it forecasts the number of inches it will rain on certain days. Instead, they are depressingly close to testing the theory by seeing whether it rains in April at all, or rains several days in April, or rains in April more than in May. [821-2]
Rudimentary organs may be compared with the letters in a word, still retained in the spelling, but become useless in the pronunciation, but which serve as a clue in seeking for its derivation. 
But I should go even further and accuse at least some professional historians of ‘scientism’: of trying to copy the method of natural science, not as it actually is, but as it is wrongly alleged to be. This alleged but non-existent method is that of collecting observations and then ‘drawing conclusions’ from them. It is slavishly aped by some historians who believe that they can collect documentary evidence which, corresponding to the observations of natural science, forms the ‘empirical basis’ for their conclusions. …
Worse even than the attempt to apply an inapplicable method is the worship of the idol of certain or infallible or authoritative knowledge which these historians mistake for the ideal of science. Admittedly, we all try hard to avoid error; and we ought to be sad if we have made a mistake. Yet to avoid error is a poor ideal: if we do not dare to tackle problems which are so difficult that error is almost unavoidable, then there will be no growth of knowledge. In fact, it is from our boldest theories, including those which are erroneous, that we learn most. Nobody is exempt from making mistakes; the great thing is to learn from them. 
The scientific paper in its orthodox form does embody a totally mistaken conception, even a travesty, of the nature of scientific thought. 
The scientific paper is a fraud in the sense that it does give a totally misleading narrative of the processes of thought that go into the making of scientific discoveries. The inductive format of the scientific paper should be discarded. The discussion which in the traditional scientific paper goes last should surely come at the beginning. The scientific facts and scientific acts should follow the discussion, and scientists should not be ashamed to admit, as many of them apparently are ashamed to admit, that hypotheses appear in their minds along uncharted by-ways of thought; that they are imaginative and inspirational in character; that they are indeed adventures of the mind. 
Thus I oppose the attempt to proclaim the method of understanding as the characteristic of the humanities, the mark by which we may distinguish them from the natural sciences. And when its supporters denounce a view like mine as ‘positivistic’ or ‘scientistic’,* then I may perhaps answer that they themselves seem to accept, implicitly and uncritically, that positivism or scientism is the only philosophy appropriate to the natural sciences.
* The term ‘scientism’ meant originally ‘the slavish imitation of the method and language of [natural] science’, especially by social scientists; it was introduced in this sense by Hayek in his ‘Scientism in the Study of Society’, now in his The Counter-Revolution of Science, 1962. In The Poverty of Historicism, p. 105, I suggested its use as a name for the aping of what is widely mistaken for the method of science; and Hayek now agrees (in his Preface to his Studies in Philosophy, Politics and Economics, which contains a very generous acknowledgement) that the methods actually practised by natural scientists are different from ‘what most of them told us … and urged the representatives of other disciplines to imitate’.
A second familiar approach from the same period is Karl Popper’s ‘falsificationist’ criterion, which fares no better. Apart from the fact that it leaves ambiguous the scientific status of virtually every singular existential statement, however well supported (e.g., the claim that there are atoms, that there is a planet closer to the sun than the Earth, that there is a missing link), it has the untoward consequence of countenancing as ‘scientific’ every crank claim which makes ascertainably false assertions. Thus flat Earthers, biblical creationists, proponents of laetrile or orgone boxes, Uri Geller devotees, Bermuda Triangulators, circle squarers, Lysenkoists, charioteers of the gods, perpetuum mobile builders, Big Foot searchers, Loch Nessians, faith healers, polywater dabblers, Rosicrucians, the-world-is-about-to-enders, primal screamers, water diviners, magicians, and astrologers all turn out to be scientific on Popper’s criterion – just so long as they are prepared to indicate some observation, however improbable, which (if it came to pass) would cause them to change their minds. 
The falsifying mode of inference here referred to—the way in which the falsification of a conclusion entails the falsification of the system from which it is derived—is the modus tollens of classical logic. It may be described as follows:
Let p be a conclusion of a system t of statements which may consist of theories and initial conditions (for the sake of simplicity I will not distinguish between them). We may then symbolize the relation of derivability (analytical implication) of p from t by ‘t ➙ p’, which may be read: ‘p follows from t ’. Assume p to be false, which we may write ‘¬p’, to be read ‘not-p’. Given the relation of deducibility, t ➙ p, and the assumption ¬p, we can then infer ¬t (read ‘not-t ’); that is, we regard t as falsified. If we denote the conjunction (simultaneous assertion) of two statements by putting a point between the symbols standing for them, we may also write the falsifying inference thus: ((t ➙ p)·¬p) ➙ ¬t, or in words: ‘If p is derivable from t, and if p is false, then t also is false’.
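Popper's falsifying inference is classical modus tollens, and the schema can be stated and machine-checked directly; a minimal sketch in Lean (my formulation and identifier names, not Popper's notation):

```lean
-- Modus tollens: if p is derivable from t, and p is false, then t is false.
-- Here ¬t is definitionally t → False, so the proof feeds a hypothetical
-- proof of t through the derivation h to contradict ¬p.
theorem falsifying_inference {t p : Prop} (h : t → p) (hp : ¬p) : ¬t :=
  fun ht => hp (h ht)
```

The proof term makes Popper's point about deductive tightness concrete: unlike "confirmation" via affirming the consequent, this inference is valid purely by the logic of implication and negation.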
By means of this mode of inference we falsify the whole system (the theory as well as the initial conditions) which was required for the deduction of the statement p, i.e. of the falsified statement. Thus it cannot be asserted of any one statement of the system that it is, or is not, specifically upset by the falsification. Only if p is independent of some part of the system can we say that this part is not involved in the falsification.* With this is connected the following possibility: we may, in some cases, perhaps in consideration of the levels of universality, attribute the falsification to some definite hypothesis—for instance to a newly introduced hypothesis. This may happen if a well-corroborated theory, and one which continues to be further corroborated, has been deductively explained by a new hypothesis of a higher level. The attempt will have to be made to test this new hypothesis by means of some of its consequences which have not yet been tested. If any of these are falsified, then we may well attribute the falsification to the new hypothesis alone. We shall then seek, in its stead, other high-level generalizations, but we shall not feel obliged to regard the old system, of lesser generality, as having been falsified.
* Thus we cannot at first know which among the various statements of the remaining sub-system t ′ (of which p is not independent) we are to blame for the falsity of p; which of these statements we have to alter, and which we should retain. (I am not here discussing interchangeable statements.) It is often only the scientific instinct of the investigator (influenced, of course, by the results of testing and re-testing) that makes him guess which statements of t ′ he should regard as innocuous, and which he should regard as being in need of modification. Yet it is worth remembering that it is often the modification of what we are inclined to regard as obviously innocuous (because of its complete agreement with our normal habits of thought) which may produce a decisive advance. A notable example of this is Einstein’s modification of the concept of simultaneity. [55-6]