Category: Fisher, R.A.

Open to suspicion

In preparing this table [a variation of Elderton’s Table of Goodness of Fit] we have borne in mind that in practice we do not want to know the exact value of P for any observed χ², but, in the first place, whether or not the observed value is open to suspicion. If P is between ·1 and ·9 there is certainly no reason to suspect the hypothesis tested. If it is below ·02 it is strongly indicated that the hypothesis fails to account for the whole of the facts. We shall not often be astray if we draw a conventional line at ·05, and consider that higher values of χ² indicate a real discrepancy. [80, 11th ed.]

In preparing this table [a variation of Elderton’s Table of Goodness of Fit] we have borne in mind that in practice we do not want to know the exact value of P for any observed χ², but, in the first place, whether or not the observed value is open to suspicion. If P is between ·1 and ·9 there is certainly no reason to suspect the hypothesis tested. If it is below ·02 it is strongly indicated that the hypothesis fails to account for the whole of the facts. Belief in the hypothesis as an accurate representation of the population sampled is confronted by the logical disjunction: Either the hypothesis is untrue, or the value of χ² has attained by chance an exceptionally high value. The actual value of P obtainable from the table by interpolation indicates the strength of the evidence against the hypothesis. A value of χ² exceeding the 5 per cent. point is seldom to be disregarded. [80, 14th ed.]
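Fisher’s working rules above (no suspicion for P between ·1 and ·9, strong suspicion below ·02, a conventional line at ·05) can be sketched numerically. A minimal Python illustration, with invented counts over five equally likely categories; the survival function here uses the closed form for even degrees of freedom, not Fisher’s table:

```python
import math

def chi2_sf(x, df):
    """Upper-tail probability P(chi-square > x) for even df,
    via the closed form exp(-x/2) * sum_{k<df/2} (x/2)^k / k!."""
    assert df % 2 == 0, "this closed form needs even df"
    term, total = 1.0, 1.0
    for k in range(1, df // 2):
        term *= (x / 2) / k
        total += term
    return math.exp(-x / 2) * total

# Invented data: 100 observations spread over 5 equally likely categories.
observed = [25, 18, 22, 16, 19]
expected = [20] * 5
chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
p = chi2_sf(chi2, df=len(observed) - 1)  # df = 5 - 1 = 4

# Fisher's verbal scale from the passage above.
if 0.1 < p < 0.9:
    verdict = "no reason to suspect the hypothesis"
elif p < 0.02:
    verdict = "hypothesis fails to account for the facts"
else:
    verdict = "borderline; the 5 per cent. line decides"
print(chi2, round(p, 4), verdict)  # chi2 = 2.5, p ≈ 0.6446
```

Here P ≈ 0·64 lies comfortably inside Fisher’s ·1–·9 band, so the uniform hypothesis is not open to suspicion.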

Fisher’s severe tests

In choosing the grounds upon which a general hypothesis should be rejected, the experimenter will rightly consider all points on which, in the light of current knowledge, the hypothesis may be imperfectly accurate, and will select tests, so far as possible, sensitive to these possible faults, rather than to others. [47]

The misuse of significance tests

The examples elaborated in the foregoing sections of numerical discrepancies arising from the rigid formulation of a rule, which at first acquaintance it seemed natural to apply to all tests of significance, constitute only one aspect of the deep-seated difference in point of view which arises when Tests of Significance are reinterpreted on the analogy of Acceptance Decisions. It is indeed not only numerically erroneous conclusions, serious as these are, that are to be feared from an uncritical acceptance of this analogy.

An important difference is that Decisions are final, while the state of opinion derived from a test of significance is provisional, and capable, not only of confirmation, but of revision. An acceptance procedure is devised for a whole class of cases. No particular thought is given to each case as it arises, nor is the tester’s capacity for learning exercised. A test of significance on the other hand is intended to aid the process of learning by observational experience. [100]

Fisher on the logic of null hypotheses

In relation to any experiment we may speak of this hypothesis as the “null hypothesis,” and it should be noted that the null hypothesis is never proved or established, but is possibly disproved, in the course of experimentation. Every experiment may be said to exist only in order to give the facts a chance of disproving the null hypothesis.

It might be argued that if an experiment can disprove the hypothesis that the subject possesses no sensory discrimination between two different sorts of object, it must therefore be able to prove the opposite hypothesis, that she can make some such discrimination. But this last hypothesis, however reasonable or true it may be, is ineligible as a null hypothesis to be tested by experiment, because it is inexact. If it were asserted that the subject would never be wrong in her judgements we should again have an exact hypothesis, and it is easy to see that this hypothesis could be disproved by a single failure, but could never be proved by any finite amount of experimentation. [16]
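The design behind this passage, the lady tasting tea, makes the logic concrete: with eight cups, four of each kind, and the subject required to name the four milk-first cups, the null hypothesis of no discrimination fixes an exact hypergeometric distribution for the number of correct picks. A minimal sketch (the function name and layout are mine, not Fisher’s):

```python
from math import comb

def p_at_least(k, cups=8, milk_first=4):
    """Under the null hypothesis of no sensory discrimination, the exact
    probability of correctly identifying at least k of the milk-first
    cups, when exactly `milk_first` cups must be named out of `cups`."""
    total = comb(cups, milk_first)  # 70 equally likely selections
    return sum(
        comb(milk_first, j) * comb(cups - milk_first, milk_first - j)
        for j in range(k, milk_first + 1)
    ) / total

print(p_at_least(4))  # 1/70 ≈ 0.0143: a perfect score is significant at 5%
print(p_at_least(3))  # 17/70 ≈ 0.243: three of four right is not
```

The 1/70 for a perfect score is exactly the “once in 70 trials” event Fisher mentions in the significance-test passage below; note also that the null hypothesis can only be disproved by the data, never proved, matching the text above.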

Fisher on Bayesianism

[A]dvocates of inverse probability seem forced to regard mathematical probability, not as an objective quantity measured by observable frequencies, but as measuring merely psychological tendencies, theorems respecting which are useless for scientific purposes. [6-7]

Fisher on significance tests

In considering the appropriateness of any proposed experimental design, it is always needful to forecast all possible results of the experiment, and to have decided without ambiguity what interpretation shall be placed upon each one of them. Further, we must know by what argument this interpretation is to be sustained. …

It is open to the experimenter to be more or less exacting in respect of the smallness of the probability he would require before he would be willing to admit that his observations have demonstrated a positive result. It is obvious that an experiment would be useless of which no possible result would satisfy him. Thus, if he wishes to ignore results having probabilities as high as 1 in 20—the probabilities being of course reckoned from the hypothesis that the phenomenon to be demonstrated is in fact absent … . It is usual and convenient for the experimenters to take 5 per cent. as a standard level of significance, in the sense that they are prepared to ignore all results which fail to reach this standard, and, by this means to eliminate from further discussion the greater part of the fluctuations which chance causes have introduced into their experimental results. No such selection can eliminate the whole of the possible effects of chance coincidence, and if we accept this convenient convention, and agree that an event which would occur by chance only once in 70 trials is decidedly “significant”, in the statistical sense, we thereby admit that no isolated experiment, however significant in itself, can suffice for the experimental demonstration of any natural phenomenon; for the “one chance in a million” will undoubtedly occur, with no less and no more than its appropriate frequency, however surprised we may be that it should occur to us. In order to assert that a natural phenomenon is experimentally demonstrable we need, not an isolated record, but a reliable method of procedure. In relation to the test of significance, we may say that a phenomenon is experimentally demonstrable when we know how to conduct an experiment which will rarely fail to give us a statistically significant result. [12-4]
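Fisher’s closing criterion, that a phenomenon is demonstrable when a properly designed experiment will rarely fail to give a significant result, can be illustrated by simulation. A hedged sketch with invented parameters (a real effect of 0·7 against a null of 0·5, 100 Bernoulli trials per experiment, repeated 200 times):

```python
import random
from math import comb

def binom_tail(successes, n=100, p0=0.5):
    """Exact one-sided p-value: probability of at least `successes`
    heads in n tosses under the null hypothesis of a fair coin."""
    return sum(comb(n, k) for k in range(successes, n + 1)) * p0 ** n

def run_experiments(true_p, n_experiments=200, n=100, alpha=0.05):
    """Fraction of simulated experiments reaching significance."""
    hits = 0
    for _ in range(n_experiments):
        successes = sum(random.random() < true_p for _ in range(n))
        if binom_tail(successes, n) < alpha:
            hits += 1
    return hits / n_experiments

random.seed(0)  # fixed seed so the sketch is reproducible
print(run_experiments(0.7))  # real effect present: significance is rarely missed
print(run_experiments(0.5))  # null true: roughly 1 in 20 "significant" by chance
```

When the effect is real the experiment almost never fails to reach the 5 per cent. level; when the null is true, isolated “significant” results still turn up at about their appropriate frequency, which is Fisher’s point about the insufficiency of any single experiment.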

So you did one study? Do some more.

If one in twenty does not seem high enough odds, we may, if we prefer it, draw the line at one in fifty (the 2 per cent. point), or one in a hundred (the 1 per cent. point). Personally, the writer prefers to set a low standard of significance at the 5 per cent. point, and ignore entirely all results which fail to reach this level. A scientific fact should be regarded as experimentally established only if a properly designed experiment rarely fails to give this level of significance. The very high odds sometimes claimed for experimental results should usually be discounted, for inaccurate methods of estimating error have far more influence than has the particular standard of significance chosen. [504-5]