Tag: logic

Fisher on significance tests

In considering the appropriateness of any proposed experimental design, it is always needful to forecast all possible results of the experiment, and to have decided without ambiguity what interpretation shall be placed upon each one of them. Further, we must know by what argument this interpretation is to be sustained. …

It is open to the experimenter to be more or less exacting in respect of the smallness of the probability he would require before he would be willing to admit that his observations have demonstrated a positive result. It is obvious that an experiment would be useless of which no possible result would satisfy him. Thus, if he wishes to ignore results having probabilities as high as 1 in 20—the probabilities being of course reckoned from the hypothesis that the phenomenon to be demonstrated is in fact absent … . It is usual and convenient for the experimenters to take 5 per cent. as a standard level of significance, in the sense that they are prepared to ignore all results which fail to reach this standard, and, by this means to eliminate from further discussion the greater part of the fluctuations which chance causes have introduced into their experimental results. No such selection can eliminate the whole of the possible effects of chance coincidence, and if we accept this convenient convention, and agree that an event which would occur by chance only once in 70 trials is decidedly “significant”, in the statistical sense, we thereby admit that no isolated experiment, however significant in itself, can suffice for the experimental demonstration of any natural phenomenon; for the “one chance in a million” will undoubtedly occur, with no less and no more than its appropriate frequency, however surprised we may be that it should occur to us. In order to assert that a natural phenomenon is experimentally demonstrable we need, not an isolated record, but a reliable method of procedure. In relation to the test of significance we may say that a phenomenon is experimentally demonstrable when we know how to conduct an experiment which will rarely fail to give us a statistically significant result. [12-4]
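
Fisher’s closing point, that chance “significance” occurs with no more and no less than its appropriate frequency, is easy to simulate. Here is a minimal sketch (my own illustration, not Fisher’s; it assumes only that the p-value of an exact test is uniform on [0, 1] when the null hypothesis is true):

    import random

    # Simulate many experiments in which the phenomenon is in fact absent.
    # Under the null hypothesis an exact test's p-value is uniform on [0, 1],
    # so results "significant" at the 5 per cent level should appear in
    # about 1 in 20 of them.
    random.seed(1)
    alpha = 0.05
    trials = 100_000
    false_positives = sum(random.random() < alpha for _ in range(trials))
    print(f"{false_positives / trials:.3f} of null experiments reached p < {alpha}")
    # prints approximately 0.050: an isolated significant result proves
    # nothing by itself; a demonstrable phenomenon is one whose experiment
    # will rarely FAIL to reach significance.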

Weak statistical tests

The distinction between the strong and the weak use of significance tests is logical or epistemological; it is not a statistical issue. The weak use of significance tests asks merely whether the observations are attributable to “chance” (i.e., no relation exists) when a weak theory can only predict some sort of relation, but not what or how much. The strong use of significance tests asks whether observations differ significantly from the numerical values that a strong theory predicts, and it leads to the fourth figure of the syllogism—p ⊃ q, ~q, infer ~p—which is formally valid, the logician’s modus tollens (“destroying mode”). Psychologists should work hard to formulate theories that, even if somewhat weak, permit derivation of numerical point values or narrow ranges, yielding the possibility of modus tollens refutations. [422]
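
The schema Meehl appeals to can be checked formally. A minimal sketch in Lean 4 (the theorem name is mine):

    -- Modus tollens: from p → q and ¬q, validly infer ¬p.
    theorem modus_tollens (p q : Prop) (h : p → q) (hq : ¬q) : ¬p :=
      fun hp => hq (h hp)  -- assuming p would yield q, contradicting ¬q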

Induction, philosophy’s toughest zombie

Science is an exercise in inductive reasoning: we are making observations and trying to infer general rules from them. Induction can never be certain. In contrast, deductive reasoning is easier: you deduce what you would expect to observe if some general rule were true and then compare it with what you actually see. The problem is that, for a scientist, deductive arguments don’t directly answer the question that you want to ask.

Any objective question is subject to proof

Questions of ultimate ends are not amenable to direct proof. Whatever can be proved to be good, must be so by being shown to be a means to something admitted to be good without proof. The medical art is proved to be good by its conducing to health; but how is it possible to prove that health is good? … There is a larger meaning of the word proof, in which this question is as amenable to it as any other of the disputed questions of philosophy. The subject is within the cognisance of the rational faculty; and neither does that faculty deal with it solely in the way of intuition. Considerations may be presented capable of determining the intellect either to give or withhold its assent to the doctrine; and this is equivalent to proof. [ch. I, 157-8]

Man a machine?

The doctrine that men are machines, or robots, is a fairly old one. Its first clear and forceful formulation is due, it seems, to the title of a famous book by La Mettrie, Man a Machine [1747]; though the first writer to play with the idea of robots was Homer.

Yet machines are clearly not ends in themselves, however complicated they may be. They may be valuable because of their usefulness, or because of their rarity; and a certain specimen may be valuable because of its historical uniqueness. But machines become valueless if they do not have a rarity value: if there are too many of a kind we are prepared to pay to have them removed. On the other hand, we value human lives in spite of the problem of over-population, the gravest of all social problems of our time. We respect even the life of a murderer.

It must be admitted that, after two world wars, and under the threat of the new means for mass destruction, there has been a frightening deterioration of respect for human life in some strata of our society. This makes it particularly urgent to reaffirm in what follows a view from which we have, I think, no reason to deviate: the view that men are ends in themselves and not “just” machines.

We can divide those who uphold the doctrine that men are machines, or a similar doctrine, into two categories: those who deny the existence of mental events, of personal experiences, or of consciousness; or who say perhaps that the question whether such experiences exist is of minor importance and may be safely left open; and those who admit the existence of mental events, but assert that they are “epiphenomena” – that everything can be explained without them, since the material world is causally closed. But whether they belong to the one category or the other, both must neglect, it seems to me, the reality of human suffering, and the significance of the fight against unnecessary suffering.

Thus I regard the doctrine that men are machines not only as mistaken, but as prone to undermine a humanist ethics. However, this very reason makes it all the more necessary to stress that the great defenders of that doctrine were all upholders of humanist ethics. From Democritus and Lucretius to Herbert Feigl and Anthony Quinton, materialist philosophers have usually been humanists and fighters for freedom and enlightenment; and, sad to say, their opponents have sometimes been the opposite. Thus just because I regard materialism as mistaken – just because I do not believe that men are machines or automata – I wish to stress the great and indeed vital role which the materialist philosophy has played in the evolution of human thought, and of humanist ethics. [4-5]

Popper on Duhem–Quine’s naive falsificationism

The falsifying mode of inference here referred to—the way in which the falsification of a conclusion entails the falsification of the system from which it is derived—is the modus tollens of classical logic. It may be described as follows:

Let p be a conclusion of a system t of statements which may consist of theories and initial conditions (for the sake of simplicity I will not distinguish between them). We may then symbolize the relation of derivability (analytical implication) of p from t by ‘t ➙ p’ which may be read: ‘p follows from t’. Assume p to be false, which we may write ‘¬p’, to be read ‘not-p’. Given the relation of deducibility, t ➙ p, and the assumption ¬p, we can then infer ¬t (read ‘not-t’); that is, we regard t as falsified. If we denote the conjunction (simultaneous assertion) of two statements by putting a point between the symbols standing for them, we may also write the falsifying inference thus: ((t ➙ p)·¬p) ➙ ¬t, or in words: ‘If p is derivable from t, and if p is false, then t also is false’.
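
The falsifying schema itself can be verified mechanically. A brute-force sketch (my own illustration, not Popper’s) checks that ((t ➙ p)·¬p) ➙ ¬t comes out true under all four assignments of truth values to t and p:

    from itertools import product

    def implies(a, b):
        # material implication: false only when a is true and b is false
        return (not a) or b

    # ((t -> p) and not-p) -> not-t must hold in every row of the truth table
    assert all(
        implies(implies(t, p) and not p, not t)
        for t, p in product([False, True], repeat=2)
    )
    print("((t ➙ p)·¬p) ➙ ¬t is a tautology")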

By means of this mode of inference we falsify the whole system (the theory as well as the initial conditions) which was required for the deduction of the statement p, i.e. of the falsified statement. Thus it cannot be asserted of any one statement of the system that it is, or is not, specifically upset by the falsification. Only if p is independent of some part of the system can we say that this part is not involved in the falsification.* With this is connected the following possibility: we may, in some cases, perhaps in consideration of the levels of universality, attribute the falsification to some definite hypothesis—for instance to a newly introduced hypothesis. This may happen if a well-corroborated theory, and one which continues to be further corroborated, has been deductively explained by a new hypothesis of a higher level. The attempt will have to be made to test this new hypothesis by means of some of its consequences which have not yet been tested. If any of these are falsified, then we may well attribute the falsification to the new hypothesis alone. We shall then seek, in its stead, other high-level generalizations, but we shall not feel obliged to regard the old system, of lesser generality, as having been falsified.

* Thus we cannot at first know which among the various statements of the remaining sub-system t′ (of which p is not independent) we are to blame for the falsity of p; which of these statements we have to alter, and which we should retain. (I am not here discussing interchangeable statements.) It is often only the scientific instinct of the investigator (influenced, of course, by the results of testing and re-testing) that makes him guess which statements of t′ he should regard as innocuous, and which he should regard as being in need of modification. Yet it is worth remembering that it is often the modification of what we are inclined to regard as obviously innocuous (because of its complete agreement with our normal habits of thought) which may produce a decisive advance. A notable example of this is Einstein’s modification of the concept of simultaneity. [55-6]

Slaying the hydra of verisimilitude

In my view, trying to measure verisimilitude by counting a theory’s true or false consequences always missed the point. Every false theory has the same number (if we can really talk this way) of true and false consequences as every other. This is a consequence of the truth-functional nature of our logical connectives and the truth-functional definition of validity. But some false statements are still closer to the truth than others. [411]
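
The point can be made concrete in a toy propositional setting. The sketch below (my own construction, not the author’s; two atoms, hence four possible worlds and sixteen propositions) checks by brute force that the consequences of any theory false at the designated actual world split exactly evenly between true and false, so a count of true consequences against false ones can never distinguish a false theory that is close to the truth from one that is far from it:

    from itertools import chain, combinations, product

    # Identify each proposition with the set of worlds at which it is true;
    # a theory's consequences are the supersets of its model set.
    worlds = list(product([False, True], repeat=2))
    actual = worlds[0]  # designate one world as the actual one

    def powerset(xs):
        xs = list(xs)
        return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

    for models in map(set, powerset(worlds)):
        if actual in models:
            continue  # keep only theories that are in fact false
        consequences = [set(s) for s in powerset(worlds) if models <= set(s)]
        true_cs = sum(actual in c for c in consequences)
        assert true_cs == len(consequences) - true_cs  # exactly half are true
    print("each false theory's consequences split 50/50 between true and false")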

The logic of discovering our errors

If the purpose of an argument is to prove its conclusion, then it is difficult to see the point of falsifiability. For deductive arguments cannot prove their conclusions any more than inductive ones can.

But if the purpose of the argument is to force us to choose, then the point of falsifiability becomes clear.

Deductive arguments force us to question, and to reexamine, and, ultimately, to deny their premises if we want to deny their conclusions. Inductive arguments simply do not.

This is the real meaning of Popper’s Logic of Scientific Discovery—and it is the reason, perhaps, why so many readers have misunderstood its title and its intent. The logic of discovery is not the logic of discovering theories, and it is not the logic of discovering that they are true.

Neither deduction nor induction can serve as a logic for that.

The logic of discovery is the logic of discovering our errors. We simply cannot deny the conclusion of a deductive argument without discovering that we were in error about its premises. Modus tollens can help us to do this if we use it to set problems for our theories. But while inductive arguments may persuade or induce us to believe things, they cannot help us discover that we are in error about their premises. [113-4]

Inductive guesswork

Popper used to call a guess ‘a guess’. But inductivists prefer to call a guess ‘the conclusion of an inductive argument’. This, no doubt, adds an air of authority to it. But the fact that the ‘conclusion’ of an inductive argument may be false even if all of its premises are true means that it is a guess, regardless of what we may or may not like to call it. [9]

The untenability of induction

My own view is that the various difficulties of inductive logic here sketched are insurmountable. So also, I fear, are those inherent in the doctrine, so widely current today, that inductive inference, although not ‘strictly valid’, can attain some degree of ‘reliability’ or of ‘probability’. According to this doctrine, inductive inferences are ‘probable inferences’. ‘We have described’, says Reichenbach, ‘the principle of induction as the means whereby science decides upon truth. To be more exact, we should say that it serves to decide upon probability. For it is not given to science to reach either truth or falsity … but scientific statements can only attain continuous degrees of probability whose unattainable upper and lower limits are truth and falsity’.

At this stage I can disregard the fact that the believers in inductive logic entertain an idea of probability that I shall later reject as highly unsuitable for their own purposes (see section 80, below). I can do so because the difficulties mentioned are not even touched by an appeal to probability. For if a certain degree of probability is to be assigned to statements based on inductive inference, then this will have to be justified by invoking a new principle of induction, appropriately modified. And this new principle in its turn will have to be justified, and so on. Nothing is gained, moreover, if the principle of induction, in its turn, is taken not as ‘true’ but only as ‘probable’. In short, like every other form of inductive logic, the logic of probable inference, or ‘probability logic’, leads either to an infinite regress, or to the doctrine of apriorism. [6]