Tag: falsification

Scientific methodology (German edition)

3. The deductive testing of theories. The method of critically testing theories, and of selecting them, is in our view always the following: from a tentatively adopted, as yet unjustified anticipation (an idea, a hypothesis, a theoretical system), conclusions are derived by logical deduction; these conclusions are then compared with one another and with other statements, by determining what logical relations (e.g. equivalence, derivability, compatibility, contradiction) hold between them.

In particular, four lines along which the testing may be carried out can be distinguished: the logical comparison of the conclusions among themselves, by which the internal consistency of the system is examined; an investigation of the logical form of the theory, with the aim of determining whether it has the character of an empirical-scientific theory, i.e. whether it is not, for example, tautological; the comparison with other theories, chiefly to determine whether the theory under test would count as a scientific advance should it survive the various tests; and finally, the testing of the theory by way of the “empirical application” of the conclusions derived from it.

The purpose of this last kind of test is to find out whether the new claims of the theory also prove themselves in practice, say in scientific experiments or in technological application. Here too the testing procedure is deductive: from the system, with the help of statements already accepted, singular conclusions (“predictions”) that are as easily testable or applicable as possible are deduced, and from these are selected, in particular, those which are not derivable from known systems, or which contradict them. A decision is then reached on these (and other) conclusions in connection with practical application, experiments, and so on. If the decision is positive, that is, if the singular conclusions are accepted, or verified, then the system has provisionally passed its test; we have no occasion to discard it. If the decision is negative, that is, if the conclusions are falsified, then their falsification also falsifies the system from which they were deduced.

A positive decision can support the system only provisionally; it can always be overthrown by subsequent negative decisions. So long as a system withstands detailed and severe deductive tests and is not superseded by the progressive development of science, we say that it has proved itself, or that it is corroborated.

No elements of inductive logic appear in the procedure outlined here; we never argue from the truth of singular statements to the truth of theories. Nor can theories ever be established as “true”, or even as merely “probable”, by means of their verified conclusions.

Vague induction

It is clear that, if one uses the word “induction” widely and vaguely enough, any tentative acceptance of the result of any investigation can be called “induction”. In that sense, but (I must emphasize) in no other, Professor Putnam is quite right to detect an “inductivist quaver” in one of the passages he quotes (section 3). But in general he has not read, or if read not understood, what I have written … . [994]

The real Popper

It is worth noting that even in Lakatos’s own “methodology of scientific research programmes” (“MSRP”)—a type of sophisticated methodological falsificationism that Lakatos presents as the crowning synthesis of the “thesis”, dogmatic falsificationism, and the “antithesis”, naive methodological falsificationism—the test statements and interpretative theories are still accepted on the basis of a research program. So Lakatos gives a conventionalist solution to the problem of how basic statements are selected, both in his interpretation of Popper’s methodology and in his own methodology.

This interpretation of Popper is not correct, and the suggested conventionalist solution to the problem of how test statements are accepted is not satisfying. Popper’s criticist solution, which Lakatos has not correctly understood, is much better and is also a solution that allows us to understand the history of science better than Lakatos’s oversophisticated combination of conventionalism and falsificationism. Lakatos maintains that sophisticated methodological falsificationism combines the best elements of voluntarism, pragmatism, and the realist theories of empirical growth. Critical falsificationism is better still, among other reasons because it avoids that kind of eclecticism. And for those interested in the history of ideas, it might be worthwhile to know that the real Popper is neither a dogmatic falsificationist nor a naive or sophisticated methodological falsificationist. Not only Popper₀ but also Popper₁ and Popper₂ are myths created by a misunderstanding of Popper’s critical falsificationism. [53]

Falsification as conditional disproof

Kuhn asked what falsification is, if not conclusive disproof. The answer is that falsification is a conditional disproof: conditional on the truth of the test statements used (and in some cases also on the truth of some auxiliary hypotheses). Feyerabend’s example of the alleged falsification of the Copernican system by naked-eye observations shows this conditional character of falsifications quite well.

Does this cause any logical or methodological problems? The logical situation is quite clear and unproblematic. The methodological situation is only problematic for those who assume that there are infallible test statements. But as Kuhn said, Popper stresses that test statements are fallible. [56]
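The conditional character described above can be spelled out in propositional terms. The following sketch is an editorial illustration, not Andersson’s own notation (the variable names t for theory, a for auxiliary and test statements, p for prediction are assumptions): a brute-force check over all truth-value assignments confirms that from (t ∧ a) → p together with ¬p only ¬(t ∧ a), i.e. ¬t ∨ ¬a, follows, and not ¬t on its own.

```python
from itertools import product

def implies(x: bool, y: bool) -> bool:
    """Material implication: x -> y."""
    return (not x) or y

def valid(formula) -> bool:
    """True if formula(t, a, p) holds under every truth-value assignment."""
    return all(formula(t, a, p) for t, a, p in product([True, False], repeat=3))

# Premises of a falsifying test: (t and a) -> p, together with not-p.
premises = lambda t, a, p: implies(t and a, p) and not p

# What validly follows is only the disjunction not-t or not-a ...
print(valid(lambda t, a, p: implies(premises(t, a, p), (not t) or (not a))))  # True

# ... but not the falsity of the theory t by itself: the disproof of t
# is conditional on the truth of the auxiliary and test statements a.
print(valid(lambda t, a, p: implies(premises(t, a, p), not t)))  # False
```

The counterexample to the second check (t true, a false, p false) is exactly the Feyerabend case: the theory may be true while an interpretative or auxiliary assumption is at fault.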

More falsificationism strawmen going up in flames

A second familiar approach from the same period is Karl Popper’s ‘falsificationist’ criterion, which fares no better. Apart from the fact that it leaves ambiguous the scientific status of virtually every singular existential statement, however well supported (e.g., the claim that there are atoms, that there is a planet closer to the sun than the Earth, that there is a missing link), it has the untoward consequence of countenancing as ‘scientific’ every crank claim which makes ascertainably false assertions. Thus flat Earthers, biblical creationists, proponents of laetrile or orgone boxes, Uri Geller devotees, Bermuda Triangulators, circle squarers, Lysenkoists, charioteers of the gods, perpetuum mobile builders, Big Foot searchers, Loch Nessians, faith healers, polywater dabblers, Rosicrucians, the-world-is-about-to-enders, primal screamers, water diviners, magicians, and astrologers all turn out to be scientific on Popper’s criterion – just so long as they are prepared to indicate some observation, however improbable, which (if it came to pass) would cause them to change their minds. [121]

Popper on Duhem–Quine’s naive falsificationism

The falsifying mode of inference here referred to—the way in which the falsification of a conclusion entails the falsification of the system from which it is derived—is the modus tollens of classical logic. It may be described as follows:

Let p be a conclusion of a system t of statements which may consist of theories and initial conditions (for the sake of simplicity I will not distinguish between them). We may then symbolize the relation of derivability (analytical implication) of p from t by ‘t ➙ p’, which may be read: ‘p follows from t’. Assume p to be false, which we may write ‘¬p’, to be read ‘not-p’. Given the relation of deducibility, t ➙ p, and the assumption ¬p, we can then infer ¬t (read ‘not-t’); that is, we regard t as falsified. If we denote the conjunction (simultaneous assertion) of two statements by putting a point between the symbols standing for them, we may also write the falsifying inference thus: ((t ➙ p).¬p) ➙ ¬t, or in words: ‘If p is derivable from t, and if p is false, then t also is false’.

By means of this mode of inference we falsify the whole system (the theory as well as the initial conditions) which was required for the deduction of the statement p, i.e. of the falsified statement. Thus it cannot be asserted of any one statement of the system that it is, or is not, specifically upset by the falsification. Only if p is independent of some part of the system can we say that this part is not involved in the falsification.* With this is connected the following possibility: we may, in some cases, perhaps in consideration of the levels of universality, attribute the falsification to some definite hypothesis—for instance to a newly introduced hypothesis. This may happen if a well-corroborated theory, and one which continues to be further corroborated, has been deductively explained by a new hypothesis of a higher level. The attempt will have to be made to test this new hypothesis by means of some of its consequences which have not yet been tested. If any of these are falsified, then we may well attribute the falsification to the new hypothesis alone. We shall then seek, in its stead, other high-level generalizations, but we shall not feel obliged to regard the old system, of lesser generality, as having been falsified.

* Thus we cannot at first know which among the various statements of the remaining sub-system t′ (of which p is not independent) we are to blame for the falsity of p; which of these statements we have to alter, and which we should retain. (I am not here discussing interchangeable statements.) It is often only the scientific instinct of the investigator (influenced, of course, by the results of testing and re-testing) that makes him guess which statements of t′ he should regard as innocuous, and which he should regard as being in need of modification. Yet it is worth remembering that it is often the modification of what we are inclined to regard as obviously innocuous (because of its complete agreement with our normal habits of thought) which may produce a decisive advance. A notable example of this is Einstein’s modification of the concept of simultaneity. [55-6]
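The falsifying schema in the passage above can be checked mechanically. The sketch below is an editorial illustration, not part of the quoted text: it enumerates all truth-value assignments and confirms that ((t ➙ p).¬p) ➙ ¬t is a tautology, while the converse “verifying” schema ((t ➙ p).p) ➙ t is not. This is the logical asymmetry between falsification and verification that the surrounding passages rely on.

```python
from itertools import product

def implies(x: bool, y: bool) -> bool:
    """Material implication: x -> y."""
    return (not x) or y

def is_tautology(formula) -> bool:
    """True if formula(t, p) holds under every truth-value assignment."""
    return all(formula(t, p) for t, p in product([True, False], repeat=2))

# Modus tollens, Popper's falsifying inference: ((t -> p) and not-p) -> not-t.
modus_tollens = lambda t, p: implies(implies(t, p) and not p, not t)

# The converse "verifying" schema, affirming the consequent: ((t -> p) and p) -> t.
affirming_consequent = lambda t, p: implies(implies(t, p) and p, t)

print(is_tautology(modus_tollens))        # True: falsification is deductively valid
print(is_tautology(affirming_consequent)) # False: verification is not
```

The failing assignment for the second schema (t false, p true) shows why verified predictions can never establish the truth of the system they were deduced from.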

Kuhn on falsification

A very different approach to this whole network of problems has been developed by Karl R. Popper, who denies the existence of any verification procedures at all. Instead, he emphasizes the importance of falsification, i.e., of the test that, because its outcome is negative, necessitates the rejection of an established theory. Clearly, the role thus attributed to falsification is much like the one this essay assigns to anomalous experiences, i.e., to experiences that, by evoking crisis, prepare the way for a new theory. Nevertheless, anomalous experiences may not be identified with falsifying ones. Indeed, I doubt that the latter exist. As has repeatedly been emphasized before, no theory ever solves all the puzzles with which it is confronted at a given time; nor are the solutions already achieved often perfect. On the contrary, it is just the incompleteness and imperfection of the existing data-theory fit that, at any time, define many of the puzzles that characterize normal science. If any and every failure to fit were ground for theory rejection, all theories ought to be rejected at all times. On the other hand, if only severe failure to fit justifies theory rejection, then the Popperians will require some criterion of “improbability” or of “degree of falsification.” In developing one they will almost certainly encounter the same network of difficulties that has haunted the advocates of the various probabilistic verification theories.

Many of the preceding difficulties [Kuhn is referring to inductive reasoning] can be avoided by recognising that both of these prevalent and opposed views about the underlying logic of scientific enquiry have tried to compress two largely separate processes into one. Popper’s anomalous experience is important to science because it evokes competitors for an existing paradigm. But falsification, though it surely occurs, does not happen with, or simply because of, the emergence of an anomaly or falsifying instance. Instead, it is a subsequent and separate process that might equally well be called verification since it consists in the triumph of a new paradigm over the old one. [146-7]

Step-by-step approximations to truth

The degree of corroboration of two statements may not be comparable in all cases, any more than the degree of falsifiability: we cannot define a numerically calculable degree of corroboration, but can speak only roughly in terms of positive degree of corroboration, negative degrees of corroboration, and so forth. Yet we can lay down various rules; for instance the rule that we shall not continue to accord a positive degree of corroboration to a theory which has been falsified by an inter-subjectively testable experiment based upon a falsifying hypothesis. (We may, however, under certain circumstances accord a positive degree of corroboration to another theory, even though it follows a kindred line of thought. An example is Einstein’s photon theory, with its kinship to Newton’s corpuscular theory of light.) In general we regard an inter-subjectively testable falsification as final (provided it is well tested): this is the way in which the asymmetry between verification and falsification of theories makes itself felt. Each of these methodological points contributes in its own peculiar way to the historical development of science as a process of step by step approximations. [266-7]

The heart of falsification

We must clearly distinguish between falsifiability and falsification. We have introduced falsifiability solely as a criterion for the empirical character of a system of statements. As to falsification, special rules must be introduced which will determine under what conditions a system is to be regarded as falsified.

We say that a theory is falsified only if we have accepted basic statements which contradict it. This condition is necessary, but not sufficient; for we have seen that non-reproducible single occurrences are of no significance to science. Thus a few stray basic statements contradicting a theory will hardly induce us to reject it as falsified. We shall take it as falsified only if we discover a reproducible effect which refutes the theory. In other words, we only accept the falsification if a low-level empirical hypothesis which describes such an effect is proposed and corroborated. This kind of hypothesis may be called a falsifying hypothesis. The requirement that the falsifying hypothesis must be empirical, and so falsifiable, only means that it must stand in a certain logical relationship to possible basic statements; thus this requirement only concerns the logical form of the hypothesis. The rider that the hypothesis should be corroborated refers to tests which it ought to have passed—tests which confront it with accepted basic statements.

Thus the basic statements play two different rôles. On the one hand, we have used the system of all logically possible basic statements in order to obtain with its help the logical characterization for which we were looking—that of the form of empirical statements. On the other hand, the accepted basic statements are the basis for the corroboration of hypotheses. If accepted basic statements contradict a theory, then we take them as providing sufficient grounds for its falsification only if they corroborate a falsifying hypothesis at the same time. [66-7]

The testability of facts

Higher level empirical statements have always the character of hypotheses relative to the lower level statements deducible from them: they can be falsified by the falsification of these less universal statements. But in any hypothetical deductive system, these less universal statements are themselves still strictly universal statements, in the sense here understood. Thus they too must have the character of hypotheses—a fact which has often been overlooked in the case of lower-level universal statements.

I shall say even of some singular statements that they are hypothetical, seeing that conclusions may be derived from them (with the help of a theoretical system) such that the falsification of these conclusions may falsify the singular statements in question. [55]