Tag: empiricism

Inducing absolute truth

Universal generalisations are certainly testable, even if not provable, in the sense that it is always possible that the experiments we perform or the observations we make should turn out to falsify them. So the substitution of testability for provability allows universal generalisations to be included in science all right. Indeed Karl Popper has built a whole philosophy of science on the principle that what distinguishes science from non-science is its ‘falsifiability’.

This weakening of the empiricist requirements on science does not really solve the problem of induction. Even if the requirement of testability succeeds in picking out what people standardly and intuitively count as proper science, it leaves us with a problem of explaining why such proper science is a good thing. We have still been given no account of why success in past tests should be a good basis for accepting generalisations which predict the future. [21]

The production of unanimity

Adaptation to the power of progress furthers the progress of power, constantly renewing the degenerations which prove successful progress, not failed progress, to be its own antithesis. The curse of irresistible progress is irresistible regression.

This regression is not confined to the experience of the sensuous world, an experience tied to physical proximity, but also affects the autocratic intellect, which detaches itself from sensuous experience in order to subjugate it. The standardization of the intellectual function through which the mastery of the senses is accomplished, the acquiescence of thought to the production of unanimity, implies an impoverishment of thought no less than of experience; the separation of the two realms leaves both damaged. [28]

The still strong dogma of observationism

Bacon’s observationism and his hostility to all forms of theoretical thought were revolutionary, and were felt to be so. They became the battle cry of the new secularized religion of science, and its most cherished dogma. This dogma had an almost unbelievable influence upon both the practice and the theory of science, and this influence is still strong in our own day. [84]

Science begins with problems

Science begins with observation, says Bacon, and this saying is an integral part of the Baconian religion. It is still widely accepted, and still repeated ad nauseam in the introductions to even some of the best textbooks in the field of the physical and biological sciences.

I propose to replace this Baconian formula by another one.

Science, we may tentatively say, begins with theories, with prejudices, superstitions, and myths. Or rather, it begins when a myth is challenged and breaks down – that is, when some of our expectations are disappointed. But this means that science begins with problems, practical problems or theoretical problems. [95]

The death of induction

It is only after a long course of uniform experiments in any kind, that we attain a firm reliance and security with regard to a particular event. Now where is that process of reasoning which, from one instance, draws a conclusion so different from that which it infers from a hundred instances that are nowise different from that single one? This question I propose as much for the sake of information, as with an intention of raising difficulties. I cannot find, I cannot imagine any such reasoning. But I keep my mind still open to instruction, if any one will vouchsafe to bestow it on me.

… It is impossible, therefore, that any arguments from experience can prove this resemblance of the past to the future, since all these arguments are founded on the supposition of that resemblance. [21-2]

Making democracy more reflective

Here I shall be attempting to identify deliberative democratic methods for evoking more reflective preferences as inputs into the political process. Properly crafted deliberative processes can produce preferences which are more reflective, in the sense of being:

  • more empathetic with the plight of others;
  • more considered, and hence both better informed and more stable; and
  • more far-reaching in both time and space, taking fuller account of distant periods, distant peoples and different interests.

The key innovation I shall be offering is, in the first instance, a theoretical one. What is required is a new way of conceptualizing democratic deliberation—as something which occurs internally, within each individual’s head, and not exclusively or even primarily in an interpersonal setting. [7]

Enlightenment needs criticism

The scientific revolution was part of a wider intellectual revolution, the Enlightenment, which also brought progress in other fields, especially moral and political philosophy, and in the institutions of society. Unfortunately, the term ‘the Enlightenment’ is used by historians and philosophers to denote a variety of different trends, some of them violently opposed to each other. What I mean by it will emerge here as we go along. It is one of several aspects of ‘the beginning of infinity’, and is a theme of this book. But one thing that all conceptions of the Enlightenment agree on is that it was a rebellion, and specifically a rebellion against authority in regard to knowledge.

Rejecting authority in regard to knowledge was not just a matter of abstract analysis. It was a necessary condition for progress, because, before the Enlightenment, it was generally believed that everything important that was knowable had already been discovered, and was enshrined in authoritative sources such as ancient writings and traditional assumptions. Some of those sources did contain some genuine knowledge, but it was entrenched in the form of dogmas along with many falsehoods. So the situation was that all the sources from which it was generally believed knowledge came actually knew very little, and were mistaken about most of the things that they claimed to know. And therefore progress depended on learning how to reject their authority. This is why the Royal Society (one of the earliest scientific academies, founded in London in 1660) took as its motto ‘Nullius in verba’, which means something like ‘Take no one’s word for it.’

However, rebellion against authority cannot by itself be what made the difference. Authorities have been rejected many times in history, and only rarely has any lasting good come of it. The usual sequel has merely been that new authorities replaced the old. What was needed for the sustained, rapid growth of knowledge was a tradition of criticism. Before the Enlightenment, that was a very rare sort of tradition: usually the whole point of a tradition was to keep things the same.

Thus the Enlightenment was a revolution in how people sought knowledge: by trying not to rely on authority. That is the context in which empiricism – purporting to rely solely on the senses for knowledge – played such a salutary historical role, despite being fundamentally false and even authoritarian in its conception of how science works.

One consequence of this tradition of criticism was the emergence of a methodological rule that a scientific theory must be testable (though this was not made explicit at first). That is to say, the theory must make predictions which, if the theory were false, could be contradicted by the outcome of some possible observation. Thus, although scientific theories are not derived from experience, they can be tested by experience – by observation or experiment. [12-13]

Liberating science from authority

Empiricism never did achieve its aim of liberating science from authority. It denied the legitimacy of traditional authorities, and that was salutary. But unfortunately it did this by setting up two other false authorities: sensory experience and whatever fictitious process of ‘derivation’, such as induction, one imagines is used to extract theories from experience.

The misconception that knowledge needs authority to be genuine or reliable dates back to antiquity, and it still prevails. To this day, most courses in the philosophy of knowledge teach that knowledge is some form of justified, true belief, where ‘justified’ means designated as true (or at least ‘probable’) by reference to some authoritative source or touchstone of knowledge. Thus ‘how do we know … ?’ is transformed into ‘by what authority do we claim … ?’ The latter question is a chimera that may well have wasted more philosophers’ time and effort than any other idea. It converts the quest for truth into a quest for certainty (a feeling) or for endorsement (a social status). This misconception is called justificationism.

The opposing position – namely the recognition that there are no authoritative sources of knowledge, nor any reliable means of justifying ideas as being true or probable – is called fallibilism. To believers in the justified-true-belief theory of knowledge, this recognition is the occasion for despair or cynicism, because to them it means that knowledge is unattainable. But to those of us for whom creating knowledge means understanding better what is really there, and how it really behaves and why, fallibilism is part of the very means by which this is achieved. Fallibilists expect even their best and most fundamental explanations to contain misconceptions in addition to truth, and so they are predisposed to try to change them for the better. In contrast, the logic of justificationism is to seek (and typically, to believe that one has found) ways of securing ideas against change. Moreover, the logic of fallibilism is that one not only seeks to correct the misconceptions of the past, but hopes in the future to find and change mistaken ideas that no one today questions or finds problematic. So it is fallibilism, not mere rejection of authority, that is essential for the initiation of unlimited knowledge growth – the beginning of infinity. [8-9]