OHIO UNIVERSITY
HONG KONG PROGRAMME
PHIL 216: Philosophy of Science Survey (3) (2H)
Instructor: Dr. Giuseppe Mario Saccone
LECTURE 1: The relationship between science and philosophy
What is science?
Science is the study of the nature and behaviour of natural things and the knowledge that we
obtain about them through observation and experiments. The aim of science is the discovery
of general truths. Individual facts are critical of course; science is built up with facts as a
house may be built up with stones. But a mere collection of facts no more constitutes a
science than a collection of stones constitutes a house. Scientists seek to understand
phenomena, and to do this they seek to uncover the patterns in which phenomena occur, and
the systematic relations among them. Simply to know the facts is not enough; it is the task of
science to explain them. This, in turn, requires the theories of which Einstein wrote,
incorporating the natural laws that govern events, and the principles that underlie them. So a
scientific explanation is a theoretical account of some fact or event, always subject to
revision, that exhibits certain essential features: relevance, compatibility with previously
well-established hypotheses, predictive power, and simplicity. And the scientific method is a
set of techniques for solving problems involving the construction of preliminary hypotheses,
the testing of the consequences deduced, and the application of the theory thus confirmed to
further problems. In this way, the principle that the meaning of statements should be backed
up by evidence reflects the scientific approach.
What is philosophy?
Philosophy literally means love of wisdom, from the Greek words philia, meaning love or
friendship, and sophia, meaning wisdom. Philosophy is concerned basically with three areas:
epistemology (the study of knowledge), metaphysics (the study of the nature of reality), and
ethics (the study of morality). Philosophy may be regarded as a search for wisdom and
understanding; it is an evaluative discipline that, over time, has come to be seen as
concerned more with evaluating theories about facts than with the facts
themselves. In this sense, philosophy may be regarded as a
second order discipline, in contrast to first order disciplines which deal with empirical
subjects. In other words, philosophy is not so much concerned with revealing truth in the
manner of science, as with asking secondary questions about how knowledge is acquired and
about how understanding is expressed. Unlike the sciences, philosophy does not discover
new empirical facts, but instead reflects on the facts we are already familiar with, or those
given to us by the empirical sciences, to see what they lead to and how they all hang
together, and in doing that philosophy tries to discover the most fundamental, underlying
principles.
What is philosophy of science?
Generally speaking, the philosophy of science is that branch of philosophy that examines the
methods used by science (e.g. the ways in which hypotheses and laws are formulated from
evidence) and the grounds on which scientific claims about the world may be justified.
Whereas scientists tend to become more and more specialised in their interests, philosophers
generally stand back from the details of particular research programmes and concentrate on
making sense of the overall principles and establishing how they relate together to give an
overall view of the world.
The key feature of much philosophy of science concerns the nature of scientific theories –
how it is that we can move from observation of natural phenomena to producing general
statements about the world. And, of course, the crucial questions here concern the criteria by
which one can say that a theory is correct, how one can judge between different theories that
purport to explain the same phenomenon and how theories develop and change as science
progresses.
And once we start to look at theories, we are dealing with all the usual philosophical
problems of language and what and how we can know that something is the case. Thus
philosophy of science relates to other major areas of philosophy: metaphysics (the structures
of reality), epistemology (the theory of knowledge) and language (in order to explore the
nature of scientific claims and the logic by which they are formulated). In doing this, it is
intended that the philosophy of science should not act as some kind of intellectual
policeman, but should play an active part in assisting science by clarifying the implications
of its practice.
The relationship between science and philosophy
There are at least three different ways in which we can think of the relationship between
philosophy and science:
1 Science and philosophy can be seen as dealing with different subject matter. Science gives
information about the world; philosophy deals with norms, values and meanings. Philosophy
can clarify the language science uses to make its claims, can check the logic by which those
claims are justified and can explore the implications of the scientific enterprise. This has
been a widely held view and it gives philosophy and science very different roles.
2 It can be argued that you cannot draw a clear distinction between statements about fact
(‘synthetic’ statements, about which science has its say) and statements about meaning
(‘analytic’ statements, which philosophy can show to be true by definition).
Statements about meaning may often be reduced to the ‘naming’ of things and do not make
sense without some reference to the external world. So, philosophy may be an extension of
the scientific approach, dealing with questions about reality based on the finding of science.
Science is full of concepts and these may be revised or explained in different ways. Science
is not simply the reporting of facts, but the arguing out of theories; hence we should not
expect to draw a clear line between science and philosophy. (This view was developed by the
modern American philosopher W.V. Quine in an important article, published in 1951,
entitled ‘Two Dogmas of Empiricism’.)
3 Philosophy can describe reality, and can come to non-scientific truths about the way the
world is. These truths do not depend on science, but are equally valid. (This reflects an
approach taken by philosophers who are particularly concerned with the nature of language
and how it relates to experience and empirical data, including Moore, Wittgenstein, Austin,
Strawson and Searle.)
The key questions here are:
1 Are there aspects of reality with which science cannot deal, but philosophy can?
2 If philosophy and science deal with the same subject matter, in what way does philosophy
add to what science is able to tell us?
And then, of course, one could go on to ask if you can actually do science without having
some sort of philosophy. Is physics possible without metaphysics of some sort or language or
logic or all the concepts and presuppositions of the language that the scientist uses to explain
what he or she finds?
Science can never be absolutely ‘pure’. It can never claim to be totally free from the
influences of thought, language and culture within which it takes place. In fact, science
cannot even be free from economic and political structures. If a scientist wants funding for
his or her research, it is necessary to show that it has some value, that it is in response to
some need or that it may potentially give economic benefit. So the philosophy of science
needs to be aware of, and point to, those influences. Scientific evidence or theories are
seldom unambiguous; and those who fund research do so with specific questions and goals in
mind, goals that cannot but influence the way in which that research is conducted.
But apart from all this, there is the more general function of philosophy, which is to analyse
and clarify concepts, to examine ways of argument and to show the presuppositions, logic
and validity of arguments. This is what philosophy does within any sphere – whether we are
considering the philosophy of mind, religion or language. The main point at issue is whether
philosophy also contributes directly to the knowledge of reality. For some time, during the
middle years of the 20th century, it was assumed that the principal – indeed the only – role of
philosophy was clarification. Since then there has been a broadening out of its function.
Topics
- Explain, in your own words, what is science, what is philosophy, what is philosophy of
science.
- Discuss the relationship and the reciprocal roles of science and philosophy of science.
LECTURE 2: Science as an intellectual activity
There is no institution in the modern world more prestigious than science.
Objections to science and scientific research tend to be partial, to some aspects of the
application of scientific knowledge, leaving unquestioned most of its applications. They also
tend to be (in the bad sense) theoretical, affecting the way people talk rather than the way
they actually live.
At least on the face of it, to some significant degree, science does cut through political
ideology, because its theories are about nature, and made true or false by a non-partisan
nature, whatever the race or beliefs of their inventor, and however they conform or fail to
conform to political or religious opinion. In a world in which technological success is crucial
to any regime, no sane leader is going to jeopardize his or her chances by openly interfering
with scientific research or its applications on ideological grounds, except in cases seen as
exceptions rather than normal practice.
However, not everything one finds in writings critical of ‘science’ or ‘the scientific
mentality’ is completely misguided. There are certainly areas of human life – the most
important areas, in fact – about which science as such can have nothing to tell us, and where
the application of methods analogous to those of science can only be harmful. But because of
the importance of science and of these questions it is important to be balanced and honest in
what one says about science, and to recognize both our dependence on it and its very real
intellectual and moral merits. For if true knowledge is growing in science, this means that the
theories of science must be giving us more and more truths about the world.
Growth of Knowledge
In a perfectly obvious sense, over the last four hundred years or so there has been progress in
science. Measurement of physical quantities becomes more precise, previously unknown
particles and substances are discovered, new effects are produced and applied.
There is a striking contrast here between the development of modern science and the arts. No
one would say of a work of music or literature that it was better than an earlier work just
because it was later. In contrast to the development of theories in modern science, a later
masterpiece in a given artistic genre is not thereby better than an earlier one, nor does it
necessarily attain the aim of the genre better, or anything of that sort.
The case with scientific theories, though, is quite different. Here we are able to specify a
clear target at which all theories aim, and we often have confidence that theory A has got
closer to the target than theory B. The aim might be characterized as discovering the truth
about the natural world, and when we have theories which aim to describe the bits of the
natural world we can often say that a later theory is better than an earlier one. Thus, Copernicus’s
heliocentric picture of the universe was better than Aristotle’s geocentric picture, and
Newton provided a better account of the solar system and the universe than either.
Moreover, we can say that in literature we know far more than our predecessors because
what we know is their work. Growth of knowledge in science is not at all like that. Most
workers in a scientific field do not know the history of their field in any depth or detail. They
do not have to know it, because the history of science will consist largely of theories that
have been discarded, and which are regarded as giving far less true information about the
world than their successors. The case is quite different with works of art and literature. Past
writers are part of the soil and tradition in which we live, and we deepen and refresh our
understanding both of ourselves and of art by returning to them and deepening our
acquaintance with them.
Objectivity and the external world
The reason why in doing science we have no need to return to past science is that the
theories of science are not about human endeavour or human expressiveness. Human self-
expression and understanding is a cumulative, historical process in which where we are now
and what we now think of ourselves is rooted in the forms of life and expression developed
in the past, and will always involve some coming to terms with our history and our past. But
a scientific theory will, by contrast, be dealing with a world independent of human history
and human intervention. The truths science attempts to reveal about atoms and the solar
system and even about microbes and bacteria would still be true even if human beings had
never existed. As we have noted, it is an impartial, ahistorical nature that decrees the
truth or falsity of scientific theories, and it does so without regard to religious or political
rectitude.
This brings us to one of the distinctive features of scientific activity, which morally and
humanly is one of its great strengths. The impartiality of nature to our feelings, beliefs, and
desires means that the work of testing and developing scientific theories is insensitive to the
ideological background of individual scientists. A scientific theory will characteristically
attempt to explain some natural phenomena by producing some general formula or theory
covering all the phenomena of that particular type. From this general formula, it will be
possible to predict how future phenomena in the class in question will turn out. Whether they
do or not will depend on nature rather than on men, and any scientist can observe whether
they do or not, regardless of his other beliefs.
The case is quite otherwise with some of the grand theories of psychology and the social
sciences, where critics are sometimes told that their criticisms are invalid because their
observations are distorted by their being sexually repressed (as in the case of Freudianism) or
because they are not identifying themselves with the proletariat (as in the case of Marxism).
But, because of the nature of the enterprise, the scientific community is non-sectarian and its
work cuts across all sorts of human divisions. There is no such thing as British science, or
Catholic science, or Communist science, though there are Britons, Catholics, and
Communists who are scientists, and who should, as scientists, be able to communicate fully
with each other. The ideological or religious background of a scientist becomes important
only when, as with a doctrinaire Marxist-Leninist like Lysenko or some fundamentalist
Christians, non-scientific beliefs make disinterested scientific enquiry impossible.
Prediction and Explanation
It is often pointed out that the theories of science characteristically take the form of general
mathematical formulae covering a particular range of types of event, from which it is
possible to deduce predictions of specific events. Newton’s laws, for example, give general
formulae concerning the motions and mutual attraction of heavy bodies, from
which we can predict such things as solar eclipses. From the standpoint of modern science,
there is a close connection between the notions of prediction and explanation. If you can
produce general formulae allowing you to make mathematically precise predictions of a class
of specific states of affairs, you will generally have gone a good way to providing an
explanation of those states of affairs.
One reason for not saying here that we have always gone some way to producing an
explanation when we are able to make predictions on the basis of general formulae is that
there are cases discussed in the philosophical literature in which one is enabled to produce a
precise prediction of states of affairs on the basis of a general theory without – it is alleged –
being tempted to say that one has any sort of explanation before one. Thus, for example, by
invoking Pythagoras’s theorem, one can predict the distance of a mouse from an owl, when
all we knew was that the mouse was four feet from a three-foot flag-pole on top of which
was an owl; but, it is said, one would not want to say that the theorem explained the distance
of the mouse from the owl. Against this example it might be said that there was no genuine
prediction here, in the sense of an inference from a past state of affairs to a future one, as
opposed to a move from a state of past ignorance to one of future knowledge. It is not clear,
though, that all scientific explanations do involve predictions from past states of affairs to
future ones, rather than predictions about what one will find on the basis of existing
knowledge, for this latter type of reasoning is involved when people deduce conclusions
concerning the nature of the big bang from their cosmological theories and their knowledge
of the current state of the universe. The predictions by which one tests such speculation may
well be predictions about what one will find when one probes traces of past events.
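The owl-and-mouse case discussed above is simply the 3-4-5 right triangle. A minimal sketch of the calculation, using the distances given in the example:

```python
import math

# Pythagoras's theorem applied to the owl-and-mouse example:
# the mouse is 4 feet from the base of a 3-foot flagpole,
# and the owl is perched on top of the pole.
pole_height = 3.0
mouse_to_pole_base = 4.0

# The owl-to-mouse distance is the hypotenuse of the right triangle.
owl_to_mouse = math.hypot(pole_height, mouse_to_pole_base)
print(owl_to_mouse)  # 5.0
```

The formula lets us deduce the unknown distance with complete precision, yet, as the text notes, few would say the theorem *explains* why the mouse is five feet from the owl.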
However, given that we are prepared to work with a concept of prediction which is wide
enough to encompass the prediction and discovery of as yet unknown facts, including facts
about the past, it is certainly the case that we now expect scientific explanations to have
predictive power. We can say this even though there may be cases, like that involving the
Pythagoras theorem, when we can make predictions, or at least deduce as yet unknown facts,
on the basis of general theories, without wanting to speak of an explanation of those facts.
The reason why many criticize Freudians and Marxists for being unscientific is precisely
because their theories either lead to no specific predictions at all or to predictions that are
false. Making predictions on the basis of one’s theories is, then, a necessary but not
sufficient condition for a genuine scientific explanation.
The notion of a scientific explanation was not always linked so closely to its
mathematical and predictive power. In the science associated with Aristotle and his
followers, giving an explanation of a phenomenon consisted in delineating its essence,
or essential properties, and in showing why, in order to fulfil its function or nature, it
had to have those properties. Fire rose, for example, in order that it should reach its
natural resting-place, which was taken to be a spherical shell just inside the orbit of the
moon. The essence of fire, being a light body, was to rise. It does so in order to fulfil its
nature.
From the modern scientific viewpoint there are at least two things wrong with this
‘essentialist’ type of explanation. In the first place, we have no justification for imputing
purposes to natural phenomena like fire or planets or heavy bodies. Their activity is
conditioned by the forces that act upon them, their underlying structure, and the interaction
of the two. They do not have any ulterior purposes, or essential nature they are trying to
fulfil. Secondly, there is nothing in a typical Aristotelian explanation about precise quantities
or measurements. They give us reasons (of a sort) for why things happen, but not the precise
amounts or distances or times involved. And these precise measurements are crucial for
modern science, because they are required for the formulation and application of its theories.
It is easy to see why the shift occurred from Aristotelian essentialist explanations to the
mathematical-predictive explanations of modern science. If you want to control and
manipulate phenomena, then what you need to know are the precise conditions in which
effects of a given sort occur. If you are working with a piece of metal, you want to know just
how much it will expand under given degrees of heat. You do not want to be told that its
expansion is due to the fact that it has to expand in order to fulfil its nature. And, as we shall
see in the next chapter, modern science is very much about controlling nature, hence its
tendency to elide prediction and explanation, and the reason why its predictions will
characteristically, if not universally, be predictions about states of affairs which have not yet
happened.
Yet, even at this point, one might feel that there is something to be said for a more meaty
type of explanation than appears to be given in simply producing formulae for prediction.
Newton himself gave expression to this feeling when, at the end of his Mathematical
Principles, he said that while he had demonstrated the reality of gravity and its effects – by
precise mathematical methods, we would stress – he had not yet been able to explain the
cause of these effects. It is as if a purely mathematical correlation of events, saying, for
example, that the gravitational force on such and such an object will be so and so in such and
such circumstances, stays too much on the surface of things, and fails to give us insight into
the underlying structure of gravitational phenomena or of the essence of gravity. In other
words, we can say that while it needs to be seriously considered whether a full scientific
explanation is more than a device for predicting effects in the natural world, there are at the
very least very convincing reasons for thinking that it must be at least that.
Topics
Explain in your own words how science is to be understood as an intellectual activity.
Compare the development of modern science and the arts. What do you think about it?
How do you understand the role of the relationship between prediction and explanation of
phenomena in science?
Compare modern and Aristotelian science. What are the differences and similarities between
the two?
LECTURE 3: Induction
When Galileo argued in favour of the Copernican view of the universe, in which the Earth
revolved around the Sun rather than vice versa, his work was challenged by the more
conservative thinkers of his day, not because his observations or calculations were found to
be wrong, but because his argument was based on those observations and calculations,
rather than on a theoretical understanding of the principles that should govern a perfectly
ordered universe.
Galileo struggled against a background of religious authority which gave Aristotelian ideas
of perfection and purpose priority over observations and experimental evidence. He
performed experiments to show that Aristotelian theory was wrong. In other words, the
earlier medieval system of thought was deductive – it deduced what should happen from its
ideas, in contrast to Galileo’s inductive method of getting to a theory from observations,
experiments and calculations. This inductive method is a key feature in the establishment of
the scientific method of gaining knowledge.
The other key difference between the experiments and observations carried out by Galileo
and the older Aristotelian view of reality was that Galileo simply looked at what happened,
not at why it happened.
Most significantly, Aristotle argued that a thing had four different causes:
1 Its material cause is the physical substance out of which it is made.
2 The formal cause is its nature, shape or design – that which distinguishes a statue from the
material block of marble from which it has been sculpted.
3 Its efficient cause is that which brought it about – our usual sense of the word ‘cause’.
4 Its final cause is its purpose or intention.
For Aristotle, all four causes were needed for a full description of an object. It was not
enough to say how it worked and what it was made of, but one needed also to give it some
purpose or meaning – not just ‘What is it?’ or ‘What is it like?’ but also ‘What brought it
about?’ and ‘What is it for?’
For Aristotle, everything had a potential and a goal: its ‘final cause’. Broadly, this implied
that things had a purpose, related to the future, rather than being totally defined by the
efficient causes that brought them about in the first place.
There are two important things to recognize in terms of Aristotle and the development of
science. First, his authority was such that it was very difficult to challenge his views, as we
see happening later in connection with the work of Copernicus and Galileo. But second, the
development of modern science, from about the 17th century onwards, was based on a view
of the nature of reality in which efficient causation dominated over final causation. In other
words, the Aristotelian world where things happened in order to achieve a goal was replaced
by one in which they happened because they were part of a mechanism that determined their
every move. The shift is clear and very important for the philosophy of science.
This was also a key feature of the work of Francis Bacon, who rejected Aristotle’s idea of
final causes and insisted that knowledge should be based on evidence. So Bacon in the
Novum Organum tells us that we should get rid of four types of ‘idols’ which have
dominated and distorted men’s minds, delaying the acquisition of true knowledge:
1 The idols of habit (i.e., idols of the tribe, or the tendency to see things in relation to us
rather than as they are in themselves: for Bacon, man is definitely not the measure of all
things, and we unthinkingly tend to impose on phenomena an order which is not there,
failing to realize that if we would command nature, we must first learn to obey her);
2 The idols of prejudice (or idols of the cave which are the predispositions of character and
learning with which different individuals approach the facts, rather than seeing them as they
really are);
3 The idols of conformity, or the idols of the market, which arise through the use of
language, as when we read back into nature conceptions which have arisen simply through
using words which actually stand for nothing (such as ‘Fortune, the Prime Mover, Planetary
Orbits, the Element of Fire, and like fictions’);
4 The idols of the theatre, which are due to the malign influence of philosophical systems on
our minds: they make people come to conclusions before they consult experience, and when
finally they do consult experience, having first determined the question according to their
will, they bend her into conformity with their decisions and axioms, so that they lead
her about like a captive in a procession.
Bacon’s insistence that one should accept evidence even where it did not conform to one’s
expectations marks a clear shift to what became established as scientific method.
Experience and knowledge
A crucial step in appreciating scientific method comes with recognizing, and attempting to
eliminate, those elements in what we see that come from our ways of seeing, rather than
from the external reality we are looking at.
The philosopher John Locke (1632-1704) argued that everything we know derives from
sense experience. When we perceive an object, we describe it as having certain qualities.
Locke divided these qualities into two categories:
Primary qualities belonged to the object itself and included its location, its dimensions and
its mass. He considered that these would remain true for the object no matter who perceived
it.
Secondary qualities depended upon the sense faculties of the person perceiving the object
and could vary with circumstances. Thus, for example, the ability to perceive colour, smell
and sound depends upon our senses; if the light changes, we see things as having different
colour.
[So Qualia is the term nowadays used for the basic ‘phenomenal qualities’ of experience –
the taste of something, its colour or texture, the sound of a piece of music, i.e., the
experience of something as having a particular colour, texture, sound. In many ways they are
the building blocks of mental life – the simple elements of experience. However, it is very
difficult to explain qualia, except in terms of other qualia or subjective experience as a
whole. Why should it be that photons entering the eyeball cause me to see this particular
thing? What is the relationship between the information reaching my brain and the
experience of seeing?
Qualia cause a problem for the functionalist approach to mind which stems from positivism.
If the mind is simply a processor that receives inputs and decides the appropriate responses
(which is crudely put, what functionalism claims) then how do we account for this whole
‘qualia’ level of conscious experience? Qualia do not appear as a function, but neither are
they physical.]
Science was therefore concerned with primary qualities. These it could measure, and they
seemed to be objective, as opposed to the more subjective secondary ones.
Imagine how different the world would be if examined only in terms of primary qualities.
Rather than colours, sounds and tastes, you would have information about dimensions.
Music would be a digital sequence or the pulsing of sound waves in the air. A sunset would
be information about wavelengths of light and the composition of the atmosphere.
In general, science deals with primary qualities. The personal encounter with the world,
taking in a multitude of experiences simultaneously, mixing personal interpretation and the
limitations of sense experience with whatever is encountered as external to the self, is the
stuff of the arts, not of science.
Science analyses, eliminates the irrelevant and the personal and finds the relationship
between the primary qualities of objects.
Setting aside the dominance of secondary qualities in experience, along with any sense of
purpose or goal, was essential for the development of scientific method – but it was not an
easy step to take. The mechanical world of Newtonian physics was a rather abstract and dull
place – far removed from the confusing richness of personal experience.
One thing that becomes clear the more we look at the way in which information is gathered
and the words and images used to describe it, is that there will always be a gap between
reality and description. Just as the world changes depending on whether we are mainly
concerned with primary or secondary qualities, so the picture and models we use to describe
it cannot really be said to be ‘true’, simply because there is no way to make a direct
comparison between the model and the reality to which it points. Our experience cannot be
unambiguous, because it depends on so many personal factors. Scientific method developed
in order to eliminate those personal factors and therefore to achieve knowledge based simply
on reason and evidence.
The recognition that we cannot simply observe and describe came to the fore in the 20th
century, particularly in terms of sub-atomic physics. It seemed impossible to disentangle
what was seen from the action of seeing it.
The philosopher David Hume (1711-1776) pointed out that scientific laws were only
summaries of what had been experienced so far. The more evidence confirmed them, the
greater their degrees of probability, but no amount of evidence could lead to the claim of
absolute certainty.
He argued that the wise man should always proportion his belief to the evidence available;
the more evidence in favour of something (or balanced in favour, where there are examples
to the contrary) the more likely it is to be true.
He also pointed out that, in assessing evidence, one should take into account the reliability of
witnesses and whether they had a particular interest in the evidence they give. Like Francis
Bacon, therefore, Hume sets out basic rules for the assessment of evidence, with the attempt
to remove all subjective factors or partiality and to achieve as objective a review of evidence
as is possible.
What Hume established (in his Enquiry Concerning Human Understanding, section 4) was
that no amount of evidence could, through the logic of induction, ever establish the absolute
validity of a claim. There is always scope for a counter-example, and therefore for the ‘law’
to fail.
This seemed to raise the most profound problems for science – since it cut away its most sure
foundations in experimental method.
With hindsight, that might seem a very reasonable conclusion to draw from the process of
gathering scientific evidence, but in Hume’s day – when scientific method was seen as
something of a replacement for Aristotle as a source of certainty in life – it was radical. It was
this apparent attack on the rational justification of scientific theories that later ‘awoke’ the
philosopher Kant from his dogmatic slumbers. He accepted the force of Hume’s challenge,
but could not bring himself to deny the towering achievements of Newton’s physics, which
appeared to spring from the certainty of established laws of nature. It drove Kant to the
conclusion that the certainty we see in the structures of nature (time, space and causality) is
there because our minds impose such categories on our experience.
In this way, Hume’s challenge, set alongside the manifest success of the scientific method,
led to the conclusion that the process of examining the world is one that involves the
necessary limitations of the structures of human reason. This is the way we see the world – and it
works! That doesn’t mean that we can know anything with absolute certainty; and it doesn’t
mean that ours is the only way of experiencing it. For Kant, we know only the world of
phenomena. What things are in themselves (noumena) is hidden from us.
In many ways, this continues to be the case. I cannot know an electron as it is in itself, but
only as it appears to me through the various models or images by which I try to understand
things at the sub-atomic level. I may understand something in a way that is useful to me, but
that does not mean that my understanding is – or can ever be – definitive.
The early 20th-century philosophical movement called logical positivism, whose view of
language and meaning was greatly influenced by scientific method, argued for using
empirical evidence as the criterion of meaning: in other words, the meaning of a statement
was identical to its method of verification. It made the limitations on certainty suggested
by Hume the norm for all statements that were not definitions or matters of logic
or mathematics (known to be true ‘a priori’) but depended on evidence (and were therefore
known to be true only ‘a posteriori’).
In an example in his Problems of Philosophy (1912), Bertrand Russell gives a
characteristically clear and entertaining account of the problem of induction. Having
explained that we tend to assume that what has always been experienced in the past will
continue to be the case in the future, he introduces the example of the chicken which, having
been fed regularly every morning, anticipates that this will continue to happen in the future.
But, of course, this need not be so:
“The man who has fed the chicken every day throughout its life at last wrings its neck
instead, showing that more refined views as to the uniformity of nature would have been
useful to the chicken.” (Bertrand Russell, The Problems of Philosophy, p. 35)
From what we have said so far, it is clear that the inductive approach to knowledge can yield
no more than a very high degree of probability. There is always going to be the chance that
some new evidence will show that the original hypothesis, upon which a theory is based, was
wrong. Most likely, it is shown that the theory only applies within a limited field and that in
some unusual sets of circumstances it breaks down. Even if it is never disproved, or shown to
be limited in this way, a scientific theory that has been developed using this inductive
method is always going to be open to the possibility of being proved wrong. Without that
possibility, it is not scientific.
A particular feature of scientific theories is that it should be possible, in theory, to falsify
them. If they cannot be falsified – in other words, if there is no possible evidence that could
ever prove them wrong – they are deemed worthless. This is because theories are used to
predict events and if they argue that absolutely anything is predictable, then they have
nothing to contribute. This was the basis of Karl Popper’s criticism of both Marxist and
Freudian thinking, arguing that an irrefutable theory cannot be scientific.
Topic
Discuss inductive method and inductive proof and the fundamental challenges to each
of the two.
LECTURE 4: Falsification
Karl Popper (1902-1994) was a philosopher from Vienna who, following some years in
New Zealand, settled in London in 1945, where he became Professor of Logic and Scientific
Method at the London School of Economics. He made significant contributions to political
philosophy as well as the philosophy of science.
Popper’s theory of falsification, although important for the philosophy of science, has much
wider application. In the 1920s and 1930s, logical positivists were arguing that statements
only had meaning if they could be verified by sense data. In other words, if you could not
give evidence for a statement, or say what could count for or against it, then it was
meaningless. (The exception, of course, being statements of logic or mathematics, where the
meaning is already contained within the definition of the words used. You don’t have to go
out and point to things in order to show that 2 + 2 = 4.)
In The Logic of Scientific Discovery (1934, translated in 1959) Popper argued that one could
not prove a scientific theory to be true by adding new confirming evidence. Contrariwise, if
some piece of sound evidence goes against a theory, that may be enough to show that the
theory is false.
He therefore pointed out that a scientific theory could not be compatible with all possible
evidence. If it is to be scientific, then it must be possible, in theory, for it to be falsified. In
practice, of course, a theory is not automatically discarded as soon as one piece of contrary
evidence is produced, because it might be equally possible that the evidence is at fault. As
with all experimental evidence, a scientist tries to reproduce this contrary evidence, to show
that it was not a freak result, but a genuine indication of something for which the original
theory cannot account. At the same time, scientists are likely to consider any alternative
theories that can account for both the originally confirming evidence and the new, conflicting
evidence as well. In other words, progress comes by way of finding the limitations of
existing scientific theories.
A key feature of Popper’s claim here is that scientific laws always go beyond experimental
data and experience. The inductive method attempted to show that, by building up a body of
data, inferences can be made to give laws that are regarded as certain, rather than probable.
Popper challenges this on the ground that all sensation involves interpretation of some sort,
and that in any series of experiments there will be variations; whether or not such
variations are taken into account is down to the presuppositions of the person conducting
them. Also, of course, the number of experiments done is always finite, whereas the number
of experiments not yet done is infinite. Thus inductive arguments can never achieve the
absolute certainty of a piece of deductive logic.
What was essential, for Popper, was to be able to say what would falsify a claim. If nothing
could be allowed to falsify it, it could not have significant content. Thus he held that all
genuine scientific theories had to be logically self-consistent and also capable of
falsification. No scientific theory can be compatible with all logically possible evidence. An
irrefutable theory is not scientific.
In particular, Popper’s view challenges two popular philosophical ideas:
1 Locke’s idea that the mind is a tabula rasa until it receives experience.
2 Wittgenstein’s idea, propounded in the Tractatus, that the task of language is to provide an
image of the external world.
Instead, he saw minds as having a creative role vis à vis experience. In the scientific realm
this means that progress is made when a person makes a creative leap to put forward an
hypothesis that goes beyond what can be known through experience. It does not progress
gradually by the adding up of additional information to confirm what is already known, but
by moving speculatively into the unknown, and testing out hypotheses, probing their weak
points and modifying them accordingly.
This view of scientific work parallels the general situation of human thought, for Popper saw
all of human intelligence in terms of the constant solving of problems – that is simply the
way the mind works.
In effect, the goal of science is therefore to produce statements which are high in information
content and low in probability of being true (since the more information contained, the
greater the chance of finding a proposition to be false), but which actually come close to the
truth. It would, of course, be easy to find a statement that need never fear being refuted (e.g.
‘The sun will rise tomorrow’), but it offers so little information content that it is difficult to see
how it can be of much practical use.
His approach to scientific method was therefore as follows:
1 Be aware of the problem (e.g. the failure of an earlier theory).
2 Propose a solution (i.e. a new theory).
3 Deduce testable propositions from that theory.
4 Establish a preference among competing theories.
This means that on Popper’s theory no scientific law can ever be proved; at best, it can be
given only a high degree of probability. There must always remain the possibility that a piece
of evidence will one day be found to prove it wrong.
Therefore, in terms of the results of scientific work, he observes that everything is already
‘theory soaked’. Everything is a matter of using and modifying theories: the basic form of
intellectual work is problem solving.
For Popper, the ideal is a theory which gives the maximum amount of information and which
therefore has quite a low level of probability, but which nevertheless comes close to the
truth. Such a theory may eventually be refuted, but it will be extremely useful, because its
content will allow many things to be deduced from it. In other words, to take the opposite
extreme, a theory that says nothing about anything is not going to be proved wrong, but
neither is it going to be of any use!
In general, science works by means of experiments. Results are presented along with detailed
information about the experimental methods by which they were obtained. The task of those
who wish to examine the results is to repeat the experiments and see if they produce identical
results. Now, as it goes on, a theory is going to predict facts, some of which will be verified,
some of which will not. Where it has failed to predict correctly, there is a danger that the
theory will be falsified – that is the key to Popper’s approach. However, it is not
quite that simple: both Popper and Lakatos recognize that falsification and
the discarding of a theory generally only take place once there is another theory ready to
take its place.
In other words, if there is another theory that can account for all that this theory can account
for, and then go on to account for some situations that this theory is wrong about, then that
other theory is to be preferred. Explanatory power is the ultimate criterion here. Thus it is
possible, if an experiment seems to falsify a theory, that there is something wrong with
the experiment or that there is some other factor involved that was not considered before.
One cannot simply throw out a theory at the first hint of falsification. By the same
token, when that alternative theory becomes available every occasion of falsification leads to
a comparison between the two theories and the one that is confirmed more broadly is the one
to be accepted.
A simplistic view of falsification is that a theory is to be discarded if it is not confirmed by
experimental results.
A more sophisticated view is that a theory is discarded if it is not confirmed by experimental
results and there is an alternative theory that can account for them.
In practice, scientists learn from the failures of theories, for it is exactly at those points where
existing theories are shown to be inadequate that the impetus to find a more comprehensive
theory is born.
Topics
Explain the role of falsification in Popper’s view of science.
Give your own opinion about the role of falsification in Popper’s view of science.
Give your own opinion about how to solve the problem of the relation between truth and
scientific theories.
LECTURE 5: Science and non-science
Science always requires a healthy measure of scepticism, a willingness to re-examine views
in the light of new evidence and to strive for theories that depend on external facts that can
be checked, rather than on the mind of the person framing them. It was the quest for
objectivity, loyalty to the experimental method and a willingness to set aside established
ideas in favour of reason and evidence, that characterized the work of Bacon and others.
There were disagreements about the extent to which certainty was possible and some (e.g.
Newton) were willing to accept ‘practical certainty’ even though recognizing that ‘absolute
certainty’ was never going to be possible.
In the 20th century there was considerable debate about the process by which inadequacies in
a theory are examined, as we shall see in the next lecture, and the point at which the theory
a theory are examined, as we shall see in the next chapter, and the point at which the theory
should be discarded. No scientific theory can be expected to last for all time. Theories may
be falsified by new and contrary evidence (Popper) or be displaced when there is a general
shift in the matrix of views in which they are embedded (Kuhn), and theories are seen as part
of ongoing research programmes (Lakatos) based on problem solving.
On this basis, we cannot say that genuine science is what is proved true for all time, whereas
pseudo-science has been (or will be) proved false. After all, something that is believed for all
the wrong reasons may eventually be proved correct and the most cherished theories in
science can be displaced by others that are more effective. What distinguishes science from
pseudo-science is to do with the nature of the claims that each makes and the methods each
uses to establish them.
One feature of modern philosophy of science that reflects this is probability theory. The
improbable is more significant than the probable. Thus, if an improbable event is predicted
by a theory, and proves to be the case, then the theory is greatly strengthened by it. By way of
contrast, something that is quite normal and expected to happen anyway is unlikely to be
considered strong evidence for a theory which predicted it.
In other words, for genuine science, there is always the attempt to balance the likelihood of
something being the case against the other possibilities.
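This balancing can be illustrated with Bayes’ theorem. The sketch below uses purely illustrative numbers (none are from the lecture): when a theory outright predicts an event, the confirmation the theory gains from that event coming true depends on how improbable the event was without the theory.

```python
# Illustrative only: how much a confirmed prediction supports a theory T
# depends on how improbable that prediction was without the theory.
# With P(E | T) = 1 (the theory predicts E outright), Bayes' theorem gives:
#   P(T | E) = P(T) / (P(T) + P(E | not T) * (1 - P(T)))

def posterior(prior, p_e_without_theory):
    """Probability of the theory after its prediction E comes true."""
    return prior / (prior + p_e_without_theory * (1 - prior))

prior = 0.5  # initially undecided about the theory

# An improbable prediction coming true is strong confirmation...
print(round(posterior(prior, 0.01), 4))  # 0.9901

# ...while an event expected to happen anyway barely moves belief.
print(round(posterior(prior, 0.95), 4))  # 0.5128
```

The asymmetry between the two outputs is the point of the passage above: a risky prediction that succeeds does far more work than a safe one.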
If a person persists with the infuriating habit of claiming absolutely everything that he or she
does as a great success, even if to the external observer it may appear a bit of a disaster, one
might well ask ‘What would have to happen for it to be described as a failure?’ If absolutely
nothing counts as a failure, then nothing merits the title ‘success’ either: both are rendered
meaningless in terms of an objective or scientific approach, the claim of success simply
reflecting the choice to see everything in that positive way.
The claim to be scientific rests on the methods used in setting up appropriate experiments or
in gathering relevant evidence and also on the willingness to submit the results to scrutiny
and to accept the possibility that they may be interpreted in more than one way. The
distinction between science and pseudo-science is therefore essentially of method, rather
than content.
A common feature of pseudo-science is the use of analogies or resemblances to suggest
causal connections, but without being able to specify or give direct evidence for them. One
popular example illustrates this. It has been suggested that the red colour of the planet Mars
resembles blood and that the planet should therefore be associated with the coming of war
and bloodshed. What is not clear is how that planet’s colour could have any possible
connection with warlike tendencies among human beings on Earth.
Kuhn’s position about science and non-science: paradigms and their overthrow
According to Thomas Kuhn, what is clear is that it is a mark of genuine science that
problems with a theory are taken seriously and that, once those problems become
overwhelming, an overall paradigm may need to be set aside in favour of one that succeeds
in answering those problems. Thus his view of science is of periods of stability, punctuated
by revolutions. A major feature of those approaches, which we would not call scientific, is
that they are not open to the possibility of such revolutionary changes. If nothing is capable
of changing one’s view, then that view is not scientific.
But this should not be taken as a pejorative comment, as though only scientific views were
worthwhile. There are many areas of life, for example in religion, art or relationships, in
which it is perfectly valid to have a commitment and a particular chosen view which is not
dependent on evidence. We simply need to accept that such things are not scientific and we
should not attempt to justify them on a scientific basis.
For Popper, theories are continually being tested and may be falsified at any time.
For Kuhn, paradigms are not changed on the basis of reason alone, but in a moment of
insight. Change is rare and sudden.
For Lakatos, progress is made through research programmes, which allow peripheral theories
to change, gradually influencing a ‘hard core’ of key theories for that particular programme.
Topics
What do you think of Popper’s distinction between science and non-science? Do you agree
with him or not? To what extent and why?
What do you think of Kuhn’s scientific relativism? Do you think it provides a more accurate
picture of the workings of science than Popper’s falsificationism?
Imre Lakatos, Falsification and the Methodology of Scientific Research Programmes,
C.U.P.,1978.
Lecture 6: Observation and theory
If anything has become a received idea in recent philosophy of science, it is the thesis that
there is no sharp distinction in science between observation and theory; in other words, that
there is no pure observational level in science which stands free of theoretical baggage.
Some of the reasons for this view are good and some not so good.
Our place in a certain niche of existence gives some point to a distinction between more and
less theoretical levels of observation, and suggests why we do not just dogmatically ‘decide’
to accept some levels of observation as basic. We accept as basic those which relate to our
lived experience of and interaction in the world, and this ramifies out into our sense that the
theories of science, in so far as they are acceptable, will frequently have practical effects in
the world of experience. And this basic level of observation, being related to our genetic
inheritance, and our needs and interests as human beings, provides a common ground for
communication between people from different cultural backgrounds. This sharing of sensory
apparatus and of interests makes it most unlikely that human beings even from the most
widely different cultures would be completely unable to communicate at the level of the
observations relevant to basic survival. Even less, then, is it likely that there should be a
Kuhnian breakdown of communication between people – such as modern Western scientists
– engaged in recognizably the same enterprise.
However, the testing of claims about unobservable things, states, events or processes is
evidently a complicated affair. In fact, the more one considers how observations confirm
hypotheses, and how complicated the matter is, the more one is struck by a certain inevitable
and quite disturbing “under-determination”. A theory is alleged to be underdetermined by
data in that for any body of observational data, even all the observational data, more than one
theory can be constructed to systematize, predict and explain that data, so that no one
theory’s truth is determined by the data.
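A toy sketch of this thesis (the functions are invented for illustration): two rival “theories” can agree on every observation collected so far yet disagree about cases not yet observed, so the data alone cannot decide between them.

```python
# A toy case of under-determination: two rival "theories" (functions,
# invented for illustration) that agree on every observed data point
# but diverge on cases not yet observed.

observed_xs = [0, 1, 2, 3]

def theory_a(x):
    # A simple linear law.
    return 2 * x

def theory_b(x):
    # Agrees with theory_a at every observed point, because the added
    # term vanishes there, yet diverges everywhere else.
    return 2 * x + x * (x - 1) * (x - 2) * (x - 3)

# Both theories systematize all the evidence gathered so far...
assert all(theory_a(x) == theory_b(x) for x in observed_xs)

# ...but they make different predictions about the unobserved case x = 4.
print(theory_a(4), theory_b(4))  # 8 32
```

Since infinitely many such curves pass through any finite set of data points, no finite body of observations can single out one theory as the true one.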
As we have sometimes noted, the official epistemology of modern science is empiricism, the
doctrine that our knowledge is justified by experience – observation, data collection,
experiment. The objectivity of science is held to rest on the role which experience plays in
choosing between hypotheses. But if the simplest hypothesis comes face to face with
experience only in combination with other hypotheses, then a negative test may be the fault
of one of the accompanying assumptions; a positive test may reflect compensating mistakes
in two or more of the hypotheses involved in the test that cancel one another out. Moreover,
if two or more hypotheses are always required in any scientific test, then when a
test-prediction is falsified there will always be two or more ways to “correct” the hypotheses
under test. When the hypothesis under test is not a single statement like “all swans are white”
but a system of highly theoretical claims like the kinetic theory of gases, it is open to the
theorist to make one or more of a large number of changes in the theory in light of a
falsifying test, any one of which will reconcile the theory with the data. But the large number
of changes possible introduces a degree of arbitrariness foreign to our picture of science.
Start with a hypothesis constituting a theory that describes the behaviour of unobservable
entities and their properties. Such a hypothesis can be reconciled with falsifying experience
by making changes in it that cannot themselves be tested except through the same process all
over again – one which allows for a large number of further changes in case of falsification.
It thus becomes impossible to establish the correctness or even the reasonableness of one
change over another. Two scientists beginning with the same theory, subjecting it to the same
initial disconfirming test, and repeatedly “improving” their theories in the light of the same
set of further tests, will almost certainly end up with completely different theories, both
equally consistent with the data their tests have generated.
Imagine, now, the “end of inquiry” when all the data on every subject are in. Can there still
be two distinct equally simple, elegant, and otherwise satisfying theories equally compatible
with all the data, and incompatible with one another? Given the empirical slack present even
when all the evidence appears to be in, the answer seems to be that such a possibility cannot
be ruled out. Since they are distinct theories, our two total “systems of the world” must be
incompatible, and therefore cannot both be true. We cannot remain either agnostic about whether
one is right or ecumenical about embracing both. Yet it appears that observation would not
be able to decide between these theories.
In short, theory is underdetermined by observation. And yet science does not show the sort
of proliferation of theory and the kind of un-resolvable theoretical disputes that the
possibility of this under-determination might lead us to expect. But the more we consider
reasons why this sort of under-determination does not manifest itself, the more problematical
becomes the notion that scientific theory is justified by objective methods that make
experience the final court of appeal in the certification of knowledge. For what else besides
the test of observation and experiment could account for the theoretical consensus
characteristic of most natural sciences? Of course there are disagreements among theorists,
sometimes very great ones, and yet over time these disagreements are settled, to almost
universal satisfaction. If, owing to the ever-present possibility of under-determination, this
theoretical consensus is not achieved through the “official” methods, how is it achieved?
Well, besides the test of observation, theories are also judged on other criteria: simplicity,
economy, consistency with other already adopted theories. A theory’s consistency with other
already well-established theories confirms that theory only because observations have
established the theories it is judged consistent with. Simplicity and economy in theories are
themselves properties that we have observed nature to reflect and other well-confirmed
theories to bear, and we are prepared to surrender them if and when they come into conflict
with our observations and experiments. One alternative source of consensus philosophers of
science are disinclined to accept is the notion that theoretical developments are epistemically
guided by non-experimental, non-observational considerations, such as a priori
philosophical commitments, religious doctrines, political ideologies, aesthetic tastes,
psychological dispositions, social forces or intellectual fashions. Such factors we know will
make for consensus, but not necessarily one that reflects increasing approximation to the
truth, or to objective knowledge. Indeed, these non-epistemic, non-scientific forces and
factors are supposed to deform understanding and lead away from truth and knowledge.
The fact remains that a steady commitment to empiricism coupled with a fair degree of
consensus about the indispensability of scientific theorizing strongly suggests the possibility
of a great deal of slack between theory and observation. But the apparent absence of the
arbitrariness that under-determination would foster demands explanation. And if we are to retain
our commitment to science’s status as knowledge par excellence, this explanation had better
be one we can parlay into a justification of science’s objectivity as well. However, succeeding
in doing this convincingly is a very difficult task and the jury is still out on whether it is
possible at all.
Empiricism is the epistemology which has tried to make sense of the role of observation in
the certification of scientific knowledge. Since the eighteenth century, if not before,
especially British philosophers like Hobbes, Locke, Berkeley and Hume have found
inspiration in science’s successes for their philosophies, and sought philosophical arguments
to ground science’s claim. In so doing, these philosophers and their successors set the agenda
of the philosophy of science and revealed how complex is the apparently simple and
straightforward relation between theory and evidence.
In the twentieth century the successors of the British empiricists, the logical positivists or
“logical empiricists” as some of them preferred, sought to combine the empiricist
epistemology of their predecessors with advances in logic, probability theory and statistical
inference, to complete the project initiated by Locke, Berkeley and Hume. What they found
was that some of the problems seventeenth and eighteenth-century empiricism uncovered
were even more resistant to solution when formulated in updated logical and methodological
terms. “Confirmation theory”, as this part of the philosophy of science came to be called, has
greatly increased our understanding of the “logic” of confirmation, but has left as yet
unsolved Hume’s problem of induction, the further problem of when evidence provides a
positive instance of a hypothesis, and the “new riddle of induction” – Goodman’s puzzle of
“grue” and “bleen”.
Positivists and their successors have made the foundations of probability theory central to
their conception of scientific testing. Obviously much formal hypothesis testing employs
probability theory. One attractive late twentieth-century account that reflects this practice is
known as Bayesianism. This view holds that scientific reasoning from evidence to theory
proceeds in accordance with Bayes’ theorem about conditional probabilities, under a
distinctive interpretation of the probabilities it employs.
The Bayesians hold that scientists’ probabilities are subjective degrees of belief or
acceptance of a claim – betting odds. By contrast with other interpretations, according to
which probabilities are long-run relative frequencies, or distributions of actualities among all
logical possibilities, this frankly psychological interpretation of probability is said to best fit
the facts of scientific practice and its history.
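The Bayesian picture can be sketched in a few lines of code (the likelihoods and priors below are made up for illustration): two enquirers with very different subjective priors update on the same stream of evidence using Bayes’ theorem, and their degrees of belief converge.

```python
# A minimal sketch of Bayesian updating and convergence (all numbers
# are made up for illustration). Each new piece of evidence E updates
# the probability of hypothesis H via Bayes' theorem:
#   P(H | E) = P(E | H) * P(H) / P(E)

def update(prior, p_e_given_h, p_e_given_not_h):
    """Return the posterior P(H | E) after one piece of evidence E."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# H predicts the evidence strongly; its rival only weakly.
likelihood_h, likelihood_not_h = 0.9, 0.3

optimist, sceptic = 0.9, 0.1  # widely divergent initial degrees of belief
for _ in range(10):           # ten confirming observations in a row
    optimist = update(optimist, likelihood_h, likelihood_not_h)
    sceptic = update(sceptic, likelihood_h, likelihood_not_h)

# Despite starting far apart, both now assign H a probability above 0.99.
print(round(optimist, 4), round(sceptic, 4))
```

This illustrates the long-run convergence the Bayesian appeals to; whether such convergence is guaranteed for all hypotheses, and whether all alternatives are being considered, is exactly what Bayesianism’s opponents dispute.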
The Bayesian responds to complaints about the subjective and arbitrary nature of the
probability assignment it tolerates by arguing that, no matter where initial probability
estimates start out, in the long run using Bayes’ theorem on all possible alternative
hypotheses will result in their convergence on the most reasonable probability values, if there
are such values. Bayesianism’s opponents demand that it substantiate the existence of such
“most reasonable” values and show that all alternative hypotheses are being considered. To
satisfy these demands would be tantamount to solving Hume’s problem of induction. Finally,
Bayesianism has no clear answer to the problem which drew our attention to
hypothesis-testing: the apparent tension between science’s need for theory and its reliance on
observation.
This tension expresses itself most pointedly in the problem of under-determination. Given
the role of auxiliary hypotheses in any test of a theory, it follows that no single scientific
claim meets experience for test by itself. It does so only in the company of other hypotheses,
perhaps a large number of them, needed to effect the derivation of some observational
prediction to be checked against experience. But this means that a disconfirmation test, in
which expectations are not fulfilled, cannot point the finger of falsity at one of these
hypotheses and that adjustments in more than one may be equivalent in reconciling the
whole package of hypotheses to observation. As the size of a theory grows, and it
encompasses more and more disparate phenomena, the alternative adjustments possible to
preserve or improve it in the face of recalcitrant data increase. Might it be possible, at the
never-actually-to-be-reached “end of inquiry”, when all the data are in, that there be two
distinct total theories of the world, both equal in evidential support, simplicity, economy,
symmetry, elegance, mathematical expression or any other desideratum of theory choice? A
positive answer to the question may provide powerful support for an instrumentalist account
of theories. For apparently there will be no fact of the matter accessible to inquiry that can
choose between the two theories.
And yet, the odd thing is that under-determination is a mere possibility. In point of fact, it
almost never occurs. This suggests two alternatives. The first alternative, embraced by most
philosophers of science, is that observation really does govern theory choice (or else there
would be more competition among theories and models than there is); it’s just that we simply
haven’t figured it all out yet. The second alternative is more radical, and is favoured by a
generation of historians, sociologists of science and a few philosophers who reject both the
detailed teachings of logical empiricism and also its ambitions to underwrite the objectivity
of science. On this alternative, observations underdetermine theory, and theory choice is instead
fixed by other facts – non-epistemic ones, like bias, faith, prejudice, the desire for fame, or at least security,
and power politics. This is a radical view, that science is a process like other social
processes, and not a matter of objective progress.
We have seen that, once we go beyond the observable world in science, problems arise
as to the existence and nature of the entities and processes our explanations postulate.
These problems arise not simply because we are speaking of things we cannot observe,
but more because there will be ever so many possible ‘mathematical’ hypotheses, all
consistent with whatever data we are taking as our observational basis. At this point in
science, there will be a critical under-determination of theory by data, and this in itself
seems sufficient reason for holding on to some distinction, however rough and ready,
between observation and theory.
Topic for the assignment 2:
Discuss the problem of the relation between observation and theory in science.
Lecture 7: Scientific realism and anti-realism
How often have you heard someone’s opinion written off with the statement, ‘that’s just a
theory’. Somehow in ordinary English the term ‘theory’ has come to mean a piece of rank
speculation or at most a hypothesis that is doubtful, or for which there is as yet little
evidence. This usage is oddly at variance with the meaning of the term as scientists use it.
Among scientists, far from suggesting tentativeness or uncertainty, the term is often used to
describe an established sub-discipline in which there are widely accepted laws, methods,
applications and foundations. Thus, economists talk of ‘game theory’ and physicists of
‘quantum theory’, biologists use the term ‘evolutionary theory’ almost synonymously with
evolutionary biology, and ‘learning theory’ among psychologists comprises many different
hypotheses about a variety of very well established phenomena. Besides its use to name a
whole area of inquiry, in science ‘theory’ also means a body of explanatory hypotheses for
which there is strong empirical support.
But how exactly a theory provides such explanatory systematization of disparate phenomena
is a question we need to answer. Philosophers of science long held that theories explain
because, like Euclid’s geometry, they are deductively organized systems. It should be no
surprise that an exponent of the deductive-nomological model of explanation should be
attracted by this view. [The deductive-nomological (D-N) model is an explanation of the
concept of explanation which requires that every explanation be a deductive argument
containing at least one law, and be empirically testable.] After all, on the D-N model
explanation is deduction, and theories are more fundamental explanations of general
processes. But unlike deductive systems in mathematics, scientific theories are sets of
hypotheses, which are tested by logically deriving observable consequences from them. If
these consequences are observed, in experiment or other data collection, then the hypotheses
which the observations test are tentatively accepted. This view of the relation between
scientific theorizing and scientific testing is known as ‘hypothetico-deductivism’. It is
closely associated with the treatment of theories as deductive systems. In other words,
hypothetico-deductivism is the thesis that science proceeds by hypothesizing general
statements, deriving observational consequences from them, testing these consequences to
indirectly confirm the hypotheses. When a hypothesis is disconfirmed because its predictions
for observation are not borne out, the scientist seeks a revised or entirely new hypothesis.
This axiomatic conception of theories naturally gives rise to a view of progress in science as
the development of new theories that treat older ones as special cases, or first
approximations, which the newer theories correct and explain. This conception of narrower
theories being ‘reduced’ to broader or more fundamental ones, by deduction, provides an
attractive application of the axiomatic approach to explaining the nature of scientific
progress.
Once we recognize the controlling role of observation and experiment in scientific
theorizing, the reliance of science on concepts and statements that observation cannot
directly test becomes a grave problem. Science cannot do without concepts like
‘nucleus’, ‘gene’, ‘molecule’, ‘atom’, ‘electron’, ‘quark’ or ‘quasar’. And we acknowledge
that there are the best of reasons to believe that such things exist. But when scientists try to
articulate their reasons for doing so, difficulties emerge – difficulties borne of science’s
commitment to the sovereign role of experience in choosing among theories.
These difficulties divide scientists and philosophers into at least two camps about the
metaphysics of science – realism and antirealism – and they lead some to give up the view of
science as the search for unifying theories. Instead, these scientists and philosophers often
give pride of place in science to the models we construct as substitutes for a complete
understanding that science may not be able to attain. In other words, the dispute is between
those who see science as a sequence of useful models and those who view it as a search for
true theories.
Realism
One can be a realist about many different kinds of thing: numbers, possible worlds,
universals, minds, physical objects, quarks, fields, and so on. To call a philosopher a realist
requires a specification of what the philosopher is a realist about. Usually, there is an
intended contrast with those who deny that the entities in question are real. In the philosophy
of science, realists are aligned against instrumentalists, phenomenalists (Berkeley and the
logical positivists), conventionalists, and others of that ilk.
The scientific realist believes that the theories of science give us knowledge about the
unobservable. If his realism is to have any bite, he will not simply believe that the theories of
science make statements about unobservable things. He will also believe that we sometimes
have good reasons for believing that those statements are true.
In terms of scientific realism, there was a fundamental disagreement between Bohr and
Einstein.
For Bohr (and Heisenberg, who worked with him), the uncertainty that applies to sub-atomic
particles is not just a feature of our observation, but is a fundamental feature of reality itself.
It is not just that we cannot simultaneously know the position and momentum (in physics,
momentum is the mass of a moving object multiplied by its velocity) of a particle, but that
the particle does not have these two qualities simultaneously. Thus physics is really about
what we can talk about – if something cannot be observed, it cannot be part of our reality. Our
observation creates the reality we are observing.
Einstein, however, took the view that there was indeed a reality that existed prior to our
observation of it. Thus a particle would indeed have a position and momentum at any instant,
the only thing was that it was impossible for us to know both at the same time. Reality is thus
prior to observation. But, of course, it remains essentially an unknown reality, since as soon
as we try to observe it, we are back in Bohr’s world of physics where it is determined by our
observation.
What is actually found in nature is far richer and more untidy than our theories assume, but
we often ignore or regard as irrelevant those aspects of actual states of affairs which do not
match our theories. Mismatches of detail are characteristically attributed to factors
extraneous to what we are attempting to cover with precise theories, and which we have been
unable to control. Even the most refined comparisons of masses and lengths, which far
surpass in accuracy the precision of other physical measurements, fall behind the accuracy of
bank accounts. Our theoretical accounts of nature often apply perfectly only in ideal and
controlled situations.
It seems that we should regard the theories science actually provides us with as far from
complete and precisely accurate representations of reality. They are idealizations and
abstractions which focus on particular properties of natural phenomena and cases of partial
regularity, corresponding no doubt to specific interests and concerns we might have. But in
applying our models, we overlook both their incompleteness and their inaccuracy. They do
well enough for what we want in predicting and controlling effects, but this ‘enough’ could
be quite consistent with a good deal of inaccuracy and a good deal of overlooking of the full
detail of any actual situation. Moreover, we choose our models according to the specific
features of the situation we may be interested in, without worrying too much about whether
one model can easily be combined with another model we might use for other purposes.
None of this militates against the idea that science can discover genuine regularities or new
phenomena or new entities. But it does militate against the thought that in science our aim is
always the production of ever more general and comprehensive accounts of the whole of a
given level of existence, which at the same time are ever more accurate. This ideal may be
unattainable. It certainly will be if nature is basically untidy and cannot be divided into
clearly demarcated natural kinds. And it may be that most of what we want from science, in
the way of explanation and of the control of nature, can be attained without assuming the
validity of the ideal.
We can illustrate that difficulties can and do arise in integrating different parts of our
physical picture of the world, or, perhaps better, in integrating our pictures of different parts
of the physical world. Quantum mechanics, with its assumption of super-positions of states
of a given system, and classical mechanics, with its definiteness and lack of fuzziness would
seem to be a good illustration of the sorts of problems that can arise here. Similar problems
would also arise if, as I have hinted, the laws we have discovered turn out to apply only to
some parts of space and time. What happens at the borders, where different conditions might
apply? I do not pretend to answer this question, but this problem should certainly make us
wary of thinking that we are close to an absolute picture of the world, in which all the
elements mesh smoothly and seamlessly.
Antirealism
‘Antirealism’ names a diverse group of doctrines whose common element is the rejection of realism. In the
philosophy of science, antirealism includes instrumentalism, conventionalism, logical
positivism, logical empiricism, and Bas van Fraassen’s constructive empiricism. Some
antirealists (such as instrumentalists) deny that scientific theories that postulate un-
observables should be interpreted realistically. Others (such as van Fraassen) concede that
such theories should be interpreted realistically, but they deny that we should ever accept as
true any theoretical claims about un-observables.
There is an important argument in favour of the anti-realist view of scientific theories,
concerning the very nature of what a theory is. Theories are generalizations: they attempt to
explain and to predict across a wide range of actual situations. Indeed, the experimental nature
of most scientific research aims at eliminating irrelevant factors in order to be able to
develop the most general theory possible.
Now in the real world there are no generalities. You cannot isolate an atom from its
surroundings and form a theory about it. Everything interconnects with everything else – all
we have are a very large number of actual situations. Our theories can never represent any of
these, because they try to extract only generalized features. Theories deal with ideal sets of
circumstances, not with actual ones.
Instrumentalism
Strictly speaking, instrumentalism is the doctrine that theories are merely instruments, tools
for the prediction and convenient summary of data. As such, theories are not statements that
are either true or false; they are tools that are more or less useful. But because one has to use
the machinery of logic in order to draw predictions from theories, it is difficult to deny that
theories have truth values. Thus, instrumentalism has come to be used as a general term for
antirealism. Most modern instrumentalists concede that theories have truth values but deny
that every aspect of them should be interpreted realistically or that reasons to accept a theory
as scientifically valuable are reasons to accept the theory as literally true. In this sense, T. S.
Kuhn, who locates the value of scientific theories in their ability to solve puzzles, is an
instrumentalist. Theories may have truth values, but their truth or falsity is irrelevant to our
understanding of science. One important feature about the acceptance given to a theory
springs directly from the scientific impetus that leads to its being put forward in the first
place. Theories are there to explain phenomena that do not make sense otherwise. If you
have something that existing laws cannot make sense of, you tend to hunt for an alternative
theory that can do just that.
Thus progress is made through a basic process of problem solving. If existing laws cannot be
used to make sense of what I experience, that presents a problem. This is what leads to the
instrumentalist view of scientific laws. In other words, a law is to be judged by what it does.
In regard to the instrumentalist conception of science, the key thing to remember is that the
pictures and models by which we attempt to understand natural phenomena are not ‘true’ or
‘false’ but ‘adequate’ or ‘inadequate’. You cannot make a direct comparison between the
image you use and reality. You can’t, for example, look at an atom directly and then consider
if your image of it – as a miniature solar system, for example – is true. If you could see it
directly, you wouldn’t need a model in order to understand it. Models only operate as ways
of conceptualizing those things that cannot be known by direct perception.
Conventionalism
A decision is conventional if it involves choosing from among alternatives that are equally
legitimate when judged by objective criteria (such as consistency with observation and
evidence); thus, either the decision is entirely arbitrary or it rests on an appeal to factors
often presumed to be subjective, such as simplicity, economy, and convenience. Radical
conventionalists argue that scientific theories are really definitions (or rules of inference,
pictures, conceptual schemes, paradigms) and hence neither true nor false; moderate
conventionalists disagree, insisting that once the conventional elements in theories have been
isolated, the remaining parts are objectively true or false. Typically, conventionalists appeal
to the under-determination of theories by evidence to bolster their doctrine. Many
philosophers of science have been conventionalists in one respect or another. They include
Henri Poincaré (on high-level scientific laws as definitions), Pierre Duhem (on the
ambiguity of falsification of theories in physics), W.V. Quine (on the decision to retain or
abandon any sentence whatever), Hans Reichenbach (on the choice of a geometry to describe
physical space), Karl Popper (on the decision to accept basic statements), and T.S. Kuhn (on
the decision to switch paradigms).
Topic
Explain what is to be understood as scientific realism and anti-realism.
Explain the importance for the philosophy of science of the debate between realism and anti-
realism.
Weigh the arguments in favour of scientific realism and anti-realism and outline how a
working synthesis of the two positions can be devised.
Lecture 8: Probability
With the development of modern science, the experimental method led to the framing of
‘laws of nature’. It is important to recognize what is meant by ‘law’ in this case. A law of
nature does not have to be obeyed. A scientific law cannot dictate how things should be, it
simply describes them. The law of gravity does not require that, having tripped up, I should
adopt a prone position on the pavement – it simply describes the phenomenon that, having
tripped, I fall.
Hence, if I trip and float upwards, I am not disobeying a law; it simply means that I am in an
environment (e.g. in orbit) in which the phenomenon described by the ‘law of gravity’ does
not apply. The ‘law’ cannot be ‘broken’ in these circumstances, only be found to be
inadequate to describe what is happening.
Moreover, if scientific laws are descriptions of what is happening, they are also not simply
true or false but are methods of expressing regularities that have been observed. Scientific
laws are therefore instruments for drawing conclusions within an overall scheme of thought.
However, even though the overall scheme of thought of the observer plays a significant role
in their framing, it is important to keep in mind that scientific laws are first and foremost
produced by a process of induction, based on experimental evidence and observed facts.
They may therefore be regarded as the best available interpretation of the evidence, but not
as the only possible one. They do not have the absolute certainty of a logical argument, but
only a degree of probability, proportional to the evidence on which they are based. This
leaves open the possibility of deriving laws from statistical evidence.
In other words, scientific laws can operate in terms of statistical averages, rather than on the
ability to predict what will happen on each and every occasion. For instance, statistical
information gives an accurate picture of the actions of a society, but it cannot show the
actions of an individual within that society. In other words, laws can operate at different
levels. What appears to be predictability, even determinism, at one level, can nevertheless
coexist with indeterminism and unpredictability at another.
Philosophical accounts of probability can be broadly divided into subjective accounts, which
regard probability statements in terms of what we are entitled to believe on given evidence,
and objective accounts, which interpret probability statements as referring directly to
tendencies of various sorts existing in the real world.
In order to bring out as starkly as possible the difference between subjective and objective
accounts of probability, we may take the essence of subjective theories to be simply the
belief that a statement of probability does not reflect anything real, positive or
metaphysical in the world; it is merely a psychological device which we use when we are in
ignorance concerning the full facts of a situation.
Objectivist accounts of probability see probability statements as referring to real tendencies
individuals or sequences have to manifest certain patterns of outcome. Treating probability
statements objectively, however, does not tell us exactly how they are to be understood.
Frequency theory sees probability in terms of the distribution of some property over a
collective, a potentially infinite series of events in which a given property is distributed
randomly.
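The frequency view can be illustrated with a simulation. On this account, the probability of throwing a six is identified with the limiting relative frequency of sixes in an ever longer series of throws. The sketch below (a simulated fair die, invented for illustration) shows the relative frequency settling down near 1/6 as the collective grows:

```python
# Minimal sketch of the frequency interpretation: probability as the
# limiting relative frequency of a property in a growing collective.
import random

random.seed(0)  # fixed seed so the run is repeatable

def relative_frequency(n_trials):
    # Simulate n_trials throws of a fair die and count the sixes.
    sixes = sum(random.randint(1, 6) == 6 for _ in range(n_trials))
    return sixes / n_trials

for n in (100, 10_000, 1_000_000):
    print(n, relative_frequency(n))  # values approach 1/6 ≈ 0.1667
```

The short runs fluctuate noticeably; only in the long run does the frequency stabilize, which is precisely why the frequency theory appeals to a potentially infinite series of events.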
The frequency theory cannot account for single happenings except in terms of theoretical
classes to which those single happenings are presumed to belong. But the theory actually has
to face an even more serious problem than the invocation of theoretical classes of events.
Precisely because the frequency theory has to analyse probabilities relating to individual
events or objects in terms of classes the individuals belong to, the probability to be ascribed
to an individual object or event having a particular property will depend on the relative
frequency of that property throughout the class the individual is seen as belonging to. But
individuals can, of course, be seen as belonging to more than one class, and in cases where
the frequency of the property is different in the different classes, the same individual will be
ascribed more than one probability of having the same property. The frequency theory
always reads statements about an individual’s chances as an elliptical way of saying how
some property is distributed through some reference class, and, provided the probabilities
have been correctly estimated, there are no grounds within the frequency theory for
preferring the choice of one reference class to another. It is just that the choice of different
reference classes yields different information.
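The reference-class problem can be put in concrete terms with a standard kind of example (the frequencies below are invented for illustration). The same individual – say, a Swedish smoker – belongs to two reference classes in which the property of living past 80 has different frequencies, so the frequency theory yields two different probabilities for the very same person:

```python
# Toy version of the reference-class problem; all figures are invented.
classes = {
    "smokers": {"lives_past_80": 25, "total": 100},
    "swedes":  {"lives_past_80": 60, "total": 100},
}

def probability(reference_class):
    # Frequency theory: probability = relative frequency of the
    # property within the chosen reference class.
    c = classes[reference_class]
    return c["lives_past_80"] / c["total"]

# One Swedish smoker, two answers, depending on the class consulted:
print(probability("smokers"))  # 0.25
print(probability("swedes"))   # 0.6
```

Nothing inside the frequency theory itself tells us which reference class to prefer; each choice simply yields different information.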
But we surely do want to think of probabilities of individuals having certain properties, and,
on occasion, we do think some reference classes provide more useful information than others
for estimates of chances relating to the outcomes of individual events. Where the reference
class in question can be seen in terms of the conditions which generate the outcomes
involved, it might well seem natural to see probabilities in terms of actual propensities rather
than in terms of frequencies. Or so, at least, it did to Popper when he moved from advocacy
of a frequency theory of probability to what has been dubbed the propensity theory.
Probability, then, is a property of the generating conditions of events; this is the basis of the
propensity theory of probability.
The propensity theory amounts to the claim that while certain physical set-ups are random or
unpredictable, as far as their individual outcomes are concerned, repeated experiments or
observations of the set-ups in question will show statistical stability. This stability is seen as
due to propensities inherent in the set-up, and these propensities are regarded, by Popper at
least, as actually existing but unobservable dispositional properties of the physical world.
The objective probability of a single event can and should be seen as a measure of an
objective propensity – of the strength of the tendency, inherent in the specified physical
situation, to realize the event – to make it happen.
While the propensity theory stresses the generating conditions underlying observed
frequencies, the frequency theory remains more epistemological, as it were, emphasizing that
our only evidence for talk of propensities is observed long-run frequency and suggesting that
this talk comes to no more than a reference to actual or theoretical long-run frequencies.
Indeed, to an extent, the difference between the frequency theorist and the propensity theorist
is analogous to that between Humeans and anti-Humeans on causation, or between
positivists and realists more generally. As such, it is an aspect of a rather wider dispute
which cannot be finally decided by consideration of probability alone. As far as dealing with
and assessing probability statements goes, the more significant divergence is not that
between frequency theorists and propensity theorists, who both analyse probabilities in terms
of objective tendencies in the real world. It is rather that between objectivists about
probability, which include both frequency and propensity theorists, and subjectivists, who
see probabilities in terms of what we as observers are entitled to believe on given evidence,
and who analyse talk of probabilities as being founded in human ignorance.
What actually emerges from our survey of philosophical interpretations of probability is that
while there is a sense in which some probability statements can be regarded subjectively, in
terms of our ignorance of determining conditions, this is not the case in areas in which there
is genuine indeterminism. Subjectivist approaches to probability have a certain plausibility
when we come to deal with the single case in the typical gambling situation so much
discussed by classical theorists of probability. Though even here we would not be wrong to
see the frequencies engendered by dice and coins as physically real, and to regard statements
about such frequencies as not just confessions of ignorance. When we come to cases of real
indeterminism, though, there is no necessary connection between the use of a probability
statement and human ignorance. In so far as it seems ineradicably indeterministic, quantum
theory most naturally pushes us in the direction of an objective interpretation of probability,
to some version of either frequency or propensity theory. The most natural interpretation of
quantum theory and its probabilistic statements is that what we are dealing with are set-ups
which manifest statistical regularities, and that these regularities are both real and objective,
without necessarily being based in any unknown factors determining single cases. Looked at
in this way, neither quantum theory nor its associated probability statements have to be
analysed subjectively in terms of the knowledge (or ignorance) of the observer.
Indeed, for most standard and scientific uses of probability statements, including those in
quantum mechanics, the natural interpretation is to see the statements as referring to real
frequencies or propensities in populations of particles, molecules, genes, coins, dice, and so
on. On the other hand, there are also cases where we speak of probability of some outcome
on some evidence we have, for example, to the probability of its raining tomorrow, and in
such cases it is plausible to link talk of probability to ignorance of determining conditions. If
this line of thought about two senses of probability is correct, then, we will have to examine
specific cases to see whether the notion of probability is being used in an objective or
subjective sense and, hence, whether an objective or subjective interpretation of probability
is appropriate for the particular use.
Topic
Discuss the role of probability in science.
Lecture 9: Scientific reductions
Positivism is an extreme form of empiricism advocated by the French philosopher and
sociologist Auguste Comte (1798-1857). Comte denied that it is possible to know anything
about un-observables (into which category he placed the underlying causes of phenomena),
and he insisted that the sole aim of science is prediction, not explanation. Comte also
believed that each branch of science has its own special laws and methods that cannot be
reduced to those of other branches. Generally regarded as the founder of sociology, Comte’s
empiricism and his hostility towards metaphysics were an important influence on logical
positivism.
Logical positivism is the name for the set of doctrines advocated by the members of the
Vienna Circle from about 1920 to 1936 (when Moritz Schlick, their leader, was
murdered). Prominent members of this group were Rudolf Carnap, Herbert Feigl, Hans
Hahn, Otto Neurath, and Friedrich Waismann. Their approach to philosophy relied heavily
on the verifiability criterion well illustrated in A.J. Ayer’s Language, Truth and Logic.
As originally formulated by the logical positivists, the verifiability principle asserts that the
meaning of any contingent statement is given by the observation statements needed to verify
it conclusively. Observation statements are assumed to be directly verifiable by the
experiences they purportedly describe. Unverifiable assertions are declared to be
meaningless (or, at least, to lack any cognitive meaning). Criticism by Karl Popper and others
soon led to the abandonment of verifiability as a criterion of meaning in favour of weaker
notions such as confirmability and testability. These, too, are controversial insofar as they are
intended as explications of meaning.
In order to better understand logical positivism it is useful to bear in mind that there were
two general trends by the end of the 19th century:
1 To see the world as a mechanism, upon which science reflected and produced theories
about how it worked.
2 To recognize that all our knowledge comes through the senses and that the task of science
is to systematize the phenomena of sensation. We cannot know things in themselves,
separate from our experience of them.
The logical positivists argued that the meaning of a statement (scientific or otherwise) was
the method by which it could be verified. Everything depended on sense experience. All
theoretical terms had to show a correspondence with observations.
Discussions about the inductive method in science should be seen against this positivist
background – the narrow and precise view of language they espoused matched what they
saw as the ideal of scientific language – the means of summarizing perceptions.
The logical positivists of the Vienna Circle, of whom probably the best known are Schlick
and Carnap, were generally scientists and mathematicians, influenced by the work of the
early Wittgenstein and also Bertrand Russell. They believed that the task of philosophy was
to determine what constituted valid propositions. They wanted to define correspondence
rules, by which the words we use relate to observations. They also wanted to reduce general
and theoretical terms (e.g. mass or force) to those things that could be perceived. In other
words, the mass of a body is defined in terms of measurements that can be made of it.
In general, the position adopted by the logical positivists was that the meaning of a statement
was its method of verification. If I say that something is green, I actually mean that, if you go
and see it, you will see that it is green. If you cannot say what would count for or against a
proposition, or how you could verify it through sense experience, then that proposition is
meaningless.
Now, clearly, this is mainly concerned with the use of language. But for science it had a
particular importance, which enabled it to dominate the first half of the 20th century.
Basically, it was assumed that the process of induction, by which general statements were
confirmed by experimental evidence, was the correct and only way to do science.
That seemed a logical development of the scientific method, as it had developed since the
17th century, but it produced problems. What do you do if faced with two alternative theories
to explain a phenomenon? Can they both be right? Can everything science wants to say be
reduced to sensations? Once a law is accepted and established, it seems inconceivable that it
would simply be proved wrong. To make progress, laws that apply to a limited range of
phenomena can be enlarged in order to take into account some wider set of conditions.
Scientific theories are therefore not discarded, but become limited parts of a greater whole.
But even while this view was dominating the philosophy of science, the actual practice of
science, especially in the fields of relativity and quantum physics – was producing ideas that
did not fit this narrow schema.
The clear implication of the whole approach of the logical positivists was that the language
of science should simply offer a convenient summary of what could be investigated directly.
In other words, a scientific theory is simply a convenient way of saying that if you observe a
‘particular’ thing on every occasion that these ‘particular’ conditions occur, then you will
observe this ‘particular’ phenomenon. What scientific statements do is to replace the
‘particulars’ of experience with a general summary.
Clearly, the ultimate test of a statement is therefore the experimental evidence upon which it
is based. Words have to correspond to external experienced facts.
The problem is that you can say, for example, that a mass or a force is measured in a
particular way. That measurement gives justification for saying that the terms ‘mass’ or
‘force’ have meaning. But clearly you cannot go out and measure the mass of everything, or
the totality of forces that are operating in the universe. Hence such general terms always go
beyond the totality of actual observations on which they are based. They sum up and predict
observations. For the logical positivists, the truth of a statement depended on our being able
(at least in theory) to check it against evidence for the physical reality to which it
corresponds. However, scientific theories can never be fully checked out in this way, since
they always go beyond the evidence; that is their purpose, to give general statements that go
beyond what can be said through description.
An example of the direction in which the logical positivists take the language of science is
what happens when a material is described scientifically. In this case, it is
necessary to say more than what it looks like: one needs to say (based on experiments) how
it is likely to behave in particular circumstances. Thus, if I pick up a delicate piece of
glassware, I know that it has the dispositional property to be fragile. The only meaning I can
give to that property is that, were I to drop the glassware, it would break.
Now the term ‘fragile’ is not a physical thing; it is an adjective rather than a noun. I cannot
point and say ‘there is fragile’. I use the word ‘fragile’ as a convenient way of summarizing
the experience of seeing things dropped or otherwise damaged. It is thus possible to have
general terms which are meaningful and which satisfy the requirement of logical positivism
that the meaning of a statement is its method of verification. It would be easy (but
expensive!) to verify that all glassware is fragile.
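The verificationist treatment of a dispositional term such as ‘fragile’ can be caricatured in a few lines of code. This is only an illustrative sketch: the function names and the table of observed outcomes are invented for the example, and the ‘drop test’ stands in for the real experiment that would verify the claim.

```python
# Illustrative sketch: a verificationist reading of the dispositional
# term 'fragile'. The predicate's "meaning" is identified with its
# method of verification, here a (counterfactual) drop test.

def drop_test(material: str) -> bool:
    """Stand-in for the experiment that would verify fragility.
    The outcomes below are an invented record of past observations."""
    observed_outcomes = {"glassware": True, "steel": False}
    return observed_outcomes[material]

def is_fragile(material: str) -> bool:
    # On this view, 'x is fragile' just summarizes:
    # "were x to be dropped, x would break".
    return drop_test(material)

print(is_fragile("glassware"))  # True: the glassware would break if dropped
```

The point of the sketch is that nothing answers to ‘fragile’ over and above the test: remove the drop test and the predicate loses its meaning, which is exactly the positivist claim.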
Reductionism and its implications
Wittgenstein and the logical positivists aimed to assess all language in terms of the external
reality to which it pointed and to judge it meaningful if, and only if, it could be verified by
reference to some experience. In other words, to say ‘my car is in front of the house’ means
‘if you go and look in front of the house, you will see my car’. If a statement could not be
verified (at least in theory), then it was meaningless. The only exceptions to this were
statements about logic and mathematics, which were true by definition, and generally termed
‘analytic statements’.
Logical positivists believed that all ‘synthetic statements’ (i.e. those true with reference to
matters of fact, rather than definition) could be ‘reduced’ to basic statements about sense
experience.
Reductionism is the term we use for this process. It ‘reduces’ language to strings of simple
claims about sense experience. However complex a statement may be, in the end it comes
down to such pieces of sense data, strung together with connectives (e.g. if this … then
that …; and; but; either/or). It is one of the two ‘dogmas of empiricism’ attacked by Quine.
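The reductionist programme can be mimicked in code: a complex synthetic statement is represented as atomic observation reports joined by logical connectives, and nothing more. The propositions below are invented for illustration.

```python
# Illustrative sketch of reductionism: a complex claim is 'nothing but'
# atomic sense-experience reports strung together with connectives.

# Atomic observation reports (the basic sense-data claims):
see_car_in_front_of_house = True
see_car_in_garage = False

# "My car is in front of the house, or it is in the garage" reduces to:
complex_claim = see_car_in_front_of_house or see_car_in_garage

# The 'if ... then ...' connective is material implication:
def implies(p: bool, q: bool) -> bool:
    return (not p) or q

# "If you look in front of the house, then you will see my car":
looked_in_front_of_house = True
conditional_claim = implies(looked_in_front_of_house, see_car_in_front_of_house)

print(complex_claim, conditional_claim)  # True True
```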
Now reductionism is primarily about language, but how we deal with language reflects our
understanding of reality. So reductionism influences the approach that we take to complex
entities and events. There are two ways of examining these:
1 A reductionist approach sees ‘reality’ in the smallest component parts of any complex
entity (e.g. you are ‘nothing but’ the atoms of which you are made up).
2 A holistic view locates the reality in the complex entity itself, rather than in its parts
(e.g. you understand a game of chess in terms of the overall strategy, rather than the way in
which individual pieces are being moved).
Science operates in both ways. On the one hand, it can analyse complex entities into their
constitutive parts and, on the other, it can explore how individual things work together in
ways that depend on their complex patterning.
For example, in the early days of computing, every command had to be learned and typed in
order to get a program to work. In basic word processing, one needed to remember the code
for ‘bold’ and enter that before and after the word to be emboldened in the text. In a modern
word processor, one simply highlights the text and clicks on a button labelled ‘bold’. The
more complex the processor, the simpler the action required.
We all know that, beneath the apparently instinctive operations of programs, there is a level
of code in which everything is reduced to simple bits of information in the form of ‘0’s and
‘1’s. The letter you have typed, or the design you have drawn, is ‘nothing but’ those bits of
information – there is nothing in the computer memory to represent it other than such strings
of machine code. Yet what you see in the design you have drawn, or mean by what you have
written, is of a different order of significance from the basic code into which the computer
reduces it.
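The ‘nothing but bits’ point can be made concrete with a minimal sketch, using Python’s built-in character encoding for the letter ‘A’:

```python
# A typed letter exists in computer memory only as a pattern of bits.
letter = "A"
bits = format(ord(letter), "08b")  # 8-bit binary form of the character code
print(bits)  # 01000001

# The same bits, reinterpreted, give back the letter:
recovered = chr(int(bits, 2))
print(recovered)  # A
```

The string of 0s and 1s is all that is stored; what the letter means to the reader is, as the text puts it, of a different order of significance from the code itself.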
In theory, a perfectly programmed computer would respond automatically and one need
never be aware of the program, the commands or the machine code. One would simply
express oneself and it would happen – the software would have become transparent.
Perhaps that is what happens with the human brain. It is so complex that there is no
opportunity to examine the firing of individual neurones – it just ‘thinks’. That does not
mean that the thinking takes place in some other location – that there is some secret ‘ghostly’
  • 1. OHIO UNIVERSITY HONG KONG PROGRAMME PHIL 216: Philosophy of Science Survey (3) (2H) Instructor: Dr. Giuseppe Mario Saccone LECTURE 1: The relationship between science and philosophy What is science? Science is the study of the nature and behaviour of natural things and the knowledge that we obtain about them through observation and experiments. The aim of science is the discovery of general truths. Individual facts are critical of course; science is built up with facts as a house may be built up with stones. But a mere collection of facts no more constitutes a science than a collection of stones constitutes a house. Scientists seek to understand phenomena, and to do this they seek to uncover the patterns in which phenomena occur, and the systematic relations among them. Simply to know the facts is not enough; it is the task of science to explain them. This, in turn, requires the theories of which Einstein wrote, incorporating the natural laws that govern events, and the principles that underlie them. So a scientific explanation is a theoretical account of some fact or event, always subject to revision, that exhibits certain essential features: relevance, compatibility with previously well-established hypothesis, predictive power, and simplicity. And the scientific method is a set of techniques for solving problems involving the construction of preliminary hypotheses, the testing of the consequences deduced, and the application of the theory thus confirmed to further problems. In this way, the principle that the meaning of statements should be backed up by evidence reflects the scientific approach. What is philosophy? Philosophy literally means love of wisdom, the Greek words philia meaning love or friendship, and Sophia meaning wisdom. Philosophy is concerned basically with three areas: epistemology (the study of knowledge), metaphysics (the study of the nature of reality), and ethics (the study of morality). 
Philosophy may be regarded as a search for wisdom and understanding and it is an evaluative discipline that in the course of time has started to be seen as becoming more and more concerned with evaluating theories about facts than with being concerned with facts in themselves. In this sense, philosophy may be regarded as a second order discipline, in contrast to first order disciplines which deal with empirical subjects. In other words, philosophy is not so much concerned with revealing truth in the manner of science, as with asking secondary questions about how knowledge is acquired and about how understanding is expressed. Unlike the sciences, philosophy does not discover new empirical facts, but instead reflects on the facts we are already familiar with, or those given to us by the empirical sciences, to see what they lead to and how they all hang together, and in doing that philosophy tries to discover the most fundamental, underlying principles. What is philosophy of science? Generally speaking, the philosophy of science is that branch of philosophy that examines the methods used by science (e.g. the ways in which hypotheses and laws are formulated from evidence) and the grounds on which scientific claims about the world may be justified. 1
  • 2. Whereas scientists tend to become more and more specialised in their interests, philosophers generally stand back from the details of particular research programmes and concentrate on making sense of the overall principles and establishing how they relate together to give an overall view of the world. The key feature of much philosophy of science concerns the nature of scientific theories – how it is that we can move from observation of natural phenomena to producing general statements about the world. And, of course, the crucial questions here concern the criteria by which one can say that a theory is correct, how one can judge between different theories that purport to explain the same phenomenon and how theories develop and change as science progresses. And once we start to look at theories, we are dealing with all the usual philosophical problems of language and what and how we can know that something is the case. Thus philosophy of science relates to other major areas of philosophy: metaphysics (the structures of reality), epistemology (the theory of knowledge) and language (in order to explore the nature of scientific claims and the logic by which they are formulated). In doing this, it is intended that the philosophy of science should not act as some kind of intellectual policeman, but should play an active part in assisting science by clarifying the implications of its practise. The relationship between science and philosophy There are at least three different ways in which we can think of the relationship between philosophy and science: 1 Science and philosophy can be seen as dealing with different subject matter. Science gives information about the world; philosophy deals with norms, values and meanings. Philosophy can clarify the language science uses to make its claims, can check the logic by which those claims are justified and can explore the implications of the scientific enterprise. 
This has been a widely held view and it gives philosophy and science very different roles. 2 It can bee argued that you cannot draw a clear distinction between statements about fact (‘synthetic’ statements, about which science has its say) and statements about meaning (‘analytic’ statements, which philosophy can show to be true by definition). Statements about meaning may often be reduced to the ‘naming’ of things and do not make sense without some reference to the external world. So, philosophy may be an extension of the scientific approach, dealing with questions about reality based on the finding of science. Science is full of concepts and these may be revised or explained in different ways. Science is not simply the reporting of facts, but the arguing out of theories; hence we should not expect to draw a clear line between science and philosophy. (This view was developed by the modern American philosopher W.V. Quine in an important article, published in 1951, entitled ‘The two dogmas of empiricism’. 3 Philosophy can describe reality, and can come to non-scientific truths about the way the world is. These truth do not depend on science, but are equally valid. (This reflects an approach taken by philosophers who are particularly concerned with the nature of language and how it relates to experience and empirical data, including Moore, Wittgenstein, Austin, Strawson and Searle.) The key question here are: 1 Are there aspects of reality with which science cannot deal, but philosophy can? 2 If philosophy and science deal with the same subject matter, in what way does philosophy add to what science is able to tell us? 2
  • 3. And then, of course, one could go on to ask if you can actually do science without having some sort of philosophy. Is physics possible without metaphysics of some sort or language or logic or all the concepts and presuppositions of the language that the scientist uses to explain what he or she finds? Science can never be absolutely ‘pure’. It can never claim to be totally free from the influences of thought, language and culture within which it takes place. In fact, science cannot even be free from economic and political structures. If a scientist wants funding for his or her research, it is necessary to show that it has some value, that it is in response to some need or that it may potentially give economic benefit. So the philosophy of science needs to be aware of, and point to, those influences. Scientific evidence or theories are seldom unambiguous; and those who fund research do so with specific questions and goals in mind, goals that cannot but influence the way in which that research is conducted. But apart from all this, there is the more general function of philosophy, which is to analyse and clarify concepts, to examine ways of argument and to show the presuppositions, logic and validity of arguments. This is what philosophy does within any sphere – whether we are considering the philosophy of mind, religion or language. The main point of issue is whether philosophy also contribute directly to the knowledge of reality. For some time, during the middle years of the 20th century, it was assumed that the principal – indeed the only – role of philosophy was clarification. Since then there has been a broadening out of its function. Topics - Explain, in your own words, what is science, what is philosophy, what is philosophy of science. - Discuss the relationship and the reciprocal roles of science and philosophy of science. 3
  • 4. LECTURE 2: Science as an intellectual activity There is no institutions in the modern world more prestigious then science. Objections to science and scientific research tend to be partial, to some aspects of the application of scientific knowledge, leaving unquestioned most of its applications. They also tend to be (in the bad sense) theoretical, affecting the way people talk rather than the way they actually live. At least on the face of it, to some significant degree, science does cut through political ideology, because its theories are about nature, and made true or false by a non partisan nature, whatever the race or beliefs of their inventor, and however they conform or fail to conform to political or religious opinion. In a world in which technological success is crucial to any regime, no sane leader is going to jeopardize his or her chances by openly interfering, expect in some cases to be seen as exceptions rather than normal practice, with scientific research or its applications on ideological grounds. However, not everything one finds in writings critical of ‘science’ or ‘the scientific mentality’ is completely misguided. There are certainly areas of human life – the most important areas, in fact – about which science as such can have nothing to tell us, and where the application of methods analogous to those of science can only be harmful. But because of the importance of science and of these questions it is important to be balanced and honest in what one says about science, and to recognize both our dependence on it and its very real intellectual and moral merits. For if true knowledge is growing in science, this means that the theories of science must be giving us more and more truths about the world. Growth of Knowledge In a perfectly obvious sense, over the last four hundred years or so there has been progress in science. 
Measurement of physical quantities becomes more precise, previously unknown particles and substances are discovered, new effects are produced and applied. There is a striking contrast here between the development of modern science and the arts. No one would say of a work of music or literature that it was better than an earlier work just because it was later. In contrast to the development of theories in modern science, a later masterpiece in a given artistic genre is not thereby better than an earlier one, nor does it necessarily attain the aim of the genre better, or anything of that sort. The case with scientific theories, though, is quite different. Here we are able to specify a clear target at which all theories aim, and we often have confidence that theory A has got closer to the target than theory B. The aim might be characterized as discovering the truth about the natural world, and when we have theories which aim to describe the bits of the natural world we can often say that a later theory is better than earlier. Thus, Copernicus’s heliocentric picture of the universe was better than Aristotle’s geocentric picture, and Newton provided a better account of the solar system and the universe than either. Moreover, we can say that in literature we know far more than our predecessors because what we know is their work. Growth of knowledge in science is not at all like that. Most workers in a scientific field do not know the history of their field in any depth or detail. They do not have to know it, because the history of science will consist largely of theories that have been discarded, and which are regarded as giving far less true information about the world than their successors. The case is quite different with works of art and literature. Past writers are part of the soil and tradition in which we live, and we deepen and refresh our understanding both of ourselves and of art by returning to them and deepening our acquaintance with them. 4
  • 5. Objectivity and the external world The reason why in doing science have no need to return to past science is because the theories of science are not about human endeavour or human expressiveness. Human self- expression and understanding is a cumulative, historical process in which where we are now and what we now think of ourselves is rooted in the forms of life and expression developed in the past, and will always involve some coming to terms with our history and our past. But a scientific theory will, by contrast, be dealing with a world independent of human history and human intervention. The truths science attempts to reveal about atoms and the solar system and even about microbes and bacteria would still be true even if human beings had never existed. As we have noted, it is a humanly impartial a-historical nature that decrees the truth or falsity of scientific theories, and it does so without regard to religious or political rectitude. This brings us to one of the distinctive features of scientific activity, which morally and humanly is one of its great strengths. The impartiality of nature to our feelings, beliefs, and desires means that the work of testing and developing scientific theories is insensitive to the ideological background of individual scientists. A scientific theory will characteristically attempt to explain some natural phenomena by producing some general formula or theory covering all the phenomena of that particular type. From this general formula, it will be possible to predict how future phenomena in the class in question will turn out. Whether they do or not will depend on nature rather than on men, and any scientist can observe whether they do or not, regardless of his other beliefs. 
The case is quite otherwise with some of the grand theories of psychology and the social sciences, where critics are sometimes told that their criticisms are invalid because their observations are distorted by their being sexually repressed (as in the case of Freudianism) or because they are not identifying themselves with the proletariat (as in the case of Marxism). But, because of the nature of the enterprise, the scientific community is non-sectarian and its works cuts across all sorts of human divisions. There is no such thing as British science, or Catholic science, or Communist science, though there are Britons, Catholics, and Communists who are scientists, and who should, as scientists, be able to communicate fully with each other. The ideological or religious background of a scientist becomes important only when, as with a doctrinaire Marxist-Leninist like Lysenko or some fundamentalist Christians, non-scientific beliefs make disinterested scientific enquiry impossible. Prediction and Explanation It is often pointed out that the theories of science characteristically take the form of general mathematical formulae covering a particular range of types of event, from which it is possible to deduce predictions of specific events. Newton’s laws, for example, give general formulae concerning the motions of mutual attraction and repulsion of heavy bodies, from which we can predict such things as solar eclipses. From the standpoint of modern science, there is a close connection between the notions of prediction and explanation. If you can produce general formulae allowing you to make mathematically precise predictions of a class of specific states of affairs, you will generally have gone a good way to providing an explanation of those states of affairs. 
One reason for not saying here that we have always gone some way to producing an explanation when we are able to make predictions on the basis of general formulae is that there are cases discussed in the philosophical literature in which one is enabled to produce a precise prediction of states of affairs on the basis of a general theory without – it is alleged – being tempted to say that one has any sort of explanation before one. Thus, for example, by invoking Pythagoras’s theorem, one can predict the distance of a mouse from an owl, when all we know is that the mouse is four feet from a three-foot flag-pole on top of which is an owl; but, it is said, one would not want to say that the theorem explained the distance
of the mouse from the owl. Against this example it might be said that there was no genuine prediction here, in the sense of an inference from a past state of affairs to a future one, as opposed to a move from a state of past ignorance to one of future knowledge. It is not clear, though, that all scientific explanations do involve predictions from past states of affairs to future ones, rather than predictions about what one will find on the basis of existing knowledge, for this latter type of reasoning is involved when people deduce conclusions concerning the nature of the big bang from their cosmological theories and their knowledge of the current state of the universe. The predictions by which one tests such speculation may well be predictions about what one will find when one probes traces of past events. However, given that we are prepared to work with a concept of prediction which is wide enough to encompass the prediction and discovery of as yet unknown facts, including facts about the past, it is certainly the case that we now expect scientific explanations to have predictive power. We can say this even though there may be cases, like that involving Pythagoras’s theorem, when we can make predictions, or at least deduce as yet unknown facts, on the basis of general theories, without wanting to speak of an explanation of those facts. The reason why many criticize Freudians and Marxists for being unscientific is precisely that their theories either lead to no specific predictions at all or to predictions that are false. Making predictions on the basis of one’s theories is, then, a necessary but not sufficient condition for a genuine scientific explanation. The notion of a scientific explanation was not always linked so closely to its mathematical and predictive power.
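The owl-and-mouse case is simple enough to check directly. A minimal sketch in Python (the numbers come from the example itself; the variable names are ours):

```python
import math

# Geometry from the example: an owl sits on top of a three-foot
# flag-pole, and a mouse is four feet from the pole's base.
pole_height = 3.0    # feet
mouse_to_base = 4.0  # feet

# Pythagoras's theorem "predicts" the owl-to-mouse distance,
# without, the objection runs, explaining anything.
distance = math.sqrt(pole_height**2 + mouse_to_base**2)
print(distance)  # 5.0
```

The deduction is precise and correct, which is exactly what makes the example awkward: precision of prediction alone does not amount to explanation.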
In the science associated with Aristotle and his followers, giving an explanation of a phenomenon consisted in delineating its essence, or essential properties, and in showing why, in order to fulfil its function or nature, it had to have those properties. Fire rose, for example, in order that it should reach its natural resting-place, which was taken to be a spherical shell just inside the orbit of the moon. The essence of fire, being a light body, was to rise. It does so in order to fulfil its nature. From the modern scientific viewpoint there are at least two things wrong with this ‘essentialist’ type of explanation. In the first place, we have no justification for imputing purposes to natural phenomena like fire or planets or heavy bodies. Their activity is conditioned by the forces that act upon them, their underlying structure, and the interaction of the two. They do not have any ulterior purposes, or essential nature they are trying to fulfil. Secondly, there is nothing in a typical Aristotelian explanation about precise quantities or measurements. They give us reasons (of a sort) for why things happen, but not the precise amounts or distances or times involved. And these precise measurements are crucial for modern science, because they are required for the formulation and application of its theories. It is easy to see why the shift occurred from Aristotelian essentialist explanations to the mathematical-predictive explanations of modern science. If you want to control and manipulate phenomena, then what you need to know are the precise conditions in which effects of a given sort occur. If you are working with a piece of metal, you want to know just how much it will expand under given degrees of heat. You do not want to be told that its expansion is due to the fact that it has to expand in order to fulfil its nature.
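The contrast can be made concrete. A modern, quantitative answer to the question about the metal bar looks something like the following sketch, using the standard linear-expansion formula ΔL = αL₀ΔT (the coefficient below is an illustrative round figure for a steel-like metal, not an authoritative materials constant):

```python
# Quantitative prediction of how much a metal bar expands when heated:
# delta_L = alpha * L0 * delta_T (linear thermal expansion).
alpha = 12e-6    # expansion per degree Celsius (illustrative value)
length_0 = 2.0   # initial length of the bar in metres
delta_T = 50.0   # temperature rise in degrees Celsius

delta_L = alpha * length_0 * delta_T
print(delta_L)   # 0.0012 metres, i.e. 1.2 mm
```

The Aristotelian account gives a reason of a sort; the formula gives the precise amount, which is what control and manipulation of nature require.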
And, as we shall see in the next chapter, modern science is very much about controlling nature, hence its tendency to elide prediction and explanation, and the reason why its predictions will characteristically, if not universally, be predictions about states of affairs which have not yet happened. Yet, even at this point, one might feel that there is something to be said for a more meaty type of explanation than appears to be given in simply producing formulae for prediction.
Newton himself gave expression to this feeling when, at the end of his Mathematical Principles, he said that while he had demonstrated the reality of gravity and its effects – by precise mathematical methods, we would stress – he had not yet been able to explain the cause of these effects. It is as if a purely mathematical correlation of events, saying, for example, that the gravitational force on such and such an object will be so and so in such and such circumstances, stays too much on the surface of things, and fails to give us insight into the underlying structure of gravitational phenomena or of the essence of gravity. In other words, we can say that while it needs to be seriously considered whether a full scientific explanation is more than a device for predicting effects in the natural world, there are at the very least very convincing reasons for thinking that it must be at least that.

Topic
Explain in your own words how science is to be understood as an intellectual activity.
Compare the development of modern science and the arts. What do you think about it?
How do you understand the relationship between prediction and explanation of phenomena in science?
Compare modern and Aristotelian science. What are the differences and similarities between the two?
LECTURE 3: Induction

When Galileo argued in favour of the Copernican view of the universe, in which the Earth revolved around the Sun rather than vice versa, his work was challenged by the more conservative thinkers of his day, not because his observations or calculations were found to be wrong, but because his argument was based on those observations and calculations, rather than on a theoretical understanding of the principles that should govern a perfectly ordered universe. Galileo struggled against a background of religious authority which gave Aristotelian ideas of perfection and purpose priority over observations and experimental evidence. He performed experiments to show that Aristotelian theory was wrong. In other words, the earlier medieval system of thought was deductive – it deduced what should happen from its ideas, in contrast to Galileo’s inductive method of getting to a theory from observations, experiments and calculations. This inductive method is a key feature in the establishment of the scientific method of gaining knowledge. The other key difference between the experiments and observations carried out by Galileo and the older Aristotelian view of reality was that Galileo simply looked at what happened, not at why it happened. Most significantly, Aristotle argued that a thing had four different causes:
1 Its material cause is the physical substance out of which it is made.
2 The formal cause is its nature, shape or design – that which distinguishes a statue from the material block of marble from which it has been sculpted.
3 Its efficient cause is that which brought it about – our usual sense of the word ‘cause’.
4 Its final cause is its purpose or intention.
For Aristotle, all four causes were needed for a full description of an object.
It was not enough to say how it worked and what it was made of, but one needed also to give it some purpose or meaning – not just ‘What is it?’ or ‘What is it like?’ but also ‘What brought it about?’ and ‘What is it for?’ For Aristotle, everything had a potential and a goal: its ‘final cause’. Broadly, this implied that things had a purpose, related to the future, rather than being totally defined by the efficient causes that brought them about in the first place. There are two important things to recognize in terms of Aristotle and the development of science. First, his authority was such that it was very difficult to challenge his views, as we see happening later in connection with the work of Copernicus and Galileo. But second, the development of modern science, from about the 17th century onwards, was based on a view of the nature of reality in which efficient causation dominated over final causation. In other words, the Aristotelian world where things happened in order to achieve a goal was replaced by one in which they happened because they were part of a mechanism that determined their every move. The shift is clear and very important for the philosophy of science. This was also a key feature of the work of Francis Bacon, who rejected Aristotle’s idea of final causes and insisted that knowledge should be based on evidence. So Bacon in the Novum Organum tells us that we should get rid of four types of ‘idols’ which have dominated and distorted men’s minds, delaying the acquisition of true knowledge:
1 The idols of habit (i.e., idols of the tribe, or the tendency to see things in relation to us rather than as they are in themselves: for Bacon, man is definitely not the measure of all things, and we unthinkingly tend to impose on phenomena an order which is not there, not realizing that if we would command nature, we must first learn to obey her);
2 The idols of prejudice (or idols of the cave, which are the predispositions of character and learning with which different individuals approach the facts, rather than seeing them as they really are);
3 The idols of conformity, or the idols of the market, which arise through the use of language, as when we read back into nature conceptions which have arisen simply through using words which actually stand for nothing (such as ‘Fortune, the Prime Mover, Planetary Orbits, the Element of Fire, and like fictions’);
4 The idols of the theatre, which are due to the malign influence of philosophical systems on our minds, which make people come to conclusions before they consult experience; and when finally they do consult experience, after having first determined the question according to their will, they resort to bending her into conformity with their decisions and axioms, so that they lead her about like a captive in a procession.
Bacon’s insistence that one should accept evidence even where it did not conform to one’s expectations marks a clear shift to what became established as scientific method.

Experience and knowledge

A crucial step in appreciating scientific method comes with recognizing, and attempting to eliminate, those elements in what we see that come from our ways of seeing, rather than from the external reality we are looking at. The philosopher John Locke (1632-1704) argued that everything we know derives from sense experience. When we perceive an object, we describe it as having certain qualities. Locke divided these qualities into two categories:
Primary qualities belonged to the object itself and included its location, its dimensions and its mass. He considered that these would remain true for the object no matter who perceived it.
Secondary qualities depended upon the sense faculties of the person perceiving the object and could vary with circumstances.
Thus, for example, the ability to perceive colour, smell and sound depends upon our senses; if the light changes, we see things as having a different colour. [So ‘qualia’ is the term nowadays used for the basic ‘phenomenal qualities’ of experience – the taste of something, its colour or texture, the sound of a piece of music, i.e., the experience of something as having a particular colour, texture, sound. In many ways they are the building blocks of mental life – the simple elements of experience. However, it is very difficult to explain qualia, except in terms of other qualia or subjective experience as a whole. Why should it be that photons entering the eyeball cause me to see this particular thing? What is the relationship between the information reaching my brain and the experience of seeing? Qualia cause a problem for the functionalist approach to mind which stems from positivism. If the mind is simply a processor that receives inputs and decides the appropriate responses (which is, crudely put, what functionalism claims), then how do we account for this whole ‘qualia’ level of conscious experience? Qualia do not appear as a function, but neither are they physical.] Science was therefore concerned with primary qualities. These it could measure, and they seemed to be objective, as opposed to the more subjective secondary ones.
Imagine how different the world would be if examined only in terms of primary qualities. Rather than colours, sounds and tastes, you would have information about dimensions. Music would be a digital sequence or the pulsing of sound waves in the air. A sunset would be information about wavelengths of light and the composition of the atmosphere. In general, science deals with primary qualities. The personal encounter with the world, taking in a multitude of experiences simultaneously, mixing personal interpretation and the limitations of sense experience with whatever is encountered as external to the self, is the stuff of the arts, not of science. Science analyses, eliminates the irrelevant and the personal and finds the relationship between the primary qualities of objects. Setting aside the dominance of secondary qualities in experience, along with any sense of purpose or goal, was essential for the development of scientific method – but it was not an easy step to take. The mechanical world of Newtonian physics was a rather abstract and dull place – far removed from the confusing richness of personal experience. One thing that becomes clear the more we look at the way in which information is gathered and the words and images used to describe it is that there will always be a gap between reality and description. Just as the world changes depending on whether we are mainly concerned with primary or secondary qualities, so the pictures and models we use to describe it cannot really be said to be ‘true’, simply because there is no way to make a direct comparison between the model and the reality to which it points. Our experience cannot be unambiguous, because it depends on so many personal factors. Scientific method developed in order to eliminate those personal factors and therefore to achieve knowledge based simply on reason and evidence.
The recognition that we cannot simply observe and describe came to the fore in the 20th century, particularly in terms of sub-atomic physics. It seemed impossible to disentangle what was seen from the action of seeing it. The philosopher David Hume (1711-1776) pointed out that scientific laws were only summaries of what had been experienced so far. The more evidence confirmed them, the greater their degree of probability, but no amount of evidence could lead to the claim of absolute certainty. He argued that the wise man should always proportion his belief to the evidence available; the more evidence in favour of something (or balanced in favour, where there are examples to the contrary) the more likely it is to be true. He also pointed out that, in assessing evidence, one should take into account the reliability of witnesses and whether they had a particular interest in the evidence they give. Like Francis Bacon, therefore, Hume sets out basic rules for the assessment of evidence, with the attempt to remove all subjective factors or partiality and to achieve as objective a review of evidence as is possible. What Hume established (in his Enquiry Concerning Human Understanding, section 4) was that no amount of evidence could, through the logic of induction, ever establish the absolute validity of a claim. There is always scope for a counter-example, and therefore for the ‘law’ to fail. This seemed to raise the most profound problems for science – since it cut away its most sure foundations in experimental method. With hindsight, that might seem a very reasonable conclusion to draw from the process of gathering scientific evidence, but in Hume’s day – when scientific method was sought as something of a replacement for Aristotle in terms of a certainty in life – it was radical. It was this apparent attack on the rational justification of scientific theories that later ‘awoke’ the philosopher Kant from his dogmatic slumbers. He accepted the force of Hume’s challenge,
but could not bring himself to deny the towering achievements of Newton’s physics, which appeared to spring from the certainty of established laws of nature. It drove Kant to the conclusion that the certainty we see in the structures of nature (time, space and causality) is there because our minds impose such categories on our experience. In this way, Hume’s challenge, set alongside the manifest success of the scientific method, led to the conclusion that the process of examining the world is one that involves the necessary limitations of the structures of human reason. This is the way we see the world – and it works! That doesn’t mean that we can know anything with absolute certainty; and it doesn’t mean that ours is the only way of experiencing it. For Kant, we know only the world of phenomena. What things are in themselves (noumena) is hidden from us. In many ways, this continues to be the case. I cannot know an electron as it is in itself, but only as it appears to me through the various models or images by which I try to understand things at the sub-atomic level. I may understand something in a way that is useful to me, but that does not mean that my understanding is – or can ever be – definitive. The early 20th century philosophical movement called logical positivism, whose view of language and meaning was greatly influenced by scientific method, argued for using empirical evidence as the criterion of meaning: in other words, the meaning of a statement was identical to its method of verification. It made the limitations about certainty, as suggested by Hume, the norm for all statements that were not definitions or matters of logic or mathematics (known to be true ‘a priori’), but depended on evidence (therefore known to be true only ‘a posteriori’). In an example in his Problems of Philosophy (1952), Bertrand Russell gives a characteristically clear and entertaining account of the problem of induction.
Having explained that we tend to assume that what has always been experienced in the past will continue to be the case in the future, he introduces the example of the chicken which, having been fed regularly every morning, anticipates that this will continue to happen in the future. But, of course, this need not be so: “The man who has fed the chicken every day throughout its life at last wrings its neck instead, showing that more refined views as to the uniformity of nature would have been useful to the chicken.” (Bertrand Russell, Problems of Philosophy, p. 35) From what we have said so far, it is clear that the inductive approach to knowledge can yield no more than a very high degree of probability. There is always going to be the chance that some new evidence will show that the original hypothesis, upon which a theory is based, was wrong. Most likely, it will be shown that the theory only applies within a limited field and that in some unusual sets of circumstances it breaks down. Even if it is never disproved, or shown to be limited in this way, a scientific theory that has been developed using this inductive method is always going to be open to the possibility of being proved wrong. Without that possibility, it is not scientific. A particular feature of scientific theories is that it should be possible, in theory, to falsify them. If they cannot be falsified – in other words, if there is no possible evidence that could ever prove them wrong – they are deemed worthless. This is because theories are used to predict events and if they argue that absolutely anything is predictable, then they have nothing to contribute. This was the basis of Karl Popper’s criticism of both Marxist and Freudian thinking, arguing that an irrefutable theory cannot be scientific.

Topic
Discuss inductive method and inductive proof and the fundamental challenges to each of the two.
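One classical way of making vivid the point that induction yields high probability but never certainty is Laplace’s ‘rule of succession’, which assigns probability (n + 1)/(n + 2) to the next observation resembling the previous n uniform ones. A sketch in Python (the chicken’s mornings stand in for observations; the rule is a standard result, though the illustration is ours, not Russell’s):

```python
from fractions import Fraction

def rule_of_succession(n):
    """Laplace's estimate that the next observation will match the
    previous n uniform ones: (n + 1) / (n + 2)."""
    return Fraction(n + 1, n + 2)

# The chicken's rational confidence after 100, then 1000, mornings of feeding:
print(rule_of_succession(100))   # 101/102
print(rule_of_succession(1000))  # 1001/1002
# However many mornings pass, the probability approaches but never reaches 1.
```

However large n grows, the value stays strictly below 1, which is exactly the logical gap the wringing of the chicken’s neck exploits.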
LECTURE 4: Falsification

Karl Popper (1902-1994) was a philosopher from Vienna who, following some years in New Zealand, settled in London in 1945, where he became Professor of Logic and Scientific Method at the London School of Economics. He made significant contributions to political philosophy as well as the philosophy of science. Popper’s theory of falsification, although important for the philosophy of science, has much wider application. In the 1920s and 1930s, logical positivists were arguing that statements only had meaning if they could be verified by sense data. In other words, if you could not give evidence for a statement, or say what could count for or against it, then it was meaningless. (The exception, of course, being statements of logic or mathematics, where the meaning is already contained within the definition of the words used. You don’t have to go out and point to things in order to show that 2 + 2 = 4.) In The Logic of Scientific Discovery (1934, translated in 1959) Popper argued that one could not prove a scientific theory to be true by adding new confirming evidence. Contrariwise, if some piece of sound evidence goes against a theory, that may be enough to show that the theory is false. He therefore pointed out that a scientific theory could not be compatible with all possible evidence. If it is to be scientific, then it must be possible, in theory, for it to be falsified. In practice, of course, a theory is not automatically discarded as soon as one piece of contrary evidence is produced, because it might be equally possible that the evidence is at fault. As with all experimental evidence, a scientist tries to reproduce this contrary evidence, to show that it was not a freak result, but a genuine indication of something for which the original theory cannot account.
At the same time, scientists are likely to consider any alternative theories that can account for both the originally confirming evidence and the new, conflicting evidence as well. In other words, progress comes by way of finding the limitations of existing scientific theories. A key feature of Popper’s claim here is that scientific laws always go beyond experimental data and experience. The inductive method attempted to show that, by building up a body of data, inferences can be made to give laws that are regarded as certain, rather than probable. Popper challenges this on the ground that all sensation involves interpretation of some sort, and that in any series of experiments there will be variations, and whether or not such variations are taken into account is down to the presuppositions of the person conducting them. Also, of course, the number of experiments done is always finite, whereas the number of experiments not yet done is infinite. Thus inductive arguments can never achieve the absolute certainty of a piece of deductive logic. What was essential, for Popper, was to be able to say what would falsify a claim. If nothing could be allowed to falsify it, it could not have significant content. Thus he held that all genuine scientific theories had to be logically self-consistent and also capable of falsification. No scientific theory can be compatible with all logically possible evidence. An irrefutable theory is not scientific. In particular, Popper’s view challenges two popular philosophical ideas:
1 Locke’s idea that the mind is a tabula rasa until it receives experience.
2 Wittgenstein’s idea, propounded in the Tractatus, that the task of language is to provide an image of the external world.
Instead, he saw minds as having a creative role vis-à-vis experience. In the scientific realm this means that progress is made when a person makes a creative leap to put forward an hypothesis that goes beyond what can be known through experience.
It does not progress gradually by adding up additional information to confirm what is already known, but by moving speculatively into the unknown, and testing out hypotheses, probing their weak points and modifying them accordingly.
This view of scientific work parallels the general situation of human thought, for Popper saw all of human intelligence in terms of the constant solving of problems – that is simply the way the mind works. In effect, the goal of science is therefore to produce statements which are high in information content and low in probability of being true (since the more information contained, the greater the chance of finding a proposition to be false), but which actually come close to the truth. It would, of course, be easy to find a statement that need never fear being refuted (e.g. ‘The sun will rise tomorrow’), but it offers so little information content that it is difficult to see how it can be of much practical use. His approach to scientific method was therefore as follows:
1 Be aware of the problem (e.g. the failure of an earlier theory).
2 Propose a solution (i.e. a new theory).
3 Deduce testable propositions from that theory.
4 Establish a preference among competing theories.
This means that on Popper’s theory no scientific law can ever be proved; it can, at best, be given only a high degree of probability. There must always remain the possibility that a piece of evidence will one day be found to prove it wrong. Therefore, in terms of the results of scientific work, he observes that everything is already ‘theory soaked’. Everything is a matter of using and modifying theories: the basic form of intellectual work is problem solving. For Popper, the ideal is a theory which gives the maximum amount of information and which therefore has quite a low level of probability, but which nevertheless comes close to the truth. Such a theory may eventually be refuted, but it will be extremely useful, because its content will allow many things to be deduced from it. In other words, to take the opposite extreme, a theory that says nothing about anything is not going to be proved wrong, but neither is it going to be of any use! In general, science works by means of experiments.
Results are presented along with detailed information about the experimental methods by which they were obtained. The task of those who wish to examine the results is to repeat the experiments and see if they produce identical results. Now, as it goes on, a theory is going to predict facts, some of which will be verified, some of which will not. Where it has failed to predict correctly, there is a danger that the theory will therefore be falsified – that is the key to Popper’s approach. However, it is not quite that simple, for both Popper and Lakatos recognize that falsification and the discarding of a theory generally only take place once there is another theory ready to take its place. In other words, if there is another theory that can account for all that this theory can account for, and then go on to account for some situations that this theory is wrong about, then that other theory is to be preferred. Explanatory power is the ultimate criterion here. Thus it is possible, if an experiment seems to falsify a theory, that there is something wrong with the experiment, or that there is some other factor involved that was not considered before. One cannot simply throw out a theory at the first hint of falsification. By the same token, when that alternative theory becomes available, every occasion of falsification leads to a comparison between the two theories, and the one that is confirmed more broadly is the one to be accepted. A simplistic view of falsification is that a theory is to be discarded if it is not confirmed by experimental results.
A more sophisticated view is that a theory is discarded if it is not confirmed by experimental results and there is an alternative theory that can account for them. In practice, scientists learn from the failures of theories, for it is exactly at those points where existing theories are shown to be inadequate that the impetus to find a more comprehensive theory is born.

Topics
Explain the role of falsification in Popper’s view of science.
Give your own opinion about the role of falsification in Popper’s view of science.
Give your own opinion about how to solve the problem of the relation between truth and scientific theories.
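The asymmetry at the heart of falsification – no number of confirmations proves a universal claim, while a single counter-example refutes it – can be caricatured in a few lines (a sketch in Python; the white-swan hypothesis is the stock illustration of a universal generalization):

```python
def falsified(hypothesis, observations):
    """A universal hypothesis survives only until one counter-example;
    no number of mere confirmations proves it."""
    return any(not hypothesis(obs) for obs in observations)

all_swans_white = lambda swan: swan == "white"

# A thousand confirmations leave the hypothesis standing but unproved...
print(falsified(all_swans_white, ["white"] * 1000))              # False
# ...while a single black swan refutes it.
print(falsified(all_swans_white, ["white"] * 1000 + ["black"]))  # True
```

The sketch deliberately omits what the lecture stresses: in practice one first checks the counter-example for experimental error, and discards the theory only when a rival can account for both the old and the new evidence.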
LECTURE 5: Science and non-science

Science always requires a healthy measure of scepticism, a willingness to re-examine views in the light of new evidence and to strive for theories that depend on external facts that can be checked, rather than on the mind of the person framing them. It was the quest for objectivity, loyalty to the experimental method and a willingness to set aside established ideas in favour of reason and evidence, that characterized the work of Bacon and others. There were disagreements about the extent to which certainty was possible and some (e.g. Newton) were willing to accept ‘practical certainty’ even while recognizing that ‘absolute certainty’ was never going to be possible. In the 20th century there was considerable debate about the process by which inadequacies in a theory are examined, as we shall see in the next chapter, and the point at which the theory should be discarded. No scientific theory can be expected to last for all time. Theories may be falsified by new and contrary evidence (Popper) or be displaced when there is a general shift in the matrix of views in which they are embedded (Kuhn), and theories are seen as part of ongoing research programmes (Lakatos) based on problem solving. On this basis, we cannot say that genuine science is what is proved true for all time, whereas pseudo-science has been (or will be) proved false. After all, something that is believed for all the wrong reasons may eventually be proved correct and the most cherished theories in science can be displaced by others that are more effective. What distinguishes science from pseudo-science has to do with the nature of the claims that each makes and the methods each uses to establish them. One feature of modern philosophy of science that reflects this is probability theory. The improbable is more significant than the probable.
Thus, if an improbable event is predicted by a theory, and proves to be the case, then the theory is greatly strengthened by it. By way of contrast, something that is quite normal and expected to happen anyway is unlikely to be considered strong evidence for a theory which predicted it. In other words, for genuine science, there is always the attempt to balance the likelihood of something being the case against the other possibilities. If a person persists in the infuriating habit of claiming absolutely everything that he or she does as a great success, even if to the external observer it may appear a bit of a disaster, one might well ask ‘What would have to happen for it to be described as a failure?’ If absolutely nothing counts as a failure, then nothing merits the title ‘success’ either; both are rendered meaningless in terms of an objective or scientific approach, the claim of success simply reflecting the choice to see everything in that positive way. The claim to be scientific rests on the methods used in setting up appropriate experiments or in gathering relevant evidence, and also on the willingness to submit the results to scrutiny and to accept the possibility that they may be interpreted in more than one way. The distinction between science and pseudo-science is therefore essentially one of method, rather than content. A common feature of pseudo-science is the use of analogies or resemblances to suggest causal connections, but without being able to specify or give direct evidence for them. One popular example illustrates this. It has been suggested that the red colour of the planet Mars resembles blood and that the planet should therefore be associated with the coming of war and bloodshed. What is not clear is how that planet’s colour could have any possible connection with warlike tendencies among human beings on Earth.
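The point about improbable predictions can be given a precise, if simplified, form using Bayes’ theorem, P(H|E) = P(E|H)·P(H)/P(E): when a theory predicts E with certainty, the support E lends to the theory is larger the more improbable E was beforehand. A sketch with made-up numbers (all the figures below are purely illustrative):

```python
def posterior(prior, likelihood, evidence_prob):
    """Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / evidence_prob

prior = 0.1  # initial credence in the theory (made-up figure)

# The theory predicts E with certainty in both cases (P(E|H) = 1).
# An antecedently improbable prediction coming true confirms strongly...
surprising = posterior(prior, 1.0, 0.125)  # 0.8
# ...whereas an event expected to happen anyway barely confirms at all.
expected = posterior(prior, 1.0, 0.9)      # about 0.11
print(surprising, expected)
```

The same confirming observation raises the theory’s credibility from 0.1 to 0.8 when it was surprising, but only to about 0.11 when it was expected anyway.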
Kuhn’s position about science and non-science: paradigms and their overthrow
According to Thomas Kuhn, what is clear is that it is a mark of genuine science that problems with a theory are taken seriously and that, once those problems become
  • 16. overwhelming, an overall paradigm may need to be set aside in favour of one that succeeds in answering those problems. Thus his view of science is of periods of stability, punctuated by revolutions. A major feature of those approaches which we would not call scientific is that they are not open to the possibility of such revolutionary changes. If nothing is capable of changing one’s view, then that view is not scientific. But this should not be taken as a pejorative comment, as though only scientific views were worthwhile. There are many areas of life, for example in religion, art or relationships, in which it is perfectly valid to have a commitment and a particular chosen view which is not dependent on evidence. We simply need to accept that such things are not scientific and we should not attempt to justify them on a scientific basis. For Popper, theories are continually being tested and may be falsified at any time. For Kuhn, paradigms are not changed on the basis of reason alone, but in a moment of insight. Change is rare and sudden. For Lakatos, progress is made through research programmes, which allow peripheral theories to change, gradually influencing a ‘hard core’ of key theories for that particular programme.
Topics
What do you think of Popper’s distinction between science and non-science? Do you agree with him or not? To what extent and why?
What do you think of Kuhn’s scientific relativism? Do you think it provides a more accurate picture of the workings of science than Popper’s falsificationism?
Imre Lakatos, Falsification and the Methodology of Scientific Research Programmes, C.U.P., 1978.
  • 17. Lecture 6: Observation and theory
If anything has become a received idea in recent philosophy of science, it is the thesis that there is no sharp distinction in science between observation and theory; in other words, that there is no pure observational level in science which stands free of theoretical baggage. Some of the reasons for this view are good and some not so good. Our place in a certain niche of existence gives some point to a distinction between more and less theoretical levels of observation, and suggests why we do not just dogmatically ‘decide’ to accept some levels of observation as basic. We accept as basic those which relate to our lived experience of and interaction in the world, and this ramifies out into our sense that the theories of science, in so far as they are acceptable, will frequently have practical effects in the world of experience. And this basic level of observation, being related to our genetic inheritance, and our needs and interests as human beings, provides a common ground for communication between people from different cultural backgrounds. This sharing of sensory apparatus and of interests makes it most unlikely that human beings even from the most widely different cultures would be completely unable to communicate at the level of the observations relevant to basic survival. Even less, then, is it likely that there should be a Kuhnian breakdown of communication between people – such as modern Western scientists – engaged in recognizably the same enterprise. However, the testing of claims about unobservable things, states, events or processes is evidently a complicated affair. In fact, the more one considers how observations confirm hypotheses, and how complicated the matter is, the more one is struck by a certain inevitable and quite disturbing “under-determination”. 
A theory is alleged to be underdetermined by data in that for any body of observational data, even all the observational data, more than one theory can be constructed to systematize, predict and explain that data, so that no one theory’s truth is determined by the data. As we have sometimes noted, the official epistemology of modern science is empiricism, the doctrine that our knowledge is justified by experience – observation, data collection, experiment. The objectivity of science is held to rest on the role which experience plays in choosing between hypotheses. But if the simplest hypothesis comes face to face with experience only in combination with other hypotheses, then a negative test may be the fault of one of the accompanying assumptions; a positive test may reflect compensating mistakes in two or more of the hypotheses involved in the test that cancel one another out. Moreover, if two or more hypotheses are always required in any scientific test, then when a test-prediction is falsified there will always be two or more ways to “correct” the hypotheses under test. When the hypothesis under test is not a single statement like “all swans are white” but a system of highly theoretical claims like the kinetic theory of gases, it is open to the theorist to make one or more of a large number of changes in the theory in the light of a falsifying test, any one of which will reconcile the theory with the data. But the large number of changes possible introduces a degree of arbitrariness foreign to our picture of science. Start with a hypothesis constituting a theory that describes the behaviour of unobservable entities and their properties. Such a hypothesis can be reconciled with falsifying experience by making changes in it that cannot themselves be tested except through the same process all over again – one which allows for a large number of further changes in case of falsification. 
It thus becomes impossible to establish the correctness or even the reasonableness of one change over another. Two scientists beginning with the same theory, subjecting it to the same initial disconfirming test, and repeatedly “improving” their theories in the light of the same
  • 18. set of further tests, will almost certainly end up with completely different theories, both equally consistent with the data their tests have generated. Imagine, now, the “end of inquiry” when all the data on every subject are in. Can there still be two distinct, equally simple, elegant and otherwise satisfying theories, equally compatible with all the data and incompatible with one another? Given the empirical slack present even when all the evidence appears to be in, the answer seems to be that such a possibility cannot be ruled out. Since they are distinct theories, our two total “systems of the world” must be incompatible, and therefore cannot both be true. We can remain neither agnostic about which one is right nor ecumenical about embracing both. Yet it appears that observation would not be able to decide between these theories. In short, theory is underdetermined by observation. And yet science does not show the sort of proliferation of theory and the kind of unresolvable theoretical disputes that the possibility of this under-determination might lead us to expect. But the more we consider reasons why this sort of under-determination does not manifest itself, the more problematical becomes the notion that scientific theory is justified by objective methods that make experience the final court of appeal in the certification of knowledge. For what else besides the test of observation and experiment could account for the theoretical consensus characteristic of most natural sciences? Of course there are disagreements among theorists, sometimes very great ones, and yet over time these disagreements are settled, to almost universal satisfaction. If, owing to the ever-present possibility of under-determination, this theoretical consensus is not achieved through the “official” methods, how is it achieved? Well, besides the test of observation, theories are also judged on other criteria: simplicity, economy, consistency with other already adopted theories. 
A theory’s consistency with other already well-established theories confirms that theory only because observations have established the theories it is judged consistent with. Simplicity and economy in theories are themselves properties that we have observed nature to reflect and other well-confirmed theories to bear, and we are prepared to surrender them if and when they come into conflict with our observations and experiments. One alternative source of consensus that philosophers of science are disinclined to accept is the notion that theoretical developments are epistemically guided by non-experimental, non-observational considerations, such as a priori philosophical commitments, religious doctrines, political ideologies, aesthetic tastes, psychological dispositions, social forces or intellectual fashions. Such factors, we know, will make for consensus, but not necessarily one that reflects increasing approximation to the truth, or to objective knowledge. Indeed, these non-epistemic, non-scientific forces and factors are supposed to deform understanding and lead away from truth and knowledge. The fact remains that a steady commitment to empiricism coupled with a fair degree of consensus about the indispensability of scientific theorizing strongly suggests the possibility of a great deal of slack between theory and observation. But the apparent absence of the arbitrariness fostered by under-determination demands explanation. And if we are to retain our commitment to science’s status as knowledge par excellence, this explanation had better be one we can parlay into a justification of science’s objectivity as well. However, succeeding in doing this convincingly is a very difficult task and the jury is still out on whether it is possible at all. 
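The under-determination at work in this discussion can be exhibited in miniature. In the hypothetical sketch below (both "theories" are invented functions, standing in for rival hypotheses), two incompatible theories agree on every observation collected so far yet disagree about unobserved cases, so the data alone cannot choose between them.

```python
# Hypothetical illustration of under-determination: two incompatible
# "theories" that fit the same finite body of data exactly.

data = [(0, 0), (1, 1), (2, 2)]  # every observation gathered so far

def theory_a(x):
    return x  # "the quantity grows linearly"

def theory_b(x):
    # a cubic that takes the same values as theory_a at x = 0, 1, 2
    return x + x * (x - 1) * (x - 2)

# Both theories are consistent with all the data...
assert all(theory_a(x) == y for (x, y) in data)
assert all(theory_b(x) == y for (x, y) in data)

# ...but they make different predictions about the unobserved case x = 3:
print(theory_a(3), theory_b(3))  # 3 versus 9
```

Any finite data set admits infinitely many such rivals; what the passage above asks is why, in actual science, this formal possibility so rarely produces live theoretical disputes.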
Empiricism is the epistemology which has tried to make sense of the role of observation in the certification of scientific knowledge. Since the eighteenth century, if not before, British philosophers especially – Hobbes, Locke, Berkeley and Hume – have found inspiration in science’s successes for their philosophies, and sought philosophical arguments to ground science’s claims. In so doing, these philosophers and their successors set the agenda of the philosophy of science and revealed how complex is the apparently simple and straightforward relation between theory and evidence.
  • 19. In the twentieth century the successors of the British empiricists, the logical positivists or “logical empiricists” as some of them preferred, sought to combine the empiricist epistemology of their predecessors with advances in logic, probability theory and statistical inference, to complete the project initiated by Locke, Berkeley and Hume. What they found was that some of the problems seventeenth- and eighteenth-century empiricism uncovered were even more resistant to solution when formulated in updated logical and methodological terms. “Confirmation theory”, as this part of the philosophy of science came to be called, has greatly increased our understanding of the “logic” of confirmation, but has left as yet unsolved Hume’s problem of induction, the further problem of when evidence provides a positive instance of a hypothesis, and the “new riddle of induction” – Goodman’s puzzle of “grue” and “bleen”. Positivists and their successors have made the foundations of probability theory central to their conception of scientific testing. Obviously much formal hypothesis testing employs probability theory. One attractive late twentieth-century account that reflects this practice is known as Bayesianism. This view holds that scientific reasoning from evidence to theory proceeds in accordance with Bayes’ theorem about conditional probabilities, under a distinctive interpretation of the probabilities it employs. The Bayesians hold that scientists’ probabilities are subjective degrees of belief or acceptance of a claim – betting odds. By contrast with other interpretations, according to which probabilities are long-run relative frequencies, or distributions of actualities among all logical possibilities, this frankly psychological interpretation of probability is said to best fit the facts of scientific practice and its history. 
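The Bayesian picture of updating subjective degrees of belief can be sketched numerically. In this invented example, two scientists start from very different subjective priors about whether a coin is biased, yet repeated application of Bayes' theorem to the same run of evidence drives their credences together.

```python
# Invented example: Bayesian updating of a subjective degree of belief.
# Hypothesis H: the coin is biased towards heads (P(heads | H) = 0.8);
# otherwise it is fair (P(heads | not-H) = 0.5).

def update(prior, heads):
    """One application of Bayes' theorem after observing a toss."""
    like_h = 0.8 if heads else 0.2
    like_not_h = 0.5
    evidence = like_h * prior + like_not_h * (1 - prior)
    return like_h * prior / evidence

credence_a, credence_b = 0.01, 0.99   # wildly different starting priors

for _ in range(20):                   # the same evidence: 20 heads in a row
    credence_a = update(credence_a, True)
    credence_b = update(credence_b, True)

# Despite starting far apart, both credences have converged towards 1.
print(round(credence_a, 3), round(credence_b, 3))
```

This convergence under shared evidence is the Bayesian's standard reply to the charge that subjective starting points make the whole procedure arbitrary.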
The Bayesian responds to complaints about the subjective and arbitrary nature of the probability assignments it tolerates by arguing that, no matter where initial probability estimates start out, in the long run using Bayes’ theorem on all possible alternative hypotheses will result in their convergence on the most reasonable probability values, if there are such values. Bayesianism’s opponents demand that it substantiate the existence of such “most reasonable” values and show that all alternative hypotheses are being considered. To satisfy these demands would be tantamount to solving Hume’s problem of induction. Finally, Bayesianism has no clear answer to the problem which drew our attention to hypothesis-testing: the apparent tension between science’s need for theory and its reliance on observation. This tension expresses itself most pointedly in the problem of under-determination. Given the role of auxiliary hypotheses in any test of a theory, it follows that no single scientific claim meets experience for test by itself. It does so only in the company of other, perhaps large numbers of, hypotheses needed to effect the derivation of some observational prediction to be checked against experience. But this means that a disconfirmation test, in which expectations are not fulfilled, cannot point the finger of falsity at any one of these hypotheses, and that adjustments in more than one may be equally effective in reconciling the whole package of hypotheses to observation. As the size of a theory grows, and it encompasses more and more disparate phenomena, the alternative adjustments possible to preserve or improve it in the face of recalcitrant data increase. Might it be possible, at the never-actually-to-be-reached “end of inquiry”, when all the data are in, that there be two distinct total theories of the world, both equal in evidential support, simplicity, economy, symmetry, elegance, mathematical expression or any other desideratum of theory choice? 
A positive answer to the question may provide powerful support for an instrumentalist account of theories. For apparently there will be no fact of the matter accessible to inquiry that can choose between the two theories. And yet, the odd thing is that under-determination is a mere possibility. In point of fact, it almost never occurs. This suggests two alternatives. The first alternative, embraced by most philosophers of science, is that observation really does govern theory choice (or else there
  • 20. would be more competition among theories and models than there is); it’s just that we simply haven’t figured it all out yet. The second alternative is more radical, and is favoured by a generation of historians, sociologists of science and a few philosophers who reject both the detailed teachings of logical empiricism and also its ambition to underwrite the objectivity of science. On this alternative, observations underdetermine theory, but theory is fixed by other facts – non-epistemic ones, like bias, faith, prejudice, the desire for fame, or at least security, and power politics. This is a radical view: that science is a process like other social processes, and not a matter of objective progress. We have seen that, once we go beyond the observable world in science, problems arise as to the existence and nature of the entities and processes our explanations postulate. These problems arise not simply because we are speaking of things we cannot observe, but more because there will be ever so many possible ‘mathematical’ hypotheses, all consistent with whatever data we are taking as our observational basis. At this point in science, there will be a critical under-determination of theory by data, and this in itself seems sufficient reason for holding on to some distinction, however rough and ready, between observation and theory.
Topic for assignment 2:
Discuss the problem of the relation between observation and theory in science.
  • 21. Lecture 7: Scientific realism and anti-realism
How often have you heard someone’s opinion written off with the statement, ‘that’s just a theory’? Somehow in ordinary English the term ‘theory’ has come to mean a piece of rank speculation, or at most a hypothesis that is doubtful, or for which there is as yet little evidence. This usage is oddly at variance with the meaning of the term as scientists use it. Among scientists, far from suggesting tentativeness or uncertainty, the term is often used to describe an established sub-discipline in which there are widely accepted laws, methods, applications and foundations. Thus, economists talk of ‘game theory’ and physicists of ‘quantum theory’, biologists use the term ‘evolutionary theory’ almost synonymously with evolutionary biology, and ‘learning theory’ among psychologists comprises many different hypotheses about a variety of very well established phenomena. Besides its use to name a whole area of inquiry, in science ‘theory’ also means a body of explanatory hypotheses for which there is strong empirical support. But how exactly a theory provides such explanatory systematization of disparate phenomena is a question we need to answer. Philosophers of science long held that theories explain because, like Euclid’s geometry, they are deductively organized systems. It should be no surprise that an exponent of the deductive-nomological model of explanation should be attracted by this view. [The deductive-nomological (D-N) model is an account of the concept of explanation which requires that every explanation be a deductive argument containing at least one law, and be empirically testable.] After all, on the D-N model explanation is deduction, and theories are more fundamental explanations of general processes. But unlike deductive systems in mathematics, scientific theories are sets of hypotheses, which are tested by logically deriving observable consequences from them. 
If these consequences are observed, in experiment or other data collection, then the hypotheses which the observations test are tentatively accepted. This view of the relation between scientific theorizing and scientific testing is known as ‘hypothetico-deductivism’. It is closely associated with the treatment of theories as deductive systems. In other words, hypothetico-deductivism is the thesis that science proceeds by hypothesizing general statements, deriving observational consequences from them, and testing these consequences to indirectly confirm the hypotheses. When a hypothesis is disconfirmed because its predictions for observation are not borne out, the scientist seeks a revised or entirely new hypothesis. This axiomatic conception of theories naturally gives rise to a view of progress in science as the development of new theories that treat older ones as special cases, or first approximations, which the newer theories correct and explain. This conception of narrower theories being ‘reduced’ to broader or more fundamental ones, by deduction, provides an attractive application of the axiomatic approach to explaining the nature of scientific progress. Once we recognize the controlling role of observation and experiment in scientific theorizing, the reliance of science on concepts and statements that observation cannot directly test becomes a grave problem. Science cannot do without concepts like ‘nucleus’, ‘gene’, ‘molecule’, ‘atom’, ‘electron’, ‘quark’ or ‘quasar’. And we acknowledge that there are the best of reasons to believe that such things exist. But when scientists try to articulate their reasons for doing so, difficulties emerge – difficulties born of science’s commitment to the sovereign role of experience in choosing among theories. 
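The hypothetico-deductive pattern described above can be sketched in miniature. The swan record below is invented; the sketch shows only the bare logic of deriving an observational consequence from a general hypothesis and checking it, with a passed test giving tentative confirmation and a failed one sending us back to revise the hypothesis.

```python
# Hypothetico-deductive testing in miniature (invented data).

def all_swans_white(observed_colours):
    """Observational consequence of the hypothesis 'all swans are white':
    every swan in the record should be white."""
    return all(colour == "white" for colour in observed_colours)

observations = ["white", "white", "black"]  # a black swan turns up

if all_swans_white(observations):
    # passing the test only tentatively confirms the hypothesis
    print("hypothesis survives this round of testing")
else:
    # a failed prediction calls for a revised or entirely new hypothesis
    print("hypothesis disconfirmed")
```

In real cases, of course, the prediction is derived only with the help of auxiliary hypotheses, which is where the difficulties about unobservables and theory choice begin.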
These difficulties divide scientists and philosophers into at least two camps about the metaphysics of science – realism and antirealism – and they lead some to give up the view of science as the search for unifying theories. Instead, these scientists and philosophers often give pride of place in science to the models we construct as substitutes for a complete understanding that science may not be able to attain. In other words, the dispute is between
  • 22. those who see science as a sequence of useful models and those who view it as a search for true theories.
Realism
One can be a realist about many different kinds of thing: numbers, possible worlds, universals, minds, physical objects, quarks, fields, and so on. To call a philosopher a realist requires a specification of what the philosopher is a realist about. Usually, there is an intended contrast with those who deny that the entities in question are real. In the philosophy of science, realists are aligned against instrumentalists, phenomenalists (Berkeley and the logical positivists), conventionalists, and others of that ilk. The scientific realist believes that the theories of science give us knowledge about the unobservable. If his realism is to have any bite, he will not simply believe that the theories of science make statements about unobservable things. He will also believe that we sometimes have good reasons for believing that those statements are true. In terms of scientific realism, there was a fundamental disagreement between Bohr and Einstein. For Bohr (and Heisenberg, who worked with him), the uncertainty that applies to sub-atomic particles is not just a feature of our observation, but is a fundamental feature of reality itself. It is not just that we cannot simultaneously know the position and momentum (in physics, momentum is the mass of a moving object multiplied by its velocity) of a particle, but that the particle does not have these two qualities simultaneously. Thus physics is really about what we can talk about – if something cannot be observed, it cannot be part of our reality. Our observation creates the reality we are observing. Einstein, however, took the view that there was indeed a reality that existed prior to our observation of it. Thus a particle would indeed have a position and momentum at any instant; it was just that it was impossible for us to know both at the same time. Reality is thus prior to observation. 
But, of course, it remains essentially an unknown reality, since as soon as we try to observe it, we are back in Bohr’s world of physics where it is determined by our observation. What is actually found in nature is far richer and more untidy than our theories assume, but we often ignore or regard as irrelevant those aspects of actual states of affairs which do not match our theories. Mismatches of detail are characteristically attributed to factors extraneous to what we are attempting to cover with precise theories, and which we have been unable to control. Even the most refined comparisons of masses and lengths, which far surpass in accuracy the precision of other physical measurements, fall behind the accuracy of bank accounts. Our theoretical accounts of nature often apply perfectly only in ideal and controlled situations. It seems that we should regard the theories science actually provides us with as far from complete and precisely accurate representations of reality. They are idealizations and abstractions which focus on particular properties of natural phenomena and cases of partial regularity, corresponding no doubt to specific interests and concerns we might have. But in applying our models, we overlook both their incompleteness and their inaccuracy. They do well enough for what we want in predicting and controlling effects, but this ‘enough’ could be quite consistent with a good deal of inaccuracy and a good deal of overlooking of the full detail of any actual situation. Moreover, we choose our models according to the specific features of the situation we may be interested in, without worrying too much about whether
  • 23. one model can easily be combined with another model we might use for other purposes. None of this militates against the idea that science can discover genuine regularities or new phenomena or new entities. But it does militate against the thought that in science our aim is always the production of ever more general and comprehensive accounts of the whole of a given level of existence, which at the same time are ever more accurate. This ideal may be unattainable. It certainly will be if nature is basically untidy and cannot be divided into clearly demarcated natural kinds. And it may be that most of what we want from science, in the way of explanation and of the control of nature, can be attained without assuming the validity of the ideal. We can illustrate that difficulties can and do arise in integrating different parts of our physical picture of the world, or, perhaps better, in integrating our pictures of different parts of the physical world. Quantum mechanics, with its assumption of super-positions of states of a given system, and classical mechanics, with its definiteness and lack of fuzziness, would seem to be a good illustration of the sorts of problems that can arise here. Similar problems would also arise if, as I have hinted, the laws we have discovered turn out to apply only to some parts of space and time. What happens at the borders, where different conditions might apply? I do not pretend to answer this question, but this problem should certainly make us wary of thinking that we are close to an absolute picture of the world, in which all the elements mesh smoothly and seamlessly.
Antirealism
A diverse group of doctrines whose common element is their rejection of realism. In the philosophy of science, antirealism includes instrumentalism, conventionalism, logical positivism, logical empiricism, and Bas van Fraassen’s constructive empiricism. 
Some antirealists (such as instrumentalists) deny that scientific theories that postulate unobservables should be interpreted realistically. Others (such as van Fraassen) concede that such theories should be interpreted realistically, but they deny that we should ever accept as true any theoretical claims about unobservables. There is an important argument in favour of the anti-realist view of scientific theories, concerning the very nature of what a theory is. Theories are generalizations; they attempt to explain and to predict across a wide range of actual situations. Indeed, the experimental nature of most scientific research aims at eliminating irrelevant factors in order to be able to develop the most general theory possible. Now in the real world there are no generalities. You cannot isolate an atom from its surroundings and form a theory about it. Everything interconnects with everything else – all we have are a very large number of actual situations. Our theories can never represent any of these, because they try to extract only generalized features. Theories deal with ideal sets of circumstances, not with actual ones.
Instrumentalism
Strictly speaking, instrumentalism is the doctrine that theories are merely instruments, tools for the prediction and convenient summary of data. As such, theories are not statements that are either true or false; they are tools that are more or less useful. But because one has to use the machinery of logic in order to draw predictions from theories, it is difficult to deny that theories have truth values. Thus, instrumentalism has come to be used as a general term for antirealism. Most modern instrumentalists concede that theories have truth values but deny that every aspect of them should be interpreted realistically or that reasons to accept a theory as scientifically valuable are reasons to accept the theory as literally true. In this sense, T. S. 
Kuhn, who locates the value of scientific theories in their ability to solve puzzles, is an
  • 24. instrumentalist. Theories may have truth values, but their truth or falsity is irrelevant to our understanding of science. One important feature of the acceptance given to a theory is that it springs directly from the scientific impetus that leads to its being put forward in the first place. Theories are there to explain phenomena that do not make sense otherwise. If you have something that existing laws cannot make sense of, you tend to hunt for an alternative theory that can do just that. Thus progress is made through a basic process of problem solving. If existing laws cannot be used to make sense of what I experience, that presents a problem. This is what leads to the instrumentalist view of scientific laws. In other words, a law is to be judged by what it does. In regard to the instrumentalist conception of science, the key thing to remember is that the pictures and models by which we attempt to understand natural phenomena are not ‘true’ or ‘false’ but ‘adequate’ or ‘inadequate’. You cannot make a direct comparison between the image you use and reality. You can’t, for example, look at an atom directly and then consider if your image of it – as a miniature solar system, for example – is true. If you could see it directly, you wouldn’t need a model in order to understand it. Models only operate as ways of conceptualizing those things that cannot be known by direct perception.
Conventionalism
A decision is conventional if it involves choosing from among alternatives that are equally legitimate when judged by objective criteria (such as consistency with observation and evidence); thus, either the decision is entirely arbitrary or it rests on an appeal to factors often presumed to be subjective, such as simplicity, economy, and convenience. 
Radical conventionalists argue that scientific theories are really definitions (or rules of inference, pictures, conceptual schemes, paradigms) and hence neither true nor false; moderate conventionalists disagree, insisting that once the conventional elements in theories have been isolated, the remaining parts are objectively true or false. Typically, conventionalists appeal to the under-determination of theories by evidence to bolster their doctrine. Many philosophers of science have been conventionalists in one respect or another. They include Henri Poincaré (on high-level scientific laws as definitions), Pierre Duhem (on the ambiguity of falsification of theories in physics), W.V. Quine (on the decision to retain or abandon any sentence whatever), Hans Reichenbach (on the choice of a geometry to describe physical space), Karl Popper (on the decision to accept basic statements), and T.S. Kuhn (on the decision to switch paradigms).
Topic
Explain what is to be understood by scientific realism and anti-realism. Explain the importance for the philosophy of science of the debate between realism and anti-realism. Weigh the arguments in favour of scientific realism and anti-realism and outline how a working synthesis of the two positions can be devised.
  • 25. Lecture 8: Probability
With the development of modern science, the experimental method led to the framing of ‘laws of nature’. It is important to recognize what is meant by ‘law’ in this case. A law of nature does not have to be obeyed. A scientific law cannot dictate how things should be; it simply describes them. The law of gravity does not require that, having tripped up, I should adopt a prone position on the pavement – it simply describes the phenomenon that, having tripped, I fall. Hence, if I trip and float upwards, I am not disobeying a law; it simply means that I am in an environment (e.g. in orbit) in which the phenomenon described by the ‘law of gravity’ does not apply. The ‘law’ cannot be ‘broken’ in these circumstances, only be found to be inadequate to describe what is happening. Moreover, if scientific laws are descriptions of what is happening, they are also not simply true or false but are methods of expressing regularities that have been observed. Scientific laws are therefore instruments for drawing conclusions within an overall scheme of thought. However, even though the overall scheme of thought of the observer plays a significant role in their framing, it is important to keep in mind that scientific laws are first and foremost produced by a process of induction, based on experimental evidence and observed facts. They may therefore be regarded as the best available interpretation of the evidence, but not as the only possible one. They do not have the absolute certainty of a logical argument, but only a degree of probability, proportional to the evidence on which they are based. This leaves open the possibility of deducing laws from statistical evidence. In other words, scientific laws can operate in terms of statistical averages, rather than on the ability to predict what will happen on each and every occasion. 
For instance, statistical information gives an accurate picture of the actions of a society, but it cannot show the actions of an individual within that society. In other words, laws can operate at different levels. What appears to be predictability, even determinism, at one level, can nevertheless coexist with indeterminism and unpredictability at another. Philosophical accounts of probability can be broadly divided into subjective accounts, which regard probability statements in terms of what we are entitled to believe on given evidence, and objective accounts, which interpret probability statements as referring directly to tendencies of various sorts existing in the real world. In order to bring out as starkly as possible the difference between subjective and objective accounts of probability, we may take the essence of subjective theories to be simply the belief that a statement of probability does not reflect anything rational, positive or metaphysical in the world; it is merely a psychological device which we use when we are in ignorance concerning the full facts of a situation. Objectivist accounts of probability see probability statements as referring to real tendencies individuals or sequences have to manifest certain patterns of outcome. Treating probability statements objectively, however, does not tell us exactly how they are to be understood. Frequency theory sees probability in terms of the distribution of some property over a collective, a potentially infinite series of events in which a given property is distributed randomly. The frequency theory cannot account for single happenings except in terms of theoretical classes to which those single happenings are presumed to belong. But the theory actually has to face an even more serious problem than the invocation of theoretical classes of events. 
Precisely because the frequency theory has to analyse probabilities relating to individual events or objects in terms of classes the individuals belong to, the probability to be ascribed
to an individual object or event having a particular property will depend on the relative frequency of that property throughout the class the individual is seen as belonging to. But individuals can, of course, be seen as belonging to more than one class, and in cases where the frequency of the property is different in the different classes, the same individual will be ascribed more than one probability of having the same property. The frequency theory always reads statements about an individual’s chances as an elliptical way of saying how some property is distributed through some reference class, and, provided the probabilities have been correctly estimated, there are no grounds within the frequency theory for preferring the choice of one reference class to another. It is just that the choice of different reference classes yields different information. But we surely do want to think of probabilities of individuals having certain properties, and, on occasion, we do think some reference classes provide more useful information than others for estimates of chances relating to the outcomes of individual events. Where the reference class in question can be seen in terms of the conditions which generate the outcomes involved, it might well seem natural to see probabilities in terms of actual propensities rather than in terms of frequencies. Or so, at least, it did to Popper when he moved from advocacy of a frequency theory of probability to what has been dubbed the propensity theory. Probability, then, is a property of the generating conditions of events; this is the basis of the propensity theory of probability.
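The reference-class difficulty described above can be made concrete in a few lines. The population data below are entirely hypothetical, invented only to show how one individual inherits two different frequency-based ‘probabilities’ for the same property:

```python
# Hypothetical records: (smoker?, runner?, develops_condition?).
population = (
    [(True,  False, True)] * 30 + [(True,  False, False)] * 70 +  # smokers only
    [(False, True,  True)] * 5  + [(False, True,  False)] * 95 +  # runners only
    [(True,  True,  True)] * 10 + [(True,  True,  False)] * 90    # both
)

def frequency(reference_class):
    """Relative frequency of the condition within a reference class."""
    return sum(cond for _, _, cond in reference_class) / len(reference_class)

smokers = [p for p in population if p[0]]
runners = [p for p in population if p[1]]

# An individual who both smokes and runs belongs to both classes, so the
# frequency theory licenses two different probabilities for one event:
print(frequency(smokers))  # 0.2   (40 of 200 smokers)
print(frequency(runners))  # 0.075 (15 of 200 runners)
```

Nothing in the frequency theory itself says which of the two figures is *the* probability for that individual; the choice of reference class does all the work.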
The propensity theory amounts to the claim that while certain physical set-ups are random or unpredictable, as far as their individual outcomes are concerned, repeated experiments or observations of the set-ups in question will show statistical stability. This stability is seen as due to propensities inherent in the set-up, and these propensities are regarded, by Popper at least, as actually existing but unobservable dispositional properties of the physical world. The objective probability of a single event can and should be seen as a measure of an objective propensity – of the strength of the tendency, inherent in the specified physical situation, to realize the event – to make it happen. While the propensity theory stresses the generating conditions underlying observed frequencies, the frequency theory remains more epistemological, as it were, emphasizing that our only evidence for talk of propensities is observed long-run frequency and suggesting that this talk comes to no more than a reference to actual or theoretical long-run frequencies. Indeed, to an extent, the difference between the frequency theorist and the propensity theorist is analogous to the one between Humeans and anti-Humeans on cause, or between positivists and realists more generally. As such, it is an aspect of a rather wider dispute which cannot be finally decided by consideration of probability alone. As far as dealing with and assessing probability statements goes, the more significant divergence is not that between frequency theorists and propensity theorists, who both analyse probabilities in terms of objective tendencies in the real world. It is rather that between objectivists about probability, who include both frequency and propensity theorists, and subjectivists, who see probabilities in terms of what we as observers are entitled to believe on given evidence, and who analyse talk of probabilities as being founded in human ignorance.
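The propensity theorist's central observation, individual unpredictability combined with statistical stability across repeated runs of the same set-up, can also be sketched in code. Again this is only a toy illustration: the propensity value of 0.25 and the seed are arbitrary assumptions.

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

def run_experiment(n_trials, propensity=0.25):
    """One long run on a chance set-up whose generating conditions give
    every trial the same fixed tendency towards 'success'."""
    return sum(random.random() < propensity for _ in range(n_trials)) / n_trials

# Each run is built from individually unpredictable trials, yet repeated
# runs of the same set-up yield frequencies clustered tightly around one
# value -- the stability the propensity theorist locates in the set-up.
runs = [run_experiment(100_000) for _ in range(5)]
print([round(f, 3) for f in runs])
```

On the propensity reading, the number the runs converge on is not merely a summary of the observed frequencies but a measure of a dispositional property of the generating conditions themselves.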
What actually emerges from our survey of philosophical interpretations of probability is that while there is a sense in which some probability statements can be regarded subjectively, in terms of our ignorance of determining conditions, this is not the case in areas in which there is genuine indeterminism. Subjectivist approaches to probability have a certain plausibility when we come to deal with the single case in the typical gambling situation so much discussed by classical theorists of probability. Though even here we would not be wrong to see the frequencies engendered by dice and coins as physically real, and to regard statements about such frequencies as not just confessions of ignorance. When we come to cases of real indeterminism, though, there is no necessary connection between the use of a probability statement and human ignorance. In so far as it seems ineradicably indeterministic, quantum
theory most naturally pushes us in the direction of an objective interpretation of probability, to some version of either frequency or propensity theory. The most natural interpretation of quantum theory and its probabilistic theories is that what we are dealing with are set-ups which manifest statistical regularities, and that these regularities are both real and objective, without necessarily being based in any unknown factors determining single cases. Looked at in this way, neither quantum theory nor its associated probability statements have to be analysed subjectively in terms of the knowledge (or ignorance) of the observer. Indeed, for most standard and scientific uses of probability statements, including those in quantum mechanics, the natural interpretation is to see the statements as referring to real frequencies or propensities in populations of particles, molecules, genes, coins, dice, and so on. On the other hand, there are also cases where we speak of the probability of some outcome on the evidence we have, referring, for example, to the probability of its raining tomorrow, and in such cases it is plausible to link talk of probability to ignorance of determining conditions. If this line of thought about two senses of probability is correct, then, we will have to examine specific cases to see whether the notion of probability is being used in an objective or subjective sense and, hence, whether an objective or subjective interpretation of probability is appropriate for the particular use.

Topic
Discuss the role of probability in science.
Lecture 9: Scientific reductions

Positivism is an extreme form of empiricism advocated by the French philosopher and sociologist Auguste Comte (1798-1857). Comte denied that it is possible to know anything about unobservables (into which category he placed the underlying causes of phenomena), and he insisted that the sole aim of science is prediction, not explanation. Comte also believed that each branch of science has its own special laws and methods that cannot be reduced to those of other branches. Comte is generally regarded as the founder of sociology, and his empiricism and hostility towards metaphysics were an important influence on logical positivism. Logical positivism is the name for the set of doctrines advocated by the members of the Vienna Circle from about 1920 to 1936 (when Moritz Schlick, their leader, was assassinated). Prominent members of this group were Rudolf Carnap, Herbert Feigl, Hans Hahn, Otto Neurath, and Friedrich Waismann. Their approach to philosophy relied heavily on the verifiability criterion, well illustrated in A.J. Ayer’s Language, Truth and Logic. As originally formulated by the logical positivists, the verifiability principle asserts that the meaning of any contingent statement is given by the observation statements needed to verify it conclusively. Observation statements are assumed to be directly verifiable by the experiences they purportedly describe. Unverifiable assertions are declared to be meaningless (or, at least, to lack any cognitive meaning). Criticism by Karl Popper and others soon led to the abandonment of verifiability as a criterion of meaning in favour of weaker notions such as confirmability and testability. These, too, are controversial insofar as they are intended as explications of meaning.
In order to better understand logical positivism it is useful to bear in mind that there were two general trends by the end of the 19th century:
1 To see the world as a mechanism, upon which science reflected and produced theories about how it worked.
2 To recognize that all our knowledge comes through the senses and that the task of science is to systematize the phenomena of sensation. We cannot know things in themselves, separate from our experience of them.
The logical positivists argued that the meaning of a statement (scientific or otherwise) was the method by which it could be verified. Everything depended on sense experience. All theoretical terms had to show a correspondence with observations. Discussions about the inductive method in science should be seen against this positivist background – the narrow and precise view of language they espoused matched what they saw as the ideal of scientific language – the means of summarizing perceptions. The logical positivists of the Vienna Circle, of whom probably the best known are Schlick and Carnap, were generally scientists and mathematicians, influenced by the work of the early Wittgenstein and also Bertrand Russell. They believed that the task of philosophy was to determine what constituted valid propositions. They wanted to define correspondence rules, by which the words we use relate to observations. They also wanted to reduce general and theoretical terms (e.g. mass or force) to those things that could be perceived. In other words, the mass of a body is defined in terms of measurements that can be made of it.
In general, the position adopted by the logical positivists was that the meaning of a statement was its method of verification. If I say that something is green, I actually mean that, if you go and see it, you will see that it is green. If you cannot say what would count for or against a proposition, or how you could verify it through sense experience, then that proposition is meaningless. Now, clearly, this is mainly concerned with the use of language. But for science it had a particular importance, which enabled it to dominate the first half of the 20th century. Basically, it was assumed that the process of induction, by which general statements were confirmed by experimental evidence, was the correct and only way to do science. That seemed a logical development of the scientific method, as it had developed since the 17th century, but it produced problems. What do you do if faced with two alternative theories to explain a phenomenon? Can they both be right? Can everything science wants to say be reduced to sensations? Once a law is accepted and established, it seems inconceivable that it would simply be proved wrong. To make progress, laws that apply to a limited range of phenomena can be enlarged in order to take into account some wider set of conditions. Scientific theories are therefore not discarded, but become limited parts of a greater whole. But even while this view was dominating the philosophy of science, the actual practice of science – especially in the fields of relativity and quantum physics – was producing ideas that did not fit this narrow schema. The clear implication of the whole approach of the logical positivists was that the language of science should simply offer a convenient summary of what could be investigated directly.
In other words, a scientific theory is simply a convenient way of saying that if you observe a ‘particular’ thing on every occasion that these ‘particular’ conditions occur, then you will observe this ‘particular’ phenomenon. What scientific statements do is to replace the ‘particulars’ of experience with a general summary. Clearly, the ultimate test of a statement is therefore the experimental evidence upon which it is based. Words have to correspond to external experienced facts. The problem is that you can say, for example, that a mass or a force is measured in a particular way. That measurement gives justification for saying that the terms ‘mass’ or ‘force’ have meaning. But clearly you cannot go out and measure the mass of everything, or the totality of forces that are operating in the universe. Hence such general terms always go beyond the totality of actual observations on which they are based. They sum up and predict observations. For the logical positivists, the truth of a statement depended on our being able (at least in theory) to check it against evidence for the physical reality to which it corresponds. However, scientific theories can never be fully checked out in this way, since they always go beyond the evidence; that is their purpose, to give general statements that go beyond what can be said through description. An example of the direction in which the logical positivists take the language of science is what happens when a material is scientifically described. In this case, it is necessary to say more than what it looks like: one needs to say (based on experiments) how it is likely to behave in particular circumstances. Thus, if I pick up a delicate piece of glassware, I know that it has the dispositional property to be fragile. The only meaning I can give to that property is that, were I to drop the glassware, it would break. Now the term ‘fragile’ is not a physical thing; it is an adjective rather than a noun.
I cannot point and say ‘there is fragile’. I use the word ‘fragile’ as a convenient way of summarizing the experience of seeing things dropped or otherwise damaged. It is thus possible to have general terms which are meaningful and which satisfy the requirement of logical positivism
that the meaning of a statement is its method of verification. It would be easy (but expensive!) to verify that all glassware is fragile.

Reductionism and its implications

Wittgenstein and the logical positivists aimed to assess all language in terms of the external reality to which it pointed and to judge it meaningful if, and only if, it could be verified by reference to some experience. In other words, to say ‘my car is in front of the house’ means ‘if you go and look in front of the house, you will see my car’. If a statement could not be verified (at least in theory), then it was meaningless. The only exceptions to this were statements about logic and mathematics, which were true by definition, and generally termed ‘analytic statements’. Logical positivists believed that all ‘synthetic statements’ (i.e. those true with reference to matters of fact, rather than definition) could be ‘reduced’ to basic statements about sense experience. Reductionism is the term we use for this process. It ‘reduces’ language to strings of simple claims about sense experience. However complex a statement may be, in the end it comes down to such pieces of sense data, strung together with connectives (e.g., if this … then … that …; and; but; either/or). It is one of the two ‘dogmas of empiricism’ attacked by Quine. Now reductionism is primarily about language, but how we deal with language reflects our understanding of reality. So reductionism influences the approach that we take to complex entities and events. There are two ways of examining these:
1 A reductionist approach sees ‘reality’ in the smallest component parts of any complex entity (e.g. you are ‘nothing but’ the atoms of which you are made up).
2 A holistic view examines the reality of the complex entity itself, rather than its parts (e.g. you understand a game of chess in terms of the overall strategy, rather than the way in which individual pieces are being moved).
Science operates in both ways.
On the one hand, it can analyse complex entities into their constitutive parts and, on the other, it can explore how individual things work together in ways that depend on their complex patterning. For example, in the early days of computing, every command had to be learned and typed in order to get a program to work. In basic word processing, one needed to remember the code for ‘bold’ and enter that before and after the word to be emboldened in the text. In a modern word processor, one simply highlights the text and clicks on a button labelled ‘bold’. The more complex the processor, the simpler the action required. We all know that, beneath the apparently instinctive operations of programs, there is a level of code in which everything is reduced to simple bits of information in the form of ‘0’s and ‘1’s. The letter you have typed, or the design you have drawn, is ‘nothing but’ those bits of information – there is nothing in the computer memory to represent it other than such strings of machine code. Yet what you see in the design you have drawn, or mean by what you have written, is of a different order of significance from the basic code into which the computer reduces it. In theory, a perfectly programmed computer would respond automatically and one need never be aware of the program, the commands or the machine code. One would simply express oneself and it would happen – the software would have become transparent. Perhaps that is what happens with the human brain. It is so complex that there is no opportunity to examine the firing of individual neurones – it just ‘thinks’. That does not mean that the thinking takes place in some other location – that there is some secret ‘ghostly’