Humanism and Nonsense

It should be fairly obvious to everyone that “humanism” is a religion-substitute.

Humanists are, after all, human beings; they find that they cannot get through life without religious beliefs and commitments, and, being ideologically crippled in their ability to have genuine religious beliefs and commitments, they find it necessary to invent a fake one ad hoc. Let’s take a look at the first three “affirmations” of the Humanist Manifesto. They are really quite funny, given that the sort of people—like Richard Dawkins—who go in for this kind of religion-substitute generally identify themselves as some kind of “rationalist.” How rational is humanism? Not very. But let’s see why.

Here are the first three “affirmations” of The Humanist Manifesto, as seen through my philosopher’s lens:

1. Knowledge of the world is derived by observation, experimentation, and rational analysis. Humanists find that science is the best method for determining this knowledge as well as for solving problems and developing beneficial technologies. We also recognize the value of new departures in thought, the arts, and inner experience—each subject to analysis by critical intelligence.

My readers will know that one of the first questions professional philosophers pose in their investigation of claims is that of retortion. That is, we ask, for any given attempt to formulate or state a universal principle: what happens when the principle is applied to itself? Very frequently, a purportedly universal principle, when applied to itself, defeats itself; that is, it either directly annihilates its own content or else invalidates the epistemic ground of the principle put forward.

We have such a case here. This principle makes a basic, universal claim about the nature of knowledge, and is thus being put forward as very important knowledge about the nature of knowledge—and yet this principle, while claiming to be knowledge about the nature of knowledge, fails to count as legitimate knowledge by its own standard.

According to this principle, knowledge is derived from observation, experimentation, and rational analysis. But this principle is not itself derived from observation, experimentation, or rational analysis. So this principle is not knowledge. Not being knowledge, this principle can only be an irrational faith commitment, that is, a believing-something-to-be-true-without-knowledge, there being no other alternative.

2. Humans are an integral part of nature, the result of unguided evolutionary change. Humanists recognize nature as self-existing. We accept our life as all and enough, distinguishing things as they are from things as we might wish or imagine them to be. We welcome the challenges of the future, and are drawn to and undaunted by the yet to be known.

Here we encounter another pair of claims that go far beyond anything authorized by science. First is the belief that evolution is “unguided.” By what possible means could one determine this? Certainly not by any means set forth, say, in Affirmation 1 above. No, what we have here is simply a blunt atheistical blind-faith belief. Now, atheists are given to claiming that atheism involves no beliefs—this is not true, of course, but only a rhetorical maneuver which attempts to saddle theists with the so-called ‘burden of proof’—but this particular dodge is not available to the humanist, since this is explicitly a statement of humanist beliefs. So here we have one clear case of a blind-faith belief, a belief-without-evidence, held to be true by an act of will, on the basis of wanting it to be true. That is, a case of the very thing that most theists do not do in most cases, but which atheists are forever claiming that they do.

Or rather not one, but two. The belief about the unguidedness of evolution is immediately followed by the belief-without-evidence that “nature is self-existing.” This is, frankly, a metaphysical howler, since nature is a composite entity composed of contingent beings, and thus cannot possibly be a candidate for a necessary being. No scientist—no cosmologist, for example—holds that the laws of nature or the fundamental constants of nature have any intrinsic necessity to them. Worse still, it is very obvious that a “self-existing” being could not not exist, and thus could not be a being that began to be at any time—and yet our best science indicates that the physical universe is NOT, in fact, eternal, but has a beginning at what we call the Big Bang. Thus it is clear that our universe is NOT self-existing.

To counter this, an atheist or a humanist might postulate that “nature” is not limited to our universe, that there is “more” out there beyond the bounds of our universe, perhaps many or an infinite number of universes—a multiverse—but a multiverse hypothesis suffers from three fatal problems:

  1. It is entirely speculative. There is no evidence whatever of such a thing, nor can there be, at least by any scientific means, since our scientific knowledge is necessarily limited to the universe. The “multiverse” is an ad hoc, made-up, just-so story.
  2. On our best understanding of the implications of our inflationary universe, any possible multiverse that could fit with what we know about the universe would have to be an inflationary multiverse, and thus, necessarily, would have to have an absolute beginning, and thus, necessarily, could not be a self-existing being either. The multiverse hypothesis cannot “solve” the problem of an absolute beginning, unless it posits that the multiverse is radically incongruous with our universe and thus not part of the same system of nature.
  3. If it is then posited that the multiverse need not be “natural” but has the metaphysical characteristics which would enable it to be self-sufficient—this does solve the problem, but at a fairly high cost for the humanist; namely, you have just posited God under another name. Welcome to theism.

3. Ethical values are derived from human need and interest as tested by experience. Humanists ground values in human welfare shaped by human circumstances, interests, and concerns and extended to the global ecosystem and beyond. We are committed to treating each person as having inherent worth and dignity, and to making informed choices in a context of freedom consonant with responsibility.

And finally, to bring this humanist farce to a close, we find the humanists—after blithely asserting a bunch of things they do not and cannot know to be true—blithely asserting that they can somehow solve the problem of the naturalistic fallacy, that is, of logically getting an OUGHT out of an IS. They say that “ethical values” are “derived” from “human need and interest.” But like all naturalists, they hit an eternally insoluble problem. Let’s put it in syllogistic form:

  1. X needs Y.
  2. X ought to have/be given/be allowed to obtain Y.

or

  1. X has an interest in Y.
  2. X ought to have/be given/be allowed to obtain/attain Y.

As it stands, 2 is a conclusion, not a premise. What second premise could license the move from 1 to 2?

Or if more premises than one are needed, what are they? At some point in the chain there must be a premise that says “if such-and-such IS the case, then such-and-such OUGHT to follow.”  But no such premise will ever be available to one who remains in the realm of “is’s” about human “needs and interests.”
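The gap can even be made vivid in formal terms. Treat “X needs Y” and “X ought to have Y” as logically independent propositions, and the “ought” becomes derivable only when a bridging premise is explicitly granted. Here is a minimal sketch in the Lean proof language (purely my illustration; the names Need, Ought, and bridge are arbitrary labels):

    -- Treat "X needs Y" and "X ought to have Y" as bare propositions.
    variable (Need Ought : Prop)

    -- Without a bridge premise, `Ought` cannot be derived from `Need`:
    -- `Need → Ought` is not a tautology, so no proof of it exists.

    -- Grant the bridge premise ("if X needs Y, then X ought to have Y")
    -- and the derivation becomes trivial:
    example (bridge : Need → Ought) (need : Need) : Ought := bridge need

All of the argumentative work is done by bridge, and bridge is precisely the IS-to-OUGHT premise that never becomes available to one who remains in the realm of “is’s.”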

As always, naturalism makes ethics incoherent, and since humanism is a naturalism, humanism makes ethics incoherent.

Humanism is, I conclude, not only a religion-substitute, but a particularly bad one, at least for rational people, since it requires a number of acts of blind faith, that is, of sheerly believing things on the basis of no evidence other than wanting them to be true.

Since this is NOT true of classical theism, it seems that rational persons will elect to become classical theists rather than joining the humanist cult. As far as I can tell, the only advantage of humanism over, say, Scientology, is that it doesn’t really require anything of you. And while that sounds appealing to many moderns, they will eventually find to their sorrow that this is not a strength but a weakness.

Genuine religious faith requires much of a person, indeed, requires all, but gives much, indeed, all in return.

The Definition of Evolution

I have very little interest in debating evolution. If I found the subject deeply interesting, I would have become a biologist. I do not, so I did not. I am largely agnostic on most matters pertaining to evolutionary biology, because it is very evident that from the first “evolution” was never merely a scientific theory, but was always used as a mythological basis in support of a naturalistic worldview. The long and short of it is that too many scientists are dishonest about the matter for me to trust what they say, and absent years of study on my own, of a subject I care little about, I cannot reasonably judge for myself. Thus, for the most part, I suspend judgment.

I do maintain, as I have elsewhere, that to the extent that evolution is a legitimate scientific account of nature, it does not and cannot, even in principle, conflict with Christianity. Truth cannot conflict with truth—and a scientific model of a naturalistic mechanism is not even, properly understood, a competitor with truth.

But since atheists never seem to tire of discussing evolution, I am going to present what I have found most helpful as a philosopher, which is, of course, DISTINCTIONS.  Specifically, distinctions in how the concept of “evolution” may be used.

This is an excerpt from John Lennox’s God’s Undertaker: Has Science Buried God?, which I highly recommend.

The nature and scope of evolution

‘Nothing makes sense in biology except in light of evolution.’

– Theodosius Dobzhansky

‘Large evolutionary innovations are not well understood. None has ever been observed, and we have no idea whether any may be in progress. There is no good fossil record of any.’

– Robert Wesson

‘Well, as common sense would suggest, the Darwinian theory is correct in the small, but not in the large. Rabbits come from other slightly different rabbits, not from either [primeval] soup or potatoes. Where they come from in the first place is a problem yet to be solved, like much else of a cosmic scale.’

– Sir Fred Hoyle

The definition of evolution

Thus far we have been using this term as if it had a single, agreed meaning. But this is manifestly not the case. Discussion of evolution is frequently confused by failure to recognize that the term is used in several different ways, some of which are so completely non-controversial that rejection of them might indeed evidence some kind of ignorance or stupidity (but, even then, scarcely wickedness).

What, then, is evolution? Here are some of the ideas for which the term ‘evolution’ is used:

1. Change, Development, Variation. Here the word is used to describe change, without any implication for the kind of mechanism or intelligent input (or lack of it) involved in bringing about the change. In this sense we speak of the ‘evolution of the motor car’, where, of course, a great deal of intelligent input is necessary. We speak of the ‘evolution of a coastline’, where the natural processes of sea and wind, flora and fauna shape the coastline over time, plus possibly steps taken by engineers to prevent erosion. When people speak of the ‘evolution of life’ in this sense, all they mean is that life arose and has developed (by whatever means). Used in this way, the term ‘evolution’ is neutral, innocuous and uncontroversial.

2. Microevolution: variation within prescribed limits of complexity, quantitative variation of already existing organs or structures. Such processes were observed by Darwin in connection with the Galapagos finch species (see also Jonathan Weiner’s detailed study1). This aspect of the theory is scarcely controversial as such effects of natural selection, mutation, genetic drift etc. are constantly being recorded.2 One classic example with which we are, sadly, all too familiar right round the world is the way in which bacteria develop resistance to antibiotics.

It is worth recording that the changes in average finch beak lengths, which had been observed during the drought period of 1977, were reversed by the rains of 1983; so that this research is more an illustration of cyclical change due to natural selection than it is of permanent improvement (or even change). This reversal is, however, not always mentioned in textbooks.3 However, one of the main studies that has been copied from textbook to textbook and heralded as one of the main proofs of evolution has come in for very serious criticism in recent years. It concerns the occurrence of industrial melanism in the peppered moth (Biston betularia). The claim is that natural selection produced a variation of the relative numbers of light moths to dark moths in a population. Light moths were more easily seen by predators than dark ones, against the dark, polluted surfaces of tree trunks, and so eventually the population would become dominated by dark moths. Of course, if this account were true, it would at best only be an example of microevolution and that only in the sense of cyclical change (no new moths were created in the process since both kinds existed to start with). Therefore it would not be controversial except insofar as examples of microevolution are frequently cited as sufficient evidence for macroevolution.

However, according to Michael Majerus, a Cambridge expert on moths, ‘the basic peppered moth story is wrong, inaccurate or incomplete, with respect to most of the story’s component parts’.4 In addition, there appears to be no evidence that peppered moths rest on tree trunks in the wild. Many photographs in textbooks, showing them doing so, have apparently been staged. In the Times Higher Educational Supplement,5 biologist Lynn Margulis is puzzled by the fact that Steve Jones still used the peppered moth in his book update of Darwin, entitled Almost like a whale,6 even though, according to her, he must know of the dubious nature of the research. When University of Chicago biologist Jerry Coyne learned of the difficulties with the peppered moth story, he wrote: ‘My own reaction resembles the dismay attending my discovery, at the age of six, that it was my father and not Santa who brought the presents on Christmas Eve.’7,8

3. Macroevolution. This refers to large-scale innovation, the coming into existence of new organs, structures, body-plans, of qualitatively new genetic material; for example, the evolution of multicellular from single-celled structures. Macroevolution thus involves a marked increase in complexity. This distinction between micro and macroevolution is the subject of considerable dispute since the gradualist thesis is that macroevolution is to be accounted for simply by extrapolating the processes that drive microevolution over time, as we shall see below.

4. Artificial selection, for example, in plant and animal breeding. Breeders have produced many different kinds of roses and sheep from basic stocks, by very careful selective breeding methods. This process involves a high degree of intelligent input; and so, although often cited, in particular by Darwin himself, who argued that what humans can do in a relatively short time nature could do in a long time, provides in itself no real evidence for evolution by unguided processes.

5. Molecular evolution. Some scientists argue that, strictly speaking, evolution presupposes the existence of self-replicating genetic material. For instance, Dobzhansky’s view was that, since natural selection needed mutating replicators, it followed clearly that ‘prebiological natural selection is a contradiction in terms’.9 However, the term ‘molecular evolution’ is now commonly used to describe the emergence of the living cell from non-living materials.10 This language usage can easily obscure the fact that the word ‘evolution’ here cannot mean a Darwinian process in the strict sense.

Of course, the term ‘evolution’ also covers the theories about how these things happened; the most widespread being the neo-Darwinian synthesis, according to which natural selection operates on the basis of variations that arise through mutation, genetic drift, and so on.

In light of these ambiguities in the meaning of evolution, Lewontin’s and Dawkins’ accusations become more understandable. If ‘questioning evolution’ means questioning it in senses 1, 2 or 4, then an accusation of stupidity or ignorance might be understandable. As we have already said, no one seriously doubts the validity of microevolution and cyclic change as examples of the operation of natural selection.

Confusion can easily arise, therefore, particularly when evolution is defined as microevolution. Take, for instance, the following statement about evolution by E.O. Wilson: ‘Evolution by natural selection is perhaps the only one true law unique to biological systems, as opposed to non-living physical systems, and in recent decades it has taken on the solidity of a mathematical theorem. It states simply that if a population of organisms contains multiple hereditary variants in some trait (say, red versus blue eyes in a bird population), and if one of those variants succeeds in contributing more offspring to the next generation than the other variants, the overall composition of the population changes, and evolution has occurred. Further, if new genetic variants appear regularly in the population (by mutation or immigration), evolution never ends. Think of red-eyed and blue-eyed birds in a breeding population, and let the red-eyed birds be better adapted to the environment. The population will in time come to consist mostly or entirely of red-eyed birds. Now let green-eyed mutants appear that are even better adapted to the environment than the red-eyed form. As a consequence the species eventually becomes green-eyed. Evolution has thus taken two more small steps’ [italics original].11

Quite so. But this seems to be no more than a description of microevolution – indeed, since we have red-eyed birds and blue-eyed birds in the initial population, Wilson is only describing the kind of uncontroversial cyclic change mentioned above in connection with Darwin’s finches. Thus Wilson completely bypasses the question as to whether the mechanism described can bear all the extra weight that is put upon it in any full-blown understanding of evolution – for example, answering the question, ‘Where did the birds come from in the first place?’ Yet he claims elsewhere in his article that natural selection does bear that weight. For instance he says, ‘all biological processes arose through evolution of these12 physicochemical systems through natural selection’ or again, humans are ‘descended from animals by the same blind force that created those animals’.

Furthermore, it has been repeatedly noted that, at the level discussed in Wilson’s definition, natural selection itself is essentially self-evident. Colin Patterson, FRS, in his standard text on evolution,13 presents it in the form of the following deductive argument:

  • all organisms must reproduce
  • all organisms exhibit hereditary variations
  • hereditary variations differ in their effect on reproduction
  • therefore variations with favourable effects on reproduction will succeed, those with unfavourable effects will fail, and organisms will change.

Thus natural selection is a description of the process by which the strain in a population that produces the weaker progeny eventually gets weeded out, leaving the stronger to thrive.

Patterson argues that, formulated this way, natural selection is, strictly speaking, not a scientific theory, but a truism. That is, if we grant the first three points, then the fourth follows as a matter of logic, an argument similar to that advanced by Darwin himself in the last chapter of The Origin of Species. Patterson observes that ‘this shows that natural selection must occur but it does not say that natural selection is the only cause of evolution,14 and when natural selection is generalized to be the explanation of all evolutionary change or of every feature of every organism, it becomes so all-embracing that it is in much the same class as Freudian psychology and astrology’.15 By this Patterson seems to be suggesting that it fails to satisfy Popper’s criterion of falsifiability, just as the Freudian statement that adult behaviour is due to trauma in childhood is not falsifiable. Patterson is warning us of the danger of simply slapping the label ‘natural selection’ in this generalized sense on some process, and thinking that we have thereby explained the process.

Patterson’s description highlights something very easily overlooked – the fact that natural selection is not creative. As he says, it is a ‘weeding out process’ that leaves the stronger progeny. The stronger progeny must be already there: it is not produced by natural selection. Indeed the very word ‘selection’ ought to alert our attention to this: selection is made from already existing entities. This is an exceedingly important point because the words ‘natural selection’ are often used as if they were describing a creative process, for instance, by capitalizing their initial letters. This is highly misleading as we see from the following illuminating statement by Gerd Müller, an expert on EvoDevo, an increasingly influential theory integrating evolutionary theory and developmental biology that aims to fill some of the gaps in standard neo-Darwinism. Müller writes: ‘Only a few of the processes listed above are addressed by the canonical neo-Darwinian theory, which is chiefly concerned with gene frequencies in populations and with the factors responsible for their variation and fixation. Although, at the phenotypic level, it deals with the modification of existing parts, the theory is intended to explain neither the origin of parts, nor morphological organization, nor innovation. In the neo-Darwinian world the motive factor of morphological change is natural selection, which can account for the modification and loss of parts. But selection has no innovative capacity: it eliminates or maintains what exists. The generative and ordering aspects of morphological evolution are thus absent from evolutionary theory’ [italics mine].17

Müller thus confirms what logic and even language would tell us: natural selection, by its very nature, does not create novelty. This flatly contradicts Richard Dawkins’ bold assertion cited earlier that natural selection accounts for the form and existence of all living things. Such polar opposition of views on the central thesis of neo-Darwinism raises disturbing questions as to the solidity of its scientific basis and prompts us to explore a bit further.

We now turn to the fact that the hereditable variations on which natural selection acts are random mutations in the genetic material of organisms. However, Dawkins and others are careful to inform us that evolution itself is not a purely random process. He is sufficiently impressed by calculations of mathematical probabilities to reject any notion that, say, the human eye evolved by pure chance in the time available. In his inimitable way he writes: ‘It is grindingly, creakingly, crashingly obvious that, if Darwinism were really a theory of chance, it couldn’t work. You don’t need to be a mathematician or a physicist to calculate that an eye or a haemoglobin molecule would take from here to infinity to self-assemble by sheer higgledy-piggledy luck.’18 What then is the answer? That natural selection is a law-like process that sifts the random mutations so that evolution is a combination of necessity and chance. Natural selection, we are told, will find a faster pathway through the space of possibilities. The idea here is, therefore, that the law-like process of natural selection increases the probabilities to acceptable levels over geological time.

Putting it simply, the essence of the argument is this. Natural selection favours the strong progeny over the weak in a situation where resources are limited. It helps preserve any beneficial mutation. Organisms with that mutation survive and others do not. But natural selection does not cause the mutation. That occurs by chance. The quantity of resources (food) available is one of the variable parameters in the situation. It occurred to me as a mathematician that it would be interesting to see what happens if this parameter is allowed to increase. I invite you to do a thought experiment. Imagine a situation in which resources increase so that, in the limiting case, there is food for all, the strong and the weak. As resources increase, there would seem to be less and less for natural selection to do, since most progeny would survive. What would neo-Darwinists say to this? Would they say on the basis of their chance arguments that evolution would now be less and less likely? For it would now seem that chance is doing all the work: and the neo-Darwinists have ruled that possibility out of court.

When I thought of this I was sure that it must have occurred to someone earlier, and not surprisingly it has. Indeed, in 1966, British chemist R.E.D. Clark drew attention to the fact that Darwin had been disturbed by a letter from the eminent botanist Joseph Hooker in 1862 in which Hooker argued that natural selection was in no sense a creative process.19 However, Clark had to reconstruct Hooker’s argument from Darwin’s reply as he thought Hooker’s original letter had been lost. Hooker’s letter has not been lost, however. It reads: ‘I am still very strong in holding to impotence of crossing with respect to [the] origin of species. I regard Variation as so illimitable in {animals}. You must remember that it is neither crossing nor natural selection that has made so many divergent human individuals, but simply Variation [Hooker’s emphasis]. Natural selection, no doubt has hastened the process, intensified it (so to speak), has regulated the lines, places etc., etc., etc., in which, and to which, the races have run and led, and the number of each and so forth; but, given a pair of individuals with power to propagate, and [an] infinite [time] span to procreate in, so that not one be lost, or that, in short, Natural Selection is not called upon to play a part at all, and I maintain that after n generations you will have extreme individuals as totally unlike one another as if Natural Selection had extinguished half.

‘If once you hold that natural selection can make a difference, i.e. create a character, your whole doctrine tumbles to the ground. Natural Selection is as powerless as physical causes to make a variation; the law that “like shall not produce like” is at the bottom of [it] all, and is as inscrutable as life itself. This it is that Lyell and I feel you have failed to convey with force enough to us and the public: and this is the bottom of half the infidelity of the scientific world to your doctrine. You have not, as you ought, begun by attacking old false doctrines, that “like does produce like”. The first chapter of your book should have been devoted to this and nothing else. But there is some truth I now see in the objection to you, that you make natural selection the Deus ex machina for you do somehow seem to do it by neglecting to dwell on the facts of infinite incessant variation. Your eight children are really all totally unlike one another: they agree exactly in no one property. How is this? You answer that they display the inherited differences of different progenitors – well – but go back, and back and back in time and you are driven at last to your original pair for origin of differences, and logically you must grant, that the differences between the original [MALE] & [FEMALE] of your species were equal to the sum of the extreme differences between the most dissimilar existing individuals of your species, or that the latter varied from some inherent law that had them. Now am not I a cool fish to lecture you so glibly?’20

It is interesting to note the force with which Hooker writes in ascribing ‘half the infidelity of the scientific world’ against Darwin to his failure to deal with this argument. Darwin’s reaction came in a letter (after 26 November but actually dated 20 November 1862). ‘But the part of your letter which fairly pitched me head over heels with astonishment; is that where you state that every single difference which we see might have occurred without any selection. I do and have always fully agreed; but you have got right round the subject and viewed it from an entirely opposite and new side, and when you took me there, I was astounded. When I say I agree, I must make proviso, that under your view, as now, each form long remains adapted to certain fixed conditions and that the conditions of life are in [the] long run changeable; and second, which is more important that each individual form is a self-fertilizing hermaphrodite, so that each hairbreadth variation is not lost by intercrossing. Your manner of putting [the] case would be even more striking than it is, if the mind could grapple with such numbers – it is grappling with eternity – think of each of a thousand seeds bringing forth its plant, and then each a thousand. A globe stretching to furthest fixed star would very soon be covered. I cannot even grapple with the idea even with races of dogs, cattle, pigeons or fowls; and here all must admit and see the accurate strictness of your illustration. Such men, as you and Lyell thinking that I make too much of a Deus of natural selection is conclusive against me. Yet I hardly know how I could have put in, in all parts of my Book, stronger sentences. The title, as you once pointed out, might have been better. No one ever objects to agriculturalists using the strongest language about their selection; yet every breeder knows that he does not produce the modification which he selects. My enormous difficulty for years was to understand adaptation, and this made me, I cannot but think rightly, insist so much on natural selection. God forgive me for writing at such length; but you cannot tell how much your letter has interested me, and how important it is for me with my present Book in hand to try and get clear ideas.’21

Darwin clearly feels the force of Hooker’s argument to the extent of agreeing with it though astonished at the way in which it was put. The argument seems rather important because it raises very serious questions about the kind of argument that purports to render probabilities of macro (or molecular) evolution acceptable within the timescale constraints supplied by contemporary cosmology.

Lennox, John. God’s Undertaker: Has Science Buried God? Lion Hudson. Kindle Edition.

 

Falsificationism

One problem with professional philosophy—and this holds for some of the sciences too, like physics and biology—is that the subject matter is difficult to master and requires a great deal of time and technical training.

This does not, however, stop philosophical concepts from spilling over into popular discourse, where they are usually poorly understood, or even more commonly, completely misunderstood.

When I hear the terms “falsification” or “falsificationism” thrown around wildly, I experience something much like what I imagine a biologist experiences when he hears the term “evolution” being wildly and recklessly misapplied in contexts where it is misleading or meaningless.

What is falsificationism? It is a specific answer to a specific philosophical question given by a specific philosopher to solve a specific problem—and it is a failed attempt at that, which left in its wake a sometimes useful methodological tool and an unfortunate extra-philosophical cult following.

The philosopher in question is Sir Karl Popper, and the problem he was trying to solve was known in his day as “the demarcation criterion problem.” At the time, it was taken for granted that some knowledge is scientific and some knowledge is not scientific, that some human disciplines of inquiry are sciences and some are not. So the question arose: by what criterion can one demarcate a real or genuine science from a non-science, or, more urgently, from a pseudo-science claiming to be a science? In other words, how do you tell a science from what isn’t science?

Popper’s thinking was informed by the intellectual milieu of early twentieth-century Vienna, in which many exciting intellectual revolutions were underway simultaneously. Einstein was threatening to shake the heretofore seemingly unshakable foundations of Newtonian physics; mathematics seemed, as per Hilbert’s famous program, about to be completed (before Gödel put an end to that idea once and for all); Freudians claimed to have discovered the key to the depths of the human psychē; and Marxists, to have unlocked the inner mechanisms of human history. It was a heady time, in which people were not afraid to make bold, often reckless, claims.

At the time, the universally accepted answer to the ‘demarcation problem’ was verificationism. Something would count as scientific knowledge if its claims could be empirically verified, and the more verifications, the more scientifically sound the view was supposed to be.  This theory is very easy to understand and has a direct appeal to common sense: if a theory keeps being proven correct, it is probably a correct theory, right?

Unfortunately, Popper noticed something, something which today we would call “confirmation bias.” He noticed it particularly among the Marxists and the Freudians, both of whom insisted that their respective disciplines of dialectical materialism and psychoanalysis were indeed SCIENCES. He noticed it personally in his friend Alfred Adler, the Freudian heretic who had ‘discovered,’ contra Freud, that the key to the human psychē was not sexual repression but the inferiority complex.

What Popper noticed is that when a Marxist or a Freudian attempted to give a scientific account of a historical event in terms of class conflict, or of a psychological phenomenon in terms of sexual repression and neurosis, they were able to do so no matter what the circumstances were. A Marxist could, and did, explain anything and everything by means of class conflict. A Freudian could, and did, explain anything and everything by means of sexual repression and neurosis. And each “explanation” given was counted as a “verification” of either Marxism or Freudianism, thus further proving each theory correct and scientific. Nor could one criticize Marxism or Freudianism: it was obvious that the critic of Marxism was a bourgeois apologist, which proved Marxism’s class-conflict theory, and that the critic of Freudianism was sublimating his sexual repression into a neurotic attack on Freudianism, which proved Freudianism. (As you can imagine, when a Marxist got into a head-to-head argument with a Freudian, it was like a perpetual motion machine of pointless back and forth!)

Popper regularly dined with his friend Adler, with whom he often quarreled about this issue. One time, Popper came across a psychological case that, it seemed clear to him, could not be explained by means of Adler’s go-to explanation for everything, the inferiority complex (which had the same self-confirming structure as the Freudianism from which it sprang; e.g., a man sees a drowning child: if he fails to jump in the water to save her, it was due to his feelings of inferiority; if he does jump in to save her, he was overcompensating for his feelings of inferiority, etc.).

Popper waited for his moment and sprang his example case on Adler, hoping to baffle him. Instead, to his astonishment, Adler immediately analyzed the case in terms of an inferiority complex. Popper managed to sputter, “But … how do you know this?” To which Adler replied, in a magisterial tone, “Based on my thousandfold experience.” Popper, having recovered, shot back: “And I suppose now your experience is a thousand-and-one-fold!”

So, the scene is half set. We need one more piece of the puzzle. Einstein’s theory of relativity was being hotly debated by physicists. Some were intrigued by it; some thought it was complete lunacy. As it happened, a total eclipse of the sun was due to happen, which would allow something not usually possible: the direct observation of stars whose light passes close by the sun (such stars are normally overwhelmed by the sun’s radiance and cannot be observed). Einstein’s theory predicts that light will be bent by sufficiently massive bodies (such as the sun) whereas Newton’s does not; that is, according to Einstein, starlight should bend around the sun due to its mass, shifting the apparent positions of stars near the sun’s edge. This allowed a very interesting test case. In essence, Einstein’s theory predicted that a given star would appear at point E, due to the gravitational bending of its light, and Newton’s theory predicted it would be seen at point N. All that was needed was to wait for the eclipse and see whether the star appeared at E or N. And as I suppose you all know, the eclipse came (in 1919, observed by Eddington’s famous expedition), the stars appeared at E, just as Einstein had predicted, and the relativistic revolution began in earnest.

What impressed Popper so much about this was that Einstein was willing to “go out on a limb” so to speak and make a PREDICTION, which, if it didn’t pan out, would have significantly undermined the credibility of his theory; but at the same time, if it did pan out (as it did) would significantly boost the credibility of the theory. In other words, Einstein was willing to make a prediction which, if it failed to hold, would FALSIFY HIS THEORY.

Marxists and Freudians and Adlerians were NOT willing to make predictions which would, if they failed to happen, cause them to abandon their theories.  NOTHING can make a convinced Marxist or Freudian (or feminist) abandon his or her theory.

So Popper hit on the idea of REVERSING the demarcation criterion: instead of VERIFICATION (he argued) truly scientific reasoning risks FALSIFICATION.  It makes CONJECTURES and risks REFUTATIONS.  A theory that cannot under any circumstances be falsified is not, according to Popper, a scientific theory.

Thus was born the idea of falsificationism, namely, the idea that what makes a discipline scientific is its willingness to risk being refuted by making risky predictions.

As with verificationism, there is a kind of common sense appeal to falsificationism.  It is not a worthless idea—but in the event, it can’t do the work Popper hoped it could, and it certainly can’t do the work that Popper candidly admitted it could not: namely, serve as a criterion for all legitimate knowledge as such.

To start with the latter, as an epistemological principle, falsificationism is not a scientific statement. It is a philosophical statement (or a “meta-scientific statement” as Popper would have said).  It was a statement about science, but not a scientific statement.  Part of the whole problem it is meant to solve is to mark off scientific knowledge from other sorts of knowledge, such as philosophical knowledge.  If one were to attempt to apply the falsification criterion beyond its intended scope of scientific knowledge, applying it to, say, all knowledge, the result (of which Popper was well-aware) would be disastrous:

  1. The principle of falsification: no statement which cannot be empirically falsified can count as legitimate knowledge, and must therefore be rejected.
  2. Statement 1, the principle of falsification, is a statement which cannot be empirically falsified.
  3. Therefore, the principle of falsification does not count as knowledge, and must be rejected.

Any attempt to take Popper’s principle of falsification as a universal standard applying to all knowledge fails on its face, because the principle is SELF-DEFEATING.  Applied to itself, it fails its own test.  And Popper was well-aware of this, since the same holds true for its predecessor, verificationism.  Verificationism cannot be verified and falsificationism cannot be falsified. So any attempt to extend either principle beyond its narrower, scientific scope fails.

And neither principle succeeds in its more limited purpose of demarcating “scientific knowledge” from non-scientific knowledge either (to give the game away, most philosophers today are content to say that “scientific knowledge” is a Wittgensteinian family resemblance concept and not to think there even IS anything specific that exactly demarcates science from non-science).

Verificationism fails for the reasons that so upset Popper. Countless pseudo-verifications can be generated almost at will for any theory. This is why Popper wanted falsification to be clear, cut and dried. IF THIS PREDICTION DOESN’T HAPPEN, THE THEORY IS FALSIFIED AND SHOULD BE ABANDONED.

Why doesn’t this work for science, at least? Didn’t it do a good job distinguishing Einstein’s theories from Marx’s or Freud’s? Yes, in that interesting and unusual case, it did. But unfortunately, that isn’t how science actually works the vast majority of the time.

Since the Einstein example that so impressed Popper was astronomical, let’s stay in the night sky. Consider the planet Uranus. When you look for it, it isn’t where it is supposed to be, according to Newton’s theory.

BAM! Clear, cut and dried, remember? Newtonian physics is FUCKING FALSIFIED.  Stop using it! Right now! It’s wrong! No, really, quit it!

Except … that’s not how things played out. Newtonian physics was too damned useful to throw out as Popper’s criterion would demand. Instead of regarding Newtonian physics as falsified by Uranus not being where it should be according to the theory, scientists decided that there was probably some other factor X which was causing the (correct) theory not to match up with the (would-be falsifying) observation. What could it be? Well … if there were another planet out beyond Uranus … of THIS mass … and right about HERE … THAT would explain Uranus not being where it is supposed to be according to the theory.

Lo and behold, telescopes were pointed and Neptune was discovered (in 1846, almost exactly where Le Verrier’s calculations said it would be). Far from being a falsification of Newtonian physics, the failed prediction of the location of Uranus led to the discovery of Neptune, which was then regarded as a stupendous confirmation of Newtonian physics!

Well … except for one little problem. Now that we know about Neptune, it turns out that it keeps not being where Newton’s theory says it should be. So, once again, Newtonian physics is falsified, right? We abandoned it? Nope. Once again, scientists did what scientists actually do (and not what Karl Popper would later say they SHOULD do), and they decided that the mismatch between the theory and the observation was due not to a defect in either one, but to some complicating factor X. What could it be? Well … if there were yet another planet beyond Neptune … of THIS mass … right about HERE … and so we discovered Pluto. (As it happens, Pluto turned out to be far too small to account for the supposed residuals in Neptune’s orbit, which were later traced to an error in the value assumed for Neptune’s mass; but the methodological point stands.) And another crucial scientific discovery was made due to the fortunate fact that Popper hadn’t written yet, and no scientist in his right mind considered Newton “falsified.”

Now it had been known for a long time that Mercury also doesn’t do what it should do, according to Newton’s theory: its perihelion precesses slightly more than Newtonian mechanics predicts. And once again, scientists hypothesized that neither the theory nor the observations were wrong, but that there was a complicating factor X which was the cause of the discrepancy. If there were another planet HERE, of THIS mass … well, it’s too close to the sun to actually SEE (yet), but let’s assume it’s there. They even named it: Vulcan, the planet closer to the sun than Mercury, which disturbs Mercury’s orbit.

Alas for planet Vulcan! Sorry, Mr. Spock! We mourn for thee. It turns out that in this case, the theory was wrong.  Newton wasn’t right, and Einstein’s theory explains perfectly well why Mercury moves as it does without the need to postulate a planet Vulcan.

But I hope you see the problem already from the examples given. Assume there is a mismatch between a THEORETICAL PREDICTION and an EMPIRICAL OBSERVATION.  Then one of the following is true:

  1. The theory is wrong / falsified.
  2. The observation is erroneous in some way.
  3. Neither the theory nor the observation is erroneous, but there is an unknown factor X causing the discrepancy.

In brief, the problem with Popper’s theory is that it requires us to know in advance that 2 and 3 can be safely ruled out.  Now, sometimes 2 can be ruled out to a high degree of probability, sometimes not.  But 3 is the real sticking point.  3 can NEVER be ruled out as a possibility since, by definition, it is an UNKNOWN FACTOR X.  It is simply not possible to say “I know with certainty that there is not something I do not know about interfering in this case.”

This alone would be enough to sink Popper’s falsificationism as a comprehensive theory of scientific knowledge (let alone all knowledge, which it never was nor could have been), but matters get even worse for falsificationism.

Again, I’m not going to give you the detailed philosophical arguments behind all this, but two philosophers advanced very similar ideas, one earlier and one better, but similar to the extent that the general view is known as the Quine-Duhem Thesis … or the Duhem-Quine Thesis … whichever sounds better to you, I guess (I like the first one, even though Duhem was before Quine in making the case).

Basically, the Quine-Duhem Thesis states that it is impossible to test any scientific idea or theory or hypothesis in isolation, because any possible test one can devise already carries implicit and explicit background assumptions. In something like Quine’s terminology, knowledge, including scientific knowledge, forms a holistic system or web of beliefs and assumptions, and there is simply no way to ask, IN ISOLATION FROM THE SYSTEM AS A WHOLE, whether one particular belief or idea is true or not. It is a kind of Einsteinian relativity applied to epistemology. What Quine argued, in effect, is that one can, with rational consistency, continue to hold any belief whatever, so long as one is willing to sufficiently adjust one’s entire conceptual web.

I have some personal experience with this, since I have on more than one occasion backed an atheist into a logical corner in which reason required him to admit the existence of God. To my amazement (and horror) I have found that many atheists are willing to take the leap into absurdity and reject reason outright rather than admit God. I have actually been told “If reason proves there is a God, then reason isn’t reliable after all.” And how could I persuade them not to reject reason at this point? Not with reasons, obviously. Non credo, quia absurdum.

And I’m sure that many sciency-types have encountered fundamentalist Christians who are more than willing to sacrifice their belief in science to preserve their literal reading of the Bible. I imagine this drives them (the sciency-types) to distraction, because it seems so willfully irrational. The problem is, of course, the Quine-Duhem problem: it seems willfully irrational on the basis of certain background assumptions that your interlocutor does not share; what seems manifestly irrational to you is not necessarily self-inconsistent: it is only inconsistent with some of your assumptions, which your interlocutor does not share.

So where does all this leave us? Well, it leaves us without any cut-and-dried test of what is or is not scientific knowledge, or more generally, what is or is not valid knowledge as such.  We may know a lot of things, but we do not necessarily know what or how or why we know.  And yet, the world endures, somehow—even if we cannot with 100% certainty say what is or isn’t “science.”

Maybe it suggests we should cultivate a bit more Socratic humility and Socratic ignorance.

And it certainly tells us that idiots on the internet should stop invoking falsification as if it were some kind of Holy Scripture or Harry Potter-esque magic spell: Wingardium leviosa! Expecto patronum! Falsificatio popperium! 

It was never a theory of knowledge as a whole. It was a limited theory meant to explain how science is different from non-science.  And it didn’t even WORK.  So WHY are you bringing it up, as if it were some sort of trump card, you idiots?

I’ll close this with the most appropriate thing I can think of, a pair of quotes from the philosopher of science Paul Feyerabend:

[image: Feyerabend quotation on Popper]

And even more succinctly

[image: Feyerabend quotation on theories]

Two Particles Moving Apart Each at the Speed of Light Move Apart at the Speed of Light

Einstein’s theory of special relativity holds that no physical entity can exceed the speed of light, C.

This produces a counterintuitive result when one considers the case of two particles moving apart from one another in opposite directions, each with a velocity of C.

Common sense seems to suggest that the particles would be separating at a velocity of 2C.

This is because the ordinary formula for calculating the relative velocity of two objects moving apart, first given as an equation by Galileo (who was a great physicist, but only a mediocre astronomer), is

s = v + u

Here s stands for “speed” or “net relative velocity,” and u and v (which were originally the same letter, as it happens) stand for the two velocities of the things moving apart.

So, by the common-sensical Galileo formula, two trains moving apart from each other, each at 5 meters per second, move apart at 10 m/s.  Human experience and careful measurement seem to confirm this.

And by the same common-sensical Galilean equation, two particles moving apart from one another each at C should move apart at 2C,  since obviously C + C = 2C.

But Einstein’s theory predicts they will move apart at C, contrary to the common-sensical Galilean equation.

What happened?

Very simply, Galileo’s equation is incorrect.  The actual equation is this:

s = (v + u) / (1 + vu/C²)

As we can easily calculate, where v and u both equal C: s = (C + C) / (1 + C·C/C²) = 2C / (1 + 1) = 2C/2 = C.

What about our two trains moving apart each at 5 m/s?

The answer is that they do not move apart at exactly 10 m/s, but at about 9.9999999999999972 m/s.

And the difference between 9.9999999999999972 m/s and 10 m/s is so small (in human terms) that it is impossible to directly measure in almost all cases, and makes no practical difference, let us say, 99.99999999999996% of the time.
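If you want to check both calculations yourself, the formula is easy to play with. Here is a quick sketch in Python (my own illustration, not part of the original discussion; note that the printed result for the trains is limited by floating-point precision, which is itself far coarser than any physical measurement of the difference):

    # Relativistic velocity addition: s = (v + u) / (1 + v*u / C^2)
    C = 299_792_458.0  # speed of light in m/s

    def separation_speed(v, u):
        """Speed at which two objects moving apart at v and u separate."""
        return (v + u) / (1 + v * u / C**2)

    print(separation_speed(C, C))      # 299792458.0 -- exactly C, not 2C
    print(separation_speed(5.0, 5.0))  # ~9.999999999999998 -- a hair under 10 m/s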

In other words, the Galilean equation is slightly inaccurate, but it produces error so slight as to make no practical difference, and thus it is completely reasonable to use in almost every calculation human beings make—except, of course, for those human beings who are physicists dealing with things moving at a significant fraction of the speed of light. When we are dealing in meters and seconds, or kilometers and hours, angstroms and picoseconds don’t make any real difference. As Aristotle correctly pointed out 2400 years ago, it is as unreasonable to demand far too much precision as to demand far too little; what is reasonable is to be as precise as the subject matter requires. Building a house does not require us to make measurements at the subatomic level, and to demand this of the house builder is unreasonable: it’s outside his power, and it would result in no houses being built.

I wrote this just in case anyone wondered about how it is possible for two particles moving apart each at C to move at C.  I wondered about this, a long time ago, and took the trouble to find out, so I thought I’d share.

Our common-sensical intuitions about things are not always correct, although as in this case, they are often “in the neighborhood of truth.”

Does the Law of Inertia Disprove the Argument from Motion?

The argument from motion for the existence of God dates originally to Aristotle.  It is also presented by St. Thomas Aquinas as the First Way in his famous “Five Ways” [quinque viae] to demonstrate the existence of God. He calls it “the most manifest way.”

The demonstration proceeds from the evident premise that “some things are in motion” or “there is motion” to the necessity of a first or unmoved mover, something that can be shown to have the minimal attributes of what we call God. As with all proofs of the existence of God, the argument from motion proves only the bare existence of such a being: what God is like is something that still needs to be developed, and Thomas spends hundreds and hundreds of pages of careful argument doing this.  “You can’t prove anything about something unless you prove everything about it as part of the same argument” is an absurd principle.

Now, some have argued that Newton’s First Law of Motion, also known as the Law of Inertia, serves as a counterexample to, and therefore a refutation of, the argument from motion.  I’m going to show you why this is not the case.

Newton’s First Law states:

Every body perseveres in its state of being at rest or of moving uniformly straight forward except insofar as it is compelled to change its state by forces impressed.

[NOTE 1: I had to go get my copy of Newton’s Principia to find this. If you try to find it online, you’ll get nothing but paraphrases, of which there seems to be no established “correct” form. It’s worth noting that what is taken to be one of the basic laws of physics isn’t even stated clearly by anyone, anywhere, in a definite form!]

[NOTE 2: It’s also worth remarking that this basic law of nature applies to literally nothing. At first it seems to be about everything: “every body.”  But it turns out to be about “every body which is not acted upon by an outside force” which is no body at all—something which Newton’s own law of universal gravitation is enough to show. Every body is always being acted upon by every other body.  So this law of nature states what would happen in a situation that never happens.]

Fair enough. We do get the gist of the law. Aristotle thought, very reasonably but incorrectly, that things have a natural tendency to slow down and stop, because that is what we all observe things to do on earth.  The law of inertia states that they don’t have a natural tendency to change their velocity.  The word change is going to be important.
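In modern symbols (a gloss on the law, not Newton’s own wording): the second law gives F = ma, with acceleration a = dv/dt, and the first law is just the zero-force case. If F = 0, then dv/dt = 0, and the velocity v (both its magnitude and its direction) stays exactly what it is. On this picture, uniform motion is not a process that needs a sustaining cause; only a change of velocity calls for one.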

Now, the argument against the argument from motion would go:

  1. The argument from motion uses as a premise “Everything in motion requires a constant action upon it by something else to keep it in motion.”
  2. The law of inertia states that physical bodies in motion do not require a constant action upon them to keep them in motion.
  3. The law of inertia is true.
  4. So by the law of inertia, one premise of the argument from motion is false.
  5. So the argument is unsound.

Here we need to use Thomas’ favorite word: distinguo, “I distinguish” or “I make a distinction.” A general rule of logic is that one needs to be working with clear and distinct terms. If a term is ambiguous or equivocal, then one is in danger of the fallacy of equivocation, of using the ambiguous term now in one sense, now in another. When you encounter an ambiguity, it is necessary to make a distinction, or as many distinctions as necessary. A good general rule of logical method is: to unmake an ambiguity, make a distinction.

It turns out that “motion” as used in the law of inertia isn’t “motion” as used in the argument from motion.  The argument against the argument from motion turns out to be invalid because of an equivocation on the term “motion.”

Here’s what Thomas says:

Now whatever is in motion is put in motion by another, for nothing can be in motion except it is in potentiality to that towards which it is in motion; whereas a thing moves inasmuch as it is in act. For motion is nothing else than the reduction of something from potentiality to actuality. But nothing can be reduced from potentiality to actuality, except by something in a state of actuality.

Helpfully, he gives us a definition of the term “motion,” namely, “the reduction of something from potentiality to actuality.”

For the ancients and mediaevals, the word “motion” is synonymous with our word “change.”  “Change of place” or “local motion” was only one very common type of motion, so common in fact that the word “motion” itself eventually came to mean just “local motion” or “change of place.”

So the word “motion” originally meant “change” but came to mean “local motion” or “change of location.” Wouldn’t it be odd if change of location turned out not even to be a kind of change? Yet this seems to be exactly what happened. The most obvious example of “change” turned out not to be a case of change!

We need another distinction, a very important one, drawn by the 20th century philosopher Peter Geach, between real properties  and real changes on the one hand and what he called “Cambridge properties” and “Cambridge changes,” on the other. (Calling them “Cambridge properties” is really a very funny jab at the state of philosophy at Cambridge in Geach’s day, but I won’t get into that here.)  Helpfully, Edward Feser has a paragraph about Cambridge properties and Cambridge changes, which saves me the trouble of writing one myself:

Here, building on a distinction famously made by Peter Geach, we need to differentiate between real properties and mere “Cambridge properties.” For example, for Socrates to grow hair is a real change in him, the acquisition by him of a real property. But for Socrates to become shorter than Plato, not because Socrates’ height has changed but only because Plato has grown taller, is not a real change in Socrates but what Geach called a mere “Cambridge change,” and therefore involves the acquisition of a mere “Cambridge property.” The doctrine of divine simplicity does not entail that God has no accidental properties of any sort; He can have accidental Cambridge properties.

A “Cambridge change” is a change in a “Cambridge property”, which is a property something has not by virtue of having a property (!) but by being in relation to something else, such that a change in the other thing causes a change in the relation, and so a merely relative change in a thing, insofar as it is a-thing-in-relation-with-something-that-changed, but not in itself.  Hard to say, but fairly easy to understand. Feser’s example is a good one: Plato was a very tall man. But it is true that at one time “Plato is shorter than Socrates” was true and later “Plato is taller than Socrates” was true—even though Socrates may have been completely unchanged in height during this time. Socrates’ height is a real property of Socrates. Socrates’ being shorter than Plato is not a real property of Socrates.  It is neither a quantity nor a quality, but a relative.  Such examples could be multiplied indefinitely: “Socrates is to the left of Plato” could be true, and then “Socrates is to the right of Plato” could be, if Plato walks to Socrates’ other side. Again, this could be done without Socrates’ changing position at all.
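For readers who think better in code, the distinction can be modeled in a few lines of Python (my own illustration; nothing like this appears in Geach or Feser). An intrinsic attribute belongs to the object itself; a relational predicate can change its truth-value with no assignment to the object at all:

```python
# A sketch of real vs. "Cambridge" change: only Plato's height is ever
# written to, yet a relational fact "about" Socrates flips its truth-value.

class Person:
    def __init__(self, name: str, height_cm: float):
        self.name = name
        self.height_cm = height_cm  # an intrinsic ("real") property

def shorter_than(a: Person, b: Person) -> bool:
    # A relational fact: true or false only of the pair (a, b).
    return a.height_cm < b.height_cm

socrates = Person("Socrates", 170.0)
plato = Person("Plato", 165.0)

print(shorter_than(plato, socrates))   # True: Plato is shorter than Socrates

plato.height_cm = 185.0                # a real change, in Plato alone

print(shorter_than(plato, socrates))   # False: Socrates "changed" only relationally
```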

Why is this important? Well, it turns out that change of place, or local motion, or just “motion” in the modern sense isn’t a case of real change, but only a relative change.  “Motion” is not a substantial entity that is conserved, as Descartes thought.  As it turns out, momentum is the substantial entity that is conserved, lurking in the neighborhood of velocity as p = mv.
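In symbols (a standard statement of classical mechanics, added only for illustration): for an isolated pair of interacting bodies, what stays fixed is not “motion” but the total momentum,

\[
  \mathbf{p}_{\text{total}} \;=\; m_1\mathbf{v}_1 + m_2\mathbf{v}_2 \;=\; \text{constant.}
\]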

I could get into a lengthy discussion of Newtonian physics, but I need not. I’m just going to fast forward to Einstein and relativity theory.  We don’t even have to dive deep into Einstein.  It is sufficient to note that Einstein demonstrated that whether something is in motion is relative to one’s observational frame of reference (hence, relativity).   In other words, there seems to be no truth of the matter about whether something is in local motion or not; it depends on one’s frame of reference and the relation of the thing in question to other things.  You cannot say “X is moving” but only “X is moving relative to Y” or “X is stationary relative to Y.”  It seems as if “X is moving” can be said on its own, but it is really a statement comparable to “X is taller.” “X is taller” is a nonsense statement, because taller is not a quality but a relative, and a relative requires multiple relata—at least two.  “X is taller than Y,” on the other hand, makes perfect sense.  “X is moving” seems to make sense, I think, because we almost always make use of a “default frame of reference,” most likely the earth and everything that is generally affixed to the earth and therefore treated as stationary (although most of us are conceptually aware that the earth is or could be moving—if we take the sun or the stars as our fixed frame of reference).  If I see a bird flying, it is very nearly impossible not to analyze this in my perception as the bird being in motion—although conceptually, I should be able to understand that I could make the bird the fixed, stationary thing, and the earth and everything else in the universe would then move around the bird in a complicated manner. This is impractical and perhaps impossible in practice—but it is exactly the upshot of Einstein’s theory.
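The frame-relativity can be made vivid with a minimal sketch in Python (my illustration; it uses the simple Galilean transformation v′ = v − u rather than the full relativistic machinery, but the philosophical point carries over): whether a coasting body counts as “moving” differs from frame to frame, while the change in its velocity does not.

```python
# Whether a body is "moving" is frame-relative; whether its velocity has
# changed is not. Galilean transformation: v' = v - u (illustrative only).

def in_frame(velocity: float, frame_velocity: float) -> float:
    """Velocity of a body as measured in a frame moving at frame_velocity."""
    return velocity - frame_velocity

v_before, v_after = 10.0, 10.0   # a coasting body: velocity unchanged (m/s)

for u in (0.0, 10.0, -5.0):      # earth frame, co-moving frame, a third frame
    vb = in_frame(v_before, u)
    va = in_frame(v_after, u)
    print(f"frame u={u:+.1f}: moving? {vb != 0.0}; change in velocity: {va - vb:+.1f}")

# Output: "moving?" varies by frame (a Cambridge fact), but the change in
# velocity is +0.0 in every frame (a frame-invariant, "real" matter).
```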

What this shows is that motion in the modern sense is not a real change but only a Cambridge change.  And this is the motion that the law of inertia is talking about.

But if local motion is only a Cambridge change, then local motion isn’t real motion as understood by Aristotle and Aquinas.  And since it isn’t a real motion, it isn’t a counterexample to the principle that “any real motion requires an external agency to bring it about.”

The principle that a real change or real motion, that is, a transition from potentiality to actuality, requires an outside agency is not refuted by the law of inertia.  In fact, the law of inertia preserves the principle entirely, because it states that a change in local motion (that is, a change in velocity) requires an external agency to bring it about.  Change of place turns out to be a mere Cambridge change, but change in velocity is a real change (because it entails a change in momentum, since p = mv, and momentum is a real property that is conserved)—and so it requires that the body which changes its velocity be acted upon by an external force.
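The same point in one line of standard Newtonian notation (textbook mechanics, nothing peculiar to this argument): force is what changes momentum, and for a body of constant mass a change of velocity just is a change of momentum,

\[
  \mathbf{F} \;=\; \frac{d\mathbf{p}}{dt} \;=\; m\,\frac{d\mathbf{v}}{dt} \qquad (m\ \text{constant}).
\]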

So let’s reformulate the argument:

  1. The argument from motion has the premise “Every real change requires an external agency to bring it about.”
  2. The law of inertia states that local motion will remain unchanged unless changed by an external agency.
  3. The law of inertia is true.
  4. Local motion is a real change.
  5. So local motion is a counterexample to the premise used in the argument from motion.
  6. So the argument from motion has a false premise.
  7. So the argument from motion is unsound.

This refutation of the argument from motion fails because premise 4 is false.  And since 4 is false, 5 is false. And since 4 and 5 are false, 6 and 7 are not demonstrated.

We can present the proper argument thus:

  1. The argument from motion has the premise “Every real change requires an external agency to bring it about.”
  2. The law of inertia states that change of place of bodies does not require an external agency to bring it about.
  3. The law of inertia is true.
  4. ∴ Change of place of bodies is not a real change in bodies.

And this seems to be correct, if very counterintuitive for small moving objects near the surface of the earth.  But it is really no more counterintuitive than the idea of the earth being in multiple kinds of motion relative to the sun and the stars.  And as twisty a history as it is, the word “motion,” which used to mean “change,” came to mean “local motion,” which then turned out not to be a case of real change at all!

Certainly the idea that a change of place is only a relative change is somewhat startling, running counter to our ordinary intuitions, but relativity theory is nothing if not challenging to at least some of our ordinary intuitions.  Neither Aristotle nor Thomas would dispute Einstein.  If change of place is only a relative change and not a real one, so it is.  Of course it would be foolish to call Aristotle or Aquinas fools for thinking what almost everyone thought until Einstein, namely, that change of place is a real change (although some philosophers did question it; Leibniz’s dispute with Newton over Newton’s concepts of absolute space and absolute time comes to mind—as usual, Leibniz was right, or at least, not wrong).

In any case, the law of inertia doesn’t refute the argument from motion. It simply turns out that the law of inertia shows that change of place or local motion isn’t a case of real motion as Thomas and Aristotle use the word “motion”; or, as we would say today, change of place or local motion isn’t a case of real change.

Is Belief in God a Delusion?

A persistent atheistic trope is calling belief in God delusional or a delusion.  The most obvious popular example is Richard Dawkins’ pro-atheism book The God Delusion, a book that, while popularly successful, is notorious for its shallowness and lack of rigorous argumentation (interested readers may wish to look at Alister McGrath’s The Dawkins Delusion for a highly detailed account of the many deficiencies under which Dawkins’ book suffers).

Is belief in God a delusion? The most widely used and accepted definition of a delusion comes from the Diagnostic and Statistical Manual of Mental Disorders (DSM), published by the American Psychiatric Association (APA):

Delusion. A false belief based on incorrect inference about external reality that is firmly sustained despite what almost everyone else believes and despite what constitutes incontrovertible and obvious proof or evidence to the contrary. The belief is not one ordinarily accepted by other members of the person’s culture or subculture (e.g., it is not an article of religious faith).

Well, things don’t look very promising for the atheist trope, do they?

To begin with only the most obvious point, delusions are standardly defined in such a way as to exclude articles of religious faith, something for which belief in God obviously qualifies.

Now, an atheist of course could claim that the DSM’s defining delusion in such a way as to exclude commonly held articles of religious faith is an error, a kind of special-pleading exemption for religious beliefs, which are not being treated the same as other beliefs. On this basis, the atheist might insist on adopting a different definition of delusion. But this very demand to change the standard definition appears to be a kind of special pleading on the part of the atheist.   Why shouldn’t religious beliefs be treated in a manner different from other sorts of beliefs? The theist merely needs to note that religious beliefs are not like other beliefs, because they are about a particular part of reality that is sui generis, viz. the divine or transcendent dimension of being.  The atheist could respond that there simply is no such dimension of being, but would immediately fall right back into special pleading and/or begging the question against the theist—unless of course the atheist can bring forth proof that there is no divine or transcendent dimension of being, a proof we still await.

But of course the theist need not rest her case on the specific exemption for articles of religious belief in the DSM’s definition of delusion. Let it go. Let us look at the other factors which make a delusion a delusion.  We see right away that there are three.  To count as a delusion, a belief must be

  1. based on incorrect inference about external reality
  2. firmly sustained despite what almost everyone else believes
  3. firmly sustained despite what constitutes incontrovertible and obvious proof or evidence to the contrary

Let’s start with criterion 2.  It appears that the authors of the DSM are aware of something which many atheists manage to somehow overlook, namely, the consensus gentium.  What is the consensus gentium? It is merely a technical name for the common consensus of humanity.  And the common consensus of humanity speaks overwhelmingly in favor of theism.

Here too, it is open to the atheist to object that the consensus gentium is not infallible and that to treat it as such is to commit an ad populum fallacy (an appeal to popularity).  But the consensus gentium does not, in itself, constitute an ad populum fallacy—it is not, in itself, a proof of a given proposition, but it does constitute evidence.  To dispute the consensus gentium requires one to hold that the majority of human beings are deceived or delusional in their beliefs (as atheists do hold).  But this view is strong evidence for the proposition “human belief formation is highly unreliable,” since on this view belief formation reliably produces false or delusional beliefs.  And the belief that human belief formation is highly unreliable serves as an all-purpose defeater for any belief whatever, including itself and the belief that “belief in God is delusional.” In other words, to dispute the consensus gentium, without specially pleading that human belief formation is reliable everywhere except with respect to the divine, seems to be a self-defeating move.

Just on the face of it, it is obvious that theism is the majority belief of human beings, and always has been, just as atheism is a tiny minority belief, even if one grown loud and strident in our modern, highly secularized society.  Theism very obviously fails to meet criterion 2 of the DSM’s definition of a delusion, so it isn’t one.  So far, we’ve seen that belief in God is not a delusion twice over. But there’s more.

Things are worse yet for the atheist who wants to use the “delusion” trope.  Criterion 1 specifies that the belief must be the result of some kind of faulty reasoning, an “incorrect inference about external reality.”  And criterion 3 specifies that a delusion is a belief held “despite what constitutes incontrovertible and obvious proof or evidence to the contrary.”

In other words, it would need to be shown that theism is (1) irrational and (2) obviously false.

And of course atheism has not met either of these challenges.  It has not even come close. In fact, atheists by and large admit not only that they have not proven either of these things, but that they cannot do so; indeed, that it is impossible to do so.

It is no accident that the vast majority of today’s atheists are “lack of belief” atheists.  They follow philosopher Antony Flew’s 1972 redefinition of “atheism” to mean “lack of belief in God” as opposed to the standard and traditional definition “belief that God does not exist” (which is still held by ~80% of people, according to the Oxford Handbook of Atheism).

The “lack of belief” atheist does not claim that he knows or even believes that God does not exist, but only that he remains unconvinced that God does exist.  Well, good for him. (Actually, it’s bad for him, but set that aside).  That someone happens to be unconvinced that a belief is true in no way indicates that the belief is false.  It isn’t even a statement about whatever the belief is about, but a statement about a psychological property of a belief-holder.  If A says, “I lack a belief that G,” a perfectly legitimate response is to make a psychological report of one’s own and note “And I have one. What of it?”

There are, to my knowledge, only three serious arguments which attempt to show that theism is false, that is, that God does not exist:

  1. The Argument from Evil, which holds that the existence of unnecessary evil in the world is incompatible with an all-good being, which a perfect being, God, must be.
  2. The Argument from the Unnecessariness of God, which holds that God is unnecessary as an explanation for anything, and therefore is merely a gratuitous hypothesis which, following Ockham’s Razor, we ought not to make.
  3. The Argument from Self-Contradiction, which holds that the concept of “God” is self-contradictory, and therefore, God cannot exist.

All three arguments are notoriously weak and easily refuted:

  1. The Argument from Evil can be highly persuasive as an appeal to emotion.  One points to some horror or tragedy, personal or historical, and demands “How could a good God let this happen?” The honest answer, and the rational one, is that we don’t know, and we aren’t in a position to know the thoughts of an omniscient, omnipotent, and perfect being. Most theists believe that, since God is good, He does not cause or will evil, but only permits or allows it for a sufficiently good reason, which we simply do not (yet, fully) understand.  Nor can the atheist rule out the possibility that whatever evil he regards as “too much” is not, in a way beyond human understanding, for the best when seen from God’s point of view.  The most an atheist could do, it seems, is what Ivan Karamazov does in Dostoevsky’s The Brothers Karamazov, and willfully refuse to accept some evil or another (Ivan cannot accept the suffering of innocent children).  Yet, one of the basic elements of faith, as Christians use that term, is trust in God.  The Argument from Evil may well test one’s faith, but if it convinces, it does not do so as a rational argument. Christians trust that in the end, even though we do not understand how, in the words of St. Julian of Norwich “All will be well, and all will be well, and all manner of thing will be well.”
  2. The Argument from the Unnecessariness of God fails for two reasons.  As in the Argument from Evil, we simply don’t know enough to be sure that God is not necessary; on the contrary, there is a very strong case that God is necessary as the only possible answer to “the question of Being,” namely, “Why is there anything at all, and not rather nothing?” But even if we could be sure, which we cannot, that we need not invoke God as an explanation, to infer the nonexistence of God from this is simply a non sequitur.  At most, this argument could aim to show that belief in God is an unreasonable postulate to make, in which case it collapses into the Evidentialist Argument (see below).
  3. And as for the Argument from Self-Contradiction (and also its cousin, the Argument that Religious Language is Meaningless), well, no one has ever succeeded in making anything close to a good argument for the notion.  There are of course ways one can define God such that the concept is self-contradictory (and it causes the theist no pain to admit that God so defined does not exist), but no one has ever shown even a good candidate for a contradiction in the traditional conceptions of God. The “paradox of omnipotence” which sometimes impresses philosophical beginners is not a paradox at all, once one grasps the fairly basic point that “power to do anything possible” ≠ “power to do things that are impossible.”

With the failure of the only robust atheistical arguments on the table (and they are not very robust), in our time we have seen atheism retreat and retrench from ontology to epistemology, and the rise of the “lack of belief” atheist, which brings us to the Evidentialist Argument.

The Evidentialist Argument is not an argument that God does not exist, and does not attempt to prove that theism is false. It merely argues that belief in God is not sufficiently warranted by the evidence to count as a reasonable belief.

Before taking this up, we should note that even if the Evidentialist Argument turned out to be 100% successful, it would still fail to establish that belief in God is a delusion, since by definition, a delusion must be “firmly sustained despite what constitutes incontrovertible and obvious proof or evidence to the contrary,” and as we have just seen, the atheist has no such “incontrovertible and obvious proof” of the falsity of theism.  He doesn’t even have a remotely plausible one.  Nowadays, the typical atheist doesn’t even try to make an argument.

As we have already seen, belief in God is NOT a delusion, since criteria 2 and 3 cannot be met by the atheist who claims it to be one.  But what about criterion 1? Is belief in God based on an “incorrect inference” about reality? Can the atheist at least meet one out of the three criteria that establish delusion?

No, he cannot. The best the atheist can do is appeal to his own personal incredulity.  He looks at the evidence (or doesn’t look, commonly) and says “I’m not convinced.” The theist looks at the evidence and is convinced.  What sort of epistemic error is the theist making? Why is her belief absurd or rationally unwarranted? These questions have simply never been answered in a way that is non-question-begging, that is, in a way that doesn’t assume from the beginning, tacitly or explicitly, that belief in God is absurd or rationally unwarranted.

The best case the atheist has, it seems to me, is a spectacularly weak one, but one which happens to sell fairly well in our age. I mean the Appeal to Scientism. Science is so highly regarded today as a way of acquiring knowledge that unreflective persons can sometimes be induced to accept the claim that “scientific knowledge” is the absolute touchstone of all knowledge, or knowledge as such, and so that “only scientific knowledge is valid knowledge,” which, joined with “there is no scientific evidence of God,” would indeed yield the desired result, so:

  1. Only scientific knowledge is valid knowledge.
  2. There is no scientific evidence of God
  3. ∴ It is unreasonable to believe in God.

This is the Argument to Scientism in a nutshell. It is valid, and premise 2 is true. The argument fails because premise 1 is obviously false. Science is not the only source of knowledge we have. Science itself makes use of extra-scientific knowledge, and does so necessarily and constantly: it is part of the scientific method to use both mathematics and empirical observation (i.e. experience)—and neither mathematics nor experience, which science seeks to explain, is, just as such, a case of scientific knowledge. One is not “doing science” when one is having an experience. Another instance of a proposition that is not a scientific one is premise 1 of the Argument to Scientism itself. Scientism, as a doctrine, is notoriously self-refuting: if it is true, we must reject it, on its own grounds, because it is not itself scientific knowledge.  It fails its own truth test.

The Argument to Scientism also falls to a simple objection of common sense (another often valid source of knowledge, and indeed the root of the consensus gentium spoken of above): we know that science studies nature (or nature plus human activities, if you count the social sciences as full sciences), and we also know that God, as traditionally understood, is transcendent of nature. God simply doesn’t fall under science’s domain, any more than goodness does, or, for that matter, logic and math do.  Why on earth would a very excellent method for studying nature discover something it neither looks for nor can see, given what its method and scope are? The short answer is: science simply has nothing to say about God; it studies nature. Period.  So appeals to science, including bogus appeals to principles that aren’t scientific but look vaguely “science-y,” as in the Argument to Scientism, fail, because they cannot succeed without unreasonably making science omni-competent in every sphere of knowledge, which it obviously is not (whom should you vote for, according to the scientific method?), and reducing all other sources of knowledge to nullity, which would destroy mathematics and logic and experience as valid kinds of knowledge, and so take science down with it.

To bring this to a close, even if we charitably overlook the DSM’s explicit distinction between delusion and articles of religious belief (one that is entirely reasonable, as I argued), belief in God is still not a delusion: the atheist who claims that it is a delusion cannot meet even one of the three criteria needed to establish a belief as delusional.

I conclude that the atheist trope of calling belief in God “a delusion” amounts to nothing more than name-calling. It doesn’t have the slightest amount of rational weight behind it.

A Dilemma for Scientism

Professor Paul Moser discusses some problems with scientism in his book The Evidence for God: Religious Knowledge Reexamined:

5. A DILEMMA FOR SCIENTISM

Our dilemma will bear on positions (i)–(vi), given that it bears on the aforementioned core statements of naturalism satisfied by those positions, namely:

Core ontological naturalism: every real entity either consists of or at least owes its existence to the objects acknowledged by the hypothetically completed empirical sciences (that is, the objects of a natural ontology).

Core methodological naturalism: every cognitively legitimate method of acquiring or revising beliefs consists of or is grounded in the hypothetically completed methods of the empirical sciences (that is, in natural methods).

These are core statements of ontological and methodological naturalism, and they offer the empirical sciences as the criterion for metaphysical and cognitive genuineness. They entail ontological and methodological monism in that they acknowledge the empirical sciences as the single standard for genuine metaphysics and cognition. These core positions therefore promise us remarkable explanatory unity in metaphysics and cognition. Still, we must ask: is their promise trustworthy? For brevity, let’s call the conjunction of these two positions Core Scientism, while allowing for talk of both its distinctive ontological component and its distinctive methodological component.

Core Scientism is not itself a thesis offered by any empirical science. In particular, neither its ontological component nor its methodological component is a thesis, directly or indirectly, of an empirical science or a group of empirical sciences. Neither component is endorsed or implied by the empirical scientific work of physics, chemistry, astronomy, geology, biology, anthropology, psychology, sociology, or any other natural or social empirical science or any group thereof. As a result, no research fundable by the National Science Foundation, for instance, offers Core Scientism as a scientific thesis. In contrast, the National Endowment for the Humanities would be open to funding certain work centered on Core Scientism, perhaps as part of a project in philosophy, particularly in philosophical metaphysics or epistemology.

Core Scientism proposes a universality of scope for the empirical sciences (see its talk of “every real entity” and “every cognitively legitimate method”) that the sciences themselves consistently avoid. Individual sciences are typically distinguished by the particular ranges of empirical data they seek to explain: biological data for biology, anthropological data for anthropology, and so on. Similarly, empirical science as a whole is typically distinguished by its attempt to explain all relevant empirical data and, accordingly, by the range of all relevant empirical data. Given this typical constraint on empirical science, we should be surprised indeed if the empirical sciences had anything to say about whether entities outside the domain of the empirical sciences (say, in the domain of theology) are nonexistent. At any rate, we should be suspicious in that case.

Sweeping principles about the nature of cognitively legitimate inquiry in general, particularly principles involving entities allegedly outside the domain of the empirical sciences, are not the possession or the product of the empirical sciences themselves. Instead, such principles emerge from philosophy or from some product of philosophy, perhaps even misguided philosophy. Accordingly, Core Scientism is a philosophical thesis, and is not the kind of scientific thesis characteristic of the empirical sciences. The empirical sciences flourish, have flourished, and will flourish without commitment to Core Scientism or to any such philosophical principle. Clearly, furthermore, opposition to Core Scientism is not opposition either to science (regarded as a group of significant cognitive disciplines) or to genuine scientific contributions.

Proponents of Core Scientism will remind us that their scientism invokes not the current empirical sciences but rather the hypothetically completed empirical sciences. Accordingly, they may be undisturbed by the absence of Core Scientism from the theses of the current empirical sciences. Still, the problem at hand persists for Core Scientism, because we have no reason to hold that Core Scientism is among the claims or the implications of the hypothetically completed empirical sciences. A general problem is that specific predictions about what the completed sciences will include are notoriously risky and arguably unreliable (even though this robust fact has not hindered stubborn forecasters of science). The often turbulent, sometimes revolutionary history of the sciences offers no firm basis for reasonable confidence in such speculative predictions, especially when a sweeping philosophical claim is involved. In addition, nothing in the current empirical sciences makes it likely that the completed sciences would include Core Scientism as a thesis or an implication. The monopolistic hopes of some naturalists for the sciences, therefore, are hard to anchor in reality.

The problem with Core Scientism stems from its distinctive monopolistic claims. Like many philosophical claims, it makes claims about every real entity and every cognitively legitimate method for acquiring or revising beliefs. The empirical sciences, as actually practiced, are not monopolistic, nor do we have any reason to think that they should or will become so. Neither individually nor collectively do they offer scientific claims about every real entity or every cognitively legitimate method for belief formation. Advocates of an empirical science monopoly would do well to attend to this empirical fact.

The empirical sciences rightly limit their scientific claims to their proprietary domains, even if wayward scientists sometimes overextend themselves, and depart from empirical science proper, with claims about every real entity or every cognitively legitimate method. (The latter claims tend to sell trendy books, even though they fail as science.) Support for this observation comes from the fact that the empirical sciences, individually and collectively, are logically and cognitively neutral on such matters as the existence of God and the veracity of certain kinds of religious experience. Accordingly, each such science logically and cognitively permits the existence of God and the veracity of certain kinds of religious experience. We have no reason, moreover, to suppose that the hypothetically completed empirical sciences should or will differ from the actual empirical sciences in this respect. Naturalists, at any rate, have not shown otherwise; nor has anyone else. This comes as no surprise, however, once we recognize that the God of traditional monotheism does not qualify or function as an object of empirical science. Accordingly, we do well not to assume, without needed argument, that the objects of empirical science exhaust the objects of reality in general. An analogous point holds for the methods of empirical science: we should not assume uncritically that they exhaust the methods of cognitively legitimate belief formation in general.

Proponents of Core Scientism might grant that it is not, itself, a claim of the empirical sciences, but they still could propose that Core Scientism is cognitively justified by the empirical sciences. (A “claim” of the empirical sciences is, let us say, a claim logically entailed by the empirical sciences, whereas a claim justified by the sciences need not be thus logically entailed.) This move would lead to a focus on the principles of cognitive justification appropriate to the empirical sciences. Specifically, what principles of cognitive justification allegedly combine with the (hypothetically completed) empirical sciences to justify Core Scientism? More relevantly, are any such principles of justification required, logically or cognitively, by the (hypothetically completed) empirical sciences themselves? No such principles of justification seem logically required, because the (hypothetically completed) empirical sciences logically permit that Core Scientism is not justified. Whether such principles of justification are cognitively required depends on the cognitive principles justified by the (hypothetically completed) empirical sciences, and the latter matter clearly remains unsettled. We have, at any rate, no salient evidence for thinking that the (hypothetically completed) empirical sciences will include or justify cognitive principles that justify Core Scientism. The burden for delivering such evidence is squarely on naturalists, and it remains to be discharged.

Moser, Paul K.. The Evidence for God: Religious Knowledge Reexamined (pp. 76-80). Cambridge University Press. Kindle Edition.

Galileo Was Not “the Father of Modern Science”

This is Tim O’Neill‘s account of why Galileo does not deserve the title “the father of modern science,” although Galileo has sometimes been called this.

You can read Tim’s original post on Quora, Why is Galileo Considered the “Father of Modern Science”?, here.

See also my “Galileo was a Dogmatic, Unscientific Ass”.


Science and God

You sometimes hear it claimed that “Science has disproven God” or more modestly “Science has not discovered any evidence of God.”

The first claim is false.  The second is true but trivial.

What is science?  This is a question we really need to get clear about before we can talk about what science has done, or not done, because we need to know what science can do, and what it can’t do.

The first thing to note is that “science” is a vague concept which cannot be defined precisely.  It is a Wittgensteinian “family resemblance” concept (as is the concept “religion”) which covers a wide variety of activities and their results, which share many overlapping features (in the way that members of a particular family all “look alike”) without there being or needing to be any one particular feature that they all have in common.

We can talk about science in broad strokes: we can say that it is a way of acquiring knowledge about some areas of the world.  We can say that science is methodical, although again there is no such thing as “the scientific method”—rather, we are again faced with a number of methods which bear a family resemblance to one another.  The Big Bang cosmologist does not proceed in any way like the psychologist (for example, neither observes, at least not in any direct way, since neither the past nor the psychē can be observed—even though “observation” is often held to be central to “science”).

There is dispute about whether the social sciences are sciences at all.  Certainly disciplines like sociology and cultural anthropology do not produce results which are the same kind of stable, objective, universally-agreed upon knowledge as is given by, say, chemistry.

I am going to suspend judgment about the status of the social or human sciences as sciences, and affirm that natural science is definitely science.

Let me offer what I think is a fairly good (if necessarily broad) definition of “natural science”: the study of physical nature by means of an empirical-quantificational method.

This definition would assert that science has a specific domain: physical nature, by which I mean all parts of reality that involve material existence, e.g. the realm of matter and motion, usually disclosed (at least initially) by the senses.  This is the part of reality the Greeks called γένεσις (becoming) or simply φύσις (nature).  What we call “natural science” Aristotle called “physics,” which is knowledge of φύσις.  Aristotle also called it “second philosophy.”  First philosophy was the name he gave to what we nowadays call metaphysics—a discipline that gets its name from Aristotle’s texts that come, literally, “after physics” and go “beyond physics” (“after” and “beyond” are both ways of reading meta-).

My definition would further assert that science is a methodical study of nature, which has the characteristic of being empirical, that is, based ultimately in experience (usually in the restricted sense of sense experience), but which aspires to a level of certainty and objectivity closer to that of mathematics, by means of measurement and quantification. For example, anyone can experience that objects fall towards the earth; but it is “scientific knowledge” that any two bodies which possess mass (not merely objects and the earth) attract one another with a force proportional to the product of their masses and inversely proportional to the square of the distance between them, in accordance with a specific constant G, the gravitational constant:

\[
  F \;=\; G\,\frac{m_1 m_2}{r^2}
\]
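As a quick numerical sanity check, here is a minimal sketch in Python (my illustration; the constants are standard reference values, rounded). The result, about 9.8 N, is just the familiar weight of a 1 kg object at the earth’s surface:

```python
# Minimal numerical check of the law of universal gravitation.
# Constants are rounded reference values (an assumption of this sketch).
G = 6.674e-11        # gravitational constant, N·m²/kg²
m_earth = 5.972e24   # mass of the earth, kg
m_object = 1.0       # a 1 kg object
r = 6.371e6          # mean radius of the earth, m

force = G * m_earth * m_object / r**2
print(f"Force on a 1 kg object at the surface: {force:.2f} N")  # ≈ 9.82 N
```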

On my understanding of science, “natural science” proper would have begun in the 17th century, with the work of people like Bacon and Galileo, although there were certainly many precedents for what they were doing.  In a way, then, I am taking Newtonian physics as a kind of paradigm for “what natural science looks like.” Or James Clerk Maxwell’s theory of electromagnetism.

Two things are absolutely crucial to notice about science, on this understanding:

  1. Science, by its very nature, does not study all of reality.
  2. Science, by its very nature, does not even study nature in all its aspects.

If science studies nature, and nature is only one part of reality (or Being, if you prefer the more ancient way of speaking), then it is clear that science does not study all of reality.  What parts of reality lie outside the scope of science? If nature is defined as above, as the part of reality that involves matter and motion, then science would not study those parts of reality which do not involve one or both of those aspects.

What aspects of reality do not involve matter or motion? Arguably, the entirety of the psychical or mental realm, the realm of thought or consciousness, does not involve matter—at the very least it does not obviously do so, nor could it possibly do so in all respects.

And just as importantly, if not more so, there is the realm of what we might call Platonic reality, in which would be included such things as eternal nonmaterial truths, e.g. those of mathematics and of logic, Platonic forms or essences, and even the laws of nature which science seeks to formulate (there is a sense in which the laws of nature are not part of nature and so are not studied by science).

Take as a simple example the logical form modus ponens:

  1. P ⇒ Q
  2. P
  3. ∴ Q

There is no obvious sense (and no sense at all, I would say) in which the validity of this form of inference involves matter.  It is true that this is a valid form of inference, whether or not anyone knows this or actually infers in accordance with it.
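Indeed, the validity of the form can be stated and checked entirely in the abstract, with no reference to any material thing. Here is a minimal rendering in Lean (my illustration, added only to make the point concrete):

```lean
-- Modus ponens, stated and proved for arbitrary propositions P and Q.
-- Nothing material figures in the statement or in the proof.
theorem modus_ponens (P Q : Prop) (h : P → Q) (hp : P) : Q :=
  h hp
```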

Truth in general is nonmaterial.  This point cannot be overstated.  There is no intelligible sense in which truth can be construed as a physical substance or entity which e.g. occupies space, or has mass, or is subject to physical forces, or motion, or change.

This was the realization which caused St. Augustine to abandon materialism and eventually atheism.  His great intellectual stumbling block to accepting Christianity was his difficulty in rising above the senses and seeing how anything nonmaterial could be (his other, nonintellectual stumbling block was his addiction to fornication).  It is obvious that a materialist is not going to be able to believe in, or even readily comprehend the concept of, an immaterial God.  But every human being has a direct experience of truth. And truth cannot be material.  Even if truth is apprehended only by thinking, and thinking is somehow a function of matter (e.g. physical events in the brain or some such), it will still be the case that truth is what it is entirely apart from the thinking which apprehends it.  To think long and clearly about the nature of truth is the best way I know of to overcome materialism.

There is simply no way that one can torture truth into a material entity (not even when it is truth about material entities), nor is it possible for a sane person to renounce truth.  And of course, once the materialist realizes that something as important and all-pervasive as truth is nonmaterial, the dam has been broken, so to speak, and he or she is then free to realize that many other things are nonmaterial too: goodness and being, unity and plurality, identity and difference, logic, mathematics, essence and existence, consciousness, and on and on.  And once that step into the realm of Platonic reality is achieved (call it the first step out of the Cave), the way is at least no longer blocked for such a person to reason their way upwards to what Plato called The Good, or God.

Nor is it plausible that thinking itself, or consciousness, is reducible to material events.  To begin with, the two are logically distinct.  No contradiction is involved in thinking that there is thinking which involves no matter.  Indeed, it is only within consciousness that we have any experience of what we call matter.  This is the main thrust of Descartes’ Meditations on First Philosophy: I am able to doubt, as a matter of principle, that my experience of the physical world is veridical (I could be dreaming; I could be inside a “Matrix” or other virtual simulation or illusion)—but I cannot doubt my own thinking or my own having of experiences, even if they should turn out to be non-veridical.  What is absolutely impossible to doubt is that, so long as I am aware and thinking, I am: Cogito; Sum.

Thinking, or the mental realm, thus has a certain priority over the physical world. Whatever relation pertains between the two in fact, it is a given that the mental world is, for us, first: it is logically first, in that we cannot doubt it—and it is closer to us in another way: it is, in a primal way, what we are.  As Descartes says, “I am a thinking thing.”  Even if we wish to side with Aristotle and say that a human being is a hylomorphic unity of body and soul (which we should), we must still admit with Aristotle that “I am most of all my thinking.”

Let’s take a step back.  On the account I have given so far, science would be unable to study nonmaterial realities such as mathematics and logic (I do not consider these disciplines to be “natural science,” which is certainly not to say I doubt their validity); nor would science be able to study any aspects of the mental or of consciousness which could not be resolved into “third person, objective” knowledge that has some kind of material empirical correlate.  Again, by its very nature, as third-person objective knowledge of physical nature, science does not and cannot know things that are irreducibly first-person, subjective—i.e. what philosophers call qualia, the raw “what it is like” to experience something. For example, the redness of red.

Qualia are a much-debated subject in the philosophy of mind, but if there are qualia (which there evidently are) it is clear that they lie outside the purview of science. Here is a link to the Stanford Encyclopedia of Philosophy’s entry on Qualia: The Knowledge Argument—which argument I find entirely conclusive.  Here is, from the same page, philosopher Frank Jackson’s deservedly famous thought-experiment about Mary the neuroscientist:

Mary is a brilliant scientist who is, for whatever reason, forced to investigate the world from a black and white room via a black and white television monitor. She specializes in the neurophysiology of vision and acquires, let us suppose, all the physical information there is to obtain about what goes on when we see ripe tomatoes, or the sky, and use terms like ‘red’, ‘blue’, and so on. She discovers, for example, just which wavelength combinations from the sky stimulate the retina, and exactly how this produces via the central nervous system the contraction of the vocal chords and expulsion of air from the lungs that results in the uttering of the sentence ‘The sky is blue’.… What will happen when Mary is released from her black and white room or is given a color television monitor? Will she learn anything or not? It seems just obvious that she will learn something about the world and our visual experience of it. But then it is inescapable that her previous knowledge was incomplete. But she had all the physical information. Ergo there is more to have than that, and Physicalism is false.

Jackson’s thought-experiment attempts to show that there is simply more to the world than third-person objective physical facts about it; there are also irreducible first-person subjective experiences of what-it-is-like.  Mary the neuroscientist knows all the physical facts about color perception; but Mary the human being does not know something very important about the color red, namely, what it looks like.  And this is something she could never learn from science, because science can never know this.

Another seminal article in this debate is Thomas Nagel’s also justly famous “What Is It Like to Be a Bat?”, in which he discusses this question of “what-it-is-like-ness.” I urge you to read this article as well. TL;DR: bats find their way around largely by echolocation; no human knows “what it is like” to “see” the world by means of echolocation, but it makes perfect sense that there is “something it is like to perceive the world by means of echolocation” just as there is “something it is like to see”—something which congenitally blind people do not know, never having experienced it.  (Reading the accounts of congenitally blind persons when they describe their second-hand experience of sight in others is fascinating—sometimes as a kind of touch-at-a-distance that one cannot feel.)

I should note here that in asserting that science has certain intrinsic limitations because of its nature and method, I am not disparaging science in any way.  As the Dutch philosopher René van Woudenberg puts it:

Saying that science is limited is, of course, something very different from criticising science. I take science with utter seriousness. I take my guitar with real seriousness too—but I must say the instrument has its limits: I can’t produce the golden sound of a horn by means of it (nor, for that matter, drive to Chicago in it). Saying so much is not criticizing my guitar.

Science is a remarkable and wonderful method, and I appreciate science greatly. However, since science is our great success story as moderns, we have an unfortunate tendency to overestimate science and attribute to it a kind of omni-competence that it simply does not possess.  Hence the popularity today among the semi-educated of scientism, the ridiculous doctrine that science is our only source of knowledge.

A final area in which science is limited is axiology, or the study of values.  One of the major things often asserted about science, in fact, is that it deals with facts rather than values.  And so it does (although, as I have argued, with only some of the facts, the physical facts). But this marks another clear limit of science, if we suppose that any values have truth content which can be known; and almost all of us think that this is the case.  We think that some things are better than others: e.g. we think that science is a good thing, because we think that knowledge is better than ignorance, or that scientific discovery can produce technological benefits, that is, goods, such as advances in medicine—something which makes sense only on the recognition that some things, such as health, are real goods for human beings.

But science is simply mute with respect to values: science cannot tell us that health is good, although it may be supremely useful in helping us to be healthy. Science cannot tell us that science is a good thing to do.  Science cannot tell us anything about ethics, or how we should live our lives well or morally. Science cannot tell us anything about politics, that is, in what the common good of a people or society consists.  Once we have come to know that something is good or bad, science can indeed be useful: for example, once we know by extra-scientific means that human extinction is bad, we might be able to use our scientific knowledge to prevent it from occurring.  But science itself has nothing to say about whether or not ecological devastation or human extinction is good or bad.  Or whether anything is good or bad, right or wrong, beautiful or ugly, just or unjust, and so on.  If we can have knowledge of these matters (and I hold we can) it will not be scientific knowledge.

I suppose the time has come to bring this to a close.  I have argued that science is a method for acquiring knowledge about physical nature, and that, as such, it has intrinsic limitations to what it can do: science does not and cannot have anything direct to say about immaterial Platonic realities, including essences, logical entities, and mathematical entities; science does not and cannot have anything to say about irreducibly mental entities such as direct first-person conscious experiences, qualia, what-it-is-like-nesses and other such things; science does not and cannot have anything to say about the entire realm of value, about good and evil, good and bad, right and wrong, justice and injustice, beauty and ugliness, and so on.  All of these things are limits of science.  The fact that science has limits is not a judgment that science is bad or useless.

So we come to the point at last. God, as God is understood in traditional classical Christian theism (and in all higher theism), also very clearly falls outside the domain of natural science. God is an immaterial being who utterly transcends not only physical nature, but all other reality as we know it, since God is understood to be, by definition, the transcendent source of all being, reality, truth, goodness, and so on.

If the God of traditional Christian theism exists, science has absolutely nothing to say about the matter, one way or the other.  Let’s return to the two propositions I began with:

“Science has proved that God does not exist.” This is, as I said at the start, absolutely false. To prove such a thing would be completely beyond the competence of science to do.

“Science has not discovered any evidence of God.” This is, as I said at the start, true but trivial.  Science has not discovered any such evidence because science is a method which not only does not look for such evidence, but as part of its rigorous method, actually excludes such evidence from consideration.  That an event E had a supernatural cause could be true, but it could never be accepted as scientific explanation of E; not because it is false (it isn’t; by hypothesis, it is true), but because science, as the study of nature, methodologically excludes supernatural explanations.  It does not follow from this methodological exclusion of the supernatural by science that nothing supernatural exists, or that we cannot know the supernatural.  All that follows is, if there are supernatural beings or causes or events, science will necessarily be blind to that aspect of them, and will thus remain scientifically puzzled at such events, since they will have (by hypothesis) no true natural—and thus, scientific—explanation.

As a traditional, classical, liturgical, Orthodox Christian, there is not a single point at which my beliefs conflict with science, nor is there any genuine scientific knowledge that I am logically required to renounce.  The idea that there is some sort of deep and abiding conflict between science and religion, or between reason and faith, is false.

Concerning the existence of God, science simply has nothing to say about the matter.