Intellectually Dishonest Atheists

As philosopher Edward Feser has pointed out, some atheists are simply not intellectually serious. They may be ignorant or uneducated, outright dishonest, deeply confused, ill-informed, willfully obtuse, ideologically dogmatic, or just plain stupid; whatever the cause, the end result is the same: it is neither possible nor fruitful to have a serious, rational discussion about God with such people. Here are some red flags which may alert you that you are dealing with an intellectually dishonest or defective atheist:

✅ 1. A persistent inability or refusal to distinguish God from a god or gods. This is a distinction 3- or 4-year-old children can easily grasp, so any atheist who claims not to be able to grasp it is either severely intellectually impaired or lying. In almost all cases, the atheist is simply attempting to conflate God with a god in order to set up a strawman and/or trying to annoy you by belittling God—while ignoring the basic conceptual distinction that all European languages mark by differentiating the word “God” from the word “god” by capitalization. As the Stanford Encyclopedia of Philosophy explains, in the entry written by atheist philosopher J. J. C. Smart:

‘Atheism’ means the negation of theism, the denial of the existence of God. I shall here assume that the God in question is that of a sophisticated monotheism. The tribal gods of the early inhabitants of Palestine are of little or no philosophical interest. They were essentially finite beings, and the god of one tribe or collection of tribes was regarded as good in that it enabled victory in war against tribes with less powerful gods. Similarly the Greek and Roman gods were more like mythical heroes and heroines than like the omnipotent, omniscient and good God postulated in mediaeval and modern philosophy.

Theists have little to no interest in discussing gods, at least not when God is the topic of discussion. If an atheist wants to discuss gods, he is free to do so, but he cannot pretend talk of gods has any bearing on or relevance to a discussion about God.

✅ 1.1 A persistent inability or refusal to distinguish God from such things as imaginary friends, faeries, wizards, spaghetti monsters, Santa Claus, or other fabulous, fictitious, or mythological entities.

✅ 1.2 A persistent habit of paraphrasing religious ideas in ways which are deliberately ludicrous, derisive, or tendentious, e.g. describing the resurrected Christ as “a zombie,” or God as a “sky daddy.”

✅ 1.3 Persistent use of the fallacious “I just believe in one god less than you” rhetorical trope.

✅ 1.4 Persistent use of tendentious and irrelevant rhetorical mischaracterizations of Christianity, e.g. as “Bronze Age mythology.” Christianity, of course, dates from long after the so-called “metallic” ages, in fact from the prime of the Roman Empire, one of humanity’s civilizational high points. And Judaism, its precursor religion, derives almost entirely from the Iron Age up through historical times—not that the age of a teaching has any bearing whatever on its truth-value.

✅ 1.5 Persistent dishonest characterization of God as some kind of “cosmic tyrant” or “cosmic oppressor” (interestingly enough, the position of Satan).

✅ 1.6 Persistent dishonest characterization of God, especially in the Old Testament, as a moral monster.

✅ 1.7 A persistent inability or refusal to distinguish miracles from magic, usually paired with a tendency to attribute magical powers to nature, e.g. in such claims as “the universe created itself out of nothing” or “properties such as consciousness just emerge out of unconscious matter, because they do.”

✅ 2.0 Belief in scientism, the logically incoherent claim that “only scientific knowledge is valid/real/genuine knowledge” or that “only science or the scientific method can establish the truth-value of propositions,” claims which are neither themselves scientific nor established by science, and hence, self-defeating, and which entail such absurdities as “no human being knew anything before Europeans in the 1600s.”

✅ 2.1 Persistent claims that science, which studies physical nature by means of empirical observation and quantitative measurement, has any bearing on the question of the existence of God, who is, by definition, beyond nature, not empirical, and not measurable in terms of quantity. Persistent insistence that claims about God must be proven “scientifically,” or that any evidence for God must be “scientific,” falls into this category.

✅ 2.2 The claim that Galileo Galilei’s run-in with the Roman Catholic Church in 1633 proves (somehow) that there is some kind of natural antipathy between either (a) science and religion, or (b) science and Christianity, or (c) science and Catholicism. This indicates a complete ignorance of the history of the Galileo affair, and is merely a recycled weaponized meme of the early Enlightenment.

✅ 2.3 Use of the non sequitur that the multiplicity of religions proves that no religion is true, either wholly or in part. By this logic, of course, one may also “prove” that no scientific theory is or can be correct, wholly or in part, since there are always rival theories.

✅ 2.4 Claiming or assuming that the atheist, a finite being who is not all-knowing, is not all-powerful, is not all-wise, and is not all-good, nevertheless is in an epistemic position to know with certainty what an all-knowing, all-powerful, all-wise, all-good being would or would not do or have done.

✅ 2.5 The belief that the atheist knows the true or real origin of religion, a matter which, since it lies far back in human pre-history, we can have no certain knowledge of, but only conjecture.

✅ 2.6 The peculiar belief held by some atheists that their total ignorance with respect to God and divine matters is in fact an infallible indication of their intelligence or wisdom or knowledgeableness precisely about the things about which they know nothing.

✅ 2.7 Repeated assertion of the evidently false claim “there’s no evidence for God.”

✅ 3.0 Persistent use of the burden of proof fallacy, that is, the rhetorical trope which combines an argument from ignorance (“my position is the default position,” i.e. “my position is true until proven false, so I need not argue for it”) with special pleading that the atheist be allowed to use arguments from ignorance in support of atheism (i.e. “atheism is true because I am totally ignorant about God or divine matters”).

✅ 3.1 Chronological bigotry, i.e. the absurd belief that human beings who lived prior to (say) Richard Dawkins were one and all somehow mentally inferior to anyone living today, up to and including the greatest minds of the past. This would also include the belief that all human beings in the past were incapable of skepticism or critical thinking, or were somehow exceptionally gullible or credulous in a way we, the Enlightened Moderns, are not.

✅ 3.2 “Arguments” that consist wholly of posting atheist memes, e.g. “Eric the God-Eating Penguin.”

✅ 3.3 “Arguments” that consist of no more than exercises in blasphemy or obscenity.

“Fallacies” aren’t what you think they are—and they aren’t very useful.

If you spend any amount of time online following or taking part in debates, you’ll eventually see someone accuse someone else of committing or employing “a fallacy.” What exactly does this mean?

The superficial thing it means is that the accuser is claiming that there is something wrong with the other person’s argument—specifically, that the conclusion they are drawing doesn’t follow from the premises they are using, and so (by definition) their argument is logically invalid—where “logically invalid” just means “it is possible for the premises to be true and the conclusion false.”
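This definition of invalidity can be checked mechanically for propositional arguments: just search the truth table for a counterexample row, an assignment that makes every premise true and the conclusion false. Here is a minimal brute-force sketch of my own (the function `is_valid` and its interface are purely illustrative, not anything from the logic texts quoted below):

```python
from itertools import product

def is_valid(premises, conclusion, variables):
    """Invalid iff SOME assignment makes every premise true and the
    conclusion false; valid iff no such counterexample exists."""
    for values in product([True, False], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False  # counterexample found: true premises, false conclusion
    return True

implies = lambda a, b: (not a) or b

# Modus ponens: P => Q, P, therefore Q  -- no counterexample exists
mp = is_valid(
    [lambda e: implies(e["P"], e["Q"]), lambda e: e["P"]],
    lambda e: e["Q"],
    ["P", "Q"],
)

# Affirming the consequent: P => Q, Q, therefore P  -- counterexample at P=False, Q=True
ac = is_valid(
    [lambda e: implies(e["P"], e["Q"]), lambda e: e["Q"]],
    lambda e: e["P"],
    ["P", "Q"],
)

print(mp, ac)  # True False
```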

But people often assume or act as if a fallacy were something more than this: that it is a kind of meta-error that makes arguments erroneous, a kind of formal invalidity-maker. But “fallacies” are no such thing.

What is a “fallacy”?

There is fairly wide agreement about what a fallacy is—some kind of error in reasoning—but if you take a close look at how logicians, both professional and popular, define “fallacy,” you will quickly see that there is no “something more than just an error” to be found, despite what people seem to think. Here’s a representative list of definitions:

“A fallacy is a deceptive error of thinking.” – Gensler, Introduction to Logic

“A fallacy is a mistake in reasoning.” – Kreeft, Socratic Logic

“A fallacy is a flaw in reasoning.” – yourlogicalfallacyis.com

“A fallacy is an error in reasoning.” – The Nizkor Project

“A fallacy is the use of invalid or otherwise faulty reasoning or ‘wrong moves’ in the construction of an argument.” – Wikipedia

“Fallacies are deceptively bad arguments.” – Stanford Encyclopedia of Philosophy

“A fallacy is a kind of error in reasoning.” – Internet Encyclopedia of Philosophy.

What makes an argument valid or invalid?

There are two basic principles of logic that you must grasp to understand this, and one false principle that you must recognize as false. Let’s start with the correct principles:

(V1) An argument is valid if and only if it instantiates any valid argument form.

(V2) An argument is invalid if and only if it fails to instantiate any valid argument form.

This principle is INCORRECT:

(F1) An argument is invalid if it instantiates an invalid argument form.

Reasoning is movement of the mind from premises to a conclusion, from point A to point B. An argument is valid if it can get from point A to point B by any possible route; it is invalid only if one cannot rationally get from A to B at all. Note that the fact “you can’t get from A to B by route F” doesn’t entail “you can’t get from A to B at all.” This is why (F1) above is a FALSE PRINCIPLE: The fact that a given argument A instantiates a given invalid argument form F shows nothing at all about the validity of A; A can, in principle, instantiate any number of invalid argument forms and still be valid—it only has to instantiate just one valid argument form to be valid.

Consider: “You can’t reach the North Pole by going south; therefore, you can’t reach the North Pole.” “You can’t reach the North Pole by going east; so you can’t reach the North Pole.” “You can’t reach the North Pole by going west; so you can’t reach the North Pole.” “You can’t reach the North Pole by going southeast; so you can’t reach the North Pole.” “You can’t reach the North Pole by going southwest; so you can’t reach the North Pole.” ETC.

The fact that there are innumerable directions by which you cannot reach the North Pole doesn’t show anything at all about whether you can reach the North Pole.  The fact that there is one direction you can travel in which will get you to the North Pole—namely, north—shows that you can, in fact, reach the North Pole. The only thing that matters is that there is a way that succeeds.

And logic is just like that. If there is a valid route from A to B, it doesn’t matter whether there is an invalid route from A to B, or many, or an infinite number (which there are: there are always an infinite number of invalid routes from A to B you can propose).

AN EXAMPLE: Consider the following argument:

  1. q ⇒ q
  2. q
  3. ∴ q

This argument instantiates the invalid argument form affirming the consequent. Is it invalid? No. Why not? It also instantiates the perfectly valid form modus ponens.  So it really doesn’t matter that it instantiates an invalid form, since it also instantiates a valid form—which is all that matters.
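For the skeptical, this can be verified by brute force. The tiny check below (my own illustration, not part of the original argument) runs through both truth values of q and finds no assignment on which the premises (q ⇒ q and q) are true while the conclusion (q) is false:

```python
# The argument q => q, q, therefore q matches the SHAPE of affirming the
# consequent (P => Q; Q; therefore P, with P = Q = q), yet has no counterexample.
def counterexamples():
    hits = []
    for q in (True, False):
        premise1 = (not q) or q   # q => q (always true)
        premise2 = q
        conclusion = q
        if premise1 and premise2 and not conclusion:
            hits.append(q)
    return hits

print(counterexamples())  # [] -- no row with true premises and a false conclusion
```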

THE UPSHOT: people think that IF they have correctly identified an invalid argument form that an argument instantiates, THEN they have shown something about the argument’s validity. But they haven’t. That works no better than what is sometimes called the “fallacy fallacy,” which is to hold that showing that an argument for conclusion C is invalid shows that conclusion C is false. But this doesn’t work: an argument for C can be invalid and C still be true. And it is just as wrong to hold that showing an argument A instantiates an invalid argument form F shows that argument A is invalid.

Sophia vs Jacob: A Hypothetical Twitter exchange

Suppose Sophia and Jacob are having a Twitter exchange, and Sophia makes an argument, to which Jacob responds by typing (in all caps of course) “BANDWAGON FALLACY!” and ‘helpfully’ providing Sophia with a link to yourlogicalfallacyis.com’s description of the Bandwagon fallacy:

1. Jacob hasn’t shown anything about Sophia’s argument. All he has done is say the name of an alleged fallacy. If Jacob’s position is that an argument is proven invalid on the condition that someone says the name of a fallacy, then he’s in pretty dire trouble. Sophia can simply elect to accept Jacob’s rule and proceed to “refute” all Jacob’s arguments by saying the names of various fallacies; or she can ask him to prove his rule is correct, and defeat every argument he attempts to give for it by saying the names of various fallacies; or, if Sophia is feeling particularly snarky, she can post one of my cards to Jacob, which uses the rule he is implicitly appealing to to defeat itself. When someone thinks they can refute your argument by saying the name of a fallacy, this is the name of the fallacy you say to refute their argument that they have refuted your argument by saying the name of a fallacy:

[image: the “Nominal Fallacy Fallacy” card]

THE UPSHOT HERE is that Jacob has still failed TO DEMONSTRATE HIS CLAIM that Sophia has made an error in her argument. All Jacob has done so far is argue “I have said the name of an error; therefore your argument is in error.”

2. Suppose Jacob mans up and really does try to show that Sophia’s argument fits the alleged pattern of the “bandwagon fallacy.” And let’s suppose he succeeds. Well, he still hasn’t shown anything about the validity or invalidity of Sophia’s argument. We’ve allowed that he has shown that Sophia’s argument instantiates a type of reasoning that, as an informal fallacy, can be invalid, but it is the nature of informal fallacies (aka material fallacies) that they are sometimes errors and sometimes not. I’ve written a lot about this, mostly for Twitter. If you like, you can have a look at what I have to say about material fallacies HERE.  But just to give some examples:

(a) an appeal to authority is sometimes erroneous, sometimes not; appeals to experts in their fields or to a scientific consensus are not errors of reasoning. In the 1990s, Andrew Wiles proved Fermat’s Last Theorem is indeed a theorem.  How do I know this? Because a few dozen of the few thousand people capable of understanding Wiles’ proof checked it very carefully and agreed. So, my belief that Wiles’ proof of Fermat’s Last Theorem is sound is based entirely on the authority of some of the world’s foremost mathematicians. I could verify Wiles’ proof for myself, but it would require around a decade of mathematical study before I’d be in a position to—and frankly I don’t have the time or inclination. That is what expert mathematicians are for. If we were disallowed from appealing to authority, we’d have to establish all knowledge for ourselves at all times, which would completely defeat the point of the division of intellectual labor, and completely wreck science, since no established scientific “conclusions” could ever be appealed to.

(b) an argument from parts to whole is not always an error.  If you argued that “every brick in the Yellow Brick Road weighs 3.5 kg, so the entire Yellow Brick Road weighs 3.5 kg” you’d be making an error (and if you want to call it “fallacy of composition” go ahead).  But if you argued “every brick in the Yellow Brick Road is brick, so the Yellow Brick Road is a brick road” or “every brick in the Yellow Brick Road is yellow, so the Yellow Brick Road is yellow” you wouldn’t be making an error.

(c) An appeal to universal human experience is not an “ad populum fallacy,” for example in the case of such claims as: “minds exist” or “human beings by nature are divided into two sexes” or “time has three dimensions, past, present, and future” or “anger and fear are emotions found in human beings” or Euclid’s Common Notions: “The whole is greater than the part” or “equals added to equals are equal” or “things which are equal to the same thing are equal to one another.”  The point here is that there really are COMMON NOTIONS, as Euclid uses that term, namely, things which all human beings know or can recognize that do not stand in need of any formal proof, because they are too simple or too obvious to have one (this is also called the consensus gentium, “the consensus of the whole species”; I’ve discussed how it differs from an ad populum error HERE).

THE UPSHOT HERE is that Jacob has still failed TO DEMONSTRATE HIS CLAIM that Sophia has made an error in her argument, even if he has shown that her argument instantiates the form of some informal fallacy or other. All Jacob has done so far is argue “Your argument instantiates a form which may or may not be an error; therefore it is in error!”

3. But let’s make it a bit easier on Jacob, and suppose that he has detected in Sophia’s argument not an informal fallacy but a formal fallacy, such as affirming the consequent. Now, formal fallacies are different from material or informal fallacies in that they are invalid just in virtue of their form, and so do not depend on the situation, context, or content as to whether they are valid or invalid. They are simply invalid. Period.

Now, let us suppose that Jacob succeeds in showing that Sophia’s argument does indeed instantiate the invalid argument form called affirming the consequent (P⇒Q; Q; ∴P). Has Jacob now shown that Sophia’s argument is invalid? No, he has not. He has not, because to do so, he would have to be appealing to principle (F1): An argument is invalid if it instantiates an invalid argument form, which as we have seen, is a false principle.

Sophia is therefore entirely within her epistemic and logical rights to say to Jacob, after he has demonstrated that her argument instantiates the form of affirming the consequent, “So what? You still haven’t shown my argument is invalid. You wasted a lot of effort on showing something that has no logical bearing on whether my argument is valid or invalid.”

THE UPSHOT HERE is that Jacob has still failed TO DEMONSTRATE HIS CLAIM that Sophia has made an error in her argument, even if he has shown that her argument instantiates the form of a formal fallacy. All Jacob has done so far is argue “Your argument instantiates a formally invalid argument form, an error; therefore it is in error.” And this argument is an enthymeme that requires the false principle (F1) as its hidden premise, and so is unsound.

4. Jacob’s problem is that, in order to show that Sophia’s argument is invalid, he has only two options: (A) the direct logic-indifferent method, in which he can show that Sophia’s argument is invalid by showing that, while her premises are true, her conclusion is false.  This isn’t really a “method” at all—it is simply showing that Sophia’s argument is a case of the very definition of an invalid argument, namely, an argument with true premises and a false conclusion; or (B) Jacob can attempt to show that there is NO logically valid argument form which Sophia’s argument instantiates (remember, her argument only needs to instantiate ONE to be valid), in any formal-logical system, including those which have not yet been discovered or constructed.  In other words, to use method (B) Jacob would have to prove the nonexistence of a logical form which Sophia’s argument instantiates. And as most people are well aware, it is damn-near impossible to prove absolutely the nonexistence of something (showing it to be contradictory is the only way I know that this can be done).

THE UPSHOT HERE is that all appeals to fallacies as a way of refuting arguments or proving invalidity seem to be instances of (B)—and they all fail, because they can’t actually do the work of demonstrating invalidity.  THE MOST they can accomplish is to raise a doubt about the validity of an argument by suggesting that the argument in question has nothing more to it than the invalid form it instantiates.  That is to say, that the person making the argument is appealing to the invalid form as if it were a valid form, by which he or she means to establish validity, either in the mistaken belief that it is valid or disingenuously as a rhetorical move.  But if the person making the argument says “No, I see that pattern is invalid, but that’s not what I’m claiming makes my argument valid,” then an appeal to fallacy really can’t DO anything else. People WANT to say “No! Your argument instantiates an invalid argument form! It’s invalid!” But they can’t logically say that. That’s (F1) again, and (F1) is false—obviously false, even: “You can’t get from A to B” obviously does not follow from “You can’t get from A to B by route F.”

Why aren’t “fallacies” very useful?

A fallacy is usually a name given to some general type of error or mistake. The problem with this is that errors do not, strictly speaking, have ‘types’—there are no general forms of error, because error by its very way of being is indefinite and indeterminate—and what is indefinite and indeterminate cannot be defined or determined rigorously.

Fallacies aren’t very useful because they CAN’T DO MUCH.

Naming a fallacy certainly doesn’t show anything about an argument’s validity or invalidity.

Showing that an argument fits the form of an informal fallacy doesn’t show anything at all, since material fallacies aren’t always fallacious—that depends entirely on the content, and you’d still have to show that the argument in question is in error, something which, if you are able to do it, makes the citation of the “fallacy” completely redundant and superfluous, and if you can’t do it, makes the citation of the “fallacy” completely toothless and pointless.  So in the case of informal fallacies, citing the fallacy accomplishes nothing either way; everything turns on whether you can demonstrate an actual error in the argument. EITHER WAY, the citation of the fallacy adds nothing and does nothing.

Showing that an argument instantiates a formally fallacious argument form also doesn’t show anything about the validity of the argument. Because (F1) is false, from the fact that a given argument A instantiates a given formally invalid argument form F, NOTHING FOLLOWS ABOUT THE VALIDITY OR INVALIDITY OF A.  So, once again, if you are going to get anywhere, you’d have to show an error in the argument itself, and the appeal to the fallacy (1) does not show any error, nor (2) add anything to the demonstration of error if one is able to show an error in the specific argument. So in the case of formal fallacies, citing the fallacy accomplishes nothing either way; everything turns on whether you can demonstrate an actual error in the argument. EITHER WAY, the citation of the fallacy adds nothing and does nothing.

Basically, citing a fallacy or appealing to a fallacy is just a roundabout way of saying “Your argument is in error”—and this is something that still needs to be shown. Either you can show an error, in which case the citation of the fallacy is superfluous and adds nothing; or you cannot show any error, in which case the citation of the fallacy is pointless and accomplishes nothing.

EITHER WAY, the citation of a fallacy ADDS NOTHING and DOES NOTHING. 

ADDENDUM:

It is with satisfaction and pleasure that I learn that Peter Geach, one of the greatest logicians of the 20th century, and a philosopher I respect very highly, makes the same point that I do: that ‘fallacies’ understood as invalid argument forms are not invalidity-makers of arguments:

[image: Peter Geach on valid and invalid argument forms]

As Geach notes, if it were the case that invalid argument forms were invalidity-makers, then all arguments would be invalid, since any argument can be reformulated by conjoining all its premises with ‘&’s, leaving us with the invalid form

  1. p1 & p2 & p3 & p4 & … & pn
  2. ∴ q

or more simply

  1. p
  2. ∴ q

which gives us a simple modus tollens

  1. If instantiating a logically invalid argument form makes an argument invalid, no arguments are logically valid.
  2. But some arguments are logically valid.
  3. ∴ Instantiating a logically invalid argument form does not make an argument invalid.

Q.E.D.
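Geach’s point can also be illustrated mechanically. In the sketch below (my own illustration, using a purely illustrative checker), the bare single-premise form “A, therefore B” has a truth-table counterexample and so is invalid as a form; yet a particular argument of that very shape, “p & q, therefore p,” has no counterexample and is perfectly valid:

```python
from itertools import product

def is_valid(premises, conclusion):
    """Valid iff no assignment makes every premise true and the conclusion false."""
    return all(
        conclusion(p, q)
        for p, q in product([True, False], repeat=2)
        if all(prem(p, q) for prem in premises)
    )

# The bare FORM "A, therefore B" is invalid: A=True, B=False is a counterexample.
bare_form = is_valid([lambda a, b: a], lambda a, b: b)

# But a particular argument of that single-premise shape, "p & q, therefore p"
# (one premise A := p & q, conclusion B := p), is valid.
conjoined = is_valid([lambda p, q: p and q], lambda p, q: p)

print(bare_form, conjoined)  # False True
```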

Philosophers are (or should be) interested in truth, not originality, so I am always pleased to find points that I make in philosophers I respect.

Retortion

Philosophers test things. We test arguments for logical validity and soundness. We test propositions for coherence and truth-value. We test concepts and definitions. Testing things is built into the activity of philosophy.  This is the meaning of the famous Socratic ἔλεγχος (elenchos), which is translated as “refutation” or “cross-examination,” but really just means “a careful scrutiny or examination of something.”  If that something is a claim, and that claim doesn’t withstand testing, then refutation will be the result, as it typically was with Socrates, since most of his interlocutors’ claims didn’t hold up under testing.

Philosophers have many means of testing claims.  One of the best techniques for testing very broad or universal claims or principles is the technique of retortion.  It is one of the most useful and one of my personal favorites. ‘Retortion’ comes from the Latin for ‘turning something back on itself’ (this is what we do when we ‘retort’ to someone—we turn their words back on them).  In the case of philosophy, retortion means applying a universal principle to itself, to test whether it passes its own test or meets its own standard.

Many claims are such that they are self-stultifying, that is, in some critical way or another, they undermine themselves.  A self-destructive or self-refuting claim is a claim that, if it is true, must be false, for example “There is no truth”—if this claim were true, it would be false.  A self-undermining claim is one that, while it doesn’t itself entail its own falsity, does entail that the claim is unwarranted, and so cannot be known to be true or asserted as true, for example “No human being knows anything to be true”—if this claim were true, we could not know it to be true.

Retortion can also be fruitfully employed in regard to ethical principles and other practical principles or imperatives.  As my readers who follow me on Twitter know, I regularly encounter a claim to the effect of “The person who makes a claim has the burden of proof to prove that the claim they are making is true.”  Now, this is not a claim I accept at face value, but I always assume the person making the claim actually accepts it (otherwise they are trying to bind me with a rule of argument that they are exempt from, which is clearly unfair, and which I am therefore right to refuse).  But if the person making this claim does accept it, it follows—by way of retortion—that he has the burden of proof to prove that the burden of proof does lie with the one who claims, since he is claiming this.  And if you follow me on Twitter, you also know that there are all manner of reactions to this—to me, perfectly reasonable—request to obey their own principle, but so far, actually attempting to prove the claim to be true has not been among the reactions. (Of course, it cannot be proven to be true, because it isn’t. They’d do much better if they said “This is a procedural rule I’d like to adopt for this discussion. Would you agree to this?” I wouldn’t agree to it, and would propose an alternative, but this would be the way to go about it.)

Roger Scruton has a rather well-known quote which makes use of retortion to good effect:

[image: Roger Scruton quote on truth]

There are innumerable principles which at first glance seem to be profound or important insights that go down like wheat before the scythe of retortion:

“No one can know any metaphysical truths, only scientific ones.”  How do you know this metaphysical truth?

“Only scientific statements are valid.” That isn’t a scientific statement, so it isn’t valid.

“Everyone’s opinion is equally valid.” Is my opinion that opinions are not equally valid valid?

“We can never be certain of anything.” Are you certain of that?

“No one can have knowledge about God.” How do you know this about God?

“Truth is determined through empirical observation.” What empirical observation determines this to be true?

“What is true for me may not be true for you.” It is true for me that anything true is true for everyone.

“No truth is immutable.” So that truth is mutable, and may have changed.

“No belief can be ultimately justified.” Is that belief ultimately justified?

“It’s always wrong to make moral judgments.” Overheard by philosopher Mary Midgley at a party.

Examples could be multiplied indefinitely, from the lowest banalities of “there is no truth” up to clever and interesting logical paradoxes such as “The set of all sets that do not contain themselves does not contain itself” or “This statement is only false and not true.”

Remember: a very good cognitive habit is to always apply the retortion test to any principle presented to you as absolute or universal. 99 times out of 100, this will burn said principle to the ground. Your interlocutor may get annoyed with you, but that isn’t your fault or your problem. If he is reasonable, he must understand the force of the objection, and seek to reformulate his principle in a rationally cogent manner.

“You Can’t Prove a Negative” Part 2

As I wrote in my post “You Can’t Prove a Negative”, this claim—that you can’t prove a negative—is a silly urban legend of logic that needs to die.

So it came up again on Twitter, and someone was kind enough to direct me to an essay by another philosopher addressing this same absurd bit of “folk logic” (as he aptly calls it). I also share his view that “you can’t prove a negative” is largely a claim made by people who have “a desperate desire to keep believing whatever one believes, even if all the evidence is against it.” In other words, “you can’t prove a negative!” is code for “you can’t prove I’m wrong, so I’ll continue to think I’m right!”

I think it is worth reblogging, so here is Steven D. Hales’ “You Can Prove a Negative”:

________________________________________________________________

THINKING TOOLS: YOU CAN PROVE A NEGATIVE
Steven D. Hales

A principle of folk logic is that one can’t prove a negative. Dr. Nelson L. Price, a Georgia minister, writes on his website that ‘one of the laws of logic is that you can’t prove a negative.’ Julian Noble, a physicist at the University of Virginia, agrees, writing in his ‘Electric Blanket of Doom’ talk that ‘we can’t prove a negative proposition.’ University of California at Berkeley Professor of Epidemiology Patricia Buffler asserts that ‘The reality is that we can never prove the negative, we can never prove the lack of effect, we can never prove that something is safe.’ A quick search on Google or Lexis-Nexis will give a mountain of similar examples.

But there is one big, fat problem with all this. Among professional logicians, guess how many think that you can’t prove a negative? That’s right: zero. Yes, Virginia, you can prove a negative, and it’s easy, too. For one thing, a real, actual law of logic is a negative, namely the law of non-contradiction. This law states that a proposition cannot be both true and not true. Nothing is both true and false. Furthermore, you can prove this law. It can be formally derived from the empty set using provably valid rules of inference. (I’ll spare you the boring details.) One of the laws of logic is a provable negative. Wait… this means we’ve just proven that it is not the case that one of the laws of logic is that you can’t prove a negative. So we’ve proven yet another negative! In fact, ‘you can’t prove a negative’ is a negative, so if you could prove it true, it wouldn’t be true! Uh-oh.

Not only that, but any claim can be expressed as a negative, thanks to the rule of double negation. This rule states that any proposition P is logically equivalent to not-not-P. So pick anything you think you can prove. Think you can prove your own existence? At least to your own satisfaction? Then, using the exact same reasoning, plus the little step of double negation, you can prove that you aren’t nonexistent. Congratulations, you’ve just proven a negative. The beautiful part is that you can do this trick with absolutely any proposition whatsoever. Prove P is true and you can prove that P is not false.
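The double-negation rule Hales invokes can be checked in the same mechanical way; a short Lean sketch (theorem names mine; note that the direction from not-not-P back to P requires classical logic):

```lean
-- Double negation introduction: from P, infer not-not-P.
theorem double_neg_intro (P : Prop) : P → ¬¬P :=
  fun hp hnp => hnp hp

-- Double negation elimination: from not-not-P, infer P (the classical step).
theorem double_neg_elim (P : Prop) : ¬¬P → P :=
  fun hnnp => Classical.byContradiction hnnp
```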

Some people seem to think that you can’t prove a specific sort of negative claim, namely that a thing does not exist. So it is impossible to prove that Santa Claus, unicorns, the Loch Ness Monster, God, pink elephants, WMD in Iraq, and Bigfoot don’t exist. Of course, this rather depends on what one has in mind by ‘prove.’ Can you construct a valid deductive argument with all true premises that yields the conclusion that there are no unicorns? Sure. Here’s one, using the valid inference procedure of modus tollens:

1. If unicorns had existed, then there is evidence in the fossil record.
2. There is no evidence of unicorns in the fossil record.
3. Therefore, unicorns never existed.
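As an aside, the validity of the modus tollens form itself, as distinct from the truth of the premises (which Hales takes up next), can also be machine-checked; a one-line sketch in Lean, with the theorem name mine:

```lean
-- Modus tollens: from "if P then Q" and "not Q", infer "not P".
theorem modus_tollens (P Q : Prop) : (P → Q) → ¬Q → ¬P :=
  fun hpq hnq hp => hnq (hpq hp)
```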

Someone might object that that was a bit too fast; after all, I didn’t prove that the two premises were true. I just asserted that they were true. Well, that’s right. However, it would be a grievous mistake to insist that someone prove all the premises of any argument they might give. Here’s why. The only way to prove, say, that there is no evidence of unicorns in the fossil record, is by giving an argument to that conclusion. Of course one would then have to prove the premises of that argument by giving further arguments, and then prove the premises of those further arguments, ad infinitum. Which premises we should take on credit and which need payment up front is a matter of long and involved debate among epistemologists. But one thing is certain: if proving things requires that an infinite number of premises get proved first, we’re not going to prove much of anything at all, positive or negative.

Maybe people mean that no inductive argument will conclusively, indubitably prove a negative proposition beyond all shadow of a doubt. For example, suppose someone argues that we’ve scoured the world for Bigfoot, found no credible evidence of Bigfoot’s existence, and therefore there is no Bigfoot. A classic inductive argument. A Sasquatch defender can always rejoin that Bigfoot is reclusive, and might just be hiding in that next stand of trees. You can’t prove he’s not! (Until the search of that tree stand comes up empty too.) The problem here isn’t that inductive arguments won’t give us certainty about negative claims (like the nonexistence of Bigfoot), but that inductive arguments won’t give us certainty about anything at all, positive or negative. ‘All observed swans are white, therefore all swans are white’ looked like a pretty good inductive argument until black swans were discovered in Australia.

The very nature of an inductive argument is to make a conclusion probable, but not certain, given the truth of the premises. That’s just what an inductive argument is. We’d better not dismiss induction because we’re not getting certainty out of it, though. Why do you think that the sun will rise tomorrow? Not because of observation (you can’t observe the future!), but because that’s what it has always done in the past. Why do you think that if you turn on the kitchen tap that water will come out instead of chocolate? Why do you think you’ll find your house where you last left it? Why do you think lunch will be nourishing instead of deadly? Again, because that’s the way things have always been in the past. In other words, we use inferences — induction — from past experiences in every aspect of our lives. As Bertrand Russell pointed out, the chicken who expects to be fed when he sees the farmer approaching, since that is what had always happened in the past, is in for a big surprise when instead of receiving dinner, he becomes dinner. But if the chicken had rejected inductive reasoning altogether, then every appearance of the farmer would be a surprise.

So why is it that people insist that you can’t prove a negative? I think it is the result of two things: (1) an acknowledgement that induction is not bulletproof, airtight, and infallible, and (2) a desperate desire to keep believing whatever one believes, even if all the evidence is against it. That’s why people keep believing in alien abductions, even when flying saucers always turn out to be weather balloons, stealth jets, comets, or too much alcohol. You can’t prove a negative! You can’t prove that there are no alien abductions! Meaning: your argument against aliens is inductive, therefore not incontrovertible, and since I want to believe in aliens, I’m going to dismiss the argument no matter how overwhelming the evidence against aliens, and no matter how vanishingly small the chance of extraterrestrial abduction.

If we’re going to dismiss inductive arguments because they produce conclusions that are probable but not definite, then we are in deep doo-doo. Despite its fallibility, induction is vital in every aspect of our lives, from the mundane to the most sophisticated science. Without induction we know basically nothing about the world apart from our own immediate perceptions. So we’d better keep induction, warts and all, and use it to form negative beliefs as well as positive ones. You can prove a negative — at least as much as you can prove anything at all.

Steven Hales is professor of philosophy at Bloomsburg University, Pennsylvania.

“Of all people, the atheist is the most unfortunate”


St. Nektarios of Ægina

Of all people, the atheist is the most misfortunate person because he has been deprived of the only good thing upon the earth: faith—the one true guide toward the truth and happiness. The atheist is a most misfortunate person because he is deprived of hope: the essential staff needed to journey through life’s lengthy path. The atheist is a most misfortunate person because he is deprived of human love, which caresses the aching heart. The atheist is a most misfortunate person because he has been deprived of the divine beauty of the Creator’s image, which the Divine Artist has etched within man and which faith unveils.

The eye of the atheist sees in creation nothing other than the operation of natural processes. The brilliance and magnificent beauty of the Divine Creator’s image remain hidden and undetectable to him. As he glances aimlessly at creation, nowhere does he discover the beauty of God’s wisdom, nowhere does he see God’s omnipotence, nowhere does he observe God’s goodness and providence, nowhere does he discern the Creator’s righteousness and love for creation. His mind is neither capable of ascending higher than the visible world nor reaching beyond the boundaries of physical matter. His heart remains anesthetized and indifferent before God’s ever-present divine wisdom and power. Within it, not even the slightest desire to worship the Lord exists. His lips remain closed, his mouth silent, and his tongue frozen. His soul voices no hymn, doxology, or praise as an expression of gratefulness to God.

The peace of the soul and the serenity of the heart have been removed by disbelief; instead, mourning has inundated the depth of his being. The delight, which the faithful person experiences from executing God’s divine commandments, and the great pleasure that he enjoys from an ethical way of life are unknown feelings for the atheist. The elation which faith bestows to the believer has never been felt by the atheist’s heart. The assurance that arises from faith in God’s providence, which relieves man from the anxiety of life’s worries, is a power unknown to him.


The joy poured upon the entire universe has abandoned the heart of the atheist because God has fled from his heart. The ensuing void has instead been filled by sadness, dejection, and anxiety. The atheist lives in a dispirited state; listlessness has taken hold of his soul. He wanders astray in the lightless and expansive night of this present life without even one ray of light to illumine his crooked paths. There is no one to lead him or guide his footsteps. All alone, he passes through the arena of life with no hope of a better future. He walks amidst many traps, but there is no one to free him from them. He is caught within these snares and crushed by their weight. In times of difficulty and sorrow, there is no one to alleviate or console him.

Feelings of love and gratitude remain unknown mysteries for the atheist. The atheist, having appointed matter as his principal governor, limited man’s true happiness within the narrow confines of temporary pleasures. Consequently, he constantly seeks to enjoy these pleasures and is ceaselessly concerned and preoccupied with them. The beauty of virtue is completely foreign to him. The atheist has not tasted the sweetness and grace of virtue. The atheist is oblivious to the source of true happiness and has raced toward the fountains of bitterness. He has been filled to satiety by ephemeral pleasures; satiety in turn has induced in him disgust; disgust has resulted in ennui; ennui has given rise to affliction; affliction has developed into pain; and, finally, pain has led to hopelessness. All the pleasures have lost their glitter and beauty because all of the world’s pleasures are transient, and, as such, are incapable of rendering the atheist fortunate.

Man’s heart was created to be filled by the greatest good; therefore, only when it enjoys this good does it leap and rejoice—because this good is God. God, however, has fled from the heart of the atheist. The human heart has infinite desires because it was created to embrace the infinite God. However, since the atheist’s heart is not filled with the infinite God, it can never be filled or satisfied with anything—even though it perpetually groans, seeks, and desires to do so. The pleasures of the world are incapable of filling the heart’s emptiness. The pleasures and delights of this world quickly evaporate, leaving within the heart dregs of bitterness. Similarly, vain honor and praise are accompanied by sorrows.


The atheist is unaware that man’s happiness is found not within the enjoyment of earthly pleasures but in the love of God—Who is the greatest and eternal good. He who denies God denies his own happiness and eternal bliss. The poor atheist struggles through life’s hard and toilsome journey, fearfully walking toward the end of his life without hope, headed for the grave that happily waits for him. The sweet waters of joy and happiness flow beneath his feet, while he, as another condemned Tantalus,¹ is incapable of quenching his thirst and watering his tongue that has been dried and withered by atheism—for the waters flowing from the life-giving spring of faith recede from his lips.

The atheist has become a misfortunate slave subjugated to a harsh tyrant! How was your happiness stolen? How was your treasure seized? You lost your faith, you denied your God, you denied His revelation, and you rejected the abundant wealth of His divine grace.

How wretched is his life! It consists of a series of torments. His eyes see nothing joyful in nature. The natural world seems to him sterile and barren. It neither provides him with joy nor generates within him feelings of delight. None of God’s works smile at him. A mournful blanket covers the grace and beauty of the creation, which no longer contains anything attractive. His life has become an unbearable burden and a perpetual, unendurable misery.

Despair already stands before him as an executioner, and a merciless tyrant tortures this fearful man. Disbelief has corrupted the ethical powers of his soul; he has run out of courage and is now too weak to resist. He is led, like a helpless being, by disbelief and handed over to the frightful bonds of despair. Unmerciful and uncompassionate despair, in turn, violently and harshly severs the thread of his pitiful life, and hurls him into the depths of perdition and darkness, from where he will resurface only when the voice of his divine Creator—Whom he denied—calls him to give an account of his disbelief, at which point he will be condemned and sent to the eternal fire.


Χριστὸς ἀνέστη (Christ is risen)

The Spiritual Disorder of Atheism: St. Nektarios

 


St. Nektarios of Ægina

Atheism is a mental disorder: it is a terrible ailment of the soul that is difficult to cure. Atheism is a passion that severely oppresses whomever it seizes. It holds in store many misfortunes for its captive, and becomes harmful not only for him but also for others who come into contact with him.

Atheism denies the existence of God. It denies that there is a divine Creator of the universe. It denies God’s providence, His wisdom, His goodness, and, in general, His divine qualities. Atheism teaches a falsehood to its followers and contrives false theories concerning the creation of the universe. It professes, as Pythia upon a tripod, that the creation is an outcome of chance, that it is perpetuated and preserved through purposeless, random interactions, that its splendor transpired spontaneously over time, and that the harmony, grace, and beauty witnessed in nature are inherent attributes of natural laws. Atheism strips from God, Whom it has denied, His divine characteristics, and instead bestows them and His creative power upon lifeless and feeble matter. Atheism freely proclaims matter to be the cause of all things, and it deifies matter in order to deny the existence of a superior Being, of a supreme, creative Spirit Who cares for and sustains all things.

On account of disbelief, matter becomes the only true entity; whereas the spirit becomes non-existent. For atheism, the spirit and the soul are egotistical inventions of man, concocted to satisfy his vainglory. Atheism denies man’s spiritual nature. It drags man down from the lofty height where he has been placed by the Creator’s power and grace, and lowers him amongst the rank of irrational animals, which he accepts as ancestors of his distinguished and noble lineage. Atheism does all this in order to bear witness to the words of the Psalm: “Man, being in honor, did not understand; he is compared to the mindless animals, and is become like unto them” (Ps. 48:20).

Atheism strips faith, hope, and love from the world, these life-giving sources of true happiness for man; it expels God’s righteousness from the world, and denies the existence of God’s providence and succor.

Atheism accepts the laws that exist in nature, yet denies Him Who has appointed these laws. Atheism seeks to lead man to an imaginary happiness; however, it abandons and deserts him in the middle of nowhere, in the valley of lamentation, barren of all heavenly goods, void of consolation from above, empty of spiritual strength, bereft of the power of moral virtue, and stripped of the only indispensable provisions upon the earth: faith, hope, and love.

Atheism condemns poor man to perdition and leaves him standing alone as prey amidst life’s difficulties. Having removed love from within man, atheism subsequently deprives him of the love from others, and it isolates him from family, relatives, and friends. Atheism displaces any hope of a better future and replaces it with despair.

Atheism is awful! It is the worst of all spiritual illnesses!

 

Agrippa’s Trilemma aka Münchausen’s Trilemma

Whenever a proposition is asserted to be true, we can ask “how do we know this proposition is true?”

Agrippa’s Trilemma is one of the most ancient philosophical problems.  If one is challenged on the truth of a given assertion, one may attempt to demonstrate or prove one’s assertion to be true.  But any demonstration or proof which is given will make use of premises of which it may again be asked “Are they true?”  This generates a problem of proving one’s proofs or demonstrating one’s demonstrations, and results in the following trilemma, called Agrippa’s Trilemma:

  1. There is an infinite regress in which every demonstration requires a demonstration of its own, with the result that this never comes to an end, and so no demonstration is ever accomplished, nothing is demonstrated, and one merely arbitrarily stops at some point; or
  2. A circular demonstration is given in which something is ‘demonstrated’ to be true on the basis of its own truth or the assumption that it is true; or
  3. Demonstration comes to an end in one or more undemonstrated and indemonstrable axioms or first principles, the truth of which, because it is undemonstrated and indemonstrable, remains unknown, thus undermining the warrant of any demonstration made on the basis of such axioms or first principles.

Agrippa’s Trilemma is also sometimes called Münchausen’s Trilemma, after the legendary Baron Münchausen, who was able to free himself and his horse from a swamp in which they had become mired, by the expedient of pulling himself and his horse up and out by lifting himself by his own hair:

[Illustration: Baron Münchausen pulling himself and his horse out of the swamp]

This technique, often called “bootstrapping” in English (from the idea of lifting oneself up by pulling on one’s own bootlaces), seems most akin to number 2 of Agrippa’s Trilemma, the circular demonstration, insofar as Baron Münchausen is both the lifter and the one being lifted at the same time.

The essential problem is that an infinite regress seems to undermine the possibility of any demonstration, a circular demonstration seems fallacious by its very nature, and any appeal to axioms or ἀρχαί (first principles) can be interpreted as arbitrary, because the legitimacy of the appeal cannot be demonstrated.

Is Agrippa’s Trilemma inescapable?

As with so many cases in philosophy, this is one of those limits of λόγος or discursive reason that thinking will run up against if pressed far enough. The best answer to it seems to be the kind of refutation that Aristotle calls retortion, which means “to turn something back on itself.” In this case, the poser of Agrippa’s Trilemma seems to be assuming that demonstration or discursive proof is the standard of all truth-warrant, or even that “it is the case that we are caught in Agrippa’s Trilemma.”  But how does he know this to be true?

It seems entirely reasonable not to accept Agrippa’s Trilemma until it is demonstrated that it is a correct representation of our epistemological situation.  But how exactly does the proponent of Agrippa’s Trilemma propose to demonstrate its validity as a representation of our epistemological situation without himself falling prey to it?

Either he can, or he cannot.

If he can, then Agrippa’s Trilemma is defeated by whatever means the proponent of Agrippa’s Trilemma uses to demonstrate it, which did not fall prey to it. But this cannot happen, since the Trilemma purports to show that all demonstration is impossible; thus a demonstration of the validity of the Trilemma would defeat itself (insofar as it would necessarily require a fourth method of demonstration, thus defeating the trilemma).

On the other hand, if the proponent of Agrippa’s Trilemma cannot demonstrate that it is a correct account of our epistemological situation, then we are justified in simply refusing to accept it, as something undemonstrated.  And so again, the Trilemma is defeated.

The classical answer, besides the argument to retortion showing that the Trilemma is self-defeating, is to hold that it reduces truth-warrant to demonstration, whereas there are other ways in which truth is warranted besides demonstration. As Aristotle puts it, “it is a sign of lack of education not to know of what one ought to seek a demonstration and what not.”

The classical answer, to which I adhere, is that in addition to λόγος or discursive reason, human beings also have the cognitive power of νοῦς or νόησις (Latin: intellectus) which is a kind of direct mental “seeing” of certain basic truths, which are self-evident.  “Self-evident” means something which to understand it and to understand that it is true are one and the same.  Note that self-evident does not mean “obvious.”  A thing might be difficult to understand and so not obvious immediately or to everyone, but still be such that, once understood, it is seen to be necessarily true (e.g. that the sum of the angles of a Euclidean triangle is equal to two right angles is self-evident, but not obvious).

Despite our possession of νοῦς, human cognition is nevertheless essentially discursive. The human νοῦς, according to Socrates, Plato, and Aristotle, is essentially connected to or entangled in λόγος or discursivity (and so to language and linear, temporal thinking-through or διάνοια), such that the human νοῦς ought to be regarded as much inferior to that of God.  Aristotle in De Anima calls the human νοῦς “the so-called νοῦς,” as if it were almost not worthy of the name.  It is also worth noting that the Christian philosophical tradition holds that the human νοῦς is not merely finite or limited as compared to that of God, but has also been darkened or impaired or damaged by the Fall of Man; although it has not been erased or obliterated entirely.  Saint Paul’s expression at 1st Corinthians 13:12 is usually taken as a characterization of the human noetic condition:

For now we see through a glass, darkly, but then face to face. Now I know in part; but then shall I know, even as also I am known.

“We see through a glass, darkly” seems not only a beautiful expression, but a true one. Human cognition is neither perfect nor blind. We are creatures that, both by nature and by our fallen state, are “in-between” perfect knowledge and complete ignorance.

As Pascal says of humanity, “We know too much to be skeptics, and we know too little to be dogmatists.”


It seems, as always, that all roads of merely human wisdom lead back to Socrates, whose human wisdom was the wisdom to know what he did not know, and whose resultant endless search for the wisdom he did not have is called philosophy.


Revisiting Whales and Fish One Last Time

As some of you know I have been involved in an argument on Twitter with one DrJ (@DrJ_WasTaken) concerning the usage of the term “fish.”  It began when he asserted that Geoffrey Chaucer was using the word “fish” (or “fissh”) wrongly, because Chaucer includes whales under the term “fish.”

I pointed out something I took to be very obvious: that correctness and incorrectness in word usage are conventional, and are therefore contextual and relative to the community of language speakers of which one is a part.  The fact that many modern English speakers would not use the word “fish” in such a way as to include whales merely reflects a change in usage, where popular language has tended somewhat to conform to usage in science, in which whales, being mammals, would not be regarded as “fish.”

It is not actually clear, though, that biologists use the word “fish” in any formal sense—“fish” is, at most, a paraphyletic classification, similar to “lizard” and “reptile.” That is to say, it is based on phylogenetic ancestry, but includes a couple of arbitrary exclusions.  For example, here is a chart of the paraphyletic group Reptilia:

[Chart: the traditional, paraphyletic group Reptilia within Amniota]

Reptilia (green field) is a paraphyletic group comprising all amniotes (Amniota) except for two subgroups Mammalia (mammals) and Aves (birds); therefore, Reptilia is not a clade. In contrast, Amniota itself is a clade, which is a monophyletic group.

In other words, “reptiles” seem to be defined as “all animals which are descended from the Amniota, with the (semi-)arbitrary exclusion of mammals and birds.”

There was once a Class Pisces, but this is no longer recognized as a valid biological class.  Nowadays, the biological use of “fish” seems to refer to three groups: Superclass Agnatha (jawless “fish”; e.g. lampreys and hagfish), Class Chondrichthyes (cartilaginous “fish”; e.g. rays, sharks, skates, chimaeras), and Class Osteichthyes (bony “fish”), but excluding Class Amphibia, Class Sauropsida, and Class Synapsida, although these all belong to the same clade.  So “fish,” like “reptile,” is a paraphyletic classification. It includes some members of a clade but just leaves some other members out.  Here’s a chart:

[Chart: “fish” as a paraphyletic grouping of vertebrate classes]

I think this element of arbitrariness in the biological taxonomic classification of fish is important, and goes to substantiating my point about the flexibility of the usage of the word “fish.” In this case, the point is: the term “fish” even as used in contemporary biology is essentially arbitrary.  It involves drawing a line around certain kinds of living beings and saying “These are fish.”
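The point that a paraphyletic group is just “a clade minus some arbitrarily excluded subgroups” can be made concrete in a few lines of code. The following toy model is my own simplification (the tree fragment is illustrative, not a complete or authoritative taxonomy):

```python
# Toy fragment of the vertebrate family tree: child -> parent.
# (A simplified illustration, not a real or complete taxonomy.)
TREE = {
    "Chondrichthyes": "Vertebrata",   # cartilaginous "fish"
    "Osteichthyes": "Vertebrata",     # bony "fish"
    "Tetrapoda": "Osteichthyes",      # four-limbed vertebrates
    "Amphibia": "Tetrapoda",
    "Amniota": "Tetrapoda",
    "Mammalia": "Amniota",
    "Sauropsida": "Amniota",
    "Aves": "Sauropsida",
}

def clade(root):
    """A clade: `root` plus ALL of its descendants, with no exceptions."""
    members = {root}
    changed = True
    while changed:
        changed = False
        for child, parent in TREE.items():
            if parent in members and child not in members:
                members.add(child)
                changed = True
    return members

# A paraphyletic group is a clade with some subgroups arbitrarily removed:
reptiles = clade("Amniota") - clade("Mammalia") - clade("Aves")
fish = clade("Vertebrata") - clade("Tetrapoda")
```

In this sketch, `reptiles` keeps Sauropsida but drops mammals and birds, and `fish` keeps the cartilaginous and bony “fish” while dropping the tetrapods—even though everything dropped belongs to the very clade being carved up, which is exactly what makes both groupings paraphyletic rather than clades.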

The issue is that DrJ holds the position that, for any given English word, such as “fish,” there is one and only one absolutely correct usage of this word, that this correct usage is completely independent both of all historical context and of actual usage, and that any other usage of the word is, in some absolute way, INCORRECT or WRONG.

Thus he maintains that English speakers in Chaucer’s day, including Chaucer, were using the word “fish” wrongly, because they did not use it as modern scientists use it, which is the one, single, eternally correct way—even though, as I mention above, this modern “scientific” usage is essentially an arbitrary paraphyletic grouping.

This position generates what I take to be a number of absurdities, more than sufficient to refute the position by a reductio ad absurdum.  For example, it entails that in many cases, and definitely in the case of the word “fish,” whichever English speakers first coined the word “fish” used the word they had created WRONGLY, the very instant they used it at all.  They had just now made a new word to name something, but they were ignorant of the fact that the word which they had just now created really names something else—their intentions in creating the word notwithstanding.

Now, there are many natural facts about animals, e.g. that whales give live birth and so are mammals.  But “the correct name of this animal or animal kind in English” is not a natural fact.  I would have said that it is a social fact or a convention (which still seems correct to me).  But DrJ denies this.  He maintains that there are facts about the correctness of word usage which are neither natural facts nor conventions.

As a Platonist, I am perfectly prepared to admit that there are such facts, non-natural facts: for example, facts of logic or mathematics.  Logical facts and mathematical facts are not natural or physical, because they are about things which are immaterial; nor are they conventional or social, since they are entirely objective.  I would not, however, have thought that English word meanings were the sorts of things about which there could be transcendent metaphysical facts outside time and space, not subject to the actual usage and conventions of English speakers. I had always taken it to be obvious that word usage was a convention or social fact.

I have spoken of DrJ’s belief in PLATONIC WORD-MEANING HEAVEN. I intended this term to show (what I take to be) the absurdity of his position, but I want to stress that it is LOGICALLY NECESSARY that he believe in something like this. If the CORRECT meaning of words is determined neither by nature nor by convention, it MUST necessarily have some kind of eternal, transcendent ground beyond convention and outside of nature—if not God, then at least something like Platonic Word-Meaning Heaven. I don’t care what he calls this transcendent ground beyond nature which determines eternal correct word meaning—all that matters to me is that he must believe in such a thing, because whatever it is (and perhaps he knows the one, true, eternally correct name for it?), this is what he is appealing to when he holds there is a standard of correctness for words which is non-conventional and above nature.  He cannot be, for example, appealing to the usage of modern biologists, because his claim JUST IS that this usage is eternally correct, and—he has been very clear on this point—it was correct in Chaucer’s day, before any actual English speaker used the word “fish” in this way—which is what enables him to say that Chaucer’s use of “fish” is incorrect, and that the use of “fish” by whichever English speakers who first used the word “fish” was equally incorrect.

We are not debating about HOW the word “fish” is used by modern biologists (although that might be interesting: its paraphyletic nature seems to add an ad hoc, arbitrary element, which makes his case that it is the one, true, eternally correct use even more suspect)—we are discussing the grounds of DrJ’s claim that ONLY the use of the word “fish” by modern biologists is or can be CORRECT, and that any and all other uses, past, present, or future, are, necessarily, INCORRECT or WRONG.  It is very clear DrJ is maintaining that correctness in word-usage is in some way an eternal truth comparable to the truths of mathematics and logic.  I remain unconvinced by this claim, and have yet to see any good evidence offered for it, beyond a dogmatic insistence that it is so, ad nauseam.  But I want to know what his actual arguments are for this Platonic position on word-usage.  I am a Platonist, so he’s already got my concession that there are such things as eternal, non-physical, non-temporal standards (e.g. of math, logic, ethics, etc.).  I’m just not convinced that word usage is like that. Given the way that words vary among languages and the way they change meaning over time, it seems absurd to me to class word-meanings among the eternal objects—although of course we are forced to speak of eternal entities by means of temporal words, but that’s another story.

My suspicion is that he is continually confusing the USAGE OF A WORD TO REFER to some truth about the world with the REFERENT BEING REFERRED TO IN THE USAGE.  That is to say, I think he is doing the equivalent of confusing the natural fact that snow is white with the English sentence “snow is white.”  The fact of the color of snow is what it is, regardless of how that fact is EXPRESSED in English.  The very same fact can be expressed in German as “Schnee ist weiss.”  But the WORD USAGE which expresses the fact is CONVENTIONAL.  Nothing in the fact of snow’s being white in any way entails that the stuff has to be called “snow” or the color called “white.”  These are arbitrary sounds that convention has linked with the frozen precipitate that falls from the sky and the color or tint which is a quality of said precipitate.

Plato’s Cratylus is devoted to the question of whether or not there are “true names” for things, or whether names are conventional.  Cratylus says there are true names, and Hermogenes holds they are conventional.  For SOME MAD REASON the two call upon Socrates to decide the matter—Socrates then proceeds to refute Cratylus and argues him into conceding that words are conventional, and before Hermogenes has time to gloat, Socrates turns on him, refutes him, and argues him into the position that there are true names for things.  And with the opponents having switched sides, and the question still unanswered, Socrates leaves. RULE OF LIFE: DO NOT ASK SOCRATES TO “SETTLE” AN ARGUMENT.

Anyway, I have a number of questions for DrJ that still remain unanswered, to wit:

1. What are the reasons for your belief in a transcendent ground which determines CORRECT word usage over and above actual historical usage? Can you demonstrate that such a Platonic realm of word meanings exists? Or that there are transcendent facts about word meaning in the same way or in a similar way that there are transcendent truths about mathematical entities and logical entities?

2. How do you access this transcendent place wherein the one, true, eternally correct word meanings of English are stored? I would like to know the true, eternally correct meanings of words, so I can use them properly.  How do I check which definition is the one, true, eternally correct one?  What sort of argument would demonstrate that usage X of a word is the ‘one, true, eternally correct’ one and usage Y is not?

3. If your thesis is true, why don’t linguists, who study language, recognize that it is true? Or if any linguists do maintain that there is one and only one eternally correct usage for any given English word, can you please give me their names?  And can you point me to their arguments as to why they think this is true?

4. If your thesis is true, why don’t lexicographers recognize it to be true? Why do dictionaries, at least every one I am familiar with, give more than one definition for some words? Are lexicographers unaware that there is and can be only one true correct meaning for each word? For example, the Oxford English Dictionary says of itself:

The OED is not just a very large dictionary: it is also a historical dictionary, the most complete record of the English language ever assembled. It traces a word from its beginnings (which may be in Old or Middle English) to the present, showing the varied and changing ways in which it has been used and illustrating the changes with quotations which add to the historical and linguistic record. This can mean that the first sense shown is long obsolete, and that the modern use falls much later in the entry.

Why does the OED focus on “the varied and changing ways in which [a word] has been used” instead of on the “one, true, eternally correct meaning” of a word? Shouldn’t the one, true, correct meaning be regarded as more important than all the historically incorrect usages? Why does the OED speak as if multiple senses of one word are all valid, if this is (as you say) false? Or again, the OED says

What’s the difference between the OED and Oxford Dictionaries?

The OED and the dictionaries in ODO are themselves very different. While ODO focuses on the current language and practical usage, the OED shows how words and meanings have changed over time.

Why does the OED think that word meanings change over time, when they are, in fact, static and fixed by your transcendent standard of correctness? Why do all dictionaries think this? Why have you not corrected the OED and the various other dictionaries on this extremely important point? Surely, if it is worth taking the time to explain to me on Twitter that words have only one, true, eternally correct meaning, it is worth explaining this to the OED and other dictionaries, so they can change their priorities.  Or can you point me to a dictionary whose policy is to give ONLY the one, true, eternally correct definition of each word?

5. If every English word has one, true, eternally correct meaning, does this go in reverse? Does every eternal meaning have only one true word to which it corresponds? Or do you hold that although there is only one, true, eternally correct meaning for each word, there can be arbitrarily many words (e.g. in other languages) that express this meaning?  In other words, is “fish” the one, true, eternally correct word to express the one, true eternally correct meaning of “fish,” so that all other languages are wrong to not use the English word “fish”? Is Chaucer’s “fissh” wrong? Is the German “Fisch” wrong? Are they “less wrong” than the French “poisson”? Is it always English words that are correct, so that all speakers of non-English languages are always using all words wrongly, just because they are not speaking the one, true, eternally correct language, English? Or are the true, eternally correct words divided among the various languages of the world?

In brief:

  1. What is your argument that words have only one, true, eternally correct meaning, such that all other uses are wrong?
  2. How do you access whatever supernatural eternal ground of word meanings there is wherein you find these eternal standards of correct word use?
  3. If you are correct, why don’t linguists acknowledge that you are correct?
  4. If you are correct, why don’t lexicographers acknowledge that you are correct?
  5. Are you saying that not only does every English word have one, true eternally correct meaning, but that every meaning has one, true, eternally correct word that expresses it?

The Rationality of Christian Faith

Contemporary atheists are fond of defining faith as “belief without evidence.”

This is not the Christian understanding of the concept, and I don’t know of any other religion that makes faith central in the way Christianity does, so it isn’t clear what their target is.  Islam perhaps?

As an argument, it is roughly on a par with defining mathematics as “absurd,” defining empirical observation as “utterly unreliable,” and going on to deduce that the scientific method is based on absurd and utterly unreliable things, and (the argument continues) anyone who trusts in science as a source of knowledge must be a very foolish person, since he places his trust in a method based completely on things which are absurd and utterly unreliable. Let us all laugh at such a fool.

The Christian understanding of faith is expressed directly, if somewhat cryptically, in the Epistle to the Hebrews:

Now faith is the substance of things hoped for, the evidence of things not seen.

Ἔστιν δὲ πίστις ἐλπιζομένων ὑπόστασις, πραγμάτων ἔλεγχος οὐ βλεπομένων·

This is one of the fairly rare passages where the King James Version seems to be really the best translation into English.

I want to note in particular the word ἔλεγχος which is here translated as evidence. This is an entirely legitimate translation of ἔλεγχος—which means “proof” or more precisely, “that whereby something is proven.” ἔλεγχος is what Socrates engages in.  It is what happens in courts of law during a trial (a trying; a testing).  ἔλεγχος is submitting something to a process of rigorous testing and examination.

In short, the Christian understanding of faith is so far from the stock atheistic trope of “belief without evidence,” or “belief completely without any rational foundation,” that it has built into it a word which means both evidence and rigorous testing and examination.

It is a very odd modern superstition, one to which atheists are especially prone (and I have found atheists to be among the most superstitious of human beings), that human beings in ancient times were somehow extraordinarily, even inhumanly, credulous, and would more or less believe anything they were told.  Not only is there no evidence for such a belief; there is plenty of evidence that human beings in ancient times were no more or less credulous than human beings today.

David Hume, that incorrigible skeptic, writes in a justly famous passage in his An Enquiry Concerning Human Understanding:

It is universally acknowledged that there is a great uniformity among the actions of men, in all nations and ages, and that human nature remains still the same, in its principles and operations. The same motives always produce the same actions: the same events follow from the same causes. Ambition, avarice, self-love, vanity, friendship, generosity, public spirit: these passions, mixed in various degrees, and distributed through society, have been, from the beginning of the world, and still are, the source of all the actions and enterprises, which have ever been observed among mankind. Would you know the sentiments, inclinations, and course of life of the Greeks and Romans? Study well the temper and actions of the French and English: You cannot be much mistaken in transferring to the former most of the observations which you have made with regard to the latter.

To think the ancients more credulous than human beings are today is unfounded and absurd.  Yes, some of them were credulous and went in for belief in absurd cults and fad ideas—phenomena we see around us all the time today.  I note, as an example, that a significant portion of the GDP of Nigeria comes from the so-called 419 scams, in which Westerners are emailed a notice claiming they have inherited millions of dollars, and all they need do to become rich is send their bank account number to the “lawyer” sending the email—people fall for this obvious scam by the thousands; you yourself have almost certainly gotten multiple versions of these emails. Tell me again how “modern Westerners” are no longer gullible thanks to science or the Enlightenment?

And some of the ancients, the educated, were extremely skeptical and critical (“skeptical” and “critical” both being Greek words and concepts, after all).  They were certainly well aware that when one man or group says “This is so” and another man or group says “It is not so,” that it is necessary to test the claims of each, to rigorously examine them, and only then make a judgment about who is speaking truly.  This is what an ἔλεγχος IS.

Now it is true that Christians are called upon to believe in things which are not testable or verifiable by any ordinary means at the disposal of human beings.  Some matters concerning the nature of God and matters concerning, for example, the future state of things, exceed our human powers to know or to test directly.

But consider the following:

  1. I know that God exists; indeed more than exists, God is existence itself;
  2. I know that God is all-good; indeed more than all-good, God is goodness itself;
  3. I know that God is loving; indeed more than loving, God is love itself;
  4. I know that God is all-knowing; indeed more than all-knowing, God is truth itself;
  5. I know that God is all-trustworthy; indeed more than trustworthy, God is fidelity itself;
  6. God has revealed certain things (πράγματα) to human beings as true things, which I have no independent way to test or verify, since they exceed the cognitive powers of human beings.

These things, in number 6, are the matters to which faith pertains when it plays a cognitive role. Faith, as a supernatural virtue, particularly means an attitude or stance or comportment towards God—the best term would really be the German Verhältnis, which has only the disadvantage of being a German, and not an English, word. Faith, first and foremost, is NOT a cognitive word synonymous with belief.  This is made very clear in the Epistle of James 2:19:

You believe that God is one; you do well. Even the demons believe—and tremble.

σὺ πιστεύεις ὅτι εἷς ἐστιν ὁ θεός; καλῶς ποιεῖς· καὶ τὰ δαιμόνια πιστεύουσιν καὶ φρίσσουσιν.

Faith, πίστις, is however a word in the same linguistic ballpark as “belief”, πιστεύειν—but it is something more than mere belief.  As we are told by St. James, even the demons believe in God.  We can put it more strongly: the demons know that God exists, but they certainly do not have faith in God. Faith, therefore, is not primarily something that has to do with knowing or believing.

While it is necessary to emphasize that Christian faith, πίστις, is not primarily a cognitive word, it is not without a cognitive dimension.  It is, in one of its aspects, a holding something to be true, an assenting of the will to certain things as true, on the basis of one’s trust in God’s revelation of these things.

Knowledge is warranted true belief.  Suppose I come to believe certain divinely revealed truths to be true just on the basis of this revelation.  The basic question is: am I warranted in forming and holding such beliefs?

In ordinary human circumstances, report or testimony is at least partially warrant-conferring. We believe much of what we do about the past, for instance, on the basis of indirect testimony of those who were alive and present at the time.  We admit the testimony of witnesses as a basic kind of evidence used in trials of law.  Almost all of our scientific knowledge is something we take on trust from our science teachers—we are not in a position, after all, to directly verify such things ourselves via our own observation and experiment.  Just to take an obvious example, how many of the ~8 billion human beings have access to the Large Hadron Collider at CERN, which one would need in order to “verify” some of the basic claims of particle physics? Do the scientists who do have access to it have the engineering skills or knowledge to “verify” that it has been designed properly? It is very obvious that the great majority of human knowledge, including scientific knowledge, is taken on trust—and if one wishes to answer “But such knowledge can be verified in principle!”—well, unless you have in fact verified it all, that too is something you take on trust.

At the same time, common human prudence cautions us against taking all human testimony at face value. Why? Human beings are known by us sometimes to be mistaken, sometimes to lie, or, for some other reason, to give false reports and false testimony.  So it is wise and prudent to take testimony and report as not fully warrant-conferring—although it would be foolish to dismiss testimony and report as if they could never be warrant-conferring.  As an analogy to testimony, we regularly warrant many beliefs on the basis of our senses, such as sight, while being aware we are subject to various sorts of optical illusions, hallucinations, distortions, varying environmental and lighting conditions, blind spots, and so on. It would be manifestly foolish to hold that sense perception can never be warrant-conferring on a belief, on the grounds that sense perception is not infallible.

In this light, consider my six points above.  Is a report or revelation from God warrant-conferring? Am I rationally warranted in believing such a report or revelation? Is a divine revelation truth-warranting for a belief?

I cannot honestly see how I am not rationally warranted in forming and holding beliefs on this basis.

The normal objections which would make a report from another not fully warrant-conferring—that he might be mistaken or deceptive or ignorant of the full picture, etc.—can none of them apply to God.  Since God is all-knowing and all-truthful and all-trustworthy, what reason could be adduced that belief in something revealed by God is rationally unwarranted?  To believe the word of a being who is all-knowing, all-truthful, and all-trustworthy is, as far as I can tell, the best and strongest kind of warrant for belief one could have.

An atheist will of course say that I do not know propositions 1-5, which I claim that I do know. But that is really immaterial in this context.  The point remains that IF I know 1-5 (which is really the same as knowing 1 only, since 2-5 follow from it), THEN faith or πίστις as Christians understand that term IS properly cognitively warrant-conferring.  An atheist, as I have noted elsewhere, would have to demonstrate the truth of the proposition that there is no God, to show that God is an insufficient basis for knowledge.

And since knowledge just is warranted true belief, and faith is properly warrant-conferring, faith is a source not merely of belief, but of knowledge.

This is as far from “blind belief without evidence” as it is possible to get. It is the opposite, the antipodes, of that.

Far from being irrational belief without evidence, Christian faith constitutes fully rationally warranted knowledge.  The charge that Christian belief is irrational is true only in the following limited and specific sense: Christians know some things which are in themselves above reason, and so beyond the power of human reason to directly ascertain or verify, but which they are fully rationally warranted in believing.  The charge that Christian belief is irrational in the sense of being rationally unwarranted belief is simply false.

At most, an atheist would be able to claim something to the effect that “Nothing can be warrant-conferring that human cognition is unable to verify for itself,”—but this will leave the atheist in the impossible position of having to verify this principle, something that cannot be done.  Christians need not be impressed or concerned when the only objections advanced against them are unwarranted and self-defeating claims.

Again, I am aware that an atheist will attempt to challenge the warrant-conditions of faith, but—and this point is crucial—he cannot do so in a non-question-begging way, unless and until he has established that there is no God.  That is, until he has demonstrated the truth of atheism in the traditional sense of belief in the proposition that there is no God.  This is so because the existence or non-existence of God is not a matter which leaves the epistemic landscape unchanged; the very nature of warrant will differ depending on this question, so it cannot be prescinded from in favor of purely epistemological arguments which, for now, table the ontological question of the existence of God.  I write this paragraph merely as a reminder of this, because it is such an important point; I have discussed this point elsewhere in Atheists Cannot Evade the Existence Question, which I invite you to read if you haven’t.

Monotheism vs Polytheism

I have long been under the impression that “monotheism” and “polytheism” are two of the most unfortunate words in English, or in any language.  The two words present one who hears them with a near intellectual necessity to think the concepts are speaking about the same sort of thing, in exactly the same way, as “monosyllabic” means a word of only one syllable, and “polysyllabic” means a word with more than one syllable.

“Polytheism” does mean “many gods.”  However, “monotheism” does not mean “only one god,” but rather “God, rather than the gods.” The two terms seem to be saying something parallel, on the same level, but they are not.  This generates endless confusion, because monotheists are NOTHING AT ALL LIKE POLYTHEISTS.  In fact, it may be the case that all polytheists ARE ALSO monotheists—at least in one sense of the term.

I was trying to track down the source of these execrable terms. I had assumed some idiot scholar somewhere was responsible—and I was eager to add his name to my list of people who will get a well-deserved beating if I ever get my hands on a time machine—but the truth turns out to be much more complicated and interesting, as truth often does. And the idea that they are “parallel” terms on a par with one another seems to have been a kind of accident of language, as much as anything else (although there is some scholarly blame, as we’ll see).

Apparently, “polytheist” was originally the coinage of the philosopher Philo of Alexandria.  Now, Philo was both a Jew and a Platonist and would never be so stupid as to fall into the confusion above—just the opposite, in fact.  In coining the term “polytheist,” he was attempting to underscore to his Greek audience the manifest absurdity of the idea of putting together “many” and “gods” (he also needed a more polite and non-Jewish word for “Gentiles”).

Philo was a learned Jew of Alexandria who was of the “middle camp” of Jews.  In those days, in Alexandria especially, the Jewish community was divided into three camps of roughly equal size: some Jews were Jewish purists, who wanted to wrap themselves up in Jewish tradition and keep out encroaching Hellenism at all costs.  And some Jews (like everyone else) saw the glory and greatness of Greek civilization and wanted to Hellenize. Philo was an advocate of the middle way between the Jewish Separatists and the Hellenizers: while he recognized the greatness of Hellenic culture and thought, he held that the Jews ought not to be considered inferior, comparable to the many other barbaric and unsophisticated cultures that the Greeks had simply swamped and absorbed.


Nothing could make the Jews stop being the Jews, not even the Greeks—and that is saying something. And for all the glory and greatness and excellence of the Greeks, it was clear that there was SOME ONE ESSENTIAL THING that the Greeks were lacking, and this was precisely what the Jews HAD: namely, a proper knowledge of the nature of God.

Philosophically speaking, the Greeks could arrive at a correct conception of God.  Socrates was already aware that “the gods” were not “the God” (to whom Socrates frequently referred—indeed, it was Socrates’ rather evident disdain for ‘the gods’ that made the charge of atheism leveled against him plausible).  Plato arrived at the ἡ ἰδέα τοῦ ἀγαθοῦ or the Idea of the Good, which as Socrates says in The Politeia (The Republic), is “still beyond being, exceeding it in dignity and power.”  Aristotle arrived at the idea of the “prime mover” or absolutely first mover of all things, which is an unmoved mover.

And certainly no one can doubt the mythological and poetic richness of Greek mythology and tradition.  Aesthetically, the gods of Olympus are as remarkable as anything else the Greeks set their minds and hands to.  But Greek mythology, beautiful fantasy that it is, falls rather short as actual theology.

Between the splendid gods of the Greeks and Plato’s The Good or The One, there lies a gulf.  It was precisely this gulf, or gap, that the early Jewish-Hellenic philosophers such as Aristobolus and Philo were convinced that the Jews could fill: the Greeks needed the Jewish understanding of the ONE GOD, of THE GOD who is THE GOOD, the God who is not a magnificent being among beings, nor an abstract and impersonal first principle like Parmenides’ BEING IS, but is the I AM.  For the Greeks, the idea of a personal being was practically a contradiction to the idea of The One or The Good.  The particularity involved in being an I seemed to them to exclude the universality of Being. The Jews, Philo argued, were the people who had learned how to see God precisely as the eternal I AM.

So Philo coined the term “polytheism” as a kind of barbed term. An intelligent, educated Greek could hear the dissonance in the word.  There is something wrong about it, something unstable, deliberately so.  The “poly” is at odds with the “theos.” If the nature of divinity is to be ultimate or highest or first, how could there be many such? Deep down, the Greeks knew well enough that what really mattered was ‘the God’ (as Socrates would have said) and not the various gods of the myths and stories.  The idea of divine unity was one of the central (and somewhat hidden) teachings of many of the Greek “mystery” cults. The word “polytheist” would have struck the educated Greek ear like “manyTheOne-ist”—it is a word that is literally its own refutation.  By calling the Greeks “polytheists”, Philo was attempting to jar them into philosophical questioning, to awaken questions to which, he intended to show, the Jews had answers that the Greeks did not. It wasn’t an easy task to convince the Greeks that another people had insights they did not, but it wasn’t impossible either: the Greeks were never averse to borrowing any idea from any other people they encountered, and everyone, even the Greeks, recognized that the Jews were somehow, for good or ill, a unique and singular people.

Philo met with at least partial success and can perhaps be credited with laying some of the ground upon which Christianity would take root and grow a couple centuries later. But we are chasing the words “polytheist” and “monotheist,” so we must leave fair Alexandria, a city made half of philosophy and half of demotic riots.

“Polytheism” appears to have lain dormant as a term until the French philosophe Jean Bodin resurrected it in 1580, taking it over directly from Philo.  It reappeared in an era newly obsessed with the idea of classification.

“Monotheism” seems to be an English coinage, arising from various theological debates centered around a group called the Cambridge Platonists, which included, among others, Ralph Cudworth and Henry More.  The O.E.D. attributes the first use in print of “monotheist” to More’s 1660 An Explanation of the Grand Mystery of Godliness, but there seems to be some evidence that Cudworth used it first.  Cudworth is known to be the first to use “theism” in print, which he also coined, and it is likely that the terms came about in discussion between Cudworth and More, who were likeminded, and close friends.

“Monotheism” in the original usage of Cudworth and More seems to be equivalent to “pantheism,” the “mono-” being taken as the idea that there is just one thing, and that thing, everything that is, is God.

Cudworth and More were both “Cambridge Platonists” but were also thoroughly men of their time, that is to say, of the early Enlightenment.  They were part of what scholar Nathan MacDonald has rightly spoken of as “an explosion of ‘-isms'” at that period in Western intellectual history.   MacDonald cites N. Lash on the point:

It is, I think, almost impossible to overestimate the importance of the massive shift in language and imagination that took place, in Europe, in the seventeenth century; a shift for which de Certeau has two striking phrases: the ‘dethroning of the verb’ and the ‘spatialization of knowledge.’

Suffice it to say that a massive shift in understanding was occurring, one in which European thinkers discovered a vast new love of categorization and classificatory schemata, and Cudworth and More became the dominant theological voices that shaped the discourse in English. Simply consider this: how many people, today, would think it unusual to USE the term “monotheism”? And yet, the term dates from 1660, and has neither a clear nor direct connection with, for example, Christianity.

In fact, when one considers it, “the number of deities involved” seems to be an obviously terrible and completely misleading way of dividing the world’s various “religions.” What has a mere quantity got to do with anything substantial?

So Cudworth and/or More coined the term “monotheism” (as well as “theism”, coined by Cudworth) in the mid-17th century, but as everyone should be aware, those who coin a term do not necessarily always get to be the ones who determine its meaning.

From the day of More and Cudworth, the term “monotheism” has had a long and tortuous history, undergoing at least THREE major scholarly reconceptualizations, plus innumerable idiosyncratic uses of the term by individual scholars, either in connection with this tradition or not.

The most complete account I’ve been able to find of the history of the word “monotheism” is Nathan MacDonald’s Deuteronomy and the Meaning of Monotheism, to which I refer any reader interested in a truly elaborate account of the term across the 3 1/2 centuries it has been in use, in all its twists and turns.

Although scholarly use has always been rather nuanced, if not consistent, when the words “monotheism” and “polytheism” passed into popular usage it was probably inevitable that they would be put together as parallel terms by those unfamiliar with the historical, scholarly uses of the terms.  The forms of the words, just by themselves, suggest exactly the same sort of parallelism which is implied by pairs such as “monosyllabic and polysyllabic” or “monogamy and polygamy.”

The point to emphasize is that these terms are not from within any given faith or religious tradition. No “polytheists” ever called themselves “polytheists.”  Indeed, for polytheists, there could hardly be a more irrelevant question than the number of the gods. Nor did those who were categorized by Enlightenment scholars as “monotheists” ever think of themselves under such a heading.  It was, indeed, absolutely crucial for the Hebrews that YHVH was absolutely ONE and absolutely OTHER than the various gods of the various other peoples.  But of course the ONENESS and UNIQUENESS of YHVH has nothing to do with mere enumeration, as if one were to count “the gods of the Hebrews” and arrive at the result “one.”

So it eventually happened—and this was probably inevitable—that “polytheism” came to mean, in the popular mind, “belief in many gods” and “monotheism” “belief in only one god.”  This was not what the term “monotheism” was ever meant to express, but no matter. The form of the words was enough to make the case in most people’s minds.

This is conceptually ruinous in several ways. (1) The alienness of the words to the faiths and traditions they attempt to classify causes systematic misunderstanding of those faiths and traditions.  (2) The words falsely appear to be MUTUALLY EXCLUSIVE, when they are not. (3) And worst of all, these rather impoverished concepts strongly engender a tendency to conflate God with the gods: the ONE GOD, who is the absolute source of all things, with the various “powers and principalities” which populate the spiritual world.  This conflation would be not unlike mistaking the Platonic form of DOG for a dog—except it is much worse, since there is less distance between a dog and the intelligible form of dog—both are creatures, particular dogs participate in the form of dog, etc.—than between God and the gods.

God has no more to do with the gods than He does with rocks, planets, electrons, or assault rifles.  The only creatures of which we are aware that have any connection to God not shared by all other creatures are human beings, and this because we are created by God in the image of God—or at least so the Christian teaching has it.

Vedantic Hinduism is a good case in point of the near-complete uselessness of the terms “polytheism” and “monotheism.”  Hindus believe in literally millions of gods, so they are obviously polytheists.  And yet, at the same time, all the millions of gods are created by Brahma; they are merely the “dreams of Brahma” (as are men and the world) and will vanish when Brahma wakes.  And yet we are still not done—because Brahma is much more like Plato’s Demiurge, who creates the cosmos by giving recalcitrant matter form, than Brahma is like the God of the Jews or the Christians.  Still beyond Brahma is Brahman, which is the absolute and ultimate beginning and end of all things, including Brahma, creator and sustainer of all lesser things.  Very clearly, it is Brahman—not Brahma, and certainly not any of the millions of other Hindu gods—that Jews and Christians would recognize as God (albeit, they would say, God imperfectly understood, due to the lack of direct divine revelation).  Brahma in his relation to Brahman seems a little parallel to the heresy of Arianism, which held Christ to be the first and highest creature of God the Father—something the Church emphatically rejected.  Let Saint Nicolaus’ breaking Arius’ nose at the Council of Nicaea stand for the whole of the orthodox Christian tradition on this point (Yes, Santa Claus punched out the arch-heretic. It only makes me think the better of him.)  And much the same could be said about Plato’s idea of the Good—this is what Jews and Christians would recognize as God, and not the Demiurge spoken of in the Timaeus or the Nous spoken of in the Philebus.

To bring this to a close, my judgment is that the terms “polytheism” and “monotheism” are at best useless and in truth, usually positively harmful as aids to understanding.  As concepts, they do not clarify the things they name, but in fact, positively mislead and distort them. I suggest that you cease using them.  They do no good, and considerable harm.

By far the greatest harm the terms do is make it too easy to confuse God with the gods, something which, as I have noted, is the most basic distinction one needs to be able to make, in order to talk meaningfully about God.  See my posts God vs the gods,  Which God Exists?, God, gods, and kamikazes, as well as my reblog of the Maverick Philosopher, Bill Vallicella’s “Some Of Us Just Go One God Further“, in which he takes apart the banal atheist trope about theists “being atheists about other gods.”