Explanation and Epistemology

William G. Lycan

    Explanation and epistemology are closely related in at least three ways.  First, the notion of explanation is itself an epistemic one.  To explain something is an epistemic act, and to have something explained to you is to learn.
    Second, there is a form of ampliative inference that has come to be called ‘inference to the best explanation,’ or more briefly ‘explanatory inference.’  Roughly: From the fact that a certain hypothesis would explain the data at hand better than any other available hypothesis, we infer with some degree of confidence that that leading hypothesis is correct.  There is no question but that this inference is often performed.  Arguably, every human being performs it many times in a day, perhaps without letup.
    Third, there is an epistemological thesis, sometimes called ‘Explanationism,’ to the effect that explanatory inference is (not only performed but) warranted and does epistemically justify the accepting of its conclusion.  That thesis comes in several grades of strength, but even its lowest grade is controversial among epistemologists.

1 Explanation
    What is explanation?  That question has been voluminously discussed, though by philosophers of science rather than by epistemologists, since the heyday of Logical Positivism.  The Ur-answer of the 20th century was Hempel and Oppenheim’s (1948) Deductive-Nomological (D-N) theory, also called the ‘covering law’ model.  According to that theory, one explains a fact or event (the ‘explanandum’) by showing that it followed by law of nature from a preëxisting set of circumstances or conditions.  Such a showing would take the form of a deductive argument, a deduction of the explanandum from the antecedent conditions and one or more laws of nature.
    To take an example of Jaegwon Kim’s (1967):  A room had its walls painted white.  But the walls later blackened.  Why did they?  Explanation: (i) The paint contained lead carbonate; (ii) the gas used for lighting the room contained sulfur (evidently the example is set in a pre-electric century); (iii) lead carbonate combines with sulfur to form lead sulfide; and (iv) lead sulfide is black.  (Though (i)-(iv) do not strictly entail that the walls blackened, they can be eked out: We can add that the burning of the gas gave off sulfur, that the sulfur diffused through the air and made contact with the paint on the wall, etc.)  Initially, there was paint containing lead carbonate, and sulfur was introduced; so the laws of chemistry kicked in and blackening was the then logically inevitable result.
    More generally, a D-N explanation takes the form:

                  C1, C2,…, Ck      Antecedent conditions
                  L1, L2,…, Lr        Laws of nature
                  ∴  E                       The explanandum

--where the horizontal line and the ∴ symbol mean that E has been logically deduced from the two sets of premises.
    The D-N model was soon beset by counterexamples and other objections of many different types.  (For a review, see Salmon (1989).)  But for our purposes, the important thing to notice is that a D-N explanation exhibits what are actually several different features, each of which is individually relevant to the business of explaining.
    First, a D-N explanation is a case of subsuming.  The explanandum is collected under a generalization, given by the pertinent laws.  Thus it is exhibited as part of an overall pattern.
    Second, a D-N explanation shows that given the antecedent conditions, the explanandum was to be expected, or could have been predicted, by anyone who knew the laws.
    Third, if we assume that for X to lead by natural law to Y entails that X causes Y, a D-N explanation is a causal explanation.  (More strongly, it presents a case of causal necessitation; it does not merely cite some causal factors.)
    Fourth, a D-N explanation is a complete explanation, not in the sense of containing all imaginably relevant information, but in that of containing enough explanatory information to entail its explanandum.  Given the antecedent conditions and the laws, it is not even conceivable that the explanandum should not have ensued.
    For proponents of the D-N model, it was only natural that the foregoing four features should coincide.  But one lesson of the subsequent critical literature is that they do not fully coincide in real life; there are many cases of explanation in which they come apart.  Take the last feature first:  Real explanations in real science almost never have it.  Toy examples in Newtonian physics do, and perhaps our paint example would if thoroughly enough filled in, but quantum-mechanical, biological, geological, and certainly meteorological explanations do not.  Quantum-mechanical explanations are probabilistic; and special-science explanations rest too heavily on idealizations and are too vulnerable to lower-level hardware breakdowns.
    So, in particular, a causal explanation need not be ‘complete’ in the entailing sense.  A perfectly good causal explanation can also fail to show that its explanandum was to be expected.  Standard examples of this are highly improbable events whose mechanisms we can work out but only after the fact.  E.g., evolutionary biology can explain the emergence of a trait in a population, but could not have predicted in advance that that trait would emerge (Scriven (1959)); an atomic nucleus suddenly decayed at time t and gave off an alpha particle; quantum mechanics can and does explain the particle’s emission, but quantum mechanics itself entails that the emission at t was highly improbable and could not possibly have been predicted (Railton (1978)).  Nor need a subsumption show that its explanandum was to be expected; the nucleus in Railton’s example is subsumed under a rigorous quantum-mechanical law, and thus elegantly exhibited as part of a pattern, but the law is a probabilistic law rather than a universal generalization.
    And a perfectly good causal explanation can fail to subsume in the D-N sense.  I can show that one event caused another without knowing any interesting general law that underlies my explanation.  For example, I may work out by Mill’s Methods that it was the chicken salad that must have gone bad and poisoned the stricken cafeteria patrons, without having any idea what toxin did the poisoning or according to what biological laws.  Conversely, a good subsumption need not be causal, as when physics relies on purely geometrical explanation (or for that matter, when geometry does).  And by the same token, we can show that a surprising event was to have been expected (had we known one or two of its antecedents in light of an inductive correlation between them and it), without having the faintest idea of how or why the antecedents led to the event.
    Thus, so far we have three distinct though overlapping paradigms for scientific explanation—subsumption, showing-to-be-expected, and causal.  And there are at least two more.  One is the ‘pragmaticist’ conception associated with Scriven (1962), van Fraassen (1980) and others, of filling a gap in understanding by answering a ‘why’-question in a contextually informative way.  It can readily be checked that filling a gap in understanding, being a matter of individual psychology, is conceptually independent of any of the preceding three paradigms.  A fifth paradigm, sometimes touted in textbooks, is the reduction of the unfamiliar to the familiar.   Such reduction, though it happens (as in the case of assimilating electricity to the coursing of little balls through a pipe), is somewhat unusual in science; certainly molecular genetics, general relativity and economics do little of it.  ‘Scientific explanation’ begins to look like a family-resemblance sort of category, comprising the distinct conceptions mentioned so far as well as perhaps others.
    Certainly it comprehends a few more specialized explanatory formats as well.  There is the sort of function-analytical explanation that pervades cognitive psychology, biology, computer science, systems theory as applied to artifacts, electrical engineering, and auto mechanics (Simon (1969), Wimsatt (1976); Cummins (1983)).  One explains the behavioral capacities of an organism or system by decomposing that system into subsystems and showing how the subsystems coöperate to produce the corporate output of the whole; then, for any of the subsystems, the process can be repeated at the next lower level of organization, and so on.  (An automobile works—locomotes—by having a fuel reservoir, a fuel line, a carburetor, a combustion chamber, an ignition system, a transmission, and wheels that turn.  If one wants to know how the carburetor works, one will be told what its parts are and how they work together to infuse oxygen into fuel; and so on.)  There are the special patterns of explanation found in history and sometimes in the social sciences:  We often explain people’s behavior by rationalizing it, by showing why it was a good idea from the principals’ point of view (Dray (1963), Dennett (1987)).  This style of explanation presupposes norms of theoretical and practical rationality; nothing of the sort appears in physics or chemistry, though such norms are not entirely foreign to evolutionary biology.
    ‘Scientific explanation,’ if that means explanation of any type that is regularly offered in the sciences, is motley.  It is not likely to be captured by a single set of necessary and sufficient conditions.
    Of course, not all explanation is scientific explanation, nor is epistemology primarily concerned with scientific explanation.  Most of the explanations that ordinary people (including ordinary philosophers) provide and receive in real life are not scientific, but are couched in terms of everyday things, events and people.

2 Explanatory inference
    As its name suggests, an inference to the best explanation proceeds from an explanandum or a set of data to a hypothesis that explains the data better than any available competing hypothesis would.  To put it in that way sounds scientistic, and indeed, the sciences do justify their theoretical posits on grounds of the posits’ explanatory power.  The germ theory of disease is accepted (in part) because it explains striking epidemiological facts, as well as patients’ symptoms.  Even more vividly, the atomic theory of matter is accepted because it explains the very remarkable generalizations of classical chemistry.  Astronomical hypotheses were originally accepted only because they explained the synchronic and diachronic patterns of light observable in the night sky.
    But, again, explanatory inference is hardly limited to science.  A detective solves a murder case by reflecting on the various clues and constraints and arriving at the best explanation of the clues given the constraints, the story that makes the best sense of the clues.  An auto mechanic diagnoses your car trouble by inferring the best explanation of the car’s symptoms.  It may be tempting in such cases to say that the detective or the mechanic has arrived at the ‘only possible’ explanation and therefore has really performed a deduction, by ruling out all the alternate possibilities (Sherlock Holmes talked explicitly in this way).  But that would not be accurate.  There are always many possibilities that have not been logically ruled out by the evidence, but are just poor or outright fanciful explanations:  The murder might have been committed by a very small paratrooper who landed silently in pre-dawn darkness on the garage roof and had some way of getting through the window without breaking it, etc.  Or it might have been committed by invisible aliens.  ‘The only possible explanation’ has to mean ‘the only even halfway plausible explanation.’
    Nor is explanatory inference limited to professional practitioners such as detectives and mechanics.  We all perform it in everyday life as well.  I find what appear to be droppings on my lawn, and infer that an unleashed animal has been by.  The last slice of pizza has unexpectedly disappeared from the refrigerator, and I infer that my daughter has stopped at home after school instead of proceeding directly to her orchestra rehearsal.  Some philosophers, such as Russell (1959) and Quine (1960), argue that our constant flood of beliefs about ordinary physical objects in our environment is the result of constant explanatory inference from the ways we are appeared to.
    In some but not all of the foregoing examples, the inferred explanans can be directly checked after the fact.  For example, the murderer may be caught and confess; the auto mechanic may then check the relevant engine part and verify her diagnosis; I can interrogate my daughter when she gets home.  But ordinarily we make the explanatory inferences with some confidence whether or not we then go on to check them.
Representing explanatory inference schematically:

                  F1, F2,…, Fn are facts in need of explanation.
                  Hypothesis H explains the Fi.
                  No available competing hypotheses would explain the Fi as well as H does.
                  ∴  H is true.

    Some commentary is required.  (1) What are ‘facts’?  For our purposes, they are just states of affairs that, in the context, are reasonably presumed to obtain.  (They do not have to be facts about any particular privileged subject-matter, such as about objects that have been ‘directly observed.’  Nor do they have to be known with certainty.)  (2) When is a fact genuinely ‘in need of’ explanation?  An interesting question, but let us not try to settle it here; substantive disagreements about what does or does not need explaining are rare.  (3) ‘Explains’ in the second premise cannot, without question-begging, mean ‘actually explains’; rather, it is used in the sense of ‘would explain if true.’  (4) Which type(s) of explaining, from among the types distinguished in section 1, sustain the present pattern of inference?  Causal explanation, certainly; probably showing-to-be-expected; and probably function-analytical; but extensive discussion is needed here.  (5) ‘As well as’ in the third premise implies an evaluative comparison; one hypothesis explains the facts better than another one explains them.  There must therefore be criteria for ranking hypotheses in this way.  And so there are; see the next section.  (6) Of course the explanatory argument form is not deductively valid; the conclusion is not logically entailed by the premises.  The ∴ sign should here be pronounced as ‘therefore, probably,’ indicating that the argument is ampliative.
    The foregoing schema must be restricted in at least two ways.  First, the third premise alone is not strictly enough to motivate the superior hypothesis H, because H might be only barely superior to a very poor field (Lehrer (1974), p. 180).  Suppose a strange and weird event occurs, and no faintly plausible explanation suggests itself.  All we can think of is that the event was perpetrated by aliens.  The hypothesis that it was caused by aliens from Venus is less implausible than the hypothesis that it was caused by aliens from outside our solar system, since it at least does not require interstellar travel.  But it is not at all plausible and should not be accepted even to a small degree.
    The moral is that there must be a threshold.  In addition to merely outstripping its competitors, H must meet some minimum standard of credibility.  (Though we must leave it open that H be antecedently improbable; obviously we do often confidently infer explanations that would have seemed highly improbable until we had seen the particular set of data they explain.)
    One might think of adding that besides meeting the minimum standard, H must outstrip its nearest competitor by a considerable margin.  After all, if H´ is nearly as good an explanation as is H, falling just barely short, why should we then plump decisively for H?  But we do not need to plump decisively for the conclusion of an explanatory inference.  Like any ampliative argument form, our schema’s instances warrant their conclusions to varying degrees.  In the present circumstance, though we would not be warranted in accepting H in marked preference to H´, we would be warranted to a very small degree in accepting H rather than H´.
    The second restriction is required by the possibility of subjects who are very unimaginative.  Suppose I am especially gullible.  I receive a gaudy letter in the mail telling me that I have won a money prize, and all I need to do to collect the prize and perhaps further millions is to fill out the complicated form in the envelope and return it by mail to the Publishers’ Raffle Co. to participate in their final drawing.  I do this, believing that I have won a money prize and that all I need to do to collect the prize and perhaps further millions is to fill out the form; no alternate explanation of my having been sent the letter occurs to me.  Or suppose I am just no good at thinking up hypotheses:  I find the slice of pizza missing and I conclude wonderingly that my refrigerator is defective in such a way that it makes pizza disappear without trace.  These are not reasonable inferences.
    So we need to restrict the third premise by requiring that a reasonable range of hypotheses have at least tacitly been considered.  (What range is ‘reasonable’ in a given context will depend both on the contextual facts and on the subject’s existing beliefs and expectations.)
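    The restricted schema lends itself to a toy computational caricature.  In the following sketch (Python; every hypothesis name, plausibility score, and numerical threshold is invented purely for illustration and carries no philosophical weight), the two restrictions just noted appear as explicit checks: the winning hypothesis must clear a minimum standard of credibility, and a reasonable range of rivals must have been considered:

```python
def best_explanation(hypotheses, threshold=0.5, min_rivals=2):
    """Toy inference to the best explanation.

    `hypotheses` maps a hypothesis to an (invented) overall plausibility
    score in [0, 1].  The top-scoring hypothesis is returned only if
    (a) enough rivals were considered, and (b) it clears the threshold;
    otherwise None is returned, i.e., judgment is suspended.
    """
    if len(hypotheses) < min_rivals:
        return None  # too unimaginative: the range of rivals is too narrow
    best = max(hypotheses, key=hypotheses.get)
    if hypotheses[best] < threshold:
        return None  # barely outstripping a very poor field is not enough
    return best

# The aliens case: the 'winner' of a uniformly implausible field is rejected.
poor_field = {"aliens from Venus": 0.02, "interstellar aliens": 0.01}
assert best_explanation(poor_field) is None

# The cafeteria case: a genuinely plausible hypothesis beats its rivals.
cafeteria = {"bad chicken salad": 0.8, "mass hysteria": 0.2,
             "invisible poisoners": 0.01}
assert best_explanation(cafeteria) == "bad chicken salad"
```

In any real inquiry, of course, the scores themselves would be the contested matter, fixed (if at all) only by the comparative criteria the next section takes up; nothing in the sketch pretends otherwise.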
    But now we must address the question of what makes one hypothesis a better explanation than another.

3 The explanatory virtues
    We must consider the case in which two competing hypotheses explain or accommodate overlapping data, and neither has been refuted by having been shown to entail something false; thus they are viable mutual competitors.  How do we tell which is ‘better,’ i.e., which should be preferred?  In practice, we make such judgments on the basis of various pragmatic reasons (Quine and Ullian (1978), Thagard (1978), Harman (1986)):  H may be preferable because it is simpler, or because it explains more than does its competitor, or because it is more readily testable, or because it is less at odds with what we already reasonably believe, or (more likely) because of some more complex combination of such factors.
    The preference for simplicity in particular is illustrated by the standard example of experimental scientists’ practice in curve-fitting on graphs:  Given a set of data points that in fact lie along a straight line, any scientist will go ahead and draw a straight line through them rather than any more complicated curve, and leave it that way unless further, refuting data should come in.  This compelling smoothness of the linear hypothesis is a virtue of some sort, one that is not shared by the hypotheses respectively expressed by other curves that pass through the very same data points, such as perhaps a sine curve.  Notice that there are countlessly many more complex curves that pass through those same points, including one that looks like a rather scrawled handwritten token of ‘God defend New Zealand.’
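    The point that countless rival curves fit the same finite data can be made concrete (Python; the particular line and perturbing polynomial are invented for illustration): adding any multiple of a polynomial that vanishes at every data point yields a rival hypothesis that fits the data exactly but diverges everywhere else.

```python
# Five data points lying exactly on the line y = 2x + 1.
xs = [0, 1, 2, 3, 4]
ys = [2 * x + 1 for x in xs]

def line(x):
    """The simple (linear) hypothesis."""
    return 2 * x + 1

def wiggly(x):
    """A more complex rival: the same line plus a quintic 'bump' that
    vanishes at every data point, so it fits the data equally well."""
    bump = 0.5
    for xi in xs:
        bump *= (x - xi)
    return 2 * x + 1 + bump

# Both hypotheses pass through all five data points...
assert all(line(x) == y for x, y in zip(xs, ys))
assert all(wiggly(x) == y for x, y in zip(xs, ys))

# ...but they disagree off the data (e.g., at x = 0.5), and any other
# multiple of the bump gives yet another rival through the same points.
print(line(0.5), wiggly(0.5))
```

The data themselves thus never force the choice; the preference for the straight line is a preference for simplicity, whatever exactly that comes to.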
    There are many different types and respects of simplicity other than the simplicity of a mathematical function: elegance of structure; parsimony of posits and/or of ontology; fewer principles taken as primitive; and no doubt more.  It should not be suggested that ‘simplicity,’ even simplicity in a single respect, is easily measured, or even that it can be given a clear general characterization; see Foster and Martin (1966) and Sober (1975).  And simplicity’s different kinds and respects overlap and cut across each other, often conflicting; there are no set rules for resolving such conflicts.
    When a theory explains more than does its competitor, especially if the added explananda are taken from a distinct range of phenomena, we speak of greater explanatory power; other things being equal, we prefer a hypothesis of greater power.  Perhaps this is a higher-order manifestation of the drive for simplicity; it makes for greater simplicity in the overall belief system that contains the two ranges of data.
    Other pragmatic virtues include:
    Testability.  Other things being equal, a hypothesis H will be preferred to a competitor H´ if H has more readily testable implications.  The verificationists were rash to hold that untestability amounted to cognitive meaninglessness, but testability is an important component of a hypothesis’ merit.  Intuitively, if a hypothesis makes no testable predictions, it has little explanatory force.  Suppose someone believes that the tides and the weather at sea are controlled by capricious demons who are invisible and otherwise undetectable.  This subject has no further belief on the topic, and is unable to predict future weather because, he says, it all depends on the demons’ whims at the time.  Thus, his only available explanation of a past event is, ‘The demons must have wanted it that way,’ which (n.b., even if we are perfectly happy to accept the existence of demons) is not highly explanatory.
    Fecundity.  H will be preferred to H´ if H is more fruitful in suggesting further related hypotheses, or parallel hypotheses in other areas.  (Perhaps this is a higher-order form of simplicity again.)
    Neatness.  H will be preferred to H´ if H leaves fewer messy unanswered questions behind, and especially if H does not itself raise such questions.
    Conservativeness.  H will be preferred to H´ if H fits better with what we already believe.  If this sounds dogmatic or pigheaded, notice again that, inescapably, we never even consider competing hypotheses that would strike us as grossly implausible; the detective would never so much as entertain the hypothesis that the crime was committed by invisible Venusian invaders, nor the mechanic that your car trouble is caused by an infusion of black bile or evil fairy dust.  Nor should we consider such hypotheses, even if we could enumerate them all; someone who insisted on doing so would be rightly accused of wasting everyone’s time.  All inquiry is conducted against a background of existing beliefs, and we have no choice but to rely on some of them while modifying or abandoning others—else how could any such revisions be motivated?
    Every pragmatic virtue is a matter of degree.  And there is the obvious complication:  Our preference for any one of the virtues always comes qualified by ‘other things being equal,’ and the ‘other things’ are the respective degrees of the other virtues.  Clearly the virtues can conflict among themselves.  Perhaps the most obvious tension holds between simplicity and conservativeness, since often simplification is gained only through bold overthrowing of previously accepted theory, as in the case of Copernican vs. Ptolemaic astronomy.  But because of the complexity of any detailed real-world case study, there is no generally accepted policy for weighing the various degrees of the various virtues against each other in any particular inquiry.  The lack of such a policy understandably has led some epistemologists to skepticism or relativism concerning theory choice based on pragmatic virtues.  At the very least, it would be an Augean task to sort through the history of science and other factual inquiry, comparing cases of what we consider reasonable theory choice and trying to sift out the combinations of degrees of the various virtues that motivate our judgments of the reasonableness of those choices.
    Still, in the vast majority of ordinary cases we do not much disagree on which hypotheses are better than others.  Such disagreement is the exception rather than the rule, and even when there is disagreement, consensus can often be reached through discussion that makes more of the relevant factors explicit.  Even in science, disagreement at the frontier presupposes a great deal of agreement on theory choices made in the past.

4 Explanationism
    (The slightly barbarous label was coined by James Cornman (1980).)  As the term shall be used here, it designates roughly the doctrine that what justifies an ampliative inference--or more generally the formation of any new belief--is that the doxastic move in question improves the subject’s explanatory position overall, or as we sometimes say, the move increases the explanatory coherence of the subject’s global set of beliefs.  In particular, the Explanationist holds that some beliefs are indeed justified by ‘inference to the best explanation’ as described in the previous section.  Explanationism derives from Peirce and Dewey, by way of Quine (1960) and Wilfrid Sellars (1963).  But Harman (1965) was the first to articulate it and defend it against better entrenched competing epistemologies.  It has since received support from Thagard (1978, 1989), Lycan (1988), and Lipton (1991).
    One must distinguish between at least three grades of Explanationism; we may call them respectively ‘Weak,’ ‘Sturdy,’ and ‘Ferocious.’  (A fourth and higher grade will be mentioned shortly.)  Weak Explanationism is only the modest claim that explanatory inference can epistemically justify a conclusion.  (As we shall see, that claim has been vigorously disputed.)
    Sturdy Explanationism adds that explanatory inference can do its justifying intrinsically, i.e., without being derived from some other form of ampliative inference, such as probability theory, taken as more basic.  (That claim is disputed by Cornman (1980) and by Keith Lehrer (1974), who argue that explanatory inference is at bottom a use of probability theory.  Other theorists have tried to reduce individual explanatory virtues, such as simplicity, to probabilistic features.)
    Ferocious Explanationism adds that no other form of ampliative inference is basic; all are derived from explanatory inference.  (That claim is disputed by almost everyone.)  Interestingly, Harman originally defended Ferocious Explanationism, ignoring the two weaker forms, by trying to exhibit various common patterns of traditional inductive inference as enthymematic instances of explanatory inference; see also Lycan (1988, Ch. 9).  Harman’s mature Explanationist view of all reasoning is given in his (1986).  At times he seems to incline toward an even more ambitious doctrine, which we might call ‘Holocaust Explanationism’: the view that all inference and reasoning, including deductive as well as ampliative, is derived from explanatory inference.  That would require reconstructing simple logical deduction in explanatory terms.
    Weak Explanationism is a commonsensical view, but of course we must address objections that have been made against it, especially as they are a fortiori objections to the stronger doctrines.  Sturdy Explanationism is slightly more contentious, but in the absence of any actual attempt at reducing all the pragmatic virtues to probabilistic or confirmation-theoretic notions, there is little point in considering criticisms of it that do not also apply to Weak.  (The Weak-but-not-Sturdy view has not been pursued in any detail, and in any case would be too technical to be usefully discussed here.)  Following consideration of the objections to Weak, we shall then concentrate on the pros and cons of Ferocious Explanationism.

5 Two objections to Weak Explanationism, and replies
    Van Fraassen (1980) argues that explanation—even the most rigorous causal explanation that takes place in science—is interest- and/or context- and/or understander-relative in each of a number of ways.  If explanation is in the eye of the beholder, but we remain more or less realist about epistemic justification, then the former is no fit foundation for the latter.
    Actually van Fraassen calls attention under this heading to what are a number of different and unrelated phenomena, of which the most pertinent are the following.  (1) The nonuniqueness of causes.  Van Fraassen points out (pp. 115, 125-126) that any choice of ‘the’ cause of an event from among all the event’s contributing factors is undeniably interest-relative; the most one can find in nature is a set of causes, causal factors and conditions that combine to lead by law to the event.  (Reply: Quite so.  But nothing in the Explanationist program presupposes otherwise.)  (2) Knowledge-relativity.  Van Fraassen gives an example (p. 125) designed to show that explanation is relative to what one knows.  (Reply: Of course an explanation may be helpful or unhelpful to one depending on what one already knows.  No surprise there, and nothing damaging either to anyone’s theory of anything: that whether an explanation tells you anything you didn’t already know depends on what you already knew is tautologous, and so irrelevant to any philosophical argument.)  (3) Alleged interest-dependence of the explanans-explanandum asymmetry in particular.  Van Fraassen accepts the contention attributed to Sylvain Bromberger, that normally the height of the flagpole explains the length of its shadow rather than the other way around, and he joins Bromberger (1966) in taking this to embarrass the D-N model.  But he resists the diagnostic conclusion that the asymmetry is something in nature that the concept of explanation latches onto.  Rather, he says, explanation itself, a relation solely between theories and minds, is asymmetrical.  The asymmetry is psychological and depends on our interests, combined, of course, with our natural expectations based on de facto regularities of nature.  (Reply:  Van Fraassen’s key example, his romantic story of ‘The Tower and the Shadow,’ is unconvincing; see Kitcher and Salmon (1987).)
    (4) Relativity to contrast-class.  Here we come upon a phenomenon that indeed must be addressed in any discussion of explanation and epistemology.  Van Fraassen reminds us (pp. 127-128, cf. Garfinkel (1981)) of the familiar point that ‘why’-questions, and hence explanations (their answers), seem ambiguous as to focus within their complement clauses--that is, ‘why’-questions seem to request different explanations depending on emphasis.  Van Fraassen offers the example, ‘Why did Adam eat the apple?’  In asking that, we normally would want to know why Adam ate the apple rather than scrupulously avoiding it as ordered by God.  But different explanatory requests could be produced using nearly the same sentence, by varying surface syntax or just intonation: ‘Why was it Adam who ate the apple?’; ‘Why was it the apple that Adam ate?’; ‘Why did Adam eat the apple?’ (the last pronounced with stress on ‘eat’).  Requests for explanations, and hence presumably explanations themselves, are finer-grained--individuated more finely--than are explananda as expressed by ordinary unmarked propositions.
    What this shows is that explanation has a built-in ‘as opposed to...’ clause.  (‘Why did Adam eat it as opposed to avoiding it?’, vs. ‘Why did Adam as opposed to Eve eat it?’, vs. ‘Why did he eat the apple as opposed to the serpent, grilled?’)  In each case, what van Fraassen calls a contrast-class is presupposed—people who might have eaten the apple, things that Adam might have eaten, and the like.  This seems a fair criticism of the original D-N model.  The model would have to be modified to make the premises of a D-N explanation home in on the syntactic element of the explanandum-conclusion that signals the appropriate contrast-class.  But what bearing has the relativity to contrast-class on Weak Explanationism?
    Lipton (1991) offers an excellent discussion of contrast-relativity.  As he describes the phenomenon (p. 35), ‘[a] contrastive…[explanandum] consists of a fact and a foil, and the same fact may have several different foils.’  He neither affirms nor (strictly) denies van Fraassen’s apparent assumption that all explanations are implicitly contrastive, but he gives heavy weight to the contrastive case.  He argues that often it is easier to explain a contrast than to explain the uncontrasted fact alone (cf. Garfinkel (1981), p. 30): Lipton’s preference for contemporary plays may explain why he went to see Jumpers last night rather than Candide, but that does not suffice to explain why he went to see Jumpers.  (Though sometimes it can be the other way around; we can explain why Jones contracted paresis without being able to explain why Jones rather than Smith did.)
    One might suppose that what is going on in a case of contrast-relativity is just that two different explananda are being considered.  If so, then van Fraassen’s argument would in no way damage Weak Explanationism, because the explanatory-inference schema already isolates and presupposes a single given explanandum.  But Lipton (though himself an Explanationist) goes on to make a point that casts some doubt on this simple move, by arguing that there is no simple reduction of a contrastive explanation to noncontrastive (particularly truth-functional) form.  In particular, one cannot reduce explaining contrastively why P rather than Q to explaining noncontrastively why (it is the case that) P and not Q, because the latter would involve explaining each conjunct, and as we saw, either conjunct might be harder to explain than was the original contrast.
    Lipton’s antireduction case is convincing, but notice that it does not help van Fraassen refute Weak Explanationism.  For even if there is no truth-functional or other simple reduction of contrastive explananda to noncontrastive ones, we can still distinguish the contrastive explananda from each other.  Van Fraassen himself even gives a helpful notation in which to do so, in terms of his contrast-classes attaching to syntactic elements of the original uncontrasted explananda.  So far as has been shown, it is still entirely open to us to hold that cases of contrast-relativity are merely cases of distinct explananda, and if that is what they are, they are no threat to Weak Explanationism.
    Let us then turn to a second objection to Weak Explanationism.  Van Fraassen offers the following eloquent rebuke to common sense.

Judgements of simplicity and explanatory power are [admittedly] the intuitive and natural vehicle for expressing our epistemic appraisal.  What can an empiricist make of these...virtues which go so clearly beyond the [more narrowly evidential] ones he considers pre-eminent?
    There are specifically human concerns, a function of our interests and pleasures, which make some theories more valuable and appealing to us than others. Values of this sort, however,... cannot rationally guide our epistemic attitudes and decisions. For example, if it matters more to us to have one sort of question answered rather than another, that is no reason to think that a theory which answers more of the first sort of question is more likely to be true.  (p. 87)
Van Fraassen’s second paragraph suggests two different arguments against Weak Explanationism.  Each is attractive on its face.
    The first argument can be put as follows.  The explanatory virtues’ pragmatic value is just a mixture of corner-cutting convenience--really a form of epistemic laziness--and merely aesthetic appeal.  In curve-fitting, we prefer the smoothest hypothesis because the smooth curve both is easier to draw and looks prettier.  But why should anyone think that convenience and prettiness count in any way toward truth?  Why should a theory’s being simpler than another theory make the first theory more likely to be true, to match reality?  The Grecian Urn’s motto, that beauty and truth are one, was just Keats running romantically out of control.
    Sometimes a similar worry is expressed about simplicity by asking why we should be entitled to the assumption that ‘the world is simple.’  But that is a fallacious reading of the explanationist appeal to a pragmatic virtue.  The preference for simple theories over complex ones is an epistemic norm only, not a metaphysical claim or assumption about what the world is like.  Whether or not it does have justifying force as the Weak Explanationist contends, its doing so would not depend on any prior vague assumption about the structure of the world.  The norm directing one to use a sharp chisel when sculpting marble does not depend on the marble’s itself being ‘sharp’ in any sense.
    Why, then, should we believe that a hypothesis’ being simpler (or more fruitful or neater or whatever) makes that hypothesis more likely to be true?  Some philosophers have suggested that the appeal to a pragmatic virtue can justify only if one has first shown in some substantive way that that virtue is truth-conducive, even if one need not do that showing by invoking a sweeping metaphysical generalization about what the world is like.  It seems unlikely that anyone could establish such a thing.  To make an induction over the history of thought, for example, in the hope of establishing that simpler theories had a better truth-tracking record than more complicated ones, would not only be unfeasible but would require us (now) to have access to past truths independently of appeals to simplicity and the other pragmatic virtues.
    Lycan (1988, pp. 155-56) responded to that challenge in a way that can better be put as follows.  Let us advert to the epistemology of epistemology.  Epistemology is a study of norms, the norms governing belief and inference.  Inescapably it rests on ‘intuitions’ about which such things are justified, reasonable, legitimate, and the like.  In epistemology, as in ethical theory and for that matter in deductive logic, the intuitions in question are normative to begin with.  Now, the (here meta-)epistemology appropriate to a normative subject is that of ‘reflective equilibrium’ as proposed by Goodman (1955) for deductive logic, used by linguists to justify rules of theoretical syntax, and developed by Rawls (1971) for ethical theory:  Roughly, we begin with our instinctive normative intuitions and build an accordingly normative theory to systematize them. Mutual adjustment occurs until what Rawls calls ‘narrow’ reflective equilibrium is reached; then factual knowledge and perhaps also other norms are admitted to the equation, resulting in further adjustment and eventual ‘wide’ reflective equilibrium.
    On the present view, epistemology starts with the attempt at narrow reflective equilibrium.  The move to wide equilibrium will involve attending to probability theory, empirical cognitive science and perhaps other areas.  But each equilibrium is likely to respect the pragmatic virtues.  For our pragmatic preferences are not merely preferences, but normative practices: We instruct our science students in the techniques of curve-fitting, epicycle elimination and the like, and science would be in a bad way if we did not.  Of course, the reflective procedure might not turn out as the Weak Explanationist predicts, in which case we should reject Explanationism after all.  But if the procedure does preserve and shore up Weak Explanationist intuitions, that is all the justification that can, or need, be given for the view and for the pragmatic virtues themselves.  On pain of regress, some set of epistemic norms or other must be seen to be epistemologically primitive.  We may be wrong in thinking that the pragmatic virtues are what occupy that role, but the primitiveness the Weak Explanationist claims for them cannot itself be an objection.  Notice that aside from reflective equilibrium, we cannot give a (non-question-begging) justification for Modus Ponens, either, or for a syntactic rule such as Equi-NP Deletion. Certainly we cannot ‘show’ that either of those is truth-conducive; but there is no embarrassment in that.  As Bentham said, that which is used to prove everything else cannot itself be proved.
    But surely there must be some connection to truth?  Yes:  If our wide reflective equilibrium vindicates Weak Explanationism in the foregoing way, then the pragmatic virtues do justify.  If a subject’s belief is thus justified by the virtues, it is justified.  But to be justified in believing that P is just to be justified in believing that it is true that P, really true that P, and as many iterations of ‘true that’ as one might like.  Truth is not something to which we have access independently of holding justified beliefs.  The demand for an independent ‘connection to truth’ is misguided.
    The method of reflective equilibrium has been disputed in meta-epistemology, as it has been in meta-ethics; see particularly Stich (1990).  If it should be discredited, then the present reply to van Fraassen fails and his challenge stands.  Note, though, that we need not capitulate and try to show that the pragmatic virtues conduce directly to truth.  There may be a third alternative.  Stich himself advocates an open-minded, itself pragmatic methodological attitude towards theory preference and epistemic values generally.

6 A tougher objection to Weak Explanationism
    The second argument suggested by the van Fraassen passage is one which has also been made by Ian Hacking (1982) (cf. Cartwright (1983)):  Truth is a relation between a theory or hypothesis and the world.  But the pragmatic virtues are relations between theories and our human minds, to which relations the world seems irrelevant. The virtues have to do with the roles that hypotheses play in our private cognitive economies, not with anything external to us.  They are (in Hacking’s phrase) only what make our minds feel good.  The point is no longer just to ask rhetorically why making our minds feel good should be taken to be a warrant of truth; it is that the virtues are positively the wrong sort of properties to be so taken.
    This second argument is most compelling for the case of conservativeness.  That a hypothesis fits comfortably with what we already believe makes that hypothesis pleasant and attractive to us, but hardly justifies it.  To think it does justify is to assume that what we already believe is justified, merely by the fact of our believing it, and that idea strikes most philosophers as false on its face.  (But see Sklar (1975) and Lycan (1988, Ch. 8), countered in turn by Christensen (1994).)
    Two replies may be made to the second argument.  First, it falsely assimilates the pragmatic virtues to self-seeking emotive or other purely conative ‘reasons’ for believing things (as in Pascal’s Wager, or a case in which we are offered a lot of money if we can get ourselves to believe that unrestrained exploitation of the environment will be a good thing for everyone).  The fact is that the virtues are genuinely cognitive and in one important sense epistemic values.  (On the difference between the pragmatic values and purely conative reasons for believing, see Harman (1997).)
    There is an idea, emphasized by Reliabilists but prevalent among epistemologists more generally, that ‘truth is the goal of cognition,’ and hence that nothing should count as cognitive unless it can be shown to be truth-conducive or at least is somehow directly truth-conducive.  But it is fairly easy to see that truth cannot be the only epistemic value. Suppose it were.  If the goal, like Descartes’, is merely to avoid falsehood, then we could reach our ultimate epistemic goal simply by confining our assent to tautologies; we would still thereby believe countless truths.  If, instead, the idea is to believe all truths, the goal would be radically unreachable.  Realizing those things, the truth-centered epistemologist usually alludes to a ‘favorable balance of truth over error.’  But ‘favorable’ as regards what?  Some further value or interest must be consulted to judge what is ‘favorable,’ or the suggestion is meaningless.
    Second reply to van Fraassen and Hacking:  More specifically, it is hardly unreasonable to suppose with Peirce that beliefs are for something, and that cognition has a function.  Truth cannot possibly be the only goal of cognition.  There must at least be something in the way of informativeness or other usefulness, however that might be measured.  Since belief is a guide to action, a belief’s other pragmatic virtues may also contribute to its overall cognitive goodness.
    Lycan (1988, Ch. 7) argues that the way in which the pragmatic virtues do this is precisely by making cognition efficient in guiding action.  They are the product of good design or ‘design’ by natural selection.  A hyperskilled cognitive bioengineer fitting human beings for a post-Pleistocene environment would, arguably, have endowed us with the same habits of hypothesis preference as those listed in section 3 above.  For example, she would have built us to prefer (other things being equal, as always) simpler hypotheses to complex ones.  Simpler hypotheses are more efficient to work with.  Complexities incur greater risk of error in application.  And for that matter, simplicity is itself a form of efficiency, in that we want to achieve plenitude of result, in the way of data subsumed and results predicted, but with economy of means.  For the same sorts of reasons, the engineer would program us to seek explanatory power when other costs are low.
    The engineer would not want us to load up on beliefs that have little or nothing to do with our immediate interactions with our environment, unless those beliefs play an enormous unifying, simplifying and systematizing role.  Hence, she would have us prefer more readily testable hypotheses to less testable ones.
    Other things being equal, it would be more efficient for us to be able to extrapolate a type of hypothesis motivated by one subject-matter to other, not obviously related areas, so very likely the engineer would have us seek fruitfulness.  It is perhaps more obvious that she would instill an aversion to messy belief systems full of dead ends and paths that lead nowhere.  If we think of belief systems as maps or charts, clearly a neat one will allow us to find our way around our environment more surefootedly than would a messy one.  (But what if the system has been made too neat, and contains inaccuracies?  The recommendation of neatness methodologically assumes, here as always, that the system in question is unrefuted.  Notice too that in fact, as in real-world cartography, accuracy should not always trump neatness; some error is tolerable, even mandatory, in the interest of smooth and fast action, if the particular error is unlikely to cause much trouble.)  Also, a particular belief that raises awkward questions thereby causes distraction, sapping at least some time and mental energy.
    Finally, the engineer would make us conservative, at least to the minimal extent of not revising our beliefs without some reason to do so.  Like social change, all belief revision comes at a price, drawing on energy and resources.  Arbitrary and gratuitous changes of belief, therefore, are to be avoided.  If there were a habit of making such changes, the resulting instability would be inefficient if not constantly confusing.
    Thus, from the design point of view, it seems to be a good thing that we cognize according to the pragmatic virtues.  We would not function at all well unless we did so.
    (It must be emphasized that the Darwinized Peircean view sketched in the preceding few paragraphs is not an attempt at justifying appeal to the pragmatic virtues in any usual epistemic sense of ‘justify.’  As we have seen, the Weak Explanationist maintains that the virtues are identified through reflective equilibrium as basic cognitive values of ours; being basic, there is nothing further that could justify them.  The function of our cognitive bioengineer story is only to rebut Hacking’s charge that they are only mind candy, not cognitive in their value.)
    The invoking of an idealized bioengineer as a metaphor for evolutionary design will and should raise some hackles, for it suggests the Panglossian view that our cognitive powers are optimally suited for getting us about the place, or at least that when we are in top epistemic form we will seek the pragmatic virtues for all we are worth.  The first of those suggestions is plainly false, since nothing about Homo sapiens is optimally suited for anything very interesting.  The second suggestion is very likely not true either, since, once we have started to consider adaptation to our environment, there may be other cognitive features that the engineer would find useful but that can conflict with the explanatory virtues, say for special purposes.  (See Stich (1985); Lycan (1988, pp. 145-53).)  If the Explanationist wants to stand by the present Peircean response to van Fraassen’s and Hacking’s challenge, s/he must block the two Panglossian suggestions.  That in itself is no great feat, because they are not strictly entailed by the engineer story; but the Explanationist must block them in a principled way, showing how and why the explanatory virtues are of central cognitive value even though they are neither adaptively optimal nor even (necessarily) adaptively overriding within the cognitive system.
    A further objection to the design response is that the idealizations needed to bring the engineer story into line with Explanationist epistemic norms are greater and more tendentious than is the familiar idealization of our species’ selection history that affords ‘designer’ metaphors in biology generally (Davies (2001)).  Evolution by natural selection aims at reproductive fitness only, and it is hard to see how the value notions of epistemology could ultimately be reduced through any series of independently motivated idealizations to nothing but reproductive fitness.  Here one does get the sense that truth would then have nothing to do with it.
    Van Fraassen (1989) pursues the ‘mind candy’ argument in more detail, and his new version(s) will have to be answered.

7 Weak Explanationism vs. classical confirmation theory
    According to a classical, purely formal confirmation theory, if two competing theories or hypotheses explain or accommodate exactly the same data, neither of them may be preferred to the other.  For each is confirmed to the same degree, and so the two hypotheses are precisely equal in epistemic status, warrant or credibility.  Yet in real life, one of the two may be preferred very strongly.  It seems to opponents of Weak Explanationism that we have considerations of two different sorts: Data, hard evidence, bearing a quasi-formal probabilifying or confirming relation to each of the competing hypotheses H and H´, and the additional pragmatic virtues attaching differentially to H and H´.  The confirming relation is commonly expressed by terms like ‘likely,’ ‘probable,’ and ‘confirms’ in the foregoing narrow sense; the model for it is formal inductive logic or confirmation theory based on probability theory, conceived by the Positivists as a strict analogue to deductive logic.  Let us call it ‘narrow confirmation.’
    Following mainstream epistemology, let us also use the word ‘justify’ and its variants to signify overall epistemic rationality.  ‘Justified’ will then mark out the class of beliefs it is epistemically rational to hold, and ‘justification’ will mean the relation between those beliefs and the evidence in virtue of which holding them is rational.  The question then becomes, is justification in this general epistemic sense exhausted by narrow confirmation?  Which is again to ask whether the explanatory virtues are merely practical bonbons of no specifically epistemic, truth-conducing value, or are instead genuine reasons for accepting a theory as more likely to be true than is a competitor that lacks them.
    The claim that narrow confirmation does exhaust justification is what Lycan (1998) called ‘the spartan view.’  There it was argued that the spartan view gives rise to skeptical quandaries: van Fraassen’s and others’ radical skepticism about scientific unobservables; renewed Evil-Demon skepticism about the external world; and Goodman’s ‘Grue Paradox.’  A proponent of the spartan view must either accept some skeptical thesis or take on the task of showing how a classical confirmation theory can overcome the skeptical quandaries without at least tacit appeal to the pragmatic virtues.  (Lipton (1991, Ch. 6) argues that the Raven Paradox, in addition, will yield to Explanationist treatment.)  If one is persuaded and wishes to resist skepticism, it seems there are three paths: to modify classical confirmation theory in order to respect the virtues; to relegate classical confirmation theory to a confined role in inquiry, granting that justification far outruns narrow confirmation; or to abandon confirmation theory entirely as a bad job.
    Roughly the first path has been taken by Glymour (1980), whose idea is to broaden the notion of ‘confirmation,’ so that H may be counted as ‘better confirmed’ than H´ even though H and H´ are still equally probabilified by the evidence base.  If, however, one wants to demote or abandon confirmation theory entirely, one faces the daunting task of building a systematic account of the pragmatic virtues, their measurement and their comparative interaction.  And to date, no theorist has taken more than a step or two in that direction.

8 Ferocious Explanationism defended
    The most obvious direct argument in favor of Ferocious Explanationism is from the presumed truth of Weak Explanationism and the lack of any promising program for reducing the pragmatic virtues to probabilistic or confirmation-theoretic notions; if explanatory inference does justify, but so far as we know the virtues are not reducible to some more familiar epistemic value, then so far as we know, explanatory inference justifies directly, without being derived from some more basic form of ampliative inference.  Reflective equilibrium may also lead to this conclusion, on top of Weak Explanationism in the first place.
    More distinctively, Harman (1965, 1968) began the task of reconstructing standard inductive argument forms as enthymematic explanatory inferences.  For example, from the fact that all the emeralds observed so far have been green, we inductively infer that all emeralds are green.  Harman would say this is because the latter conclusion best explains the observations.  Suppose it did not; suppose we are somehow independently assured that it just happens that all the emeralds we have observed so far have been green.  Then it does not seem that we are entitled to our usual inductive inference—especially when we take into account that all the emeralds we have observed have also been grue (= green and examined before some future time tf or blue and not examined before tf).  Explanatory considerations are needed, the Explanationist will argue, in order to justify inferring ‘All emeralds are green’ instead of the competing generalization, ‘All emeralds are grue.’
    And consider the more general form of enumerative induction:

                   N% of all the observed Xs have been F.
                  ∴ Roughly N% of all Xs are F.

If we suppose that what licenses a sampling inference of this form is that its conclusion best explains its premise, we can also account for the two characteristic fallacies associated with the form.  The besetting fallacies are those of insufficient sample and biased sample.  The former is committed when too few Xs have been observed; the latter when we have independent reason to think that the proportion of F Xs in our sample differs from the proportion of all Xs that are F.
    ‘Insufficient sample’ is a fallacy, the Explanationist will say, because when the sample is too small, the suppressed assumption of best explanation fails.  Suppose I have observed just two marbles in my entire life; one was yellow and one was blue.  Those facts are hardly best explained—probably are not explained at all—by the hypothesis that 50% of all the marbles in the world are yellow and 50% are blue. At least, the latter hypothesis seems to make no prediction that if someone samples two marbles, one will be yellow and the other will be blue.
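The insufficient-sample point can be put numerically.  The following is a minimal sketch (the function name and figures are illustrative, not drawn from the text): under many rival hypotheses about the population, the two-marble sample is roughly equally probable, so the 50/50 hypothesis is in no position to be its best explanation.

```python
# Likelihoods of the one-yellow, one-blue sample under rival hypotheses
# about the proportion of yellow marbles in the world.  (Illustrative
# sketch; names and numbers are invented for the example.)

def likelihood_of_sample(p_yellow):
    """Probability of drawing exactly one yellow and one blue marble
    in two independent draws, if a fraction p_yellow of all marbles
    is yellow and the rest are blue."""
    return 2 * p_yellow * (1 - p_yellow)

for p in (0.3, 0.4, 0.5, 0.6, 0.7):
    print(f"p(yellow) = {p:.1f}  ->  P(sample) = {likelihood_of_sample(p):.2f}")
```

The likelihoods range only from 0.42 to 0.50; no hypothesis stands out as the best explanation of so small a sample.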
    What is wrong with ‘biased sample’ is that when a sample is discovered to be biased, that affords precisely a better explanation of the distribution than is the hypothesis that the general population is distributed in that way.  If only registered Republicans are consulted in a poll designed to test public approval of George W. Bush’s performance in his first year as U.S. President, the best explanation of the resulting positive rating is that Republicans tend to support Bush, who was their own candidate, not that a majority of Americans do.
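The arithmetic behind the biased-sample diagnosis can be sketched as follows (the approval rates and party shares here are invented for illustration, not real poll data): a poll confined to Republicans reproduces the Republican approval rate, not the overall one, so the sample's composition is the better explanation of the high figure.

```python
# Invented figures for illustration: Republicans approve at 85%,
# everyone else at 30%, and Republicans are 35% of the population.
republican_share = 0.35
approval = {"republican": 0.85, "other": 0.30}

# Overall approval is the population-weighted average of the two rates.
overall = (republican_share * approval["republican"]
           + (1 - republican_share) * approval["other"])

# A poll that samples only Republicans simply reports their rate.
poll_of_republicans = approval["republican"]

print(f"overall approval:   {overall:.2f}")              # 0.49
print(f"Republicans-only poll: {poll_of_republicans:.2f}")  # 0.85
```

The 0.85 result is far better explained by who was asked than by the hypothesis that most Americans approve.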
    One can carry the argument a good deal farther than mere enumerative induction.  In an extended case study (of Semmelweis’ search for the causes of childbed fever), Lipton (1991, Ch. 5) shows that on the whole, Mill’s Methods can be neatly integrated into and explained by a careful stepwise use of contrastive explanatory inference.  Where explanatory inference diverges from the Methods, the Methods are seen to be lacking (Ch. 7).
    Unfortunately for the Ferocious Explanationist, there are other forms of inductive and statistical inference that are not so easily represented as enthymematic explanatory inferences.  For example, Harman (1968) has tackled statistical syllogism:

                   N% of all Fs are G.
                  ∴ The next observed F will be G.

But, as is argued by Lycan (1988, pp. 184-86), his treatment is not satisfactory, and neither is the one that Lycan there proceeds to substitute for it.  Here is perhaps the worst problem case (due to Joseph Tolliver):  Consider a random-distribution process operating over a closed surface; say a nucleus of some kind explodes and scatters lots of particles randomly onto the inner surface of a containing hollow sphere.  The particles end up distributed fairly evenly over that surface.  Now, take a large subregion R of the inner surface; say, R is the 90% that is left when we have mentally subtracted a small wedge from the sphere.  And take any one of the particles, p.  By hypothesis, the vast majority of the particles landed in R, so, probably to degree .9, p did (so long as p is not known to be atypical).  That inference is statistically reasonable, but does not seem to be explanatory in any way at all.  In particular, by hypothesis, the process is random and nothing about the particles themselves makes it true that most of them land in R.
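Tolliver's sphere case can be checked by simulation.  The sketch below (an illustrative construction, not from the text) exploits the fact that for points scattered uniformly over a sphere's surface, longitude is uniformly distributed, so a longitudinal wedge of width 10% covers exactly 10% of the area; about 90% of the particles land in R, though nothing about any individual particle explains why it did.

```python
# Simulate particles scattered uniformly over a sphere's inner surface;
# R is the 90% of the surface left after removing a longitudinal wedge.
import random

random.seed(0)
N = 100_000
WEDGE = 0.10  # the removed wedge covers 10% of the surface area

# Represent longitude as a uniform draw on [0, 1); a particle lands in
# R exactly when its longitude falls outside the wedge.
in_R = sum(1 for _ in range(N) if random.random() >= WEDGE)

print(in_R / N)  # close to 0.9
```

The statistical inference that any given particle p probably (to degree .9) landed in R is borne out, yet the process is random by construction: there is no explanatory fact about p to be cited.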
    So it seems there is at least one central and prevalent form of ampliative inference that resists the Ferocious Explanationist’s efforts.
    But here is a new argument for the Ferocious view, aimed at those opponents such as van Fraassen who reject explanatory inference, at least taken as primitive, but who rely confidently on some more traditional forms of inductive and statistical inference:  Inductive and statistical inference rely on evidence that is in the subject’s possession.  The evidence concerns what has been observed.  What has been observed now lies in the past, however recent.  The evidence is believed on the basis of memory.  But, far from accepting all or even most of the remembered evidence, why should we believe in the reality of any past at all?  Why should we not instead accept a Russellian eleventh-hour-creation hypothesis, that the world, whatever it may include besides our present sensations and memory impressions, sprang fully formed into existence half a second ago, though to be sure complete with all the memory impressions and perceptions of apparent traces and records?
    One cannot resolve the conflict between the hypothesis that our memories are veridical and the Russellian hypothesis by appeal to inductive or statistical argument, for it is the data premises of any such argument that are neutralized by the Russellian hypothesis.  Why, then, is it reasonable for us to believe in the reality of the past at all, much less in its statistical details?  It is hard to think of any answer that does not invoke explanatory considerations taken as primitive.  The obvious answer is that the veridicality hypothesis heavily outweighs the Russellian hypothesis at least in simplicity, neatness, and conservativeness, though certainly the details here would be hard to settle.

9 Objections to Ferocious Explanationism, and replies
    Keith Lehrer (1974, Ch. 7) has offered several objections to the Ferocious view.
    First, he says (p. 178), there are ‘completely’ justified empirical beliefs (in addition to the conclusions of statistical syllogisms, noted in the previous section) whose justification depends in no way on explanatory considerations.  For example, we can use the Pythagorean Theorem to infer from the respective spatial locations of a mouse and an owl that the mouse is five feet from the owl.  (Reply:  The Pythagorean Theorem itself is justified by the explanatory roles of the geometric principles from which it is derived.  Rejoinder (p. 179):  It need not have been.  A tribe constitutionally averse to explanation might have worked out the Theorem as an empirical generalization.)
    Second (pp. 170-71; cf. Cornman (1980)):  Achieving explanatory coherence is cheap. One can greatly increase the explanatory coherence of one’s overall belief system by simply throwing out data; that is, whenever a lower-level belief resists explanation or causes trouble in conjunction with another belief, we can preserve explanatory order by just ceasing to hold that belief, rather than by adding epicycles to the system in an attempt to accommodate it.  So the splendid explanatory virtue of a belief system is in itself no reason to accept that system.  (Reply:  Lehrer seems to assume either an unrealistic degree of doxastic voluntarism, or a strange absence of norms concerning what doxastic items must be respected, or both.  We constantly find ourselves with spontaneous beliefs, most but not all produced by our sense organs, which we cannot simply choose to abandon and would not be justified in trying to abandon.)
    Third (p. 181):  For any hypothesis or theory held on explanatory grounds, ‘there are always conflicting theories concerning some aspect of experience that are equally satisfactory from the standpoint of explanation,’ so a rational decision between the conflicting theories would necessarily have to appeal to some entirely nonexplanatory desideratum.  The obvious reply to this is to remind Lehrer that we are to infer the best available explanation; we are not responsible for Plato’s Heaven.  But presumably what he has in mind is that in principle there might be an algorithm that could be applied to a given theory and would reliably generate a competing theory with at least as much explanatory merit as the original.  (Reply:  Why should Lehrer or anyone else be confident that there is such an algorithm?  But even if there is, conservativeness can decide in favor of the original and against the mockup.  The original was already believed when the algorithm spit out its artificial competitor.  Several strong rejoinders can be made here, even if we set aside Lehrer’s own rejection (p. 184) of conservativeness as an epistemic value; for further discussion, see Lycan (1988, pp. 174-77).)
    There is a further and more fundamental issue, one that seems to demand further substantive work from the Ferocious Explanationist.

10 The unexplainers
    According to any Explanationist, beliefs are justified by their ability to explain other beliefs.  But explanation is asymmetric.  If there is not to be an infinite regress or a vicious circle of explainings, there must be some beliefs that get explained but do not themselves explain anything else—ultimate data, if you like.  Nor is this conclusion merely theoretical, for we can think of plenty of examples.  To take one of George Pappas’, I believe that my visual field contains little moving spots.  Or imagine that you hear a loud report; your belief that you heard a report does not seem to explain anything; you just did seem to hear one.  These beliefs are what Sellars (1973) and Lehrer (1974, p. 162) call ‘explained unexplainers.’  But if the unexplainers are not themselves justified, they cannot in turn justify the hypotheses that explain them, and the whole belief system is left without foundation.  Yet if they are justified, they are so in some way other than by what they explain; so the Ferocious Explanationist must cave in and admit that there is nonexplanatory justification of some sort.
    Actually that is not too large a capitulation for the Ferocious theorist, if it is one at all.  For the Ferocious view is stated specifically in terms of ampliative inference.  So it is open to the Ferocious theorist to hold that the unexplainers are all noninferential beliefs, to which the Ferocious view simply does not apply.  We might give any plausible independent account of the unexplainers’ justification.  We might join Chisholm (1977) in appealing to ‘self-presenting’ mental states, or we might go all the way against Sellars and Quine and hold out for an unrevisable sensory given.  Or we might turn Reliabilist about noninferential beliefs; Reliabilism has always seemed to work best for noninferential beliefs in any case.
    Still, it is at least embarrassing for the Ferocious theorist to bifurcate her/his overall epistemology in such a way.  For example, if s/he were to go Reliabilist about the explanatory foundation, thoroughgoing Reliabilists such as Goldman (1986) would naturally ask, why bother with the Explanationist superstructure?  Why not be a Reliabilist all the way up?
    The more ambitious Explanationist has at least three less bifurcatory possible moves here.  One is to argue that there are not really any total unexplainers and that not all explanatory circularity is vicious; perhaps there is a virtuous circle of explanation, with lots of little individual near-unexplainers teaming up collectively to explain some apparently higher-level belief.  Another is to maintain that the unexplainers are justified by being explained, and resist the charge of vicious circularity that would naturally attend that move.  A third would be to appeal to one or more of the pragmatic virtues, applied at the level of the unexplainers.
    Lycan (1988, Ch. 8) pursues the third strategy, invoking conservativeness.  Recall from the previous section that we have ‘spontaneous’ beliefs, beliefs that merely arise in us and (normally) cannot just be abandoned.  Noninferential beliefs will, at least normally, be spontaneous in that sense.  Now, a spontaneous belief is a belief, and the canon of conservativeness sketched in section 3 above entails (what admittedly may seem outrageous) that the bare fact of our holding a belief renders that belief justified at least to a tiny degree.  The belief may accrete further justification by being explained, indeed by being swept up into a large network of strongly cohering beliefs.  Therefore, at least some noninferential beliefs will be justified, but justified solely by pragmatic virtues even though they themselves explain nothing.
    Incidentally, Explanationism is often referred to as ‘explanatory coherentism’ and regarded as a species of coherentist epistemology, though this essay has not emphasized that theme.  On it, see Harman (1986), Thagard (1989), and Lycan (1996).

11 Conclusion
    Despite the commonsensical status of Weak Explanationism, there is nothing uncontroversial about Explanationism in any form.  The arguments pro and con presented here only begin to illuminate the issues.


Bromberger, S.  1966.  ‘Why-Questions,’ in R. Colodny (ed.), Mind and Cosmos.  Pittsburgh: University of Pittsburgh Press.
Cartwright, N.  1983.  How the Laws of Physics Lie.  Oxford: Oxford University Press.
Chisholm, R.  1977.  Theory of Knowledge, Second Edition.  Englewood Cliffs, NJ: Prentice-Hall.
Christensen, D.  1994.  ‘Conservatism in Epistemology’, Noûs 28: 69-89.
Cornman, J.  1980.  Skepticism, Justification, and Explanation.  Dordrecht: D. Reidel.
Cummins, R.  1983.  The Nature of Psychological Explanation.  Cambridge, MA:  Bradford Books / MIT Press.
Davies, P.  2001.  Norms of Nature: Naturalism and the Nature of Functions.  Cambridge, MA:  Bradford Books / MIT Press.
Dray, W.  1963.  ‘The Historical Explanation of Action Reconsidered,’ in S. Hook (ed.), Philosophy and History.  New York: New York University Press.
Dennett, D.C.  1987.  The Intentional Stance.  Cambridge, MA:  Bradford Books / MIT Press.
Foster, M.H., and M.L. Martin (eds.).  1966.  Probability, Confirmation, and Simplicity.  New York: Odyssey Press.
Garfinkel, A.  1981.  Forms of Explanation.  New Haven, CT: Yale University Press.
Glymour, C.  1980.  Theory and Evidence.  Princeton: Princeton University Press.
Goldman, A.  1986.  Epistemology and Cognition.  Cambridge, MA: Harvard University Press.
Goodman, N.  1955.  ‘The New Riddle of Induction,’ in Fact, Fiction, and Forecast. Cambridge, MA: Harvard University Press.
Hacking, I.  1982.  ‘Experimentation and Scientific Realism’, Philosophical Topics 13: 71-88.
Harman, G.  1965.  ‘The Inference to the Best Explanation,’ Philosophical Review 74: 88-95.
Harman, G.  1968.  ‘Enumerative Induction as Inference to the Best Explanation,’ Journal of Philosophy 65: 529-33.
Harman, G.  1986.  Change in View.  Cambridge, MA: Bradford Books / MIT Press.
Harman, G.  1997.  ‘Pragmatism and Reasons for Belief,’ in C.B. Kulp (ed.), Realism/Antirealism and Epistemology.  Lanham, MD: Rowman & Littlefield.  A revised version is included in G. Harman, Reasoning, Meaning, and Mind (Oxford: Clarendon Press, 1999).
Hempel, C.G., and P. Oppenheim.  1948.  ‘Studies in the Logic of Explanation.’  Philosophy of Science 15: 135-75.
Kim, J.  1967.  ‘Explanation in Science,’ in P. Edwards (ed.), The Encyclopedia of Philosophy, Vol. 3.  New York: Macmillan.
Kitcher, P. and W. Salmon.  1987.  ‘Van Fraassen on Explanation,’ Journal of Philosophy 84: 315-30.
Lehrer, K.  1974.  Knowledge.  Oxford: Oxford University Press.
Lipton, P.  1991.  Inference to the Best Explanation.  London and New York: Routledge.
Lycan, W.G.  1988.  Judgement and Justification. Cambridge: Cambridge University Press.
Lycan, W.G.  1996.  ‘Plantinga and Coherentisms,’ in J. Kvanvig (ed.), Warrant and Contemporary Epistemology.  Totowa, NJ: Rowman and Littlefield.
Lycan, W.G.  1998.  ‘Theoretical/Epistemic Virtues,’ in E. Craig (ed.), Routledge Encyclopedia of Philosophy.  London: Routledge.
Quine, W.V.  1960.  Word and Object.  Cambridge, MA: MIT Press.
Quine, W.V., and J. S. Ullian.  1978.  The Web of Belief. Second Edition.  New York: Random House.
Railton, P.  1978.  ‘A Deductive-Nomological Model of Probabilistic Explanation,’ Philosophy of Science 45: 206-26.
Rawls, J.  1971.  A Theory of Justice. Cambridge, MA: Harvard University Press.
Russell, B.  1959.  The Problems of Philosophy.  New York: Oxford University Press.
Salmon, W.  1989.  ‘Four Decades of Scientific Explanation,’ in W. Salmon and P. Kitcher (eds.), Minnesota Studies in the Philosophy of Science, Vol. XIII: Scientific Explanation.  Minneapolis: University of Minnesota Press.
Scriven, M.  1962.  ‘Explanations, Predictions, and Laws,’ in H. Feigl and G. Maxwell (eds.), Minnesota Studies in the Philosophy of Science, Vol. III: Scientific Explanation, Space, and Time.  Minneapolis: University of Minnesota Press.
Sellars, W.  1963.  ‘Some Reflections on Language Games,’ in Science, Perception and Reality.  London: Routledge and Kegan Paul.
Sellars, W.  1973.  ‘Givenness and Explanatory Coherence,’ Journal of Philosophy 70: 612-24.
Simon, H.  1969.  ‘The Architecture of Complexity,’ in The Sciences of the Artificial.  Cambridge, MA: MIT Press.
Sklar, L.  1975.  ‘Methodological Conservatism,’ Philosophical Review 84: 374-400.
Sober, E.  1975.  Simplicity.  Oxford: Oxford University Press.
Stich, S.P.  1985.  ‘Could Man Be an Irrational Animal? Some Notes on the Epistemology of Rationality,’ Synthese 64: 115-35.
Stich, S.P.  1990.  The Fragmentation of Reason.  Cambridge, MA:  Bradford Books / MIT Press.
Thagard, P.  1978.  ‘The Best Explanation: Criteria for Theory Choice,’ Journal of Philosophy 75: 76-92.
Thagard, P.  1989.  ‘Explanatory Coherence,’ Behavioral and Brain Sciences 12: 435-467.
Van Fraassen, B.  1980.  The Scientific Image.  Oxford: Oxford University Press.
Van Fraassen, B.  1989.  Laws and Symmetry.  Oxford: Oxford University Press.
Wimsatt, W.  1976.  ‘Reductionism, Levels of Organization, and the Mind-Body Problem,’ in G. Globus, G. Maxwell and I. Savodnik (eds.), Consciousness and the Brain.  New York: Plenum.