Erkenntnis
An International Journal of Scientific Philosophy
ISSN 0165-0106
DOI 10.1007/s10670-013-9575-8
Abstract The goal of this paper is to present and defend an empiricist, neo-Hempelian account of scientific explanation as ampliative, specialized embedding.
The proposal aims to preserve what I take to be the core of Hempel's empiricist
account, by weakening it in some respects and strengthening it in others, introducing
two new conditions that solve most of Hempel's problems without abandoning his
empiricist strictures. According to this proposal, to explain a phenomenon is to
make it expectable by introducing new conceptual/ontological machinery and using
special, non-ad hoc, non-accidental regularities. The new conditions are elaborated making essential use of two central structuralist ideas, namely T-theoreticity
and specialization. I first introduce and qualify the project, then present the new
account in detail and assess it vis-à-vis its rivals, and finally discuss some possible
objections, concluding that the account fares better than its monistic rivals and well
enough to qualify as a promising neo-Hempelian account. Even for those unpersuaded by its monistic goals, it has the merit of calling attention to two new
necessary conditions not explicitly emphasized thus far and showing how they serve
to answer many of the criticisms leveled against Hempel's account.
1 A Neo-Hempelian Program
Hempel's account has a paradoxical status. On the one hand, it is still the main account
implicitly presupposed in most scientific self-descriptions and in many philosophical
fields other than philosophy of science, such as philosophy of mind, philosophy of
language, and general epistemology. In philosophy of science, on the other hand, it is
almost universally considered a failure that is not worth rescuing, not even partially.
The two main post-Hempelian alternatives, causalism and unificationism (Woodward
2009; I leave aside statistical relevance models for reasons given below), try to solve
Hempel's problems, yet when they succeed it is at the price of either facing new,
serious problems, or withdrawing from monism in favor of pluralism. The account
presented here attempts to rescue Hempel from this limbo and vindicate what I take to
be his core idea, namely: to explain a phenomenon is to make it expectable on the basis
of non-accidental regularities. I find this idea intuitively compelling, although when
elaborated in the specific way Hempel does it is challenged by well-known
counterexamples. The proposal defended here aims to preserve Hempel's core idea
and fix the (fixable) counterexamples at the lowest metaphysical cost, so that it does
not have to face the new problems that the rivals must contend with, being thus able to
offer a (minimalist) monistic account. Although in our pluralistic times most people
will look at the project with a skeptical eye, if not simply consider it completely
misguided, I claim, and defend, that it is at least worth exploring.

J. Díez
LOGOS Research Group, University of Barcelona, Barcelona, Spain
e-mail: diez.ja@gmail.com
In order to fix the counterexamples to Hempel's nomological inferentialist model,
this proposal modifies it by both weakening and strengthening it. Regarding
Hempel's non-necessity problems, I weaken his model by allowing expectability
with low probability, thus taking the explanans to make the explanandum expectable
in the sense of increasing its likelihood relative to background knowledge. This is
accomplished by replacing the inferential condition strictly understood (i.e.
explanations are deductively/inductively valid inferences) with a weaker model-theoretic
embedding condition. The non-sufficiency problems are fixed by strengthening
Hempel's model, adding two additional conditions to Hempel's requirement
that expectability must be based on non-accidental regularities, namely: the
nomological embedding must be ampliative, and it must make use of non-ad hoc,
special laws. These two new conditions directly connect the proposal with the
structuralist program, both instrumentally and substantially. On the one hand, the
conditions added are elaborated by making essential use of two key structuralist
ideas: the difference between T-theoretical and T-non-theoretical concepts, and the
difference between general guiding principles and special laws. On the other hand, the proposal is
structuralist in another, more substantial sense, for I take it that the two conditions
mentioned are implicit in model-theoretic approaches in general, and in structuralist
reconstructions of paradigmatic explanatory theories in particular. In this regard, as
is typically the case in philosophical explications, my analysis simply makes the
aforementioned conditions explicit, drawing on the reconstructions offered by the structuralist
literature, then discusses them in the context of the general debate on scientific
explanation and applies them to solving Hempel's problems and arguing for a
metaphysically sober, neo-Hempelian stance.
This is not the first time that the topic of explanation has been approached from a
structuralist perspective. Bartelborth (1996, 2002) and Forge (2002) are the main
structuralist works devoted to the topic. Although some aspects of the proposal
defended here share some family resemblance with these previous works, the goal,
strategy and core of my proposal are different. Forge holds that the components of
the embedding must be realistically interpreted for it to be explanatory. Nothing in
my account presupposes this or commits it to this. I take it that the analysis of
explanation is, and must be, orthogonal to the problem of scientific realism.
Bartelborth focuses mainly on the net-like structure of theories and its relevance for
their explanatory power. Although my second condition that refers to special laws is
related to this net-like structure, I emphasize the ampliative condition, which is
essential in my account but plays no explicit role in his. On the other hand,
Bartelborth emphasizes other aspects with a certain unificationist flavor, such as
inter-modelic constraints and branch-connection dependences, which, although
quite common in highly developed theories, I do not take as conceptually necessary
for (minimal) scientific explanations.
A project of this kind may appear old-fashioned in our pluralistic times, if not
altogether unpromising.1 To mitigate this impression, some clarifications are in
order:
1 Cf. e.g.: "The prospect of any one of these [monistic] models being developed to cover all good
scientific explanations (let alone good explanations in general) are dim. Perhaps we should opt for
pluralism. Perhaps there are several types of explanation, each with its appropriate model" (Newton-Smith 2000, 132); cf. also Cartwright (1991), Thalos (2002), Godfrey-Smith (2003) ch. 13, Reiss (2009),
and even, on some occasions, Salmon, e.g. Salmon (2001). Douglas (2009) also defends a moderate,
monistic account, but in terms of novelty; for reasons that will become apparent, I do not think that
leading to novel predictions is conceptually necessary for explanatoriness.
This is part of the wider problem of theory choice, other aspects of which I will not consider here
either; cf., e.g., Thagard (1978), Kuhn (1983), Hempel (1983), Salmon (1990) and, more recently, the
Bayesian literature.
contrast my present one have the same goal, I simply use w-explanation
instead of possible w-explanation.
While I acknowledge that the old times of strictly understood necessary and
sufficient conditions are gone, my goal is to keep as close to that ideal as possible;
that is, to provide necessary conditions which are as strong as possible, and strong
enough to block almost all cases that the concept should intuitively not apply to.
It is in this regard that I take this proposal to pursue monistic aims. It is thus my
claim that the conditions offered here: (i) solve the majority of the counterexamples to other accounts, and (ii) exhaust conceptual analysis at the general level,
i.e. there are no other general conceptual constraints. Regarding the (few)
remaining non-intended cases, I advocate a limited, harmless pragmatic
treatment. It is limited and harmless because it is not going to be the case that
any successful prediction can eventually be taken as explanatory depending on
the pragmatic context. Contrary to other stronger pragmatic accounts (e.g. van
Fraassen 1977, 1980), the conditions advocated here definitely preclude some (in
fact many) predictions from qualifying as explanatory in any context.
Finally, the proposal aims to be of some interest even to those pluralists, or
skeptics, who think that the foregoing considerations do not suffice to warrant an
interesting monism. It is of interest even for them in that it points out and
articulates two new criteria that apply to a sufficiently wide range of varied and
interesting cases of "explanations" (in the pluralist's jargon), or "specially good
predictions" (in the skeptic's jargon), and shows how they serve to solve many
of the problems that Hempel could not solve.
I will also leave out Scriven (1962) and other singularist criticisms, for they do not provide new
counterexamples but, rather, a different interpretation of the same, ink-table kind of examples. A general
defense against singularism goes beyond the limits of this paper, but in any event the proposal defended
here, appropriately reformulated, aims to be of interest even for singularists.
This will be useful to our dialectic, but it does not mean that causalists in general see themselves as
modifying/improving Hempel's project. Most do not. As for unificationism, although Kitcher explicitly
considers that explanatory stores are constituted by arguments, other versions of unificationism may not
preserve this inferential component strictly understood (e.g. Forge 2002; more on this below).
Although one can use unification to raise some realist concerns (cf. Duhem 1893; Psillos 1999; and
even Friedman 1974).
These simplified descriptions do not take idealization into account, i.e., that theoretical models defined
by the laws never apply exactly to the phenomena (Cartwright 1983); for a structuralist treatment of
idealization, cf. Balzer et al. (1987) ch. VII.
Although T-data may very well be theory-laden, they are not T-laden; i.e., they may presuppose another
theory T′, and thus be T′-theoretical, but they do not presuppose the very same theory in which they play
the role of data (classical kinematics presupposes many principles in the measurement of space and
time (independence of each other, linearity of light, length being independent of speed, and others) but
does not presuppose that dynamical laws are correct). In this sense data models are the explananda (and
also the data for testing/confirming the theory), and thus explananda do not involve T-theoretical, i.e.
T-dependent, values. There is a broader sense of "data model", emphasized by Gähde (see this volume),
according to which DM may include theoretical values, which means that sometimes we calculate
T-theoretical values from T-non-theoretical data (and other T-theoretical ones). In this broader sense, our
explananda are always data models but not the other way around.
9 Insofar as the explananda are real positions of planets. If they were apparent retrograde trajectories,
then predictions using (either geocentric or heliocentric) real positions could be explanatory.
net-like structure with different branches accounting for different systems/phenomena. This Kuhnian idea has been elaborated in detail by structuralism with the notion of
theory-net: a set of nomological constraints or laws hierarchically connected by the
specialization relation, i.e. "inferior" laws are specifications of laws above them in
that branch (for a fully precise characterization, see Balzer et al. 1987, ch. IV). For
instance, in CM the theory-net has, schematically, the following kind of structure.
The general principle at the top tells us that changes in movement are due to the
presence of forces, and it specializes downwards, opening different branches for
different phenomena, specifying in each case the type of forces at work: forces that
depend on position vs forces that depend on velocity; forces that depend directly or
indirectly on position; etc., until we arrive at specific force-laws for specific
phenomena/explananda, e.g. the pendulum law, the law of gravitation, the law for
friction on inclined planes, etc. A similar structure has been identified in other
highly unified theories such as thermodynamics, electrodynamics, classical genetics,
natural selection, enzymatic biochemistry, Chomskyan linguistics, exchange
economics and others (cf. Balzer and Moulines 1998 for some examples).
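To fix ideas, the hierarchical organization just described can be sketched as a small tree structure. This is a deliberately toy rendering of a CM theory-net fragment, with my own (hypothetical) branch labels and law placements, not the precise reconstruction of Balzer et al. (1987):

```python
# Toy sketch of a theory-net for CM: keys are schematic laws, nested dicts
# are their specializations. Branch labels and the assignment of laws to
# branches are illustrative only, not the structuralist reconstruction.
theory_net = {
    "Σf = ma (guiding principle)": {
        "forces depending on position": {
            "law of gravitation": {},
            "pendulum law": {},
        },
        "forces depending on velocity": {
            "friction law": {},
        },
    },
}

def special_laws(net):
    """Return the leaves of the net: the special laws that carry the
    specific empirical content of the theory."""
    leaves = []
    for law, specializations in net.items():
        if specializations:
            leaves.extend(special_laws(specializations))
        else:
            leaves.append(law)
    return leaves

print(special_laws(theory_net))
```

Condition (S), introduced below, can then be read as the demand that an explanatory embedding use one of these leaves, not merely the schematic principle at the root.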
What matters for our present concerns is that in all these cases the top, schematic
general principle (such as Newton's Second Law, Σf = ma) tells us only the kind of
entities and the scheme of nomological connections that account for the phenomena,
without specifying any particular nomological connection. Specific empirical
restriction enters through laws that appear lower down and specialize the general
principle. In this regard, top principles (whose "detailed symbolic expression varies
from one application to the next", in Kuhn's words) can be qualified as guiding
principles (Moulines 1984), which, taken in isolation, are empirically non-restrictive (ibid.). They cannot be tested alone: they must always be tested through
one of the net-branches ending in a specific law. Specific bottom laws are where the
specific empirical constraint/information lies. This means that it does not make
sense to use a general guiding principle without any of its specific forms. If we did
so, it would be easy to artificially fit any phenomenon using ad hoc specifications of
the principle. For instance, if we were allowed to apply Σf = ma without any
specific systematic constraint on the kind of functions f that we can make use of, no
matter how crazy these functions were, then with just some mathematical skill we
could embed any trajectory. Take the weird trajectory of the pen in my hand as it
moves at my will: if we were mathematically smart enough we could always come
up with some functions f1, f2, which embed the trajectory. Thus, even when the
theory introduces T-theoretical machinery, if we use it for embedding data in this
artificial manner, just by filling in a schematic guiding principle in an ad hoc way,
then the resultant embedding, even if ampliative, cannot qualify as
empirically explanatory. So, we must impose the following additional condition for
a nomological embedding to be explanatory:
(S) For a T-embedding of data model DM into a theoretical model TM to be
explanatory, TM must make use of some special law.
The illustration given above of the violation of this condition in CM is fictional. Are
there real, non-fictional cases? It is going to be difficult to find real-science
examples, for scientists in general do things correctly (it would be easier to find
examples in para-sciences, where we often find a kind of general principle with very
few, if any, varied and systematic special constraints and thereby always ad hoc
usages). The only case close to this would be Ptolemy's astronomy, or in fact its
usual, historically inaccurate, simplified version, according to which the only content
of the theory is that for every apparent planetary trajectory there is a combination of
epicycles and a deferent that embeds it.10 As Hanson (1960) points out, such a
combination is equivalent to a Fourier series, which can embed any trajectory
provided only that it is continuous, bounded and periodic. To the extent that
Ptolemy is interpreted in this only-general-guiding-principle way, the violation of
the above condition for empirical w-explanation would be the origin of the charge
of being "mere mathematics" that has sometimes been leveled at it.
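Hanson's point can be made vivid with a small numerical sketch (my own illustration, not from the paper): fitting a truncated Fourier series, the mathematical counterpart of a deferent plus epicycles, to an essentially arbitrary continuous, bounded, periodic trajectory by least squares. The fit succeeds no matter how "weird" the trajectory is, which is exactly why such an embedding, lacking special laws, carries no specific empirical content.

```python
import numpy as np

# Hypothetical illustration: a truncated Fourier series (the analogue of a
# deferent plus epicycles) can embed virtually any continuous, bounded,
# periodic trajectory.

# An arbitrary "weird" periodic trajectory, sampled over one period.
t = np.linspace(0.0, 2 * np.pi, 400, endpoint=False)
trajectory = np.sin(3 * t) + 0.5 * np.cos(7 * t + 1.2) + 0.3 * np.sin(t) ** 2

# Design matrix with the first N harmonics; fit coefficients by least squares.
N = 10
columns = [np.ones_like(t)]
for k in range(1, N + 1):
    columns.append(np.cos(k * t))
    columns.append(np.sin(k * t))
A = np.column_stack(columns)
coeffs, *_ = np.linalg.lstsq(A, trajectory, rcond=None)
fit = A @ coeffs

# With enough harmonics the residual is negligible: the embedding succeeds
# for any such trajectory, so by itself it imposes no empirical restriction.
max_error = np.max(np.abs(fit - trajectory))
print(max_error < 1e-8)
```

Since any trajectory meeting Hanson's three conditions can be fitted this way, the success of the fit tells us nothing specific about the phenomenon; that is the sense in which a guiding principle without special laws allows merely ad hoc embeddings.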
This completes the core of the presentation, and the intuitive justification based
on scientific practice, of the two additional conditions that this proposal places on X.
The common conceptual core of scientific w-explanations as ampliative specialized
embedding (ASE) is thus the following:
(ASE) TM explains DM iff: (i) DM is nomologically embeddable in TM, (ii)
TM includes T-theoretical concepts, and (iii) TM makes use of special laws.
The proposal is minimalist in that it does not look for substantive properties
(causation, robust unification, etc.); rather, it looks at paradigmatic cases of
explanatory and non-explanatory theories and identifies what the difference
consists in.
Note that the analysis says that the T-explanans, i.e. TM, must include
T-theoretical concepts, thus being ampliative, but it does not say that the new
concepts that the explanans includes are only T-theoretical. The T-explanandum
never includes T-theoretical concepts, but it might be the case that it involves only
part of the T-non-theoretical concepts while the explanans includes the rest of the
T-non-theoretical ones together with the T-theoretical ones. Although unusual, this may happen in
10 In fact, Ptolemy's astronomy had several quite specific and systematic specializations; cf. Almagest
(Toomer 1998) IX, 3 and 6, and X, 5.
I thank S. Ginnobili for pointing out this possibility, and the example, to me.
12 For instance, we explain the change in moths' wing color from white to dark in the Sheffield industrial
area during the 1960s by appealing to the fact that the moths' habitat, tree trunks in the Sheffield area,
became dark during the 1960s because of pollution, and thereby dark-wing variations better performed
the function of escaping predators, which is an adaptive function, i.e. beneficial for differential
reproduction. That the function/behavior is adaptive, i.e. that it is relevant for fitness, is NS-theoretical,
but this does not mean that the very identification of the function presupposes the NS concept of
adaptation. In this case, then, the explanandum involves only part of the NS-non-theoretical machinery
and the explanans includes NS-theoretical concepts together with some NS-non-theoretical ones. The
unusual thing is not that the explanans includes T-non-theoretical concepts; this happens for instance in
CM when one explains spatial displacement using masses, forces and a time interval; in other
explanations time is in the explanandum and space in the explanans. What is unusual is that in NS some
NS-non-theoretical concepts, such as escaping predators, are never involved in an explanandum. NS
does not (aim to) explain that some species escape predators, or get adequate food supply, or that sexual
rivals compete, etc.; the theory uses these functions/behaviors only for explaining, together with NS-theoretical concepts, the evolution of traits. In this case explanations seem to be inter-theoretical, with more
than one theory contributing to the explanatory machinery. And this might be the case in other theories as
well. Unusual as they may be, the analysis cannot exclude these cases, so ASE makes room for them.
13 One might argue that ASE is not a rival of unificationism but a version of it. Though I acknowledge
that the net-like structure gives a certain unificationist flavor, ASE strongly departs from standard
unificationism in not demanding simplicity as a conceptual requirement. On the other hand, Kitcher's
inference patterns and filling instructions might seem similar to the net-like hierarchy and specialization
relation, but they are not: if I understand them well, argument patterns and filling instructions would apply
even to our bottom laws in the theory-net, so Kitcher's filling instructions do not correspond to
structuralist specializations.
particular and general indeterministic facts depends on whether the causalist accepts
probabilistic causation. For those who do accept probabilistic causation (Suppes,
Popper, and the majority nowadays), it does. For those who find the ideas of
causation and indeterminism at odds with each other, causalism does not explain
indeterministic facts, so they must either abandon causalism as a monistic account
or accept that genuine indeterministic facts are not explainable. Thus causalism does
well with D1 only if probabilistic causation is accepted.
Unificationism does as well regarding D1 as classical, Hempelian inferentialism
does, for it (at least in its most elaborated version, Kitcher's) preserves the
inferential component that allowed its predecessor to specify what is common and
what is specific to different types of explanations. Although some unificationists
consider only inferences of general facts (Friedman 1974; Kitcher 1993), or of
deterministic facts (Kitcher 1993), these restrictions are based on considerations that
are independent of inferentialism, which in principle has no intrinsic difficulty in
dealing with the four sub-types (modulo the acceptance of inductive inferences).
ASE does at least as well with respect to D1 as unificationism does. To the extent
that data models are generally used to represent certain empirical regularities, ASE
might be seen as fitting the explanation of general facts better. Nevertheless, nothing
precludes the data model from being of a particular fact, e.g., the Moon's trajectory,
or an eclipse; the only complexity is that in such a case DM will also include some
initial conditions. As for the difference between deterministic and indeterministic
explananda (either particular or general), ASE can accommodate both. The only
difference would be that in the indeterministic case TM will be defined making
essential use of probabilistic/indeterministic laws (e.g., classical genetics or QM; of
course modulo accepting probabilistic nomological regularities). It can even
accommodate explanations with low probability. The most straightforward way to
do so is by embedding probabilistic general facts (e.g., the probability of an electron
crossing a potential barrier is 0.1 and of its rebounding is 0.9; or other cases such as
radioactive decay) in (probabilistic) theoretical models; but particular probabilistic
facts (e.g. a particular electron having a 0.1 probability of crossing a barrier) can be
ASE-embedded as well (more on whether very low-probability embeddings are
explanatory below).
3.2 Extensional Adequacy
Starting with causalism, it does well with the counterexamples to Hempel. The
prediction of the storm from the barometer drop is not explanatory for the barometer
is not part of the causal history of the storm, while the prediction from the fall in
atmospheric pressure is explanatory since the fall in atmospheric pressure is part of
the causal history of the storm. Analogously the other counterexamples are
overcome. Yet causalism has in turn been charged with having its own
counterexamples, in general to its necessity part, i.e. there seem to be cases of
non-causal explanations (cf., e.g., Achinstein 1983; Ruben 1990; Thalos 2002).
What I take to be the seven most troublesome cases are listed here, along with the
causalist defense. (1) Functional and teleological explanations. The usual response
to these cases is to provide causal analyses à la Wright (1972, 1973). (2) Intentional
etc.), but they also may not (e.g., relativistic curved spacetime). Thus, ASE makes
room for causal explanations but is not conceptually restricted to them. Now, since
ASE does not imply that all explanations are conceptually causal, how does it face
traditional sufficiency counterexamples to Hempel's inferentialism?
seem intuitively excludable. Since ASE exhausts all conceptual constraints, the
difference is accounted for in pragmatic terms (recall that ASE is compatible with
certain contexts imposing additional, e.g. causal, conditions). Nevertheless,
regarding these cases: their number is relatively small; the pragmatic component
is not unrestricted, thus not really harmful (more on this below); classical
inferentialism and unificationism don't do better with these cases in general;
causalism does better with some of these cases, but worse with others (since it
excludes all symmetries), and suffers non-necessity problems. All things considered, I take it that ASE does better with the extensional-adequacy desideratum than
any of its rivals and (this is essential to rebut pluralist complaints) sufficiently
well not to be dismissed due to extensional considerations.
3.3 Elucidatory Power
As is well known, causalism is accused by philosophers with Humean concerns of
violating this desideratum, that is, of explicating obscurum per obscurius. Those who
take causation as an unacceptable metaphysically obscure notion reject its use in
analyzing explanation. Friends of causalism may either defend that the notion of
causation is a perfectly legitimate primitive notion, or otherwise accept the challenge
and try to analyze it in turn, either modally (Lewis 1973; Woodward 2003) or non-modally (Salmon 1981, 1994, 1998; Dowe 1992, 1995, 2000). The satisfaction of this
desideratum becomes even more dubious when probabilistic causation is accepted.
Unificationism uses in its analysans the same concepts as classical inferentialism
does, plus the concept of unification, which is composed of the concepts of strength,
simplicity, and their weight. This addendum is the only part of the analysans we
have to test here, for the other common parts have already been tested, and accepted. In
principle the addendum does not seem to put the clarity of the analysis or its empiricist
metaphysical sobriety at risk: strength, simplicity and their weight do not seem
obscure or metaphysically heavy at first sight. It is true that weighing strength and
simplicity may in some cases be practically intractable, but it does not seem to be a
fundamental conceptual problem. Yet, there are other, more serious concerns.
First, unificationism has been charged with imposing a priori constraints on the
structure of the world (e.g. Railton 1989). According to unificationism, explanation
correlates with unification; but what if, as we come to know the world better, we
discover that its causal/nomological structure is more and more disunified? I do not take
this as a fatal problem, for all that unificationists have to do, as Kitcher (1989) does
in a move consistent with the empiricist motivations, is to insist that the causal/
nomological structure of the world is derivative from our explanatory practices and
that there is no intelligible sense of causation independent of our best explanatory
theories (this move, though, has some antirealist costs). Second, according to some
critics it is not clear at all that unification construed in this way is always correlated
with explanatoriness. Humphreys (1993) and Morrison (2000), emphasizing the link
between explanation and understanding in different ways, have argued that not only
is there no need for unification to correlate with explanation, but that in fact they
often enter into conflict.
4 Discussion
First, one might object that the account, even if restricted to scientific w-explanation, is too stringent. There are some explanatory sets in scientific practice, e.g.
in history and other social sciences, that do not show this complex, highly
articulated net-like structure, which would thereby be left out of the proposal; and
not only in the social sciences: some explanations in the physical sciences (e.g. of
the Earth's surface phenomena by its inner geological structure) are not articulated in
this net-like manner.
This is a legitimate concern. The first thing to say is that these putative cases
should be considered one by one in detail. In many cases it may be that upon closer
inspection we are able to discern an implicit structure of a similar kind even in some
social sciences (although that structure may be less complex). The same has already
been shown in other fields such as biology, where seemingly non-net-like structured
explanatory practices display, on closer inspection, a similar structure with a general
guiding principle that specializes downwards in different specific nomological
constraints for different specific phenomena (e.g. Mendelian Genetics, Lorenzano
2000, or Natural Selection, Díez and Lorenzano 2013). But, more importantly,
ASE does not depend on the (explicit or implicit) existence of this strong net-like
structure. This structure is present in many paradigmatic explanatory, highly unified
theories, and that is why I referred to it in presenting the proposal; but there is no
conceptual need for it. The only thing that is conceptually needed, together with
nomological embedding, is that the explanatory set: (1) uses new conceptual/
ontological machinery introduced by the explanatory practice, and (2) uses not
merely schematic generalizations that allow ad hoc embeddings (more on ad-hocness below). These two conditions are satisfied in the paradigmatic net-like
examples considered, but they may perfectly well also be satisfied in other less
articulated, or initial, theories/systems (and in many proto-scientific, or even non-scientific everyday explanations). Related to this, although in the paradigmatic
examples used above, like CM, the nomological constraints that are organized in a
theory-net structure can be somehow axiomatized, the account does not require a
theory to be axiomatizable in any strict sense for it to be explanatory; no ASE
condition requires or implies that.
Secondly, one could also object that the account says too little: to the extent
that ASE does well, or at least better than others, it does so only because it says very
little; it does not provide a reason why ampliative specialized embedding is
w-explanatory or why either non-ampliative or non-specialized embeddings are not.
It does well only because it does not provide a deep explicans.
This is also a concern worth considering. And I agree, in a sense, for this is why
this proposal is, as I advanced from the very beginning, (quasi-)minimalist. The
strategy was to take scientific practice at (almost) face value: to look at intuitively
differential cases and just distill, and elaborate, what the difference consists of. I
already submitted that this is not very substantive; but substantive or not, I claim
that this is all that one can do at the conceptual level without withdrawing in favor
of pluralism, and more than what the rivals were able to do. The conditions
identified do impose effective constraints that exclude some, actually many,
embeddings as non-explanatory. The ampliative condition captures the difference
between explanatory and merely descriptive embeddings. The systematic, specialization condition captures the (partially pragmatic, see below) difference between
ad hoc and truly explanatory embeddings. In this regard we have identified where
w-explanatoriness resides: to w-explain a phenomenon is to introduce something
conceptually/ontologically new that makes the phenomenon non-accidentally, and
non-ad hoc, expectable. Even if from a particular meta-philosophical perspective
this analysis may be considered still too minimalist, it should at least be
acknowledged that it calls attention to two crucial conditions not made sufficiently
explicit before, and that in doing so it fares extensionally better than its rivals.
Third, in its nomological embedding component, the analysis incorporates a key
Hempelian idea, nomic expectability, and thus it could be taken as a version of the
covering law model. Indeed, it has been presented as a neo-Hempelian account. But
covering law models of explanation have proven to be clearly inadequate, above all in
biology (cf. e.g. Smart 1963; Beatty 1995; Rosenberg 1994, 2001), but also in other special
sciences and even in some physical sciences (cosmology, climate science and others).
This objection depends on what we understand by "covering law". I argued that a
weakened part of Hempel's account is worth preserving. And I take it that it is
preserved in all (non-singularist) accounts. The idea that explanation involves a
kind of expectability (even low expectability that makes room for low probabilities),
and that this expectability must be due to non-accidental, general relations, is the
minimal Hempelian component that nomological embedding preserves. This can be
rightly taken as a version of the covering law model, but only in this weakened
sense: explanation involves non-accidental (nomological) connections, which may,
or may not, be universal and exceptionless. Universality and exceptionlessness are
not required for the nomological component of ASE: non accidental connections
used in the explanans may be domain restricted and non-exceptionless. If one wants
to preserve the tag law only for non accidental regularities that are universal and
exceptionless, then ASE does not require the use of laws in this sense (note though
that, as Dorato 2005 argues, on such a view there would be very few laws even in
physics). But, terminological preferences aside, non-accidental regularities, though
domain restricted and non-exceptionless, are present in explanations in biology and
other special, and physical, sciences, thus there is no general challenge for ASE in
this regard (cf. e.g. Carrier 1995; Mitchell 1997; Dorato 2012 for different defenses
of nomological regularities in biology).
Related to the presence of laws, one might think that there is a specific challenge for ASE coming from the recent mechanistic philosophy. Mechanists usually present explanation by mechanisms as the form of explanation in biology, neuropsychology and the life sciences in general, claiming that mechanistic explanation is at odds with the covering law model (Glennan 1996, 2002; Machamer et al. 2000; Bechtel and Abrahamsen 2005; Craver 2007). There is no denying the great advance that calling
I thank M. O'Lery and S. Barberis, respectively, for pointing out these last two cases to me.
16 That ASE's constraints are conceptual is essential, for even a radical pragmatist such as van Fraassen may accept that, as a matter of fact, there are constraints which are contingently common to all contexts (cf. van Fraassen 1977, 1980).
charge of trivialization. Our proposal does not run this risk because the conceptual
constraints it imposes really do constrain the relevance/explanatory relation across
contexts (in empirical science). This is compatible with imposing, in particular
contexts, additional causal, unificationist, manipulativist, mechanicist, or other
kinds of constraints.
Finally, and very importantly, the second condition states that the embedding must use special laws, not merely general guiding principles. But as a matter of fact, as far as theoretical frameworks with general guiding principles like Newton's Second Law are concerned, it is not possible to use only the general guiding principle. As (following Kuhn and structuralism) was stated, guiding principles alone do not suffice to make specific predictions and are thus always used in combination with some other, more restricted generalization. Even in the case exemplified by the strange trajectory of the pen in somebody's hand driven by her will, the odd, non-explanatory embedding would use a specific (crazy) function. So the condition cannot be that the embedding must use something other than mere guiding principles, for it always does, but rather that the specific generalization deployed for the embedding is not odd, or ad hoc, but acceptable. How is this latter distinction established?
It is true that in order to make a specific prediction/embedding we must use some specific generalization, even when the embedding is intuitively non-explanatory, as in the case of the pen's intentional trajectory. This makes the formulation of the second condition troublesome. On the one hand, it is not misguided to formulate it as stating that the embedding must use a special law in the existing theory-net (or theoretical framework), for this emphasizes that, in order to be explanatory, the embedding cannot use an ad hoc (just cooked up) regularity independently of the current systematization of the theoretical framework. On the other hand, however, there must be room for explanatory embeddings that use new specific laws; otherwise explanatory theories could not change by introducing new specific laws, which would be absurd, for it would make scientific change impossible. So what the condition really demands is that the specific regularities used in the specific embedding be non-ad hoc, acceptable relative to the current state of the framework, either because they already exist (as accepted) or because, if new, they are acceptable compared to the regularities that already exist. This condition of non-ad-hocness, or acceptability, is difficult to make precise, and is in fact impossible to fully characterize in purely formal terms. It is related to the kind of 'resemblance' or 'paradigmatic' relation that Kuhn (1962, 1970) emphasized: the scientific community uses the already solved old problems (paradigmatic exemplars) as a guide for the new solutions, and accepts solutions to new problems that use theoretical resources which resemble old ones. Thus, for instance, gravitational forces were acceptable as applicable to trajectories that could be seen as limiting cases of parabolic shots with increasing forces. In contrast, to use a real-life example more interesting than the pen's intentional trajectory: although, mathematically speaking, light phenomena could have been embedded into CM-theoretical models using extremely odd functions, they were finally expelled because some light phenomena were taken as intractable using acceptable functions, functions that satisfy at least some minimal constraints presupposed in the scientific practice of establishing special laws.
There usually are some formal constraints regarding this acceptability: for instance, in CM, not to use 'jumping' functions that dramatically diverge between different subsets of the domain. There also are some semi-formal traits that help to characterize this acceptability. For instance, systematicity: if every phenomenon receives a different, unique theoretical treatment that is completely disconnected (except for the guiding principle) from others (the theory-net thus looking like a series of totally independent, disconnected branches), then the embedding framework lacks the kind of systematicity necessary for it to count as acceptable. However, even this is not fully specifiable in formal terms (for instance, what happens at the very initial stages of a theory?) and there may be other traits that are relevant to acceptability besides systematicity. This means that the notion of an acceptable, non-ad hoc special law has a non-eliminable pragmatic component that introduces a pragmatic dimension into our second condition and, as a consequence, into ASE.17 Does this mean that this pragmatic component infests the entire account in a way that makes it radically conceptually unconstrained? No, it does not. For one thing, the first condition has no relevant pragmatic component and, even taken alone, is effective enough to exclude certain embeddings as explanatory (e.g. Kepler's laws). For another thing, whatever degree of pragmatism this acceptability introduces into this account, the same degree is going to be present in any account, for all of them, monistic or pluralistic, make use of modal constraints and must distinguish between acceptable and ad hoc nomological regularities. Thus if, following Kuhn, I am right and this distinction has a non-eliminable pragmatic component, it is going to be implicitly present in any account. The presence of this pragmatic component that cannot be fully characterized in formal or structural terms does not, then, make the analysis obscure in the sense in which the use of primitive metaphysical notions such as causation or natural kind may seem obscure from a Humean, empiricist perspective.
5 Conclusion
The analysis of scientific w-explanation as ampliative, specialized nomological embedding does better than Hempel's classical inferentialism, causalism and unificationism with regard to our three desiderata, and well enough to qualify as a promising minimalist candidate for a monistic conceptual core that is common to all scientific w-explanations. It is neo-Hempelian in preserving nomological expectability and in introducing two necessary conditions, in addition to nomological embedding, that are metaphysically innocent. The intuitive non-explanatoriness of some (few) cases that are not excluded by the conceptual analysis is due to pragmatic rather than conceptual facts. ASE provides the, partially weaker, partially stronger, satisfactory modification of classical inferentialism and thus offers the maximally Humeanly decent analysis along the lines of Hempel's empiricist project. The two conditions added provide a non-pluralist, minimal conceptual core common to scientific w-explanatory practice. Even for radical skeptics who take the
17 This pragmatic component is related, as far as I understand it, to Longino's (2002) criteria for objectivity.
References
Achinstein, P. (1983). The nature of explanation. Oxford: Oxford University Press.
Baker, A. (2009). Mathematical explanation in science. British Journal for the Philosophy of Science, 60, 611–633.
Balzer, W., & Moulines, C. U. (1998). Structuralist theory of science: Paradigmatic reconstructions. Amsterdam: Rodopi.
Balzer, W., Moulines, C. U., & Sneed, J. D. (1987). An architectonic for science: The structuralist program. Dordrecht: Reidel.
Barnes, E. (1992). Explanatory unification and the problem of asymmetry. Philosophy of Science, 59, 558–571.
Bartelborth, T. (1996). Scientific explanation. In W. Balzer & C. U. Moulines (Eds.), Structuralist theory of science (pp. 23–43). Berlin: Walter de Gruyter.
Bartelborth, T. (2002). Explanatory unification. Synthese, 130(1), 91–107.
Batterman, R. (2010). On the explanatory role of mathematics in empirical science. British Journal for the Philosophy of Science, 61(1), 1–25.
Beatty, J. (1995). The evolutionary contingency thesis. In G. Wolters & J. Lennox (Eds.), Concepts, theories and rationality in the biological sciences: The second annual Pittsburgh/Konstanz colloquium in the philosophy of science (pp. 45–81). Konstanz: Konstanz University Press and Pittsburgh University Press.
Bechtel, W., & Abrahamsen, A. (2005). Explanation: A mechanist alternative. Studies in History and Philosophy of Biological and Biomedical Sciences, 36, 421–441.
Carrier, M. (1995). Evolutionary change and lawlikeness: Beatty on biological generalizations. In G. Wolters & J. Lennox (Eds.), Concepts, theories and rationality in the biological sciences (pp. 82–97). Konstanz: Konstanz University Press and Pittsburgh University Press.
Cartwright, N. (1983). How the laws of physics lie. Oxford: Oxford University Press.
Cartwright, N. (1991). The dappled world. Cambridge: Cambridge University Press.
Cartwright, N. (2008). Reply to Ulrich Gähde. In L. Bovens, C. Hoefer, & S. Hartmann (Eds.), Nancy Cartwright's philosophy of science (pp. 65–66). New York: Routledge.
Cohen, J., & Callender, C. (2009). Special sciences, conspiracy and the better best system account of lawhood. Erkenntnis, 73, 427–447.
Craver, C. (2007). Explaining the brain: Mechanisms and the mosaic unity of neuroscience. Oxford: Clarendon Press.