Exposition


The Unity of Math and Science

Sunday, March 22, 2009

Abstract: A review of the philosophy of science reveals why the common view that science and mathematics are distinct in their methods and subject matter fails to account for the efficacy of mathematics. We show that Philip Kitcher's 1993 account of science as a cognitive endeavor may be applied to mathematics, suggesting that the methods of these disciplines are determined by the same cognitive framework. From this basis we argue that any distinction to be drawn between the subject matter of mathematics and the subject matter of science is special rather than categorical. We conclude that mathematics and science may be understood as sub-disciplines of a common cognitive endeavor, with indistinguishable epistemological aims.

Introduction

It is a commonplace that mathematics is not really a science. I know that it is a commonplace because for a couple of days I put the question to every mathematician I could buttonhole on the issue (in a large university math department) and each expressed the confident opinion that this was so. The reason given in every case was essentially the same: science is inductive and empirical, whereas mathematics is purely deductive and axiomatic. Mathematics, I was told on more than one occasion, is really more of an art than a science. Some interlocutors hedged their bets, musing that after all it would depend on one's definition of “science,” but even these were confident of the “empirical/axiomatic, inductive/deductive” distinction between the two. In short, my mathematical colleagues share a more-or-less vague, received-view notion of what scientists do, and know that it is different in kind from what they themselves do.

In this paper I will explain why I believe my colleagues are wrong; wrong both about mathematics and about science, and above all about the kinship between the two. I will begin by highlighting certain inadequacies of the received view of science and of the principal post-empiricist candidates for its replacement. A telling inadequacy of these views (excepting realism to some degree) will be shown to be their inability to account for the effectiveness of mathematics as a tool of science. I will then develop an idea of Philip Kitcher's (1993) regarding how science may properly be said to progress, which in turn relies on his linguistic analysis of reference-potentials of scientific terms, to draw an isomorphism between the progress of mathematics and the progress of science. I will conclude by showing that the kinship of mathematics and science, properly construed, stems from identities of method, purpose, and foundation in the two disciplines. In short, I will show that math and science may correctly be understood as aspects or branches of the same cognitive activity, with moreover a common epistemological aim.

The Received View of Science

The principal failure of the philosophy of science, at least since Hume, has been its inability to explicate the evident connection between the world as received by our senses and constructed by our theories, on the one hand, and the world-as-it-is, whatever that may be, on the other. Indeed, throughout most of the past two centuries, and largely on the authority of Kant, any effort to link what he termed the noumenal (in the negative sense: “things-in-themselves”) to our mental conceptions of the phenomenal (empirical) has been deemed vain (Kant [1783]). Instead, with the noumenal officially unknowable, all effort has focused on explicating solely the relationship (usually construed as linguistic) between our concepts and the data provided by our senses.

The fullest flower of this doctrine was the logical positivist position exemplified in the writings of members of the Vienna Circle, who sought to codify science as an effort in applied symbolic logic. To their raw (=unbiased) sense-data, scientists would apply experimental and analytical methods constructed in accord with inductivist and deductivist principles, principles which were, it was supposed, fully formalizable in a first-order logical calculus which could embody all scientific (=correct) conclusions while excluding all others. Logical positivism was dealt a mortal blow in 1931, but its death throes were long; the notion of the objective scientist discovering objective truth via the “scientific method” became the received view, and lives on in introductory science texts and lectures in our high schools and universities. Among contemporary philosophers of science, however, and among many scientists too, the received view has long since fallen from favor.

Nonetheless, efforts at resuscitation of empiricism were ongoing through the middle of the century, as philosophers such as Rudolf Carnap, Carl Hempel, and Karl Popper sought to recapture the core of the empiricist position, both through reformulations of the linguistic foundations of “correct” inductivist and deductivist principles and through diverse (empiricist) accounts of the nature of the scientific enterprise. Thus Hempel provides an analysis of definitions [1952], and Carnap a logical analysis of language [1939], while Hempel and Popper respectively present us with “covering law/verificationist” [1962] and “falsifiability” [1968] accounts of empirical analysis. But the empiricist program was breathing its last. Without a firm analytic structure for its support, such as positivism hoped to provide, these empiricist efforts are open to any number of devastating criticisms.

It is instructively ironic that the mortal blow to the logical positivists was delivered by a mathematician and lesser satellite of the Vienna Circle, the great logician Kurt Gödel. For it was the foundation of the logical positivists' position that symbolic logic is adequate to formulate all (scientific) truths, and it was this supposition that Gödel proved ill-conceived with his famous incompleteness result of 1931.1 Concurrently, Bertrand Russell, building on the logicist thesis of Gottlob Frege, was seeking to reduce all of mathematics to symbolic logic, while David Hilbert's allied formalist program sought a complete and provably consistent axiomatization of mathematics. Gödel's result was thus a mortal blow not only to the empiricists, but most especially to the logicists and formalists. So far from reducing mathematics to logic, or even drawing the hoped-for equation “mathematics = logic,” the incompleteness result led to the absorption of logic by mathematics.

What is of interest here is that the empiricist position took symbolic (=mathematical) logic as the foundation upon which to justify scientific truths, which necessarily involved the assumption that mathematical truths are of a different kind from scientific truths. Consequently, while the empiricists took mathematics largely for granted, they failed to address its efficacy in formulating scientific theories. Why should the planets obey Kepler's laws, i.e., certain mathematical equations? To the empiricist, “why” questions such as this are ill-formed; Kepler's laws are the answer to the only questions about planetary motion that can properly be asked. Because mathematics was taken to be fundamental, necessary, and a priori—the illumination by which all phenomena were to be understood—it was not possible to ask any “why” questions about its effectiveness as a core component (often the very language) of scientific explanation.
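Kepler's third law makes the example concrete: the “law” is nothing over and above an equation relating two measured quantities.

```latex
\frac{T^2}{a^3} = \text{const.}
\qquad (T = \text{orbital period},\quad a = \text{semi-major axis of the orbit})
```

The empiricist can state this relation, but on the account just given cannot ask why an equation should govern planets at all.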

Post-Empiricist Views

In 1962 appeared Thomas Kuhn's The Structure of Scientific Revolutions, and the corpse of empiricism was permanently tossed aside in favor of a “paradigm” conception of scientific practice. Around this conception arose a cluster of post-empiricist views tending towards (a more or less radical) social constructivism, all of them founded on the understanding that an observational-theoretical distinction cannot be supported. Empiricism relies heavily on this distinction, because it requires that scientific theories be cleanly supportable or refutable on the basis of objectively observed phenomena. Kuhn showed that objectivity about phenomena is an unattainable ideal, since scientists cannot even name a phenomenon under observation without introducing theory-laden terms. To name a phenomenon is to classify it, to classify it is to introduce a universal concept, and, as Popper put it, “since all universals are dispositional, they cannot be reduced to experience” [1965]. Phenomena, in other words, cannot be objectively observed, because any observational claims presuppose a paradigm.

An important consequence of Kuhnian conceptions of scientific practice is the thesis that scientific progress cannot be considered cumulative. Because each new paradigm construes even established scientific conclusions under terms which are necessarily theory-laden, as the paradigms shift these terms take on dispositions which are incommensurable with the dispositions they had under the old paradigms. Thus, the differences between Newtonian and Einsteinian concepts of gravity, or between Fresnel's notion of light and contemporary electromagnetic theory, involve more than mere adjustments or refinements of theories as more is learned about the referents of these terms, but instead involve the adoption of entirely new conceptual/theoretical schemas. The Kuhnian thus finds no effective way to map the old scientific ideas and terms onto the new ones. Far from being a continuous growth of knowledge, the emergence of new scientific ideas is seen to occur in discontinuous jumps.

Regardless of whether this is a supportable view regarding natural science, it certainly fails as a description of the progress of mathematical knowledge. Revolutionary ideas do occur in mathematics, but they are hardly incommensurable in the Kuhnian sense with the ideas that came before them. For example, the discovery of non-Euclidean geometry in the 19th century certainly did represent a radical break with the past, in so far as Euclid's axioms had been thought self-evidently true and the new geometry seemed to talk about a spatial reality far removed from previous intuitions about space. However, there is no incommensurability here. Mathematicians were able to point to precisely the kind and nature of the difference (involving the negation of Euclid's fifth postulate) between the new geometry and the old, and could even reasonably claim that the new spatial intuitions were implicit in Euclid's formulations. Moreover, the new geometry did not render the old geometry obsolete or passé. Euclidean spaces remain an important subject of mathematical study. Nor did the terms of the old geometry need to be reinterpreted in light of the new theory. The Pythagorean notion of right triangle is the same coin today that it was 2,300 years ago, and the Pythagorean Theorem on right triangles is neither stated nor understood any differently by moderns than by ancients.
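Indeed, the precise difference can be written down in a sentence. In Playfair's equivalent form of the fifth postulate, the geometries part ways over a single quantifier, which is hardly the stuff of incommensurability:

```latex
\textbf{Euclidean (Playfair's form):} through a point $P$ not on a line $\ell$
there passes \emph{exactly one} line parallel to $\ell$.\\
\textbf{Hyperbolic:} through $P$ there pass \emph{at least two} such parallels.\\
\textbf{Elliptic:} through $P$ there passes \emph{no} such parallel.
```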

A consequence of the Kuhnian position must therefore be seen to be an even profounder divorce, between mathematical conclusions on one hand and scientific conclusions on the other, than was tacitly assumed by the empiricists. The only way to bring mathematics under the umbrella of Kuhnian science would be to suppose that the “mathematics paradigm” is remarkably stable, not having changed since mathematics emerged as an axiomatic system under the ancient Greeks. Especially in view of the fact that Kuhnian paradigms are supposed to be deeply enmeshed in the cultures that produce them, and that contemporary western civilization is at such an enormous cultural and historical remove from the ancient Greeks, this does not seem an option.2

Again we may reasonably ask, if mathematics is so far removed from natural science as a scholastic discipline, how does it come about that the models which modern science builds of the world (pictures of the paradigms, if you will) are mathematical? Suppose we grant that Kepler's Laws are paradigmatically incommensurable with Ptolemaic and Copernican models of planetary motion—we may still inquire (1) why all three models are mathematical, and (2) why Kepler's is the most mathematical (and incidentally the most accurate predictor) of the three? Post-empiricist views, especially in so far as they partake of social constructivism, have no cogent account of the presence (let alone the efficacy) of mathematics in scientific theories.

Realism

Not all Kuhnians are radical social constructivists. It is commonly and correctly asserted that, for whatever reasons, airplanes really do fly, and modern medicines really do effect cures. Apparently, our scientific theories are somehow effective in the world, and this fact wants explaining. One may allow for a great deal of social constructivism, and even paradigm shifts, in the formation of scientific theories, without precluding some extra-cultural explanation for the astounding usefulness of those theories.

Enter realism, which is the proposition that the world consists of real entities with actual properties, that the empirical phenomena we perceive arise from genuinely causal relationships among these entities, and that our scientific theories have come to form an effective description of these relations. Such a thesis allows for the effective modeling by scientific theory of empirical phenomena, but does not in itself explain it. This failure to explain is conspicuous when we ask, once again, what about mathematics? Why should scientific theories be mathematical? Realism can only answer that the causal relations among entities must be in some sense mathematical relations, or must be at least (arbitrarily closely) approximable by mathematical relations.

Realism is thus seen to be, not a positive philosophical thesis about the nature of science, but a commonsense empirical supposition about the relation of scientific inquiry to the subject of that inquiry. It fails to tell us what science is, but it does reassure us concerning what science is about. Most importantly for my purposes, it permits us to put the question which both logical empiricism and constructivist views forbade: why is mathematics efficacious in science? However, while acknowledging realism's contribution in permitting the question, we cannot be content with its answer, namely that “it is because the world is (somehow or another) mathematical.” To make this claim would be to plagiarize the conclusions of science, not to justify them. Instead, we must turn to a new account of science itself.

Philip Kitcher and the Progress of Science

Kitcher begins his account [1993] with an historical review of the birth and subsequent development of a paradigmatic scientific theory, the Darwinian theory of evolution. Using this example, he reveals that what we term “scientific progress” may be analyzed into four kinds of cognitive advance. To explicate these fully, it will be useful to review briefly his analysis of the emergence of Darwinism.

Prior to Darwin's publication of The Origin of Species in 1859, biologists undertook important conceptual shifts in how they viewed the diversity of life. The German Naturphilosophen had sought to understand diversity by an appeal to a metaphysical notion of archetypes, whereas the British biologists, under the influence of utilitarian ideas, were advancing a functionalist viewpoint in which structural elements of organisms are analyzed in terms of their usefulness. The clash of these ideas at a Paris Academy of Sciences debate in 1830 resulted in several compromise positions, cementing a place for functionalist theories in mainstream biology. This conceptual shift was supported by the explosion of data arising from the discovery of new fossils, and of new life forms inhabiting geographical regions not previously available to biological study. These new organisms wanted explaining, and the functionalist approach, now legitimized by the consensus practice of working biologists, provided a rational means of organizing the new data. This shift in the approach to understanding diversity represents conceptual progress, under Kitcher's definition of it as the “[adjustment of] boundaries of our categories to conform to kinds,” which are “able to provide more adequate specifications of our referents.”

This conceptual progress was not alone enough to bear fruit in a theory of greater explanatory power. Explanatory progress, which “consists in improving our view of the dependencies of phenomena,” waited on two elements which fell into place over the course of the next twenty-five years. The first of these elements was experimental progress—the acquisition or development of new techniques, instruments, or opportunities of observation—provided in this case by the discovery of both extinct and extant life forms, as noted above. The experimental progress constituted by these discoveries was necessary, because it revealed that the diversity of life is not static, but changes radically over geological time frames. It also showed that the diversity of even extant life was much greater than previously realized, a fact that was arguably not necessary in a logical sense to the development of the evolutionary theory which came after, but the importance of which as a practical and perhaps sociological impetus (vis-à-vis consensus practice) cannot be denied.

The second element was erotetic progress, which consists in determining what are the significant questions about a class of phenomena under study. And here we have a real bridge: prior to the conceptual and experimental progress already outlined, it would not have been possible to ask, “how and why do organisms change?” Now, however, this question became inevitable. The evidence of life's change and diversity cried out for the formulation of a mechanism—a theory—to account for it. It is this erotetic progress that constitutes the crux of scientific progress. Explanations always abound for any phenomenon; explanations for life and its forms had been offered for millennia before the 19th century. Not until the perspicuous question could be asked was it possible for a new idea, with greatly increased explanatory power, to be formulated.

The stage was now set for Darwin's theory, which constituted explanatory progress and completed the cognitive cycle, giving birth to a new understanding of “truth about the world.” In its turn it stimulated new conceptual, experimental, and erotetic progress, historically a hallmark of theories with great explanatory power.3

Kitcher's analysis reveals that these, then, are the constituents of scientific progress:

  • Conceptual Progress: the adjustment of the boundaries of categories to more adequately correspond to natural kinds, thus providing more adequate specifications of the referents of our terms.
  • Experimental Progress: the acquisition or development of new techniques, instruments, or opportunities of observation.
  • Erotetic Progress: the determination of significant questions.
  • Explanatory Progress: the modification of existing theoretical schemas, or the formulation of new theoretical schemas, to better represent the dependencies of phenomena.4

These constituents of progress, occurring within a consensus practice community, form a kind of feedback loop, as was apparent in the Darwin example. Conceptual progress encourages experimental progress by posing a need for greater observational discrimination to distinguish the referents of terms. Experimental progress reveals new phenomena—or old phenomena with new clarity—suggesting new questions to be asked about the referents of our conceptual terms. Erotetic progress orients and focuses theoretical frameworks, requiring them to adapt and account for apparent phenomenal dependencies revealed by the conceptual and experimental progress. And explanatory progress provides a model against which to test our conceptual and theoretical frameworks—often leading to new forms of conceptual progress.

None of this works to deny the Kuhnian thesis that all scientific terms and observations are theory-laden. On the contrary, it permits of a precise definition of theory-laden terms as those terms which have heterogeneous reference potentials. That is, the theoretical hypotheses with which terms are laden are “claims that are, in conjunction, equivalent to the assertion that all the modes of reference fix reference to the same entity” (Kitcher [1993], p. 103). This feature of scientific terms points directly to an understanding of conceptual progress as an adjustment or refinement in the reference potential of scientific terms. This may be seen with great clarity in the development of language in very small children. To a toddler, any animate object of a certain shape may be designated as “dog.” Soon, however, the reference potential of the term “dog” is narrowed, as the child makes conceptual progress, until it finally fixes more-or-less accurately on domestic canines. Similarly, scientific endeavors start with broad, vague notions of a phenomenon, and then proceed by conceptual progress of much the same kind. Again, children commonly have a term, “Santa Claus,” for which the referent does not exist as they imagine it. As they become cognitively sophisticated, the implausibility of their conception is forced upon them, and the reference potential of the term changes. An apt comparison in science would be to “phlogiston,” a term whose referent did not exist as Priestley imagined it. The phenomenon to which he applied the term persisted, but the reference potential of the term proved insupportable, and so the term was abandoned.

Science is here revealed as a purely cognitive enterprise, whose advances must be understood as cognitive advances. This bears emphasis, because it draws for the first time a conceptual framework for understanding the nature of science that is suggestive of its relationship to mathematics.

The Unity of Math and Science

The following propositions have been defended: (1) that modern philosophies of science have hitherto failed to account for the efficacy of mathematics as a tool in scientific theories; (2) that only a realist philosophy even permits the formulation of this question; and (3) that science, properly understood, is a purely cognitive endeavor, as elucidated by Kitcher's account of the progress of science. It is my purpose to prove that (1) can be accounted for in terms of (2) and (3), by showing that mathematics and science are not activities of different kinds applied to different realms, but activities of the same kind undertaken with respect to realms of experience that are ultimately indistinguishable. To establish this thesis, it will be necessary and sufficient to establish that the respective subject matter(s) of mathematics and science cannot be effectively demarcated, and to show that the cognitive processes leading to epistemic advance in each discipline are essentially the same. The former goal will be seen to follow more easily once the second is attained, and, in light of Kitcher's analysis of the cognitive character of scientific progress, it will be enough for the second goal to show that mathematical progress fits the same pattern. That is, it must be shown that conceptual, experimental, erotetic, and explanatory progress constitute advancement in mathematics just as they do in the natural sciences.

That conceptual progress is important to mathematics may seem self-evident, and likewise or nearly so for explanatory progress. The abstract entities with which mathematics concerns itself are generally understood to be concepts, after all. For instance, the notion of “triangle” is a mathematical concept and a general, abstract entity about which we may prove theorems. The theorems we prove about triangles are understood to represent “facts” about these entities, whose proofs, when they are found and demonstrated, correspond to explanations of why the facts are true. This is all perfectly axiomatic and deductive, and it fits the standard account of mathematics nicely. But it isn't a true picture. That is, it is a highly posed and polished portrait of the finished product—and utterly misleading of how the product was obtained.

Because “triangle” is a relatively primitive concept, let us take, for our example of a paradigmatic mathematical concept, the “set of real numbers” as that concept is understood in contemporary mathematics. This abstraction has been with us since the Greeks, in the form of the continuum, a term which predates the designation “set of real numbers” by millennia. In contemporary mathematics, these terms are synonymous, and just how this came about is a tale of conceptual progress in the mold of Kitcher's account, illustrating how heterogeneous reference potentials (theory-ladenness) are central to the development of mathematical thought.

Generally speaking, the notion of continuum has always been tied to our common perception of continuity in the world, for example of time and space. The ancient Greeks, who dealt with integers and ratios of integers, invested the continuum with a numeric reference potential. When the Pythagoreans discovered that the continuum, thus construed, must include incommensurable ratios (irrational numbers), the reference potential of “continuum” was expanded to include them. Many philosophical analyses of the continuum were provided by the ancients, of which Plato's conception of it as the flowing of the apeiron, the unbounded indeterminate, was perhaps the least dependent on our sensory and perceptual experience of the continuous [Boyer, 28]. All of these conceptions, however, involved heterogeneous reference potentials for the term, applying it to space both as perceived by our senses and as construed by our intellects, especially in the practice of geometry.
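The Pythagorean discovery admits a crisp modern reconstruction: to say the diagonal and side of a square are commensurable is to say that $\sqrt{2}$ is a ratio of integers, and the supposition refutes itself.

```latex
Suppose $\sqrt{2} = p/q$ with $p, q$ integers sharing no common factor.
Then $p^2 = 2q^2$, so $p$ is even; write $p = 2r$.
Then $4r^2 = 2q^2$, i.e.\ $q^2 = 2r^2$, so $q$ is even as well,
contradicting the assumption that $p$ and $q$ share no factor.
```

The Greek original was geometric rather than algebraic, but the content is the same: no ratio of integers measures the diagonal against the side.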

The next conspicuously mathematical development of the reference potentials of “continuum” arose with the creation of the calculus in the 17th century. This development was Newton's method of fluxions, which rested on infinitesimal increments of flowing quantities. (This conception has much in common with Plato's ideas about the continuum; in particular it is not atomistic and does not appeal to the infinite divisibility of the line.) Leibniz had a corresponding concept, that of differentials, and it is actually his conception (in much modified form) that we use in calculus today. Each of these ideas involved the notion of a quantity which was larger than zero but smaller than any given numerical value, and for the next two hundred years the continuum involved some such notion as “infinitesimal” as a reference potential. In the 19th century, mathematicians (notably A.L. Cauchy and Karl Weierstrass) determined that infinitesimals could be dispensed with, so far as doing the calculus was concerned, by appealing to the logical notion of a limit. They felt there were good reasons for finding some way to avoid infinitesimals, not the least of which was that there seemed to be little philosophical ground for them as a reliable reference potential of “continuum,” particularly in light of the increasing tendency during this period to view the continuum as an aggregate of geometric points. Most important, however, was that the new methods of proof which they were developing were far more rigorous, and these used limits instead of infinitesimals.
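The limit concept with which Cauchy and Weierstrass replaced infinitesimals can be stated in its now-standard form: no infinitely small quantity appears, only ordinary numbers under alternating quantifiers.

```latex
\lim_{x \to a} f(x) = L
\quad\Longleftrightarrow\quad
\forall \varepsilon > 0 \;\, \exists \delta > 0 \;\, \forall x :\;
0 < |x - a| < \delta \;\implies\; |f(x) - L| < \varepsilon .
```

The rigor spoken of above lies precisely here: every step of a limit proof is a verification about real numbers, with no appeal to a quantity “smaller than any given value.”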

The narrowed reference potentials of “continuum” that were necessitated by the new rigor in mathematics led more or less directly to the formalization of the real numbers that is in use today. This formalization is conspicuously set-theoretic. Its development coincided with the rise of set theory, which was itself a product of Georg Cantor's struggles with representations of functions on the real numbers by trigonometric series. Finally, in the 20th century, with the set-theoretic conception of the real numbers firmly in hand, the present mathematical notion of the continuum was fully captured in the highly refined theory of real analysis.
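One of the standard set-theoretic constructions alluded to here is Dedekind's (Cantor's, via equivalence classes of Cauchy sequences, is the other usual route): a real number is identified with a “cut” of the rationals.

```latex
A \emph{Dedekind cut} is a set $\alpha \subset \mathbb{Q}$ such that
(i) $\emptyset \neq \alpha \neq \mathbb{Q}$;
(ii) if $p \in \alpha$ and $q < p$, then $q \in \alpha$; and
(iii) $\alpha$ has no greatest element.
The real numbers are the cuts, ordered by inclusion; for example,
$\sqrt{2}$ is the cut $\{\, q \in \mathbb{Q} : q \le 0 \ \text{or}\ q^2 < 2 \,\}$.
```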

Two things should be noted. First, with each stage in the conceptual refinement of the continuum there came fresh questions and concerns regarding its nature, and many of these questions stimulated theoretical advances of lasting depth and importance.5 Second, the historical, philosophical, and even mathematical pressure on the reference potentials of the term “continuum” has not abated; that is, the current conception is not a finished product. For instance, Newton's infinitesimals have been resurrected in a mathematically rigorous theory (non-standard real analysis) which provides a model of the continuum at striking variance with the standard formulation. Additionally, certain fundamental properties of the continuum as currently conceived remain deeply mysterious, as made evident, for example, by the independence of the Continuum Hypothesis. Finally, the relationship of the mathematical continuum to the empirical and perceptual phenomenon of continuity remains a compelling concern—to mathematicians and philosophers as well as to physicists.

Here we have as striking an example of conceptual progress as one could wish for, at the very heart of mathematics, and a like story could be told about any mathematical term of significance. Moreover, the relationship of this conceptual progress to subsequent erotetic and explanatory progress in mathematics is seen to be the same as it is among these kinds of progress in natural science.

I have not addressed experimental progress in mathematics, and it is here that many who would defend the myth that mathematics is purely axiomatic and deductive are most adamant. Science, so the claim goes, depends crucially on empirical data (observations), whereas mathematics deals only with abstractions, with respect to which “experiments” in the scientist's sense are irrelevant. There are several observations to make about this claim. I would first like to note the implicit dualism between “mind stuff” and “world stuff” that the claim entails.6 If mathematical conclusions are not subject in any way to testing, then they must be conclusions of an entirely different sort from those of science, conclusions moreover about categorically different sorts of things. But of course mathematical conclusions are subject to testing; mathematics abounds with conjectures that remain uncertain. The current “experimental procedure” in mathematics is to attempt to find a logical proof or disproof. Procedures in science are more various. But the difference between mathematical claims and scientific claims is not categorical, it is special. Both are informed and stimulated by phenomena, both arise through conceptual and erotetic progress, and each kind of claim is believed according to the standards of evidence current in a consensus-practice community.

The math-myth argument hinges on the sense that, once proved, mathematical conclusions are enshrined in a way that scientific conclusions are not. Each generation of scientists, so the story goes, tears down what the previous one has built and replaces it with a new structure, but mathematicians simply add on to the existing structure. As one colleague of mine put it, science is a great, rambling house in which adding on a new room may involve rewiring the basement or tearing apart the kitchen, whereas mathematics is like a skyscraper, with new stories continually being added on to the old. This plausible and beautiful metaphor is dead wrong, however, as a picture either of the historical development of mathematics or that of science. It is quite true that scientists of the past have believed things that later seemed crude or misguided, or in need of refinement. But it is also quite true that mathematicians have believed things that in retrospect were similarly ill-founded. Medieval mathematicians would not countenance negative numbers, but believed in numerology. Newton had his fluxions. Eminent mathematicians at the beginning of the 20th century professed confidence that mathematics could be finitely axiomatized. It is no good for the myth-makers to complain that the consensus practice of mathematicians no longer warrants many old methods of inference, for the same is true in science. Proof itself, as the ultimate standard of mathematical reliability, has undergone significant change in recent history, and remains an object of intense study. Likewise, the pedigree of such ancient results as the Pythagorean Theorem is no argument for a distinction from science; the scientific fact that the earth is a globe is just as ancient and just as certain.
For good or ill, the myth of mathematics as a purely axiomatic and deductive system, erecting theorem upon theorem down through the ages in accordance with universal standards of proof, is a persuasive legacy of the logicist ideal, but it remains largely a myth, and to the extent that it is true it is no less true of science than of mathematics.

Finally, whatever the ontological status of mathematical objects, they are objects about which we can only theorize in virtue of entertaining them in our thought, just as we may only theorize about empirical phenomena by conceptualizing them. The experimental apparatus, so to speak, for dealing with such abstractions was until the twentieth century exclusively the human imagination and reasoning capacity.7 This has now changed, however. If Kitcher's definition of experimental progress as “the acquisition or development of new techniques, instruments, or opportunities of observation” is coherent, then the advent of modern computing qualifies as experimental progress in mathematics. What the import of this progress may be remains a troubling question for the mathematics community. If formal proof is to be the sole criterion of mathematical truth, and the search for such proofs the bread and butter of mathematicians' working lives, then the computer will eventually take over much of the core work of traditional mathematics, much as calculating machines took over the work of human calculators. If, on the other hand, the real business of mathematicians is a perceptual task, the exercise of imagination in perceiving mathematical relationships and dependencies (with formal proof playing the secondary but still important role of warranting their conclusions), then the computer will be an exquisitely powerful but still subordinate tool of mathematical practice, directly analogous to the telescope in astronomy.
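The telescope analogy can be made concrete with a small sketch of computer-assisted "observation" in mathematics: checking Goldbach's conjecture (every even number greater than 2 is a sum of two primes) over a finite range. The function names and range are my own illustrative choices; no finite check settles the conjecture, but such surveys are the mathematical analogue of instrument-aided observation.

```python
# Computer-assisted "observation": survey Goldbach's conjecture over a
# finite range. The computer extends the mathematician's reach much as
# a telescope extends the astronomer's, without replacing either proof
# or theoretical insight.

def primes_up_to(n):
    """Sieve of Eratosthenes: return the set of primes <= n."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            for multiple in range(p * p, n + 1, p):
                sieve[multiple] = False
    return {i for i, is_prime in enumerate(sieve) if is_prime}

def goldbach_witness(even, primes):
    """Return a prime p with even - p also prime, or None if none exists."""
    for p in sorted(primes):
        if p > even // 2:
            break
        if (even - p) in primes:
            return p
    return None

primes = primes_up_to(10_000)
exceptions = [n for n in range(4, 10_001, 2) if goldbach_witness(n, primes) is None]
print("exceptions below 10,000:", exceptions)
```

The survey produces no counterexamples in this range; the observation lends inductive support to the conjecture without constituting a proof, which is precisely the epistemic situation of much scientific evidence.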

The thesis that mathematical progress fits Kitcher's cognitive-advance model of scientific progress is now fully established. It remains to show that the respective subject matters of science and mathematics cannot be effectively demarcated.

A scientist, in order to practice science, must behave as though there are objective facts-of-the-matter about the phenomenon under study. This is true regardless of the scientist's philosophical stance on the ontological status of the phenomenological entity. In the simplest case, the scientist must assume that the values of physical quantities are independent of his or her measurement of them. As Shaughan Lavine puts it [1994, p. 266], “The measurement of a value has an epistemic component, but the value itself does not.” Similarly, the mathematician must act, with regard to the real numbers, for instance, as if there are facts-of-the-matter about them that are subject to logical analysis and proof. This is as true for the formalist as for the Platonist. It is natural to suppose that the respective cases of study of a physical quantity and study of the real numbers differ, however, in that the physical quantity has properties which our measurements (and by extension our scientific theories) can only approximate, but that the real numbers, being an abstraction, are capable of precise formulation and full description. In other words, the empirical phenomenon, being external to us and knowable only through the (physical) senses, can only be approximately modeled by our concepts, whereas the set of real numbers, being already inside our heads, so to speak (in virtue of being a concept), is available to exhaustive analysis. It is upon this supposition about the difference between the subject of science and the subject of mathematics that the myth of mathematics ultimately rests, and it is this supposition that is wrong.

The crux of the error is that the real numbers are not “in our heads” at all.8 We can only think a finite number of thoughts, and even if we had all eternity to conceptualize in, we are restricted in principle (due to the limitations of language and logic) to those infinite collections that can be generated recursively. But the real numbers, as we understand them, constitute more information than even a (recursively generated) infinite description can contain. Thus, what we contain in our heads is an idea or concept of the real numbers, but not the real numbers themselves. That we can never have; the real numbers are as remote from our direct, unmediated apprehension as any phenomenal feature of the world.
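The cardinality argument behind this claim can be made explicit. Any system of finite descriptions over a finite alphabet yields only countably many describable objects, while Cantor's diagonal argument shows the reals to be uncountable:

```latex
% Finite descriptions over a finite alphabet Sigma are countable,
% while the reals are not; hence almost every real number admits
% no individual description at all.
\begin{align*}
  |\Sigma^{*}| &= \aleph_{0}
    && \text{(finitely many strings of each length)}\\
  |\mathbb{R}| &= 2^{\aleph_{0}} > \aleph_{0}
    && \text{(Cantor's diagonal argument)}
\end{align*}
```

It follows that no language, formal or informal, can single out every real number individually; our concept of “the real numbers” necessarily outruns anything we can exhibit.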

Our actual mental relationship to the real numbers can be made quite precise in light of Jan Mycielski's meta-mathematical work on locally finite theories [1986]. He has shown that our theoretical structures in mathematics, in so far as they are actually constructed in the language of first-order logic and not merely imagined, need not presuppose actually infinite models, but may instead be fully captured by finite models. The core result is that for any infinite model and any finite collection of statements in a theory of that model, there is a finite model in which these statements are true precisely when they are true in the infinite model. Consequently, it is evident that the mental entity which we are inclined to think of as “the real numbers” need only be a finite approximation, one that we are free to improve indefinitely as our theoretical constructs advance.
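The core result paraphrased above can be rendered schematically as follows. This is only a schematic form; Mycielski's precise theorem [1986] concerns locally finite theories obtained by relativizing quantifiers, and the details differ from this sketch.

```latex
% Schematic rendering of the paraphrase in the text: for an infinite
% model M and any finite set S of sentences in the language of M,
% some finite structure M_S agrees with M on every sentence in S.
\forall M \;\; \forall S \subseteq_{\mathrm{fin}} \mathrm{Sent}(L_M) \;\;
  \exists M_S \ \text{finite} \;\;
  \forall \varphi \in S :\;
    \bigl( M_S \models \varphi \iff M \models \varphi \bigr)
```

On this picture, any finite portion of our theorizing about an infinite structure is already satisfied by a finite surrogate, which is what licenses the claim that the mental entity need only be a finite approximation.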

In short, the modern notion of the set of real numbers, that is, of the continuum, with all its various reference potentials, has been arrived at by a process of extrapolation, a process which is ongoing. The cycle of conceptual, experimental, erotetic, and explanatory progress continues to reveal the continuum to us in deeper and more complex ways, but the fundamental act of extrapolating from a perceptual intuition, for example of continuity, to the theoretical construct itself remains at the heart of mathematical advance, just as it does of scientific progress. How such an extrapolation can be justified need not be answered here, but Lavine's remark is suggestive: “[E]xtrapolation is legitimated by the fact that it is the counterpart within mathematics of what we take to be the relation between our measurements and their objective values” [1994]. What matters here is that the act of extrapolating to an abstract entity and the act of extrapolating to objective values of physical quantities are the same act. This places the subject of study of mathematics on the same footing, with much the same ontological quandaries, as the subject of study of any of the natural sciences. The claim that the subject of mathematics is intrinsically different can no longer be maintained.

Conclusion

The story that is usually told about science and mathematics, and their respective philosophies, is as follows. The central problem of science is to explicate the world, and the central problem of the philosophy of science is to determine what it means to do this (or at least to do this “scientifically”). Thus, neither scientists nor philosophers of science are concerned foremost with determining the proper subject of science (which is assumed) but are rather concerned with its object(s) and methods. By contrast, the central problem of mathematics is to explore the logical relationships and necessary consequences of abstract structures, and the central problem of the philosophy of mathematics is to account for the ontology of these structures and the necessity of mathematics' conclusions about them. This situation may be summarized by saying that scientists generally agree about what they are studying, but are not sure what they are saying about it, while mathematicians, on the other hand, know exactly what they are saying, but do not know what they are saying it about.

This account of science and math is consonant with the commonplace with which this essay began; that they are really quite different. However, it is the neat symmetry of the account which should have attracted our notice all along. If the arguments presented in this paper have merit, then the philosophers of each discipline would do well to look to their counterparts for answers; for I have shown that mathematics and science may correctly be understood as sub-disciplines of a common cognitive endeavor, with indistinguishable epistemological aims.

Footnotes

1. Gödel showed that no logical formalism sophisticated enough to axiomatize simple arithmetic can account for all of its own true statements, nor can it prove its own consistency.

2. Notable in this respect is that many classical mathematical results have been arrived at independently by cultures at an even greater remove, e.g., the ancient Chinese and the Mayans.

3. Taken as a whole, the process leading to Darwinism may be viewed as a Kuhnian paradigm shift, but I agree with Kitcher that this characterization is a naïve oversimplification, since it represents “non-crisis” periods in science as static, whereas Kitcher has demonstrated that progress is a cumulative process, the ongoing effect of individual and group efforts stretching through time and constituted by several different kinds of cognitive advance.

4. It is notable that this analysis entails certain ontological commitments, such as to “natural kinds” and “dependencies of phenomena,” which seem to commit Kitcher to realism. Kitcher gives an impression of ambivalence about this consequence, but does not deny it.

5. Measure theory, analysis, field theory, and topology, for instance.

6. This dualism is most problematic if the claim is to be maintained simultaneously with a non-realist view of mathematical entities.

7. Unless one were to include notational advance as providing enhanced observational ability, a thesis for which I think an argument could be made.

8. Whether the real numbers actually exist anywhere is a question beyond the scope of this paper.

References

Boyer, Carl B. (1949) The History of the Calculus and its Conceptual Development. New York: Dover.

Carnap, Rudolf. (1939) “Logical Analysis of Language.” Foundations of Philosophy of Science, James H. Fetzer, ed. New York: Paragon House.

Hempel, Carl G. (1952) “Principles of Definition.” Foundations of Philosophy of Science, James H. Fetzer, ed. New York: Paragon House.

—, (1962) “Explanation in Science and in History.” Foundations of Philosophy of Science, James H. Fetzer, ed. New York: Paragon House.

Kant, Immanuel. (1783) Prolegomena to any Future Metaphysics.

Kitcher, Philip. (1983) The Nature of Mathematical Knowledge. Oxford: Oxford UP.

—, (1993) The Advancement of Science. Oxford: Oxford UP.

Körner, Stephan. (1968) The Philosophy of Mathematics. New York: Dover.

Lavine, Shaughan. (1994) Understanding the Infinite. Cambridge, Mass.: Harvard UP.

Mycielski, Jan. (1986) “Locally Finite Theories.” The Journal of Symbolic Logic, 51:1.

Popper, Karl R. (1965) “Dispositions, Universals, and Natural or Physical Necessity.” Foundations of Philosophy of Science, James H. Fetzer, ed. New York: Paragon House.

—, (1968) “Science: Conjectures and Refutations.” Foundations of Philosophy of Science, James H. Fetzer, ed. New York: Paragon House.

finis