
The Doomsday Argument, Self-Sampling Assumption, and Self-Indication Assumption are wrong, but induction is alive and well.

I have just completed a revised and expanded version (80 pages, double-spaced) of my paper, “Past Longevity as Evidence for the Future”; it is available online here. (Update, 7/31/2011: This work is now available as an eBook; see here. Update, 5/8/2016: This expanded work is now available as a paperback book on Amazon. See here.) The original version was published in Philosophy of Science in 2009. Here is a one-sentence summary of the new version:

The Doomsday Argument, Self-Sampling Assumption, and Self-Indication Assumption are wrong; Gott’s delta t argument underestimates longevity, providing lower bounds on probabilities of longevity, and is equivalent to Laplace’s Rule of Succession; but Non-Parametric Predictive Inference based on the work of Hill and Coolen is consistent with a plausible theory of induction.

The paper explains what all these specialized terms mean, and assumes no prior knowledge of the literature on this topic. But the paper is technical; it assumes knowledge of probability theory and basic calculus.

The new version corrects some minor errors in the original, and it makes my refutation of the Doomsday Argument—a controversial thesis claiming that the risk of extinction of the human race is much greater than had been thought previously—more elegant. In my judgment, my refutation is definitive. Nevertheless, the following statement I made about the original paper still holds for the revised version: “Far more important than what the paper argues against is what the paper argues for: an objective means for using knowledge of the past as evidence for the future.” In my judgment, the methodology that I defend in the paper forms the basis of a calculus of induction.

In the paper, I build on Ayn Rand’s identification of characteristics as ranges of measurement ([1966] 1990, 6–11). For example, the color red is a range of measurements of the frequency of light. Now suppose that I have selected a number—call it n—of pebbles randomly from a bunch of pebbles that I know were created by a similar process, and that I have observed all n pebbles to be red. The probability that the frequency of visible light reflected by the next pebble will be the highest among all the pebbles sampled is 1/(n+1). Therefore, the probability that the next pebble examined will be red is greater than or equal to n/(n+1).
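The 1/(n+1) step follows from exchangeability alone: each of the n+1 measurements is equally likely to be the largest. A quick Monte Carlo check of that step (my own sketch; the uniform distribution and the sample sizes are arbitrary illustrations, not from the paper):

```python
import random

def prob_next_is_max(n, trials=200_000, seed=0):
    """Estimate the probability that draw n+1 is the largest of n+1
    exchangeable draws (here: i.i.d. uniform 'frequency' measurements)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        sample = [rng.random() for _ in range(n + 1)]
        if sample[-1] == max(sample):  # is the last draw the record?
            hits += 1
    return hits / trials

n = 9
print(f"estimated: {prob_next_is_max(n):.3f}  theory: {1 / (n + 1):.3f}")
```

For n = 9 the estimate settles near 0.1, matching the 1/(n+1) claim; no assumption about the underlying distribution is needed beyond exchangeability.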

Survival too is a characteristic. The survival of a species can be thought of as persisting so long as a measurement of danger to the species remains below a certain threshold value. If the species has survived for a million years, then the measurement of danger has remained below that threshold in each of those years. In the absence of any trends or cumulative dangers, and in the absence of any knowledge of the degree of risk except that it is constant, the probability that the measurement of danger in the coming year will be the highest on record is 1/1,000,001. Therefore, the probability of extinction in the coming year is less than or equal to 1/1,000,001.
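The same exchangeability reasoning yields a bound for any horizon, since extinction within the next m years requires a danger measurement above the threshold, and hence above every past measurement, i.e. a new record. A short sketch of that arithmetic (the function name and the m-year generalization are my own illustration, not the paper’s full machinery):

```python
from fractions import Fraction

def survival_lower_bound(n_past, m_future):
    """Probability that none of the next m_future danger measurements
    sets a new record, given n_past past measurements: by exchangeability
    the overall maximum of the n_past + m_future values is equally likely
    to sit at any position, so it lies in the past with probability
    n_past / (n_past + m_future). Extinction requires a new record,
    so the survival probability is at least this value."""
    return Fraction(n_past, n_past + m_future)

# The text's example: a million years survived, one year ahead.
print(survival_lower_bound(1_000_000, 1))          # 1000000/1000001
print(survival_lower_bound(1_000_000, 1_000_000))  # 1/2, a Gott-style bound
```

The one-year case reproduces the 1/1,000,001 extinction bound above; the equal-horizon case recovers the familiar even-odds flavor of Gott’s argument.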

This general line of reasoning (though not applied specifically to the question of the longevity of the human race), which I arrived at by following Ayn Rand’s lead, was developed earlier in the field of statistics through a chain of gradual advances: beginning with Harold Jeffreys (1932), continuing with Bruce M. Hill (1968), and culminating with Frank P.A. Coolen (1998). (See my paper for more references.) Statisticians such as Coolen have since taken these ideas even further. One contribution of my paper is to provide further philosophic defense of, and guidelines for, the overall approach.

Of course I make no claim that Ayn Rand would have endorsed my line of reasoning. Thinkers attempting to build on the work of Ayn Rand hold widely divergent ideas on induction.

Though presenting a correct theory, or even a promising one, is more important than refuting a false one, there is also value in refuting the Doomsday Argument. The argument, along with numerous offshoots and related arguments sometimes referred to collectively as ‘anthropic’ arguments, has been the subject of intense discussion and debate among philosophers and scientists for nearly three decades. (See my paper for numerous references.) Wikipedia lists the Doomsday Argument as one of ten “Unsolved problems in statistics.”

Moreover, the Doomsday Argument has been used to further an environmentalist agenda. Consider, for example, the following excerpt—which gives a good non-technical summary of the Doomsday Argument—from a 2000 article in the popular magazine Discover, then owned by Disney:

… 99 percent of all species that ever lived have gone extinct, including every one of our hominid ancestors. In 1983, British cosmologist Brandon Carter framed the “Doomsday argument,” a statistical way to judge when we might join them. If humans were to survive a long time and spread through the galaxy, then the total number of people who will ever live might number in the trillions. By pure odds, it’s unlikely that we would be among the very first hundredth of a percent of all those people. …

Human activity is severely disrupting almost all life on the planet, which surely doesn’t help matters. The current rate of extinctions is, by some estimates, 10,000 times the average in the fossil record. At present, we may worry about snail darters and red squirrels in abstract terms. But the next statistic on the list could be us.

My refutation of the Doomsday Argument can be summarized as follows. The Doomsday Argument conflates the ideas of total duration and future duration. That is, the Doomsday Argument’s Bayesian formalism is stated in terms of total duration, but all attempted real-life applications of the argument—with one exception, an application by Gott—actually plug in prior probabilities for future duration. Moreover, the Doomsday Argument’s ‘Self-Sampling Assumption’—which claims that one’s temporal birth rank among all N humans ever to be born is equally likely to be any number from 1 to N—contradicts the prior probability density functions for past and future duration in all realistic scenarios, including all of the realistic scenarios presented by defenders of the Doomsday Argument.
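To make the target of the refutation concrete, here is a minimal sketch of the update the Doomsday Argument performs, with made-up numbers of my own (the function and the toy prior are illustrations, not taken from the paper): the Self-Sampling likelihood P(rank = r | N) = 1/N shifts any prior over the total number N toward small N.

```python
from fractions import Fraction

def doomsday_posterior(prior, r):
    """The standard Doomsday Argument update (the formalism the paper
    rejects): `prior` maps each hypothesis N (TOTAL number of humans
    ever born) to its prior probability, and the Self-Sampling
    Assumption supplies the likelihood P(rank = r | N) = 1/N for
    r <= N. Returns the normalized posterior over N."""
    unnorm = {N: p * Fraction(1, N) for N, p in prior.items() if N >= r}
    z = sum(unnorm.values())
    return {N: w / z for N, w in unnorm.items()}

# Toy numbers: equal prior credence in a 'short' and a 'long' human story.
prior = {200: Fraction(1, 2), 2000: Fraction(1, 2)}
post = doomsday_posterior(prior, 100)
print(post[200])  # 10/11 -- a tenfold shift toward early doom
```

Stated in these terms, the objection above is that realistic applications supply `prior` as a distribution over future duration rather than total duration, and that the 1/N likelihood contradicts any realistic prior over past and future duration.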

In my original paper, I write, “If the Doomsday Argument and the Self-Sampling Assumption are to be rejected, they must be rejected for the right reason, lest a hidden baby be thrown out with the bathwater—especially since that hidden baby might be the ability to assess the future from the past.” In my new version, I write also, “Not only are the Doomsday Argument and the Self-Sampling Assumption false, but they also obscure the real prior probability assessments that one might have about an uncertain past, and they obscure the real manner in which learning more about the past can indeed update one’s probability assessments regarding the future.”

References

Coolen, Frank P.A. (1998), “Low Structure Imprecise Predictive Inference For Bayes’ Problem”, Statistics & Probability Letters 36: 349-357.

Hill, Bruce M. (1968), “Posterior Distribution of Percentiles: Bayes’ Theorem for Sampling from a Population”, Journal of the American Statistical Association 63: 677-691.

Jeffreys, Harold (1932), “On the Theory of Errors and Least Squares”, Proceedings of the Royal Society of London Series A, 138: 48-55.

Rand, Ayn ([1966] 1990), “Introduction to Objectivist Epistemology” [Part I], The Objectivist 5(7): 1-11. Reprinted in Introduction to Objectivist Epistemology, Expanded Second Edition. Edited by Harry Binswanger and Leonard Peikoff. New York: Meridian: 1-18.
