The Singularity: A Philosophical Analysis
David J. Chalmers
1 Introduction1
What happens when machines become more intelligent than humans? One view is that this event
will be followed by an explosion to ever-greater levels of intelligence, as each generation of ma-
chines creates more intelligent machines in turn. This intelligence explosion is now often known
as the “singularity”.
The basic argument here was set out by the statistician I. J. Good in his 1965 article “Specula-
tions Concerning the First Ultraintelligent Machine”:
Let an ultraintelligent machine be defined as a machine that can far surpass all the
intellectual activities of any man however clever. Since the design of machines is one
of these intellectual activities, an ultraintelligent machine could design even better
machines; there would then unquestionably be an “intelligence explosion”, and the
intelligence of man would be left far behind. Thus the first ultraintelligent machine is
the last invention that man need ever make.
The key idea is that a machine that is more intelligent than humans will be better than humans
at designing machines. So it will be capable of designing a machine more intelligent than the most
intelligent machine that humans can design. So if it is itself designed by humans, it will be capable
of designing a machine more intelligent than itself. By similar reasoning, this next machine will
also be capable of designing a machine more intelligent than itself. If every machine in turn does
what it is capable of, we should expect a sequence of ever more intelligent machines.2

1 This paper was published in the Journal of Consciousness Studies 17:7-65, 2010. I first became interested in this
cluster of ideas as a student, before first hearing explicitly of the “singularity” in 1997. I was spurred to think further
about these issues by an invitation to speak at the 2009 Singularity Summit in New York City. I thank many people
at that event for discussion, as well as many at later talks and discussions at West Point, CUNY, NYU, Delhi, ANU,
Tucson, Oxford, and UNSW. Thanks also to Doug Hofstadter, Marcus Hutter, Ole Koksvik, Drew McDermott, Carl
Shulman, and Michael Vassar for comments on this paper.
This intelligence explosion is sometimes combined with another idea, which we might call the
“speed explosion”. The argument for a speed explosion starts from the familiar observation that
computer processing speed doubles at regular intervals. Suppose that speed doubles every two
years and will do so indefinitely. Now suppose that we have human-level artificial intelligence
designing new processors. Then faster processing will lead to faster designers and an ever-faster
design cycle, leading to a limit point soon afterwards.
The argument for a speed explosion was set out by the artificial intelligence researcher Ray
Solomonoff in his 1985 article “The Time Scale of Artificial Intelligence”.3 Eliezer Yudkowsky
gives a succinct version of the argument in his 1996 article “Staring at the Singularity”:
“Computing speed doubles every two subjective years of work. Two years after Ar-
tificial Intelligences reach human equivalence, their speed doubles. One year later,
their speed doubles again. Six months - three months - 1.5 months ... Singularity.”
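To make the arithmetic behind the quoted sequence explicit (a worked illustration, not part of
Yudkowsky’s text): if each doubling takes two subjective years, and after n doublings the machines
run 2^n times as fast, then the n-th doubling takes 2/2^n objective years, and the total objective
time is a convergent geometric series,

\[
\sum_{n=0}^{\infty} \frac{2}{2^{n}} = 2 + 1 + \tfrac{1}{2} + \tfrac{1}{4} + \cdots = 4 \text{ objective years},
\]

so infinitely many doublings are compressed into a finite window. This is the limit point referred
to above.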
The intelligence explosion and the speed explosion are logically independent of each other. In
principle there could be an intelligence explosion without a speed explosion and a speed explosion
without an intelligence explosion. But the two ideas work particularly well together. Suppose
that within two subjective years, a greater-than-human machine can produce another machine that
is not only twice as fast but 10% more intelligent, and suppose that this principle is indefinitely
extensible. Then within four objective years there will have been an infinite number of generations,
with both speed and intelligence increasing beyond any finite level within a finite time. This
process would truly deserve the name “singularity”.
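The divergence claimed here can also be checked numerically. The following sketch is illustrative
only: the function name and the sixty-generation cutoff are mine, and the growth rates are simply
those stipulated above (two subjective years per generation, each generation twice as fast and 10%
more intelligent than its designer).

def simulate_explosion(generations=60, subjective_years=2.0,
                       speed_factor=2.0, intelligence_factor=1.1):
    # Illustrative sketch of the stipulated growth rates, not a model from the paper.
    speed = 1.0           # speed relative to the first greater-than-human machine
    intelligence = 1.0    # intelligence relative to that machine
    objective_time = 0.0  # objective years elapsed
    for n in range(1, generations + 1):
        objective_time += subjective_years / speed   # two subjective years at the current speed
        speed *= speed_factor                        # next generation is twice as fast
        intelligence *= intelligence_factor          # and 10% more intelligent
        if n % 10 == 0:
            print(f"generation {n:3d}: {objective_time:.6f} objective years, "
                  f"intelligence x{intelligence:,.1f}")

simulate_explosion()

Objective time approaches the four-year limit of the geometric series while the intelligence
multiplier grows without bound, which is the sense in which both quantities pass any finite level
within a finite time (physical limits aside, as the next paragraph notes).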
Of course the laws of physics impose limitations here. If the currently accepted laws of rela-
tivity and quantum mechanics are correct—or even if energy is finite in a classical universe—then
we cannot expect the principles above to be indefinitely extensible. But even with these physi-
cal limitations in place, the arguments give some reason to think that both speed and intelligence
might be pushed to the limits of what is physically possible. And on the face of it, it is unlikely
that human processing is even close to the limits of what is physically possible. So the arguments
suggest that both speed and intelligence might be pushed far beyond human capacity in a relatively
short time. This process might not qualify as a “singularity” in the strict sense from mathematics
and physics, but it would be similar enough that the name is not altogether inappropriate.

2 Scenarios of this sort have antecedents in science fiction, perhaps most notably in John Campbell’s 1932 short story
“The Last Evolution”.
3 Solomonoff also discusses the effects of what we might call the “population explosion”: a rapidly increasing
population of artificial AI researchers.
The term “singularity” was introduced4 by the science fiction writer Vernor Vinge in a 1983
opinion article. It was brought into wider circulation by Vinge’s influential 1993 article “The
Coming Technological Singularity” and by the inventor and futurist Ray Kurzweil’s popular 2005
book The Singularity is Near. In practice, the term is used in a number of different ways. A loose
sense refers to phenomena whereby ever-more-rapid technological change leads to unpredictable
consequences.5 A very strict sense refers to a point where speed and intelligence go to infinity, as
in the hypothetical speed/intelligence explosion above. Perhaps the core sense of the term, though,
is a moderate sense in which it refers to an intelligence explosion through the recursive mechanism
set out by I. J. Good, whether or not this intelligence explosion goes along with a speed explosion
or with divergence to infinity. I will always use the term “singularity” in this core sense in what
follows.
One might think that the singularity would be of great interest to academic philosophers, cog-
nitive scientists, and artificial intelligence researchers. In practice, this has not been the case.6
Good was an eminent academic, but his article was largely unappreciated at the time. The sub-
sequent discussion of the singularity has largely taken place in nonacademic circles, including
Internet forums, popular media and books, and workshops organized by the independent Singu-
larity Institute. Perhaps the highly speculative flavor of the singularity idea has been responsible
for academic resistance.
I think this resistance is a shame, as the singularity idea is clearly an important one. The
argument for a singularity is one that we should take seriously. And the questions surrounding the
singularity are of enormous practical and philosophical concern.

4 As Vinge (1993) notes, Stanislaw Ulam (1958) describes a conversation with John von Neumann in which the term
is used in a related way: “One conversation centered on the ever accelerating progress of technology and changes in
the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race
beyond which human affairs, as we know them, could not continue.”
5 A useful taxonomy of uses of “singularity” is set out by Yudkowsky (2007). He distinguishes an “accelerating
change” school, associated with Kurzweil, an “event horizon” school, associated with Vinge, and an “intelligence
explosion” school, associated with Good. Smart (1999-2008) gives a detailed history of associated ideas, focusing
especially on accelerating change.
6 With some exceptions: discussions by academics include Bostrom (1998; 2003), Hanson (2008), Hofstadter
(2005), and Moravec (1988; 1998). Hofstadter organized symposia on the prospect of superintelligent machines at
Indiana University in 1999 and at Stanford University in 2000, and more recently, Bostrom’s Future of Humanity
Institute at the University of Oxford has organized a number of relevant activities.
Practically: If there is a singularity, it will be one of the most important events in the history
of the planet. An intelligence explosion has enormous potential benefits: a cure for all known
diseases, an end to poverty, extraordinary scientific advances, and much more. It also has enor-
mous potential dangers: an end to the human race, an arms race of warring machines, the power to
destroy the planet. So if there is even a small chance that there will be a singularity, we would do
well to think about what forms it might take and whether there is anything we can do to influence
the outcomes in a positive direction.
Philosophically: The singularity raises many important philosophical questions. The basic
argument for an intelligence explosion is philosophically interesting in itself, and forces us to
think hard about the nature of intelligence and about the mental capacities of artificial machines.
The potential consequences of an intelligence explosion force us to think hard about values and
morality and about consciousness and personal identity. In effect, the singularity brings up some
of the hardest traditional questions in philosophy and raises some new philosophical questions as
well.
Furthermore, the philosophical and practical questions intersect. To determine whether there
might be an intelligence explosion, we need to better understand what intelligence is and whether
machines might have it. To determine whether an intelligence explosion will be a good or a
bad thing, we need to think about the relationship between intelligence and value. To determine
whether we can play a significant role in a post-singularity world, we need to know whether human
identity can survive the enhancing of our cognitive systems, perhaps through uploading onto new
technology. These are life-or-death questions that may confront us in coming decades or centuries.
To have any hope of answering them, we need to think clearly about the philosophical issues.
In what follows, I address some of these philosophical and practical questions. I start with
the argument for a singularity: is there good reason to believe that there will be an intelligence
explosion? Next, I consider how to negotiate the singularity: if it is possible that there will be a
singularity, how can we maximize the chances of a good outcome? Finally, I consider the place
of humans in a post-singularity world, with special attention to questions about uploading: can an
uploaded human be conscious, and will uploading preserve personal identity?
My discussion will necessarily be speculative, but I think it is possible to reason about spec-
ulative outcomes with at least a modicum of rigor. For example, by formalizing arguments for a
speculative thesis with premises and conclusions, one can see just what opponents need to deny
in order to deny the thesis, and one can then assess the costs of doing so. I will not try to give
knockdown arguments in this paper, and I will not try to give final and definitive answers to the
questions above, but I hope to encourage others to think about these issues further.7
2 The Argument for a Singularity
To analyze the argument for a singularity in a more rigorous form, it is helpful to introduce some
terminology. Let us say that AI is artificial intelligence of human level or greater (that is, at least
as intelligent as an average human). Let us say that AI+ is artificial intelligence of greater than
human level (that is, more intelligent than the most intelligent human). Let us say that AI++ (or
superintelligence) is AI of far greater than human level (say, at least as far beyond the most intel-
ligent human as the most intelligent human is beyond a mouse).8 Then we can put the argument
for an intelligence explosion as follows:
1. There will be AI+.
2. If there is AI+, there will be AI++.
—————-
3. There will be AI++.
Here, premise 1 needs independent support (on which more soon), but is often taken to be
plausible. Premise 2 is the key claim of the intelligence explosion, and is supported by Good’s
reasoning set out above. The conclusion says that there will be superintelligence.
The argument depends on the assumptions that there is such a thing as intelligence and that it
can be compared between systems: otherwise the notion of an AI+ and an AI++ does not even
make sense. Of course these assumptions might be questioned. Someone might hold that there is no
single property that deserves to be called “intelligence”, or that the relevant properties cannot be
measured and compared. For now, however, I will proceed under the simplifying assumption
that there is an intelligence measure that assigns an intelligence value to arbitrary systems. Later
I will consider the question of how one might formulate the argument without this assumption.
I will also assume that intelligence and speed are conceptually independent, so that increases in
speed with no other relevant changes do not count as increases in intelligence.

7 The main themes in this article have been discussed many times before by others, especially in the nonacademic
circles mentioned earlier. My main aims in writing the article are to subject some of these themes (especially the claim
that there will be an intelligence explosion and claims about uploading) to a philosophical analysis, with the aim of
exploring and perhaps strengthening the foundations on which these ideas rest, and also to help bring these themes to
the attention of philosophers and scientists.
8 Following common practice, I use ‘AI’ and relatives as a general term (“An AI exists”), an adjective (“An AI system
exists”), and as a mass term (“AI exists”).
We can refine the argument a little by breaking the support for premise 1 into two steps. We
can also add qualifications about timeframe and about potential defeaters for the singularity.
1. There will be AI (before long, absent defeaters).
2. If there is AI, there will be AI+ (soon after, absent defeaters).
3. If there is AI+, there will be AI++ (soon after, absent defeaters).
—————-
4. There will be AI++ (before too long, absent defeaters).
Precise values for the timeframe variables are not too important. But we might stipulate that
“before long” means “within centuries”. This estimate is conservative compared to those of many
advocates of the singularity, who suggest decades rather than centuries. For example, Good (1965)
predicts an ultraintelligent machine by 2000, Vinge (1993) predicts greater-than-human intelli-
gence between 2005 and 2030, Yudkowsky (1996) predicts a singularity by 2021, and Kurzweil
(2005) predicts human-level artificial intelligence by 2030.
Some of these estimates (e.g. Yudkowsky’s) rely on extrapolating hardware trends.9 My own
view is that the history of artificial intelligence suggests that the biggest bottleneck on the path to
AI is software, not hardware: we have to find the right algorithms, and no-one has come close to
finding them yet. So I think that hardware extrapolation is not a good guide here. Other estimates
(e.g. Kurzweil’s) rely on estimates for when we will be able to artificially emulate an entire human
brain. My sense is that most neuroscientists think these estimates are overoptimistic. Speaking
for myself, I would be surprised if there were human-level AI within the next three decades.
Nevertheless, my credence that there will be human-level AI before 2100 is somewhere over one-
half. In any case, I think the move from decades to centuries renders the prediction conservative
rather than radical, while still keeping the timeframe close enough to the present for the conclusion
to be interesting.
9 Yudkowsky’s web-based article is now marked “obsolete”, and in later work he does not endorse the estimate or
the argument from hardware trends. See Hofstadter (2005) for skepticism about the role of hardware extrapolation here
and more generally for skepticism about timeframe estimates on the order of decades.
By contrast, we might stipulate that “soon after” means “within decades”. Given the way that
computer technology always advances, it is natural enough to think that once there is AI, AI+ will
be just around the corner. And the argument for the intelligence explosion suggests a rapid step
from AI+ to AI++ soon after that. I think it would not be unreasonable to suggest “within years”
here (and some would suggest “within days” or even sooner for the second step), but as before
“within decades” is conservative while still being interesting. As for “before too long”, we can
stipulate that this is the sum of a “before long” and two “soon after”s. For present purposes, that is
close enough to “within centuries”, understood somewhat more loosely than the usage in the first
premise to allow an extra century or so.
As for defeaters: I will stipulate that these are anything that prevents intelligent systems (hu-
man or artificial) from manifesting their capacities to create intelligent systems. Potential de-
featers include disasters, disinclination, and active prevention.10 For example, a nuclear war might
set back our technological capacity enormously, or we (or our successors) might decide that a
singularity would be a bad thing and prevent research that could bring it about. I do not think
considerations internal to artificial intelligence can exclude these possibilities, although we might
argue on other grounds about how likely they are. In any case, the notion of a defeater is still
highly constrained (importantly, a defeater is not defined as anything that would prevent a sin-
gularity, which would make the conclusion near-trivial), and the conclusion that absent defeaters
there will be superintelligence is strong enough to be interesting.
We can think of the three premises as an equivalence premise (there will be AI at least equiv-
alent to our own intelligence), an extension premise (AI will soon be extended to AI+), and an
amplification premise (AI+ will soon be greatly amplified to AI++). Why believe the premises? I
will take them in order.
Premise 1: There will be AI (before long, absent defeaters).
One argument for the first premise is the emulation argument, based on the possibility of
brain emulation. Here (following the usage of Sandberg and Bostrom 2008), emulation can be
understood as close simulation: in this case, simulation of internal processes in enough detail to
replicate approximate patterns of behavior.

(i) The human brain is a machine.
(ii) We will have the capacity to emulate this machine (before long).
(iii) If we emulate this machine, there will be AI.
—————-
(iv) Absent defeaters, there will be AI (before long).

10 I take it that when someone has the capacity to do something, then if they are sufficiently motivated to do it and
are in reasonably favorable circumstances, they will do it. So defeaters can be divided into motivational defeaters,
involving insufficient motivation, and situational defeaters, involving unfavorable circumstances (such as a disaster).
There is a blurry line between unfavorable circumstances that prevent a capacity from being manifested and those that
entail that the capacity was never present in the first place—for example, resource limitations might be classed on either
side of this line—but this will not matter much for our purposes.
The first premise is suggested by what we know of biology (and indeed by what we know of
physics). Every organ of the body appears to be a machine: that is, a complex system comprised
of law-governed parts interacting in a law-governed way. The brain is no exception. The second
premise follows from the claims that microphysical processes can be simulated arbitrarily closely
and that any machine can be emulated by simulating microphysical processes arbitrarily closely.
It is also suggested by the progress of science and technology more generally: we are gradu-
ally increasing our understanding of biological machines and increasing our capacity to simulate
them, and there do not seem to be limits to progress here. The third premise follows from the
definitional claim that if we emulate the brain this will replicate approximate patterns of human
behavior, along with the claim that such replication will result in AI. The conclusion follows
from the premises along with the definitional claim that absent defeaters, systems will manifest
their relevant capacities.
One might resist the argument in various ways. One could argue that the brain is more than a
machine; one could argue that we will never have the capacity to emulate it; and one could argue
that emulating it need not produce AI. Various existing forms of resistance to AI take each of these
forms. For example, J.R. Lucas (1961) has argued that for reasons tied to Gödel’s theorem, humans
are more sophisticated than any machine. Hubert Dreyfus (1972) and Roger Penrose (1994) have
argued that human cognitive activity can never be emulated by any computational machine. John
Searle (1980) and Ned Block (1981) have argued that even if we can emulate the human brain, it
does not follow that the emulation itself has a mind or is intelligent.
I have argued elsewhere that all of these objections fail.11 But for present purposes, we can set
many of them to one side. To reply to the Lucas, Penrose, and Dreyfus objections, we can note that
nothing in the singularity idea requires that an AI be a classical computational system or even that
it be a computational system at all. For example, Penrose (like Lucas) holds that the brain is not
an algorithmic system in the ordinary sense, but he allows that it is a mechanical system that relies
on certain nonalgorithmic quantum processes. Dreyfus holds that the brain is not a rule-following
symbolic system, but he allows that it may nevertheless be a mechanical system that relies on
subsymbolic processes (for example, connectionist processes). If so, then these arguments give
us no reason to deny that we can build artificial systems that exploit the relevant nonalgorithmic
quantum processes, or the relevant subsymbolic processes, and that thereby allow us to simulate
the human brain.
As for the Searle and Block objections, these rely on the thesis that even if a system duplicates
our behavior, it might be missing important “internal” aspects of mentality: consciousness, under-
standing, intentionality, and so on. Later in the paper, I will advocate the view that if a system
in our world duplicates not only our outputs but our internal computational structure, then it will
duplicate the important internal aspects of mentality too. For present purposes, though, we can
set aside these objections by stipulating that for the purposes of the argument, intelligence is to be
measured wholly in terms of behavior and behavioral dispositions, where behavior is construed
operationally in terms of the physical outputs that a system produces. The conclusion that there
will be AI++ in this sense is still strong enough to be interesting. If there are systems that pro-
duce apparently superintelligent outputs, then whether or not these systems are truly conscious or
intelligent, they will have a transformative impact on the rest of the world.
Perhaps the most important remaining form of resistance is the claim that the brain is not a
mechanical system at all, or at least that nonmechanical processes play a role in its functioning
that cannot be emulated. This view is most naturally combined with a sort of Cartesian dualism
holding that some aspects of mentality (such as consciousness) are nonphysical and nevertheless
play a substantial role in affecting brain processes and behavior. If there are nonphysical processes
like this, it might be that they could nevertheless be emulated or artificially created, but this is
not obvious. If these processes cannot be emulated or artificially created, then it may be that
human-level AI is impossible.
Although I am sympathetic with some forms of dualism about consciousness, I do not think
that there is much evidence for the strong form of Cartesian dualism that this objection requires.
The weight of evidence to date suggests that the brain is mechanical, and I think that even if
consciousness plays a causal role in generating behavior, there is not much reason to think that its
role is not emulable. But while we know as little as we do about the brain and about consciousness,
I do not think the matter can be regarded as entirely settled. So this form of resistance should at
least be registered.

11 For a general argument for strong artificial intelligence and a response to many different objections, see Chalmers
(1996, chapter 9). For a response to Penrose and Lucas, see Chalmers (1995). For an in-depth discussion of the current
prospects for whole brain emulation, see Sandberg and Bostrom (2008).
Another argument for premise 1 is the evolutionary argument, which runs as follows.
(i) Evolution produced human-level intelligence.
(ii) If evolution produced human-level intelligence, then we can produce AI (before
long).
—————-
(iii) Absent defeaters, there will be AI (before long).
Here, the thought is that since evolution produced human-level intelligence, this sort of in-
telligence is not entirely unattainable. Furthermore, evolution operates without requiring any
antecedent intelligence or forethought. If evolution can produce something in this unintelligent
manner, then in principle humans should be able to produce it much faster, by using our intelli-
gence.
Again, the argument can be resisted, perhaps by denying that evolution produced intelligence,
or perhaps by arguing that evolution produced intelligence by means of processes that we cannot
mechanically replicate. The latter line might be taken by holding that evolution needed the help of
superintelligent intervention, or needed the aid of other nonmechanical processes along the way,
or needed an enormously complex history that we could never artificially duplicate, or needed an
enormous amount of luck. Still, I think the argument makes at least a prima facie case for its
conclusion.
We can clarify the case against resistance of this sort by changing “Evolution produced human-
level intelligence” to “Evolution produced human-level intelligence mechanically and nonmirac-
ulously” in both premises of the argument. Then premise (ii) is all the more plausible. Premise
(i) will now be denied by those who think evolution involved nonmechanical processes, supernat-
ural intervention, or extraordinary amounts of luck. But the premise remains plausible, and the
structure of the argument is clarified.
Of course these arguments do not tell us how AI will first be attained. They suggest at least
two possibilities: brain emulation (simulating the brain neuron by neuron) and artificial evolution
(evolving a population of AIs through variation and selection). There are other possibilities: direct
programming (writing the program for an AI from scratch, perhaps complete with a database of