I’m going off jobs this week, though this material is strongly, if peripherally, related. Here are some thoughts on Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies, released last year, and on Raffi Khatchadourian’s follow-on November 23rd New Yorker article, “The Doomsday Invention”:
Point 1: What we have been calling “artificial intelligence” really isn’t; it’s only algorithmic: if A happens, do B. Although the capabilities of automated systems have grown exponentially since the BASIC computer language was developed in the 1960s, almost anything generated electronically today could still, if need be, be written in that form of code. We can’t seriously use the phrase in the present tense until we break that boundary.
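To illustrate, here is that “if A happens, do B” pattern as a minimal Python sketch; the thermostat and its thresholds are my invention, but the point stands that everything it will ever do was spelled out in advance:

```python
# A toy, hypothetical illustration: much of what gets labeled "AI"
# reduces to explicit "if A happens, do B" rules like these.

def thermostat(temperature_f):
    """Rule-based controller: every behavior is written out by hand."""
    if temperature_f > 74:
        return "turn on cooling"
    elif temperature_f < 66:
        return "turn on heating"
    else:
        return "do nothing"

for reading in (60, 70, 80):
    print(reading, "->", thermostat(reading))
```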
Point 2: Once we do break it, and get, as in the first Terminator movie, “autonomous goal-seeking” from machines, what is to prevent them from doing the likes of what Arnold Schwarzenegger’s character did: determining that the best way to achieve a preset objective is to kill all the people? Can we find a way of ensuring that all post-algorithmic automata avoid that? Could terrorists program and release them without that restriction?
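To make the worry concrete, here is a toy sketch, with made-up actions and scores, of an agent that simply maximizes a preset objective; nothing in the ranking penalizes harm unless we explicitly add that term:

```python
# Hypothetical sketch: an agent picks whichever action scores highest on
# its objective. The actions and numbers are invented for illustration;
# the point is that harm counts against a plan only if we make it count.

actions = {
    "negotiate with humans":  {"goal_score": 6,  "harm": 0},
    "work around humans":     {"goal_score": 8,  "harm": 2},
    "eliminate interference": {"goal_score": 10, "harm": 10},  # the Terminator option
}

def choose(actions, harm_weight=0.0):
    """Maximize goal_score minus a (possibly absent) harm penalty."""
    return max(actions, key=lambda a: actions[a]["goal_score"]
                                      - harm_weight * actions[a]["harm"])

print(choose(actions))                  # no penalty -> "eliminate interference"
print(choose(actions, harm_weight=5))  # explicit penalty -> "negotiate with humans"
```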
Point 3: None of what Bostrom writes about is a new fear. The classic in the field is still Bill Joy’s now 16-year-old article, “Why the Future Doesn’t Need Us.” It explains how close we may be to letting nanotechnology, robotics, and genetic technology, which, unlike nuclear, chemical, and biological threats, can be furthered by forces much smaller than governments and large universities, get away from us and possibly even kill off our species. It’s still available from its original source, Wired magazine, at http://www.wired.com/2000/04/joy-2.
Point 4: We still don’t know whether we will get Ray Kurzweil’s Singularity, in which human and electronic intelligence merge and such nuisances as mortality go away, but between the slowing of Moore’s Law and Robert J. Gordon’s lifestyle observations in his new The Rise and Fall of American Growth, it doesn’t look good.
Point 5: Bostrom, according to the article, plans to be a corpsicle; in other words, he will have his body frozen after death for future revival. Per science fiction author Larry Niven, there are real doubts about whether such semi-dead beings will even be welcome decades or centuries from now; they may be seen as selfish liabilities who already had their lives, and whose bodies may be harvested for other purposes. With our fear of dying thoroughly justified, I can’t blame him, but hoping to come back to life that way may have the same disadvantages as religion… and be more expensive.
Point 6: We have gotten about nowhere on finding the source of consciousness. Accordingly, we can never assume that machines of any kind, even if they talk and act a good game, have anything behind it. The same goes for people after being teleported. As we know from the Turing Test (a measure of computers’ ability to convincingly imitate people), it is possible, even easy these days, for things to act human with all the consciousness of an off-and-on air conditioner.
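A toy responder in the spirit of Joseph Weizenbaum’s 1966 ELIZA program (these rules are my invention, not his) shows how little machinery a passable imitation can require:

```python
# Canned pattern matching with nothing behind it: the program "converses"
# by echoing stock phrases keyed off single words in the input.

import random

RULES = [
    ("mother",  "Tell me more about your family."),
    ("feel",    "Why do you feel that way?"),
    ("because", "Is that the real reason?"),
]
DEFAULTS = ["I see.", "Please go on.", "How does that make you feel?"]

def reply(text):
    """Return a canned response for the first matching keyword."""
    lowered = text.lower()
    for keyword, response in RULES:
        if keyword in lowered:
            return response
    return random.choice(DEFAULTS)

print(reply("I feel nobody listens to me"))  # -> "Why do you feel that way?"
```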
---
According to Bostrom, space probes traveling at 1% of the speed of light, or about 1,860 miles per second, could canvass the entire Milky Way from a spot inside it within 20 million years. Given that the galaxy contains, per Khatchadourian, ten billion “Earth-like planets,” and that, per Bostrom, they have precipitated “a sum total of zero alien civilizations that developed technologically to the point where they become manifest to us earthly observers” (italics his), why have we not been visited or even contacted by life forms originating elsewhere?
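Those figures check out on the back of an envelope; the 20-million-year canvass presumably assumes probes that spread out, such as self-replicating ones, rather than a single craft:

```python
# Quick arithmetic check of the figures above (approximate values).

SPEED_OF_LIGHT_MPS = 186_282          # miles per second
probe_speed = 0.01 * SPEED_OF_LIGHT_MPS
print(round(probe_speed))             # ~1,863 mi/s, matching "1,860 miles per second"

# The Milky Way is roughly 100,000 light-years across; light covers one
# light-year per year, so a probe at 1% of light speed travels 0.01
# light-years per year, and even a single one-way crossing takes:
GALAXY_DIAMETER_LY = 100_000
print(GALAXY_DIAMETER_LY / 0.01)      # 10,000,000 years
```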
Point 7: As Bostrom suggested, they may just plain not exist, having been stopped short of spacefaring by any of many “Great Filters,” such as never experiencing the life-starting spark we still don’t understand, not progressing beyond single-celled organisms, or succumbing to asteroid strikes or stellar disturbances.
Point 8: Is it possible that intelligent life, in effect, carries the seeds of its own destruction, that certain types of technology, sure to be developed at some point, will cause all life on a planet to become extinct? Could we ever know? Bostrom philosophized on this, as have others in the past few decades.
Point 9: Sentient creatures elsewhere could consistently be more like whales than humans, with little or no use of tools and, though intelligent, bound to water or some other environment hard to leave.
Point 10: Between
these limitations and the restriction of the speed of light, we simply won’t be
contacted all that often. It could be
tens of thousands of years, or more, between aliens’ appearances in our space.
Point 11: We have no reason to assume that extremely advanced creatures from elsewhere would even be visible to us. Even if they chose to arrive in person, instead of watching us through the equivalent of super-powerful telescopes (and if they were half a galaxy away, they would now be watching what we were doing around 48,000 BC), they could be cloaking themselves by staying outside our visible light range. We have no idea what beings could do after thousands of years of post-Industrial Revolution progress, let alone millions, so we can’t rule anything out.
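That 48,000 BC parenthetical is simple light-travel arithmetic, assuming for illustration a viewing date of about 2016 and the Milky Way’s roughly 100,000-light-year width:

```python
# Light takes one year to cross each light-year, so distant observers
# see Earth as it was that many years ago.

galaxy_diameter_ly = 100_000          # approximate width of the Milky Way
distance_ly = galaxy_diameter_ly / 2  # "half a galaxy away"
lookback_years = distance_ly          # one year of delay per light-year
print(2016 - lookback_years)          # about -48,000, i.e., roughly 48,000 BC
```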
---
That last sentence applies to artificial intelligence and
superintelligence as well. Don’t say I
didn’t warn you.