The idea of nonhuman intellectual capability has been around since at least 98 years ago, when author Karel Capek introduced a now-familiar word, “robot,” in his play R.U.R. (Rossum’s Universal Robots). Since then, the field has often gone under the name “artificial intelligence.” As I have written before, that has been a misnomer, as everything accomplished in it has been simply algorithmic, limited to situations where something can be programmed to execute an “if a happens, then do b” sequence, which is not what we mean by intelligence.
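To make that distinction concrete, here is a minimal sketch, with a hypothetical device and invented numbers, of the kind of explicitly programmed “if a happens, then do b” logic I mean; every response is spelled out in advance, and the program can never act outside its table:

```python
# A hypothetical thermostat rule: every behavior is written out by a
# programmer ahead of time, which is what "algorithmic" means here.
def thermostat_action(temperature_f: float) -> str:
    if temperature_f < 65:          # if a happens...
        return "turn heat on"       # ...then do b
    elif temperature_f > 75:
        return "turn cooling on"
    else:
        return "do nothing"

print(thermostat_action(60))        # -> turn heat on
```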
Over the past few years, though, the field has become more prominent, and its developments have reached many ordinary consumers. People have published articles about “machine learning,” which, according to one definition, “is an application of artificial intelligence (AI) that provides systems the ability to automatically learn and improve from experience without being explicitly programmed.”
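As a hedged illustration of that definition, consider this toy model, again with invented data, which is never told the rule but adjusts itself from examples until it behaves as though it knows one:

```python
# A one-weight "perceptron" that learns when to turn the heat on from
# labeled examples alone; no programmer ever writes the if-then rule.
samples = [(50, 1), (55, 1), (75, 0), (80, 0)]   # (degrees F, heat on?)

weight, bias = 0.0, 0.0
for _ in range(100):                   # repeated passes over "experience"
    for temp, wanted in samples:
        x = temp / 100.0               # scale the input
        guess = 1 if weight * x + bias > 0 else 0
        error = wanted - guess         # -1, 0, or +1
        weight += 0.1 * error * x      # nudge toward the correct answer
        bias += 0.1 * error

# The cutoff temperature was never written down; it emerged from the data.
print(1 if weight * 0.60 + bias > 0 else 0)   # -> 1: heat on at 60 degrees
```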
That technology has been credited with, among other things, reaching world-class status in go, a game that, along with bridge, poker, and chess, the most skilled and motivated humans can spend a lifetime learning without coming close to solving comprehensively. We have also seen a variety of non-fundamental but large incremental improvements in home management devices and children’s friendship-robot toys. Classic
questions, such as “Will Artificial Intelligence Become Conscious?” (Subhash
Kak, Live Science, December 10) and
“A.I. Will Transform the Economy. But
How Much, and How Soon?” (Steve Lohr, The
New York Times, November 30) have returned to the press, and salient concerns
about the latest playthings have been raised by, among others, MIT professor Sherry Turkle, still the leading figure on the social side of technology after decades, whose “Why these friendly robots can’t be good friends to our kids” (The Washington Post, December 7)
was released almost simultaneously with “Should Children Form Emotional Bonds
With Robots?” (Alexis C. Madrigal, The
Atlantic, December).
All of this, though, is preliminary to three higher-level
questions. Is artificial intelligence
still algorithmic? What effect, beyond
the established factors of globalization and efficiency and what we have
already been able to project about automation, will it have on American and
world employment? And based on where it
is actually going, is it dangerous?
The first question’s answer is yes. Getting computers to program themselves is revolutionary; cutting out the slowest and least effective parts, the people, will prove to be a huge improvement, and has already brought unquestionably fundamental gains, with many more to follow in the months, years, and decades ahead. The process can also be so fast and complex that many such systems cannot explain their results in forms succinct enough for human understanding. However, machine learning itself is still limited to computational procedures, with nothing else under the surface.
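To suggest why such systems cannot explain themselves, here is a small sketch, with made-up weights standing in for the millions a real system would hold; after training, the only available “explanation” for any answer is the raw numbers themselves:

```python
# An invented two-layer network: its entire "reasoning" is a table of
# numbers with no verbal rationale attached to any particular output.
import random

random.seed(0)
# Random weights stand in here for weights a real system would learn.
hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
output = [random.uniform(-1, 1) for _ in range(4)]

def predict(features):
    # Each hidden unit mixes all inputs; the output mixes all hidden units.
    acts = [max(0.0, sum(w * f for w, f in zip(row, features))) for row in hidden]
    return sum(w * a for w, a in zip(output, acts))

print(predict([0.2, 0.5, 0.9]))  # a score...
print(hidden)                    # ...and the only "explanation" on offer:
                                 # raw weights, not a human-readable reason
```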
On the second issue, we have five different responses. In the October 29 Salon article “Will the AI jobs revolution bring about human revolt, too?,” Kentaro Toyama saw more to artificial intelligence than is actually there, and interpreted author Ray Kurzweil’s old Singularity prediction as meaning that “by 2045, computer intelligence will match or exceed human abilities in every way.” While it may be true that even now a product can come up with “eerie, dream-like images that seem genuinely creative and uncomfortably human,” that does not mean that such things are truly either, and employment requiring authenticity there will not go away. In “Who’s afraid of the A.I. wolf?” in the same publication seven days later, Crystal Point gave us the opposite view, saying correctly that “scientists still haven’t pinpointed the actual brain processes that make up awareness, and philosophers have not reached a consensus on the nature of this baffling state.” In the November 9 Motley Fool, Chris Neiger went back to the first position in “Artificial Intelligence Is Already Common – and It’s About to Take Over,” citing a study claiming that 38% of American jobs could be lost to automation by 2030, a valid prediction in any event, and that artificial intelligence could “be capable of performing any task currently done by humans,” if not up and running in all of them, by 2060. Lohr’s article above presented both sides, and a set of specific timelines, with 25%, 50%, and 75% chances of success polled from “hundreds of attendees at two well-regarded AI conferences,” turned up in “Human obsolescence” in The Economist’s The World in 2018 issue, published in December. A few of the capabilities rated included “fold laundry” (a 25% chance by 2018, 50% by 2022, and 75% by 2031) and “retail salesperson” (25% by 2022, 50% by 2029, and 75% by 2048), ending with “full automation of labor” (25% by 2071, 50% by 2141, and 75% by 2241). Although they were polled at AI conferences, these estimates also cover existing forms of automation; even so, they are enough to tell us that even those in the field do not think any kind of “singularity” is likely within anything like the 28 years remaining before Kurzweil’s 2045.
On the third question, it is still a real concern that such
machines will program themselves with systems that take living beings to be
unnecessary, redundant, or even detrimental to their purposes. The movie The
Terminator may now be 33 years old, but its premise of “autonomous goal-seeking
programs” failing to incorporate basic human assumptions is as current as
today’s news. Those involved with
artificial intelligence must take a line from another 1980s popular culture
work, Donald Fagen’s song “I.G.Y.,” and ensure that if we have “a just machine
to make big decisions,” it is “programmed by fellows with compassion and
vision.” That is true whatever the
eventual effects of artificial intelligence.