In some ways, this is an old topic. At least as far back as the 1960s, many
people have been concerned that computers, robots, and other technological
creations have the potential to do more harm than good. It is now 50 years since the cutting-edge
machine HAL was graphically and effectively portrayed in 2001: A Space Odyssey,
killing a crew member to follow its highest-priority directive of mission
success, and 34 since The Terminator
reinforced the dangers of what one of its characters accurately called
“autonomous goal-seeking programs.”
Since then, artificial intelligence has crept into the mainstream, with
products such as Amazon Echo Show and Google Home taking over small household
tasks for millions and making large jumps in adoption in the past year. Great Britain, with its 65 million people, has a
chance, per Jeremy Kahn in the October 14th Bloomberg.com, to add the equivalent of $837 billion to its economy
with this technology over the next 17 years, and the potential in the United States is far
higher.
Even more than driverless vehicle technology,
artificial intelligence has attracted efforts to regulate and limit it. As Andrew Burt put it in “Leave A.I. Alone” (The New York Times, January 4th),
“December was a big month for advocates of regulating artificial intelligence,”
with local and national bills setting the stage for its control. Yet such governance has not actually
happened. The best sources now on how we
should deal with artificial intelligence are the commentators.
Here are two.
The March 31st Economist titled a 12-page “special report” “GrAIt Expectations,”
which started with the observation that “artificial intelligence is spreading
beyond the technology sector, with big consequences for companies, workers and
consumers.” It touched on data mining, a
perfect application for this technology, since while such work seems to demand
intelligence it is really only computational, but, in only the second body-text
paragraph, jumped the rails by naming a consulting company, Accenture, that
uses it to “pick the best candidates,” something no machine, or human for that
matter, can consistently do. I liked better the section’s
supply-chain-progress heading “In algorithms we trust,” which is still what
artificial intelligence is all about, even if modern-day computing power can do
such things as determine optimal multi-stop routes, for which, with 25
locations to visit, there are 15 septillion possibilities (a quick check of
that figure follows this paragraph), and replace human customer service
representatives with automata that are in many ways better. The set of articles
touched on a rapidly brewing artificial-intelligence controversy: what we will
do with information such as identifying sexual orientation, detecting unusual
opinions, and potentially determining that members of certain groups may be, in
general, less suited for specific employment or financial treatment. It was relatively easy for medical scientists
to disregard Nazi-experiment findings, since those were of little value, but if
the latest and most powerful data mining resource were to “determine,” for
example, that even when controlling for income, family background, credentials,
and every other variable it can find, blacks are less successful at engineering
jobs, we would have hard decisions to make.
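As for that septillion figure, here is my own back-of-the-envelope check, not the Economist's, assuming it counts the 25! possible orderings of the 25 stops. A few lines of Python confirm it:

    import math

    # Number of possible orderings of 25 stops: 25 factorial
    orderings = math.factorial(25)
    print(orderings)           # 15511210043330985984000000
    print(f"{orderings:.2e}")  # 1.55e+25

That is about 15.5 septillion, consistent with the report's number, and it shows why checking every route by brute force is hopeless even at this modest scale.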
The second piece is Tad Friend’s May 8th New Yorker “How Frightened Should We Be
of A.I.?”. In this stunning and
comprehensive piece, Friend started with the difference between “artificial
narrow intelligence,” which harmlessly powers everything from Roombas to
refrigerators, and “artificial general intelligence,” the potential 2001 or Terminator-style version, with prospects alarming even to the likes
of Elon Musk, Stephen Hawking, and Alan Turing.
The strictly algorithmic nature of artificial narrow intelligence,
limited or not, has shown that intuition, long believed necessary for
success in Go, is at least sometimes computational and therefore within the
range of computers, including the one that beat a major champion at that game
two years ago.
That finding about intuition casts doubt on the expectation that many tasks will
always require live people. Friend cited
computer scientist Larry Tesler as saying that human intelligence “is whatever
machines haven’t done yet.” Just as an
atheist might say that religion as commonly practiced fills in only the current
gaps in knowledge, so that with today’s understanding people no longer think that God moves the planets, it
could be that only our failure to understand how to reduce all human abilities
to if-A-then-B thinking is stopping us from seeing that artificial intelligence
can someday handle anything. Indeed,
computers are already, per Friend’s citations and examples, passing the Turing test
by writing in the style of petulant 13-year-olds and otherwise pretending to
vary from the linear sequences we expect of them.
Three classic philosophic issues surface in
Friend’s article as well. The presence or
absence of free will, or whether people choose their actions themselves, may be
resolved by further artificial intelligence achievements. The same is true of the difference between
feelings and logical thoughts. The
question of whether we would want to make our planet “into a highly enriched
zoo environment that’s really fun for humans to live in” may thus force itself
on us. There is much more here, and I
recommend that anyone with interest in these topics read it; it is at https://www.newyorker.com/magazine/2018/05/14/how-frightened-should-we-be-of-ai.
After a one-week break for the latest employment situation,
I will continue this topic on June 8th with
artificial-intelligence-related observations for employers, employees, and the
rest of us.