Five pieces published a few months ago helped us, not by giving views we could all agree with, but by adding to our necessary national dialogue and refreshing us on what’s been happening in this field. Here’s what they had to say on this two-in-one area of artificial intelligence and jobs.
In “Moguls and Killer Robots,” which took up most of page 1 of the June 10th New York Times business section and two-thirds of another page, Cade Metz set out to tell us what happened between information technology titans Mark Zuckerberg and Elon Musk, and ended up providing information that bears repetition. That included Musk’s statements that artificial intelligence was “potentially more dangerous than nukes” and that “we are headed toward either superintelligence or civilization ending,” with which Zuckerberg, perhaps because his own business, Facebook, is less physically hazardous than Musk’s space ventures, at least partially disagreed. We also saw what can happen when business decisions, in this case “a $9 million A.I. contract [Google] had signed with the Pentagon,” conflict with the views of employees, who threatened a “rebellion.” We saw the statement, apparently not obvious to some, from an Oxford research director that “you can now talk about the risks of A.I. without seeming like you are lost in science fiction,” echoed by a Cornell computer science professor’s concern that “the kind of systems we are creating are very powerful… and we cannot understand their impact.” Metz indirectly touched on the difference between narrow AI, which is focused and benign, and general AI, which need not be either, and reminded us how such systems can depart from the usual human ways of solving problems, as when one directed to maximize its score in a boat-racing computer game did so “while spinning in circles, colliding with stone walls and ramming other boats.” Demis Hassabis, the founder of Google’s DeepMind, the effort that created the Go program which beat a major champion, summarized our needs well by saying “we need to use the downtime, when things are calm, to prepare for when things get serious in the decades to come.” We can disagree that all is tranquil now on the AI-danger front, but not that it will at least threaten to get worse later.
“If the Robots Come for Our Jobs, What Should the Government
Do?” Neil Irwin posed this question in
article form in the June 12th New
York Times. He acknowledged that
“lots of smart technologists and futurists are convinced that we are on the
cusp of a world in which artificial intelligence, robotics and other
technologies will make a large portion of today’s jobs obsolete,” and
considered a guaranteed income, along with “overhauling intellectual property
law so that the companies that develop valuable patents and trademarks don’t
have such a lengthy monopoly on their innovations” (will Mickey Mouse be in the
2100 public domain?), and “work-sharing programs” that would, in effect, trim 20% from each of 500 positions instead of fully laying off 100 people, the same total reduction in hours (worthwhile). He quoted someone saying it was “increasingly
crucial that people continually upgrade their skills to keep up with changing
technology” (a clear strategy for individuals, but a cop-out for helping
employment in general), and advocating “making job benefits like health
insurance and retirement funds more ‘portable’” (a direction in which we have already been moving). Now, if only we had similar sets of proposals to deal with globalization and efficiency…
I published many of the points made by Mary Flanagan in “The Rise of the ‘Automacene’: How Robots Will Define the Next Epoch in Human History” (Salon, June 16) between book covers five and six years ago, but
they are also worth reiterating. Flanagan
cited research-driven estimates that automation, timeframe unspecified, has 98% and 99% chances of spelling the end of positions as loan officers and tax preparers respectively, but a less than 1% chance of doing the same for nursing assistants
and mental health social workers. To
prepare for a changing “future of work” and “a future in which unprecedented
unemployment is the norm,” the author recommends “an educated populace” (see my
comment above) and the “ability to interconnect disparate ideas” (my personal
experience from a lifetime of having this skill, and everything I’ve read about
actual workplace trends, say that employers still almost never care about that). There’s much more to our “need to decouple
our jobs from the meaning and identity we expect them to provide us,” an attachment that has been weakening since the 1950s and on which I wrote an entire chapter, but
there’s no arguing with Flanagan’s conclusion that “we’ll only be able to
navigate the upcoming tumultuous changes in society by embracing deep
conversations on what it means to be human in the era of machines,” even though
precious few Americans want to have such conversations.
Two more pieces to follow next week, along with conclusions.