Big topics, little communication. My cupboard is not quite bare; it holds some odds and ends worth sharing, even if they add up to one post instead of the three these matters deserve.
Artificial intelligence, or AI, seems to be springing leaks,
if not in how it is progressing, then in how people deal with it. A stern view on one, by George Maliha et al.
in Harvard Business Review on July 13th, was “To Spur Growth
in AI, We Need a New Approach to Legal Liability.” We hit the issue of which humans are legally
responsible for post-algorithmic technology with driverless cars, which haven’t
spread enough for anything resembling legal precedents, and here we have the
straightforward assertion that “the existing liability system in the United
States and other countries can’t handle the risks” it entails. The authors recommend “revising standards of
care” especially for medical AI applications, granting radiologists, for
example, immunity from malpractice if they provide secondary image reading
after AI provides the first; “changing who pays: insurance and indemnity” including insurers
giving better rates for professionals using favored AI systems; “revamping the rules: changing liability defaults” such as, if autonomous
cars are involved, not automatically blaming a human driver in the striking
vehicle for a rear-end collision;
“creating new adjudicators:
special courts and liability systems” using more sophisticated knowledge
than most judges have; and “ending
liability completely: total regulatory
schemes,” institutionalizing the recognition that in some cases nobody is at
fault. A good start, all of this.
Julian Jacobs addressed another future problem area in the Brookings
TechTank on November 22nd, with “Automation and the
radicalization of America.” Jacobs combined one study assigning mechanization potential to occupations with
another giving demographic data on the people working in them, and found that those more
likely to be replaced by machines “tend to have a dark and cynical view of
politics, the economy, the media, and humanity” and skew left on financial issues
but slightly right on “socio-cultural ones.”
He stopped short of predicting revolutionary activity among these
workers, but if they do indeed lose their jobs, that could happen this decade
or the next.
“Will Robots Really Destroy the Future of Work?” Peter Coy revisited this old overstatement in
the January 24th New York Times, featuring an interview with
labor economist David Autor, who “loves” both robots and unions and wants the
two to be coordinated better, ideally on better-paying jobs. Per Coy, that means realizing that “workers
need training so they can use automation, not be replaced by it.” I see no mention of the number of positions
that we can expect to be lost, and it seems naïve to think it will not be
substantial, even if some are created – the major point of mechanization is to
reduce labor costs, which would not happen if almost as many jobs are created
to work with it. Now, as opposed to two
years ago, we can better justify trading lower-paying positions for fewer
higher-compensated ones, but there is hardly a guarantee that 3%-range unemployment will
last indefinitely. Destroy, no – damage
and change, yes.
“The Writing on the Wall” was a long April 17th
Steven Johnson piece in the magazine section of The New York Times. The subtitle of sorts was “A.I. has begun to
master language, with profound implications for technology and society. But can we trust what it says?” We’re now at the point where such a system
can write good-looking essays proposing plausible solutions to complicated
problems in a second or so, through its abilities to predict missing words and to draw on
massive numbers of sites, not all of them truthful or prudent. The core of this issue is that the machines
themselves cannot judge written material and cannot always identify lies,
meaning human input is still needed. We
also are not avoiding the issue of what AI language models may produce without
corrective human influence, which could well offend or even upend modern
sensibilities. Overall, Johnson’s view
that “the very premise that we are having a serious debate over how to instill
moral and civic values in our software should make it clear that we have
crossed an important threshold” seems appropriate – and solutions may depend on
specific assumptions such as “people are basically good” and “guns in houses
are safe enough,” which could be revealed to all. We have a long way to go, and this piece does
help.
Shrinking to a less general concern, we have Tanya Moore’s
April 19th New York Times “Can A.I. All but End Car
Crashes? The Potential Is There.” We don’t have many autonomous vehicles, but
there are plenty of others with related software – even my ordinary, year-old
Toyota Camry beeps when I cross a center line.
Moore mentioned various other such improvements, in place and in
progress – this area is burgeoning. That
means that even if cars keep their human drivers, we will still gain a lot of safety
and save many lives.
I end with a robot application of smaller import, but of the
kind we can solidly expect. It’s
“Jack in the Box to pilot Miso Robotics’ Flippy 2, Sippy” (Lucas Manfredi, Fox
Business, April 26th). It
will start in only one of the fast-food chain’s locations, and not until late
this year, but the first of these “takes over the work for an entire fry
station” at a 30% production increase, and the second cuts drink spills as it
“efficiently moves cups,” “accommodates a range of cup sizes and groups cups by
order for easy delivery to customers.”
At today’s rates, Flippy 2’s $3,000 per month is less than the cost of even one
full-time fast-food worker, and it will work many more hours. Like it or not, if the trial works, it will
propagate, help the business, and potentially save customers’ time and
money. Look for many more – and don’t forget
these growing and evolving issues, as, headlines or not, they won’t leave us
alone forever.
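The cost comparison behind that last point can be sketched with rough numbers. The $3,000-per-month figure is from the article; the wage, hours, and overhead values below are my own illustrative assumptions, not reported facts:

```python
# Rough, illustrative check of the claim that Flippy 2's reported
# $3,000/month lease undercuts one full-time fast-food worker.
# Wage, hours, and overhead figures are assumptions for illustration only.

ROBOT_MONTHLY_COST = 3_000          # reported lease price, USD per month

hourly_wage = 15.00                 # assumed fast-food wage, USD per hour
hours_per_month = 40 * 52 / 12      # full-time: 40 hrs/week ≈ 173 hrs/month
overhead_rate = 0.25                # assumed payroll taxes plus benefits

# Total monthly employer cost for one full-time worker
worker_monthly_cost = hourly_wage * hours_per_month * (1 + overhead_rate)

print(f"Worker: ${worker_monthly_cost:,.0f}/month")
print(f"Robot:  ${ROBOT_MONTHLY_COST:,}/month")
print(f"Robot cheaper: {ROBOT_MONTHLY_COST < worker_monthly_cost}")
```

Under these assumptions the worker costs roughly $3,250 a month before counting the robot's longer hours, so the comparison holds; at lower wages or overhead the gap narrows.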