Yes, in the amount of recent press it has received, AI has gone past guaranteed
income, driverless cars, minimum wages, and even robotics. That is something of a paradox, as it still seems
to use only algorithms, which forces its developers to assume that whatever
knowledge they want must be reducible to step-by-step procedures.
We’ll start our set of dispatches from the last seven months
of 2018 with one from a series called “Dispatches.” This entry, by Henry Kissinger, who, if he penned
it himself, is one of the world’s best 95-year-old writers, is titled “How
the Enlightenment Ends” and was published in the June 2018 Atlantic. Kissinger was
concerned that our “new, even sweeping technological revolution” has
“consequences we have failed to fully reckon with” – a sentiment around since
at least George Orwell in the 1940s, even if some of the problems he mentioned,
such as “the ability to target micro-groups” causing politicians to be
“overwhelmed by niche pressures,” are newer.
He warned of the dangers of “an AI program that is acting outside our
framework of expectation,” especially one that learns much more quickly than humans
and can “surpass the explanatory powers of human language and reason,” and considered
the nature of consciousness, which in one of many views arises from computation
and is thus present even in pocket calculators.
Unlike the historical Enlightenment, in which a guiding philosophy
preceded the technology that spread it, Kissinger wrote that we now have “a potentially dominating
technology in search of a guiding philosophy.”
Well-selected and well-refined insights.
In the June 20th New York Times, Steve Lohr asked “Is There a Smarter Path to
Artificial Intelligence? Some Experts
Hope So.” The author first pointed to a
rising problem, that “a growing number of A.I. experts are warning that the
infatuation with deep learning may well breed myopia and overinvestment now –
and disillusionment later,” a view which may be driven as much by concern
about the politically incorrect conclusions that the deep learning, or massive-data-analysis,
systems Lohr discussed sometimes reach as by technical doubts. Although such schemes seem to “learn” by identifying
commonality, they are ultimately based only on computations, running faster than we do
but not making intuitive human-style connections; indeed, per Lohr, “deep
learning comes from the statistical side of A.I.” That theme was also the point of Melanie
Mitchell’s “Artificial Intelligence Hits the Barrier of Meaning,” published November 5th
in the same paper, which said that “today’s A.I. systems sorely lack the
essence of human intelligence: understanding
the situations we experience, being able to grasp their meaning,” and pointed
out how vulnerable such schemes are to hackers, since they may need to relearn
from the beginning when only small things are changed.
One area of artificial intelligence has hit a roadblock,
as Julie Creswell reported on June 20th, also in the New York Times, in “Orlando Pulls the
Plug on Its Amazon Facial Recognition Program.” That police-department effort,
powered by Amazon’s two-year-old Rekognition product, was ended in the wake of
protesters rightfully concerned that it could be used to track them “or others
whom authorities see as suspicious, rather than being limited to individuals
who are committing crimes.” This won’t,
though, be the end of large organizations vacuuming up millions of faces and
attaching them to credit reports, medical files, purchasing histories, cell
phone records, and so on, and it is unrealistic to think your face will not be
with your name in plenty of huge databases within ten years. There will be good and bad aspects to this
capability, which already makes it possible to attach a personal history to a high share of facial
photographs, turning everyone into an even faster and sharper data analyst than
Penelope Garcia on CBS’s television show Criminal
Minds. Even if such technology is
legally restricted, it will still be used.
Moving on to human resources issues, we have, also in the Times, “A.I. as Talent Scout: Unorthodox Hires, and Maybe Lower Pay” (Noam
Scheiber, December 6th). That
piece showed how companies can use a baseball-style Bill James or Moneyball
system to identify winning job candidates lacking conventionally expected
credentials, for whom, per a Columbia economist, the organization “doesn’t seem
to have to compete… as much.” A
fundamental improvement over the long-time use of automated keyword
scanning on resumes, the service Eightfold can look for related but not identical
experiences. One drawback Scheiber named
was that people might feel “a sense of unfairness… if a computer were to make
hiring, firing and promotion decisions” – that made me laugh, as those
verdicts, when made by humans, have been notoriously erratic since long before
the first computer was made. Just as
modern analytics can put chubby or awkward-throwing but superbly hitting players
into major-league careers, this technology can do the same for cubicle-job
candidates whose credentials are slightly off the beaten path.
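The contrast between old-style keyword scanning and Eightfold-style related-experience matching can be sketched with a toy example. This is purely illustrative, not Eightfold’s actual method; the required skills and the adjacency map are hypothetical:

```python
# Toy illustration of resume screening (hypothetical data, not any
# vendor's real algorithm). A keyword scanner credits only exact term
# matches; a related-experience matcher also credits skills known to
# be adjacent to a requirement.

REQUIRED = {"python", "machine learning"}

# Hypothetical adjacency map: having the key skill is treated as
# evidence of the required skill it points to.
RELATED = {
    "pytorch": "machine learning",
    "scikit-learn": "machine learning",
    "django": "python",
}

def keyword_score(resume_terms):
    """Fraction of required skills appearing verbatim on the resume."""
    return len(REQUIRED & resume_terms) / len(REQUIRED)

def related_score(resume_terms):
    """Fraction covered when related experience also counts."""
    covered = REQUIRED & resume_terms
    covered |= {RELATED[t] for t in resume_terms if t in RELATED}
    return len(covered & REQUIRED) / len(REQUIRED)

resume = {"django", "pytorch", "sql"}
print(keyword_score(resume))  # 0.0 -- no exact keyword hits
print(related_score(resume))  # 1.0 -- adjacent experience covers both
```

The unconventional candidate above, invisible to a keyword scan, scores perfectly once related experience is credited, which is roughly the Moneyball point.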
We end on a pessimistic note from Timothy Egan, in the
December 7th New York Times.
As before, we’ve known about threats from “The Deadly Soul of a New
Machine” for a while, but they aren’t the whole story. Yes, October’s Lion Air flight disaster may
have been caused by an excessively dominant, in effect faulty, “advanced
electronic brain,” but such systems have saved many more lives than they have
cost. Saying that one driverless-car pedestrian death means that “there shouldn’t be
any rush… to hand over the steering wheel to a driver without a heartbeat,” and
that a good summary for artificially intelligent devices is “Our invention. Our folly,” is out of touch with the reality
of what such systems are doing now, let alone how they will perform once the bugs
that caused both disasters are removed.
We don’t need naivete, but we can also do without looking only at the
worst. Artificial intelligence is still
only algorithmic, and will continue to present problems, but it is still overwhelmingly
positive. As for its future, stay
tuned.