Over the past few months, the focus of articles on human-replacing technology has shifted. Autonomous vehicle news has slowed from constant progress reports to a trickle, but artificial intelligence (AI) issues are still drawing coverage. Here are four such stories, all from The New York Times.
The first, “How Do You Govern Machines That Can Learn? Policymakers Are Trying to Figure That Out,”
by Steve Lohr on January 20th, wasn’t specific. It reminded us, though, that “today’s
machine-learning systems are so complex, digesting so much data, that
explaining how they make decisions
may be impossible” (italics Lohr’s). I
still believe that AI is only algorithmic, but it now develops its own computational procedures, which are often already too large to explain.
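To make that scale concrete, here is a minimal sketch using synthetic data and a small, arbitrarily chosen scikit-learn model; both are my own assumptions, not anything from Lohr's article. The point is only how much gets learned: even a toy classifier's "decision procedure" is thousands of numbers, not rules anyone could narrate.

```python
# A minimal sketch, not from Lohr's article: synthetic data and a small,
# arbitrarily chosen scikit-learn network, shown only to illustrate scale.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=30, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(100, 100), max_iter=500,
                      random_state=0).fit(X, y)

# The learned "decision procedure" is just this pile of numeric weights.
n_weights = sum(w.size for w in model.coefs_) + sum(b.size for b in model.intercepts_)
print(f"learned parameters: {n_weights}")        # roughly 13,000 numbers
print("prediction for one case:", model.predict(X[:1])[0])
```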
As to whether we should trust systems whose methods are more accurate than our own, “an M.I.T. computer scientist and breast cancer survivor” said that “you have to use” machine-generated algorithms predicting that disease if they are objectively best. Yet, as we know from recent attitudes toward driverless cars, not everyone will consent to that. Lohr also discussed two issues that he and everyone else seem to conflate: AI’s poorer recognition of female and nonwhite faces, a technical shortcoming that more work can address, and the question of how to use controversial but correct data.
Next was Eduardo Porter’s February 24th opinion-section piece “Don’t Fight the Robots. Tax Them.” Important issues he touched on were “how do you even define a robot to tax it?,” that before applying levies to robots we should first withdraw accelerated depreciation and other tax subsidies, and that a smaller workforce pays less income tax. His suggestions included having robot-owning businesses pay the taxes formerly paid by the workers they lay off (good, as it assigns the cost to the cost-causer) and a per-robot tax (OK, if we can agree on what robots are). I think we would do better to charge corporate income tax on a sliding scale, with companies supporting more full-time-equivalent jobs paying less; given Porter’s idea of taxing “the ratio of a company’s profit to its employee compensation,” he almost proposed that himself.
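To show what such a sliding scale might look like in practice, here is a hypothetical calculation. The formula, rates, and cutoffs are mine, invented for illustration; they do not come from Porter's column.

```python
# A hypothetical sliding-scale corporate tax: the effective rate falls as
# employee compensation grows relative to profit. All numbers are invented.
def sliding_scale_rate(profit: float, employee_compensation: float,
                       base_rate: float = 0.28, floor_rate: float = 0.15) -> float:
    """Effective tax rate that shrinks as compensation rises relative to profit."""
    if profit <= 0:
        return 0.0
    ratio = employee_compensation / profit       # more labor per dollar of profit
    discount = min(ratio, 1.0) * (base_rate - floor_rate)
    return base_rate - discount

# Two firms with the same profit, one heavily automated, one labor-intensive:
print(sliding_scale_rate(profit=10_000_000, employee_compensation=1_000_000))   # ~0.267
print(sliding_scale_rate(profit=10_000_000, employee_compensation=12_000_000))  # 0.15
```

The automated firm pays close to the full rate while the labor-intensive one reaches the floor, which is the incentive I have in mind.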
The last two were written by Cade Metz and published March 1st. The content of “Is Ethical A.I. Even Possible?” didn’t support that headline but instead focused on two concerns: the facial-recognition shortcomings noted above and the growing unwillingness of AI researchers to contribute to autonomous weapons systems. As Metz mentioned, the AI Now Institute, per
its website “an interdisciplinary research center dedicated to understanding
the social implications of artificial intelligence,” has been formed at New
York University. AI can certainly be
ethical, but we will not all agree on what is right to do with it and what is
not.
Finally, Metz’s “Seeking Ground Rules for A.I.” proposed ten
overarching principles in the field.
They were transparency (in design, intention, and use of the
technology), disclosure (to users), privacy (allowing users to refuse to have
their data collected), diversity (of the development teams, presumably in race
and sex), bias (in input data), trust (self-regulation), accountability (“a
common set of standards”), collective governance, regulation, and
complementarity (to limit AI as something for people to use instead of
something to replace them). A good start, one that may or may not go a long way without major changes.
Beyond all of these, we have a query to which AI will force
an answer. It is not a pleasant one, but
we must think about it. As Lohr almost
stated, blacks, whites, men, women, gays, and straights, to name the most
common but hardly all identity groups, do not have identical behavioral
compositions. As the systems determine
differences between sexes and races, they will use them to identify criminal
suspects, recommend hiring or not hiring, accept or refuse mortgage and other
loan requests, determine optimal housing, and make or contribute to an almost
infinite set of other large life-affecting decisions. When algorithms are assembled using contextless
data, it is inevitable that many will incorporate these factors. Even if these six categories and more like
them were expressly blocked from consideration, proxies such as geographical
location would bring them right back in.
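Here is a minimal sketch of that proxy effect, using synthetic data I made up (the feature names and numbers are hypothetical): the protected attribute is never given to the model, yet its decisions still split along group lines because location stands in for it.

```python
# Synthetic illustration of the proxy problem. The protected attribute
# ("group") is never an input, but "zip_code" is strongly tied to it, so the
# model's decisions still diverge by group. All data here is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)                    # protected attribute, excluded from X
zip_code = group * 10 + rng.integers(0, 3, n)    # location correlated with group
credit = rng.normal(600, 50, n)                  # neutral feature, same distribution for both
# Historical decisions in the training data already differ by group:
approved = ((credit > 580) & (rng.random(n) < 0.4 + 0.4 * group)).astype(int)

X = np.column_stack([zip_code, credit])          # note: group itself is not a column
model = LogisticRegression(max_iter=1000).fit(X, approved)
preds = model.predict(X)

# Predicted approval rates still split along group lines via the zip_code proxy.
print("predicted approval rate, group 0:", preds[group == 0].mean())
print("predicted approval rate, group 1:", preds[group == 1].mean())
```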
So here is the question: What do
we do when the truth is racist, sexist, homophobic, or heterophobic? The answer we develop will mean more for the
future of AI than any further technical progress.