With the Capitol insurrection and the consequent second impeachment, it may seem that political stability will be the most important long-term American problem as Covid-19 infections, deaths, and hospitalizations slowly come down. That concern is more urgent now, but it should settle down almost completely by spring. What we can’t forget are the two issues that loomed largest before the virus spread.
In February, I wrote a three-part series on widespread electronic surveillance, ending with five recommended courses of action: allowing people to opt out of phone tracking and face comparisons; giving cellphone-system data the same legal protection as that from landline telephones; banning location sharing by phone apps; holding a referendum on what electronic information law enforcement agencies may collect and use; and mounting a public service campaign to educate people about data-collection sources and how to control them. Soon after that, two articles advanced the issue: Mona Wang and Gennie Gebhart’s March 7th Truthout piece “Schools Are Operating as Testbeds for Mass Surveillance,” and John Seabrook’s March 9th New Yorker article “Dressing for the Surveillance Age.” The first, though as much an editorial as a
news piece, informed us that some school districts send “automated alerts to school administrators, and in some cases, local police” when students explore “sites relating to drugs and violence, as well as terms about mental and sexual
health.” Online searches have long been
less private than we, and especially our children, might think.
The second, an example of how the market can speak, asked, “can stealth streetwear evade electronic eyes?” Seabrook concluded that, with current technology, it can at least some of the time.
As with innocent-looking stickers that fool driverless cars, clothing that to human eyes makes someone “impossible to miss” has been designed under such names as “invisibility cloak” and “Jammer Coat.” Incredible as it may seem, the right patterns can make someone seem transparent to artificial-intelligence networks, with objects behind them as visible as ever. Unlike the automated-vehicle deterrents, the garments are clearly legal and ethical, but it seems only a matter of time until surveillance software catches up.
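For readers who want to see how such evasion could be checked concretely, here is a minimal sketch in Python, assuming a pretrained torchvision person detector and two hypothetical photos of the same scene, plain.jpg and jammer_coat.jpg; it illustrates the general idea only, and is not the software any of these garments were designed against.

# Minimal sketch: compare how confidently an off-the-shelf detector finds a
# person with and without a patterned garment. The file names and the model
# choice are illustrative assumptions, not taken from the articles above.
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
preprocess = weights.transforms()
person_label = weights.meta["categories"].index("person")

def person_score(path: str) -> float:
    """Return the detector's highest confidence that the image contains a person."""
    image = read_image(path)
    with torch.no_grad():
        prediction = model([preprocess(image)])[0]
    scores = prediction["scores"][prediction["labels"] == person_label]
    return scores.max().item() if len(scores) else 0.0

# If the garment works as claimed, the second score should be much lower.
print("plain clothing:      ", person_score("plain.jpg"))
print("adversarial garment: ", person_score("jammer_coat.jpg"))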
News on the other concern, though, did not stop with the coronavirus. “Even the Machines Are Racist. Facial Recognition Systems Threaten Black Lives,” by Eisa Nefertari Ulen in Truthout on March 4th, summarized the main objection to use of this technology: that it gives erroneous matches more often for nonwhites. That problem hit home when, on June 9th, we saw that “IBM Says It Will Stop Developing Facial Recognition Tech Due to Racial Bias” (Hannah Klein, Slate), and, one day later, that “Amazon Pauses Police Use of Its Facial Recognition Software” (Karen Weise and Natasha Singer, The New York Times). Both decisions were for the same general reasons: problems with “Asian and black faces” (Klein) and “misidentifying people of color” (Weise and Singer).
It is possible that such technology has been used more than its reliability justifies, though it has had large successes. The core problem, however, may be something many Americans are unwilling to accept. While the people we call “whites” have origins all over Europe, the Middle East, and beyond, those we call “African Americans” in this country descend mainly from the western African coast, and often intermixed with one another, and with whites, after arriving. The vast majority of Americans of eastern Asian descent are of Han Chinese, Japanese, or Korean ancestry. It may be that the faces of people in those groups simply vary less than those in others, and so require more work to differentiate.
Why did IBM not focus on further improvement instead of
halting efforts? Why did Amazon not
continue using these tools for groups with which they have been more
effective? Why did I read, in “Facial
Recognition Technology Isn’t Good Just Because It’s Used to Arrest Neo-Nazis” (Joan
Donovan and Chris Gilliard, Slate, January 12th, 2021), that
“those who have looked deeply at the values underlying it see [this capability] as deeply flawed, racist, and a debasement of human rights”? The reason is that the ability to automatically recognize faces has been pulled into our national racial impasse. That means, as in too many other areas, that truth about these systems’ design and results is no longer universally sought out, accepted, or even primarily valued.
Where will we go with identification and tracking of people? We don’t know, and the answer matters. Can we get the most from allowing these things without letting them end our privacy forever? That is something we need to focus on as soon as politics and Covid-19 calm down. For now, though, we must be aware that letting these issues resolve themselves may be the worst outcome we could have.