When I got a message containing an annual summary of what happened this year in artificial intelligence from an organization I had thought to be a purely academic think tank, I expected to relay the good and the bad, along with many things of which neither of us was aware. I printed its 59 pages before the endnotes and only later looked it over.
That proved to be just about a waste of paper, which, unfortunately, would have concerned the 2019 Report’s authors more than any lives AI has saved or improved. When I started reading, I found not the summary I had expected but something quite different. The AI Now Institute is not quite what I had thought – true, it is an “interdisciplinary research institute,” but one “dedicated to understanding the social implications of AI technologies.” That does not mean, though, that the group wanted to know about, say, the effect of Alexa devices on dinnertime conversations. Nowhere in the text did I see anything positive about AI’s effects, and when it wandered off the institute’s official purpose, usually into the subject of regulation, that did not change.
The Executive Summary had five bullet points. To start, “the spread of algorithmic management technology in the workplace is increasing the power asymmetry between workers and employers. AI threatens not only to disproportionately displace lower-wage earners, but also to reduce wages, job security, and other protections for those who need it most.” True, but the same goes for globalization, efficiency, and other forms of automation, which, as I have written, are trends unlikely to be reversed or even halted.
The second point was “community groups, workers, journalists, and researchers – not corporate AI ethics statements and policies – have been primarily responsible for pressuring tech companies and governments to set guardrails on the use of AI.” In the short term, yes, but only because larger organizations are slower to respond. Third, “efforts to regulate AI systems are underway, but they are being outpaced by government adoption of AI systems to surveil and control.” Yes, Washington has plenty of uses for the technology, and here, as elsewhere, not only AI itself but the policies around it are evolving as quickly as they can.
The fourth, “AI systems are continuing to amplify race and gender disparities via techniques like affect recognition, which has no sound scientific basis,” gets to the heart of what AI Now seems to be about: complaining about outcome differences between blacks and whites and between men and women. All involved with AI would agree that affect recognition, described here as “a subset of facial recognition that claims to ‘read’ our inner emotions by interpreting the micro-expressions on our face,” is in its infancy, and its being “deployed at scale” in employment interviews, if that is a reasonable way to put it, would be more objectionable if interviews and resumes were not notoriously weak, and, yes, similarly flawed, without it. In another year, leading-edge research on affect recognition will almost certainly point to methods different from those now in use.
Last, “growing investment in and development of AI has profound implications in areas ranging from climate change to the rights of healthcare patients to the future of geopolitics and inequities being reinforced in regions of the global South.” It is a stretch to call computer applications especially bad for the environment, and saying that “training just one AI model,” which can take months, produced as much carbon dioxide as “125 round-trip flights from New York to Beijing” does not refute that. Barring interconnections between health care information systems can cost not only convenience and money but lives, and “the global South,” whatever that is, will not always be left alone.
The 2019 Report also contains 12 recommendations, even sharper and more limiting than the main ideas above. They include banning affect recognition “in important decisions that affect people’s lives and access to opportunities” (in other words, whenever it is used in production); a call to “halt all use of facial recognition in sensitive social and political contexts until the risks are fully studied and adequate regulations are in place” (with the determination of when those two conditions are met presumably left to the likes of the AI Now Institute); a requirement that “the AI industry” “make significant structural changes to address systemic racism, misogyny, and lack of diversity,” apparently with the meanings of those terms mandated, all protected groups represented at 100% of their population shares, and all by-group pay levels exactly equal; several recommendations already in progress; and references to “historic injustices,” undefinable “social harms,” and “diverse cultural approaches to health” (most likely drawn only along black-white, male-female, and maybe gay-straight lines).
For one of many examples of a truer picture of AI, look at David Brooks’s June 24th New York Times column “How Artificial Intelligence Can Save Your Life.” Though a short commercial journalistic piece, it told us that “researchers… were able this year to hook people up to brain monitors and generate natural-sounding synthetic speech out of mere brain activity” (and I doubt that would work only for white straight men), and that analyzing the words used by people texting a suicide help line would greatly augment assessments of their need for emergency intervention (tendencies likewise valid for everyone). It concluded by conceding that “you can be freaked out by the privacy-invading power of A.I. to know you, but only A.I. can gather the data necessary to do this,” and that “if it’s a matter of life and death, I suspect we’re going to” consent to such information use. That is an even-handed conclusion, which, university knowledge-seeking auspices or not, the AI Now Institute, with other “most recent publications” bearing titles like “Discriminating Systems: Gender, Race, and Power in AI,” does not seem to offer. That should tell us something.