Before the pandemic struck, I called the use of artificial intelligence the second most important current American issue, behind but related to electronic surveillance. The problem is not AI itself, but what we will allow it to do, and how we will react when it uncovers information we are not happy to learn. Beyond its expected incremental progress, what has reached the press about it recently?
We got a level-setting summary in the February 23rd New York Times from Craig S. Smith, “A.I. Here, There, Everywhere.” Common now are “conversations” with devices, which we order, in sentences reminiscent of those addressed to the computer HAL 9000 in the now 54-year-old movie 2001: A Space Odyssey, to turn on lights, put on the heat, start the oven, and so on. Handy, but we may come to see today’s capabilities as “crude and cumbersome,” and, as devices learn our regular patterns and report deviations to systems or people that may pass them on when we don’t want them to, “privacy remains an issue.” AI is now being packaged into humanoid “realistic 2D avatars of people,” which can be used for the likes of tutoring, and, by following commands to write software, is being used in effect as a fifth- or sixth-level computer language. Of course, we can expect much more.
Another AI application, in this case in place for a decade or more, has a growing set of countermeasures, some described in Julie Weed’s March 19th New York Times piece “Résumé-Writing Tips to Help You Get Past the A.I. Gatekeepers.” Weed recommended “tailoring your résumé, not just the cover letter, to each job you are applying for,” using the same keywords as in the advertisement, and using words like “significant,” “strong,” and “mastery.” The screening software will evolve over time, as will applicants’ best responses.
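Weed’s advice amounts to matching the screener’s keyword list. As a minimal illustration, not any vendor’s actual software, here is a hypothetical keyword-overlap scorer; the stop-word list, sample advertisement, and sample résumés are all invented for the sketch.

```python
import re

STOPWORDS = {"the", "and", "a", "an", "to", "of", "in", "for", "with", "on"}

def keyword_score(resume_text: str, job_ad_text: str) -> float:
    """Fraction of the job ad's keywords that also appear in the resume.

    A toy stand-in for applicant-tracking screening; real systems are more
    elaborate, but the keyword-overlap idea is the same.
    """
    def tokens(text: str) -> set[str]:
        return set(re.findall(r"[a-zé]+", text.lower())) - STOPWORDS

    ad_keywords = tokens(job_ad_text)
    if not ad_keywords:
        return 0.0
    return len(ad_keywords & tokens(resume_text)) / len(ad_keywords)

# Hypothetical usage: echoing the ad's own wording raises the score.
ad = "Seeking analyst with strong Python mastery and significant SQL experience"
generic = "Data cruncher familiar with programming and databases"
tailored = "Analyst offering strong Python mastery and significant SQL experience"
print(keyword_score(generic, ad))   # low overlap
print(keyword_score(tailored, ad))  # most ad keywords matched
```

Nothing here is more than word counting, which is exactly why tailoring each résumé to the advertisement’s vocabulary works as a countermeasure.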
The headline of Cade Metz’s March 15th piece, also in the Times, asked “Who Is Making Sure the A.I. Machines Aren’t Racist?” Metz asserted that AI “is being built in a way that replicates the biases of the almost entirely male, predominantly white work force making it,” and defended that with examples: systems poor at identifying the faces of blacks, a six-year-old AI identification of a black man as a gorilla, and a set of programs trained on faces that were about 80% white, approximating the general population. All of that, if legitimate, has been, can be, or will be repaired.
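To make the imbalance claim concrete, here is a minimal sketch, with entirely invented numbers, of the kind of check one might run: the demographic makeup of a training set and a face-identification system’s error rate per group. The group labels and counts are hypothetical, not figures from Metz’s article.

```python
from collections import Counter

# Hypothetical training-set composition: roughly 80% of the faces labeled "white".
training_labels = ["white"] * 800 + ["black"] * 100 + ["asian"] * 60 + ["other"] * 40
composition = Counter(training_labels)
for group, count in composition.items():
    print(f"{group}: {count / len(training_labels):.0%} of training data")

# Hypothetical evaluation records: (group, whether the face was identified correctly).
results = ([("white", True)] * 95 + [("white", False)] * 5
           + [("black", True)] * 70 + [("black", False)] * 30)

totals = Counter(group for group, _ in results)
errors = Counter(group for group, correct in results if not correct)
for group in totals:
    print(f"{group}: error rate {errors[group] / totals[group]:.0%}")
```

A gap between the two error rates is the kind of measurable, and therefore repairable, defect the article describes.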
Ted Chiang, in the New Yorker on March 30th, addressed a large underlying AI issue in “Why Computers Won’t Make Themselves Smarter.” He invoked Ray Kurzweil’s Singularity, the point, per Wikipedia, “at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization,” and questioned whether it would ever happen. He cited the example of a certain roundworm, with far fewer brain neurons and other body cells than humans, for which scientists have “mapped every connection” but “still don’t completely understand its behavior.” And while compilers have compiled themselves for many decades, the improvement stops there, exemplifying the inability, conceptually as well as so far empirically, of automata, in contrast with people, to learn from others. These issues are not as clear-cut as Chiang made them seem, but his view is stated well enough to be either refuted or accepted.
The newest is from Frank Pasquale and Gianclaudio Malgieri, “If You Don’t Trust A.I. Yet, You’re Not Wrong,” in the July 30th New York Times. The authors, law professors, argued for more artificial intelligence regulation, but stumbled in explaining why. They seem to have missed the difference between private and public use, that AI cannot be banned simply because it does not always reach optimal conclusions, that discrimination against individuals with certain characteristics may be justified, that more pressing issues such as Covid-19 have caused it to “not appear to be a high-level Biden administration priority,” and that it is useless to talk about “racially biased algorithms” or “pseudoscientific claptrap” if nobody can define those terms.
In a year or so, if the pandemic has faded to pre-2020 levels, we will need to address artificial intelligence, if we can afford to wait that long. Ahead of it on the list of issues needing attention, though, are two others, which, barring large breaking national developments, will be the subjects of my next two posts.