Friday, December 20, 2024

Artificial Intelligence: A Visit to the Catastrophic Problems Café

One thing almost everyone creating, using, coding, regulating, or just plain writing or thinking about AI feels duty-bound to do is to consider the chance that the technology will destroy us.  I haven’t done that in print yet, so here I go.

In his broad-based On the Edge: The Art of Risking Everything (Penguin Press, 2024), author Nate Silver devoted a 56-page chapter, “Termination,” to the chance that AI will obliterate humanity, or nearly so.  He said there was a wide range of what he called “p(doom)” opinions, or estimates of the probability of such an outcome.  He considered more precise definitions of doom: for example, does it mean that “every single member of the human species and all biological life on Earth dies,” or could it be only “the destruction of humanity’s long-term potential,” or even “something where humans are kept in check” with “the people making the big calls” being “a coalition of AI systems”?  With doom defined as “all but five thousand humans ceasing to exist by 2100,” the average Silver found for “domain experts” on AI was 8.8%, while “generalists who had historically been accurate when making other probabilistic predictions” averaged 0.7%.  The highest expert p(doom) he named was “20 to 30 percent,” but there are certainly larger ones out there.

How would the technology do its dirty work?  One way was described in “A.I. May Save Us, or May Construct Viruses to Kill Us” (The New York Times, July 27th).  Author Nicholas Kristof said that “for less than $100,000, it may now be possible to use artificial intelligence to develop a virus that could kill millions of people.”  The possibilities range from pathogens that kill indiscriminately all the way to something Kristof said “might be possible”: using DNA knowledge to create a product tailored to “kill or incapacitate” one specific person.  Kristof is a journalist, not a technician, but since much of AI thinking is conceptual now, his concerns are valid.

Another New York Times columnist soon thereafter came out with “Many People Fear A.I.  They Shouldn’t” (David Brooks, July 31st).  His view was that “many fears about A.I. are based on an underestimation of the human mind.”  He cited “scholar” Michael Ignatieff as saying that “what we do” was not algorithmic, but “a distinctively, incorrigibly human activity that is a complex combination of conscious and unconscious, rational and intuitive, logical and emotional reflection.”  He also wrote that while engineers claim to be “building machines that think like people,” per neuroscientists “that would be a neat trick, because we don’t know how people think.”

The next month, Greg Norman looked at the problem posed by Kristof above, in “Experts warn AI could generate ‘major epidemics or even pandemics’ – but how soon?” (Fox News, August 28th).  The worry stems from “a paper published in the journal Science by co-authors from Johns Hopkins University, Stanford University and Fordham University,” which flagged AI models’ exposure to “substantial quantities of biological data, from speeding up drug and vaccine design to improving crop yields.”  Although today’s AI models likely do not “substantially contribute” to biological risks, the chance that “essential ingredients to create highly concerning advanced biological models may already exist or soon will” could cause problems.

All of this depends, though, on what AI is allowed to access.  It is and will be able to formulate detailed, deadly plans, but what happens from there?  A Princeton undergraduate, John A. Phillips, in 1976 wrote and submitted a term paper giving detailed plans for assembling an atomic bomb, with all information drawn from readily available public sources.  Although one expert said the device would have had about an even chance of detonating, it was never built.  That is why my p(doom) is very low: less than a tenth of one percent.  There is no indication that AI models can build things by themselves in the physical world.

So far, we are doing well at containing AI.  As for the future, Silver said that if given the chance to “permanently and irrevocably” stop its progress, he would not, because, ultimately, “civilization needs to learn to live with the technology we’ve built, even if that means committing ourselves to a better set of values and institutions.”  We can deal with artificial intelligence; the vastly more difficult challenge is dealing with ourselves.  That’s the last word.  With that, it’s time to leave the café.
