Since before ChatGPT burst onto the scene in late 2022, people have been discussing what AI means, what it could do to us, and what about it we need to worry about.  Perhaps strangely, that sort of coverage has been appearing less often in the press.  Most commentators recently have been concerned with whether AI is a “bubble,” a term still lacking a consistent definition, and with the growing resistance to the building of its data centers.  But there has been more.
Perhaps,
then, the title of Ross Douthat’s June 29th New York Times interview
piece, “Are We Dreaming Big Enough?” is appropriate. He wrote that venture capitalist Peter
Thiel’s “projects” had a common thread of a “focus on stagnation – meaning the
loss of ambition, the decline of invention, the collapse of faith in the future,”
to which AI is an exception. Interviewee
Thiel, more than anything else, wanted more – faster progress, opportunities
for people to change their entire bodies, religion to be ensconced as a
“friend” of science, a role for the internet that is not “stagnationist,” additional
“crazy experiments” from smart people, and beyond.  Little of that seems to be on the way, AI progress or not, during the rest of this half-century.
Next, “The AI
revolution means we need to redesign everything; it also means we get to
redesign everything” (Sebastian Buck, Fast Company, August 11th). That doesn’t just fall on professionals at
certain technology companies. “Technical
revolutions create windows of time when new social norms are created, and where
institutions and infrastructure is rethought.
This window of time will influence daily life in myriad ways, from how
people find dates, to whether kids write essays, to which jobs require applications,
to how people move through cities and get health diagnoses.”  Easy enough, but “each of these are design
decisions, not natural outcomes. Who
gets to make these decisions? Every
company, organization, and community that is considering if – and how – to
adopt AI. Which almost certainly
includes you. Congratulations, you’re
now part of designing a revolution.”
What we accept or reject has an inexorable effect on what will happen,
so we are all, in some sense, on the hook for how it will turn out.
One person
with more influence than most asked for a new feature, as “’Godfather of AI’
warns machines could soon outthink humans, calls for ‘maternal instincts’ to be
built in” (Sophia Compton, Fox Business, August 13th).  The requester, Geoffrey Hinton, a “cognitive
psychologist and computer scientist,” thought artificial general intelligence
(AGI) “could be as little as just a few years away.” He compared our situation to being “in charge
of a playground of 3-year-olds” who were “smarter than us,” meaning that
“researchers should prioritize creating AI that genuinely cares about people…
with a drive to protect human life.”
Overall, he said “we need AI mothers rather than AI assistants.”
Maybe,
though, these concerns are too large.
Per David Wallace-Wells in the August 31st New York Times,
“Boosters of A.I. have spent years making it seem magical.  But what if it’s just a ‘normal’ technology –
with huge ramifications nonetheless?” The
author noted that “A.I. hype has evolved… passing out of its prophetic phase
into something more quotidian,” a view which now “seems more like an emergent
conventional wisdom.” A paper written by
two “Princeton-affiliated computer scientists,” titled “A.I. as Normal
Technology,” suggested “we should understand it as a tool that we can and
should remain in control of.” While the
technology’s effects on the stock market and construction (“we’re building
houses for A.I. faster than we’re building houses for humans”) have gone far
beyond expectations, we have also seen “the challenges of integrating A.I. into
human systems” and Microsoft’s CEO telling us that we were “all getting ahead
of ourselves” by anticipating AGI. In
conclusion, though, we don’t “have all that clear an idea of what’s coming
next.”
Agreeing
mostly, Gary Marcus, “a founder of two A.I. companies and the author of six
books on natural and artificial intelligence,” told us on September 3rd,
in the New York Times, that “The Fever Dream of Imminent
Superintelligence Is Finally Breaking.” He
started with OpenAI’s GPT-5, a product that “fell short” (Wallace-Wells also mentioned this), constituting “a step forward but nowhere near the A.I. revolution many had expected.”  xAI’s
Grok-4, “released in July, had 100 times as much training as Grok-2 had, but it
was only moderately better.” And Meta’s
“jumbo Llama 4 model… was also mostly viewed as a failure.”  So if AGI requires products like these to improve drastically, it will not arrive anytime even moderately soon.  Today’s AI is also missing “some core knowledge of the world that sets us up to grasp more complex concepts,” which human beings are “born with.”  In general, “we need a
new approach,” possibly involving newer and older ideas, and “a return to the
cognitive sciences.”
Every so
often, it’s worthwhile to hear that “A.I.’s Prophet of Doom Wants to Shut It
All Down” (Kevin Roose, The New York Times, September 12th).  The diviner is still Eliezer Yudkowsky, cited for years now as having one of the highest p(doom) values, an estimated probability of AI destroying civilization, in the industry.  He has a new book, If Anyone Builds It, Everyone Dies, which restates his case.  His reasons include “orthogonality,” or “the
notion that intelligence and benevolence are separate traits, and that an A.I.
system would not automatically get friendlier as it got smarter,” and “instrumental
convergence – the idea that a powerful, goal-directed A.I. system could adopt
strategies that end up harming humans.”
Yudkowsky has been nowhere near the mainstream on this issue, and he has so far failed to stop or even slow AI development.
The only
reasonably factual piece I have seen since is “Where Is A.I. Taking Us? Eight Leading Thinkers Share Their Visions” (The
New York Times, February 2nd).
There’s a lot here – over 100 paragraphs in response to questions asking
for AI’s impact on medicine, programming, scientific research, transportation,
education, mental health, and art and creativity, on what will happen with AGI,
and on AI’s future in general. The
diversity of answers shows how clearly intelligent, informed people can reach
many different conclusions, as well as emphasize varying aspects of the
technology.
This last
piece shows us that our views on most aspects of AI are nowhere near unified.  We don’t know, and we can’t predict with any accuracy.  My p(doom) is
about one tenth of one percent. What I
think that means is that we have time to understand artificial intelligence.  We will eventually, though the third digit of the year when that happens will not be a 2 and may not even be a 3.  Until then, will AI be closer in significance
to nuclear bombs or copiers? That is for
you to decide.