In some ways, AI is excelling, with a shoulder-high pile of successful specific applications. Yet in others, such as matching human cognitive abilities, it seems to be motionless. What do I mean?
First, “It’s
Smart, But for Now, People Are Still Smarter” (Cade Metz, The New York Times,
May 25th). This Sunday piece by the paper’s most prominent AI writer, subtitled “The titans of the tech industry say artificial intelligence will soon match the powers of human brains. Are they underestimating us?” addressed
the coming of artificial general intelligence (AGI), which OpenAI CEO Sam
Altman had told President Donald Trump “would arrive before the end of his
administration,” and Elon Musk “said it could be here before the end of the
year.” This AGI “has served as shorthand
for a future technology that achieves human-level intelligence,” but has “no
settled definition,” meaning that “identifying A.G.I. is essentially a matter
of opinion.” At the time the article appeared,
“according to various benchmark tests, today’s technologies are improving at a
consistent rate in some notable areas, like math and computer programming,” yet
“these tests describe only a small part of what people can do,” such as knowing
“how to deal with a chaotic and constantly changing world,” at which AI has not
excelled. There is no clear reason why it
should be able to jump from huge specific competences to matching overall human
intelligence, which “is tied to the physical world,” and “that is why many…
scientists say no one will reach A.G.I. without a new idea – something beyond
the neural networks that merely find patterns in data.” In the four months since that article came out, I have seen no sign of any such idea.
On a related
point, De Kai wrote on “Why AI today is more toddler than Terminator” (freethink.com,
June 9th). That scarily predictive 1980s movie series gave many people their blueprint for how AI, relentlessly effective at what the article calls “autonomous goal-seeking,” could be amoral and lapse into evil, and it has underlain people’s fears about the technology ever since. Yet
AI “relies less on human labor to write out digital logic and much more on
automatic machine learning, which is analog,” meaning that “today’s AIs
are much more like us than we want to think they are,” and “are already
integral, active, influential, learning, imitative, and creative members of our
society.” Overall, “AIs are our
children. And the crucial question
is: How is our – your – parenting?”
A major new ChatGPT release is gigantic news in the AI industry. Soon after the latest one, we read “How GPT-5
caused hype and heartbreak” (Alex Hern, Simply Science, The Economist,
August 13th). It did well in
some ways, “hitting new highs” by “showing improvements over its predecessors
in areas including coding ability, STEM knowledge and the quality of its health
advice,” but some users of its older 4o model “have bonded with what they perceived as its friendly and encouraging personality,” which GPT-5 did not share, making the switch they were steered into “a very personal loss.” Apparently, the company did not expect that.
A wider-scope
issue appeared in “MIT report: 95% of generative AI pilots at companies are
failing” (Sheryl Estrada, Fortune, August 18th). “Despite the rush to integrate powerful new
models, about 5% of AI pilot programs achieve rapid revenue acceleration; the
vast majority stall, delivering little to no measurable impact on
P&L.” The study blamed “flawed
enterprise integration,” as the software doesn’t “learn from or adapt to
workflows.” Working with vendors functioned
better than “internal builds,” and “back-office automation,” such as
“eliminating business process outsourcing, cutting external agency costs, and
streamlining operations,” fared better than “sales and marketing tools,” which
were absorbing half of generative AI budgets.
How
reasonable is it to consider that “A.I. May Just Be Kind of Ordinary” (David
Wallace-Wells, The New York Times, August 20th)? The author contrasted the 2023 view among
“one-third to one-half of top A.I. researchers” that “there was at least a 10
percent chance the technology could lead to human extinction or some equally
bad outcome,” with “A.I. hype… passing out of its prophetic phase into
something more quotidian,” similar to what happened with “other charismatic
megatraumas” such as “nuclear proliferation, climate change and pandemic risk.” An April paper by two “Princeton-affiliated
computer scientists and skeptical Substackers” claimed that we should see artificial
intelligence “as a tool that we can and should remain in control of, and… that
this goal does not require drastic policy interventions or technical
breakthroughs.” Given what AI has done already, though, “the A.I. future we were promised, in other words, is both farther off and already here.” With so
much available right now and so much going nowhere, that is a better summary of
this post than I can write, so I’ll stop there.