From March 6th to March 20th, there was a string of articles: two about areas of true AI achievement, one questioning what may or may not have happened earlier this year, and two trying to persuade us of AI's ability to reach certain lofty heights.
In the first,
we saw how “Taco Bell shows off AI ‘coach’ following massive digital tech
investment” (Pilar Arias, Fox Business, March 6th). Already, “about 500 Taco Bell U.S. locations
have AI voice technology to take drive-thru orders,” up from 100 nine months
before, and its Byte software, at least partially in use at 25,000 Taco Bell
and KFC restaurants, “is widely scaled and enables operational consistency and
restaurant manager efficiency,” including “online and mobile app ordering,
point of sale, kitchen and delivery optimization, menu management, inventory
and labor management, and team member tools.”
We should see, no later than next year, how well that is
working out.
The other article, “A.I. Saved His Life by Discovering New Uses for
Old Drugs” (Kate Morgan, The New York Times, March 20th), told the story of a patient. The wife of a man “battling a rare blood
disorder” and looking at near-certain death contacted a doctor they had met,
who used an AI model to propose “an unconventional combination of chemotherapy,
immunotherapy and steroids previously untested.” The patient was “responding to treatment”
within a week, and within months was healthy enough for the “stem cell
transplant” he needed to put him into remission. In other cases as well, the technology can
provide “a systematic way” of assessing “a treasure trove of medicine that
could be used for so many other diseases,” an approach a Harvard Medical School
professor called an “enticing alternative” for treating rare afflictions. Other successes include an AI proposal to use
isopropyl alcohol inhalation for nausea, which “worked instantly,”
ADHD-dedicated amphetamines to help “periodic paralysis in children with a rare
genetic disorder,” a drug used for Parkinson’s helping “patients with a
neurological condition” to “move and speak,” and more. This immensely valuable work,
assimilating existing information on side effects (desirable ones, in these cases)
and other medication properties more voluminously than humans could,
may be a perfect task for AI.
Next came “How long
will the DeepSeek euphoria last?” (Don Weinland, The Economist, March 15th). The author observed that “many China
watchers are now trying to figure out what is real and what is froth,” as “it
is worth keeping in mind that China is prone to asset bubbles,” and tension
between China and the US could mean that “some of the green shoots might be
short-lived.” We still do not have the
well-established truth about DeepSeek’s asserted ability to produce AI models
at a small fraction of the cost other companies have paid, and until we do, any
comments about its future are premature.
The first tall
peak was the subject of “Lila Sciences Uses A.I. to Turbocharge Scientific
Discovery” (Steve Lohr, The New York Times, March 10th). Per Lohr, “the big, inspiring A.I.
opportunity on the horizon, experts agree, lies in accelerating and
transforming scientific discovery and development.” Well, there are others, one of which we will
rediscover in the next paragraph. The
Cambridge, Massachusetts, startup in the title “had worked in secret for two
years to build scientific superintelligence to solve humankind’s greatest
challenges.” It has “already… generated
novel antibodies to fight disease and (has) developed new materials for
capturing carbon from the atmosphere,” and “turned those experiments into
physical results in its lab within months, a process that most likely would
take years with conventional research.” As
a result, “many scientists” think “that A.I. will soon make the
hypothesis-experiment-test cycle faster than ever before,” and it “could even
exceed the human imagination with inventions.”
Although “the early projects are still a long way from market-ready
products,” the company “will work with partners to commercialize the ideas
emerging from its lab.” So there is
plenty more to do, and while there is reason for real optimism, comprehensive success
is still an unknown amount of time away.
Even more
optimistic was New York Times technology columnist Kevin Roose, as he
showed in that publication’s “Why I’m Feeling the A.G.I.,” published on March
14th and in the Sunday, March 16th print edition. He wrote, “I believe that very soon – probably
in 2026 or 2027, but possibly as soon as this year – one or more A.I. companies
will claim they’ve created an artificial general intelligence, or A.G.I., which
is usually defined as something like ‘a general-purpose A.I. system that can do
almost all cognitive tasks a human can do.’”
That means “we are losing our monopoly on human-level intelligence, and
transitioning to a world with very powerful A.I. systems in it.” “Powerful A.I.” will also “generate trillions
of dollars in economic value and tilt the balance of political and military
power toward the nations that control it.”
Reasons he names for his position are that “the insiders are alarmed,”
“the A.I. models keep getting better,” and “overpreparing is better than
underpreparing.” This viewpoint harkens
back to two years ago, before most people came to see AI’s impediments,
shortcomings, and the doubts surrounding it more clearly. There
have been recent calls for those producing the technology to focus more on
specific applications, some as strong as those described in the first two
pieces here, and less on the hope of AGI; this piece does not cancel
those calls out. So the experts disagree,
and the rest of us do not know either. As we get closer, I will keep
you posted. In the meantime, this may
have been the best all-around month for AI since February 2023.