A lot has
happened with AI over the past several weeks.
I’m not talking about projections, assumptions, worries justified or otherwise, 100-to-1 price-to-earnings ratio stock run-ups, market capitalizations, self-serving representations, CEO hijinks, and the other things at the fringes of substantive news that make up the great bulk of writing on the technology. There is real stuff here, so much that I’m calling this piece an expanded edition, appropriate since I won’t be posting next week.
The oldest article is “The great AI power grab” (The Economist, May 11th). It addresses the “awful lot of electricity” the software will need, and asks “where will it come from?” With “Dominion Energy, one of America’s biggest utilities” being “frequently” asked for “several gigawatts,” when the company has only 34 gigawatts installed, it’s getting up there. That power is consumed “at a steady rate,” regardless of sunlight and wind conditions.
It has already started affecting AI companies’ choices of location, and will do so more as long as anyone, anywhere, has the power they need.
In my last AI post it took some digging to show what actual sales were, but how about the number of transactions? That is the metric used in “The 10 most popular AI companies businesses are paying for” (Jordan Hart, Business Insider, May 12th). The list, which includes “specialized tools” as well as “generative AI,” is, in order: OpenAI, Midjourney, Anthropic, Fireflies.ai, ElevenLabs, Perplexity AI, Instill AI, Instantly.ai, Beautiful.ai, and Pinecone. I found it noteworthy how many are not household words, even in my house. It shows not only that there are firms being quietly effective, but that some with the noisiest press releases aren’t selling to many people at all.
“As A.I.
search ramps up, publishers worry” (Andrew Ross Sorkin, New York Times
DealBook, May 15th) shows cause for concern among those using an
“ad-focused business model.” They are fearful,
as “AI Overviews will give more prominence to A.I.-generated results,
essentially pushing website links farther down the page, and potentially
depriving those non-Google sites of traffic.”
They will need to work this out, as the presence of AI does not please
everyone.
Something
remarkably lost in the outpouring of manufacturer claims got its own article: “Silicon
Valley’s A.I. Hype Machine” (Julia Angwin, The New York Times, May 19th). Although in early 2023 “leading researchers
asked for a six-month pause in the development of larger systems of artificial
intelligence, fearing that the systems would become too powerful,” now “the
question is… whether A.I. is too stupid and unreliable to be useful.” Results have lagged the previous intensity,
but corporate statements haven’t – for example, OpenAI CEO Sam Altman, the week before, had “promised he would unveil ‘new stuff’ that ‘feels like magic to me,’” but delivered only “a rather routine update.” One “cryptocurrency researcher” asserted that
AI companies “do a poor job of much of what people try to do with them” and “can’t
do the things their creators claim they one day might.” The author agreed that “some of A.I.’s greatest accomplishments” “seem inflated”: its 2023 law bar exam performance, critical to the perception of AI as amazingly high quality, turned out to be in the 48th percentile instead of the 90th as stated. The technology, per Angwin, “is feared as an
all-powerful being,” but now seems “more like a bad intern.” There will be growing discontent about
blatant exaggerations as products fail to meet the stunning standards we were told
to expect by now.
The big May
story, which many probably confused with higher AI sales, was “Nvidia, Powered
by A.I. Boom, Reports Soaring Revenues and Profits” (Don Clark, The New York
Times, May 22nd). For
this leading supplier, “revenue was $26 billion for the three months that ended
in April, surpassing its $24 billion estimate in February and tripling sales
from a year earlier for the third consecutive quarter. Net income surged sevenfold to $5.98
billion.” These numbers show how much
more companies such as OpenAI have bought than they have sold.
A related area
was the subject of “OpenAI Insiders Warn of a ‘Reckless’ Race for Dominance”
(Kevin Roose, The New York Times, June 4th). Per “a group” there, the firm, “racing to
build the most powerful A.I. systems ever created,” “published an open letter…
calling for leading A.I. companies, including OpenAI, to establish greater
transparency and more protections for whistle-blowers.” That firm, “still recovering from an
attempted coup last year” and “facing legal battles with content creators who
have accused it of stealing copyrighted works to train its models,” has big
issues to go with its big chip purchases.
For June, the
largest news item so far has been “Apple Jumps Into A.I. Fray With Apple
Intelligence” (Tripp Mickle, The New York Times, June 10th). This company, ancient and entrenched by the
standards of its industry, “revealed plans to bring [A.I.] to more than a billion
iPhone users around the world,” including “a major upgrade for Siri, Apple’s
virtual assistant.” This business
decision has a huge possible upside for AI, as it could “add credibility to a
technology that has more than a few critics, who worry that it is mistake-prone
and could add to the flood of misinformation already on the internet.” That would close some of OpenAI’s
sales-to-purchases gap – I don’t say “will,” since, per Forbes Daily on
June 12th, “to utilize these AI features, iPhone users will have to
wait until the iOS 18 operating system becomes available later this year,”
which, per the Angwin story above, seems less than a sure thing.
It has been
slow on the national regulatory front lately, so we are seeing “States Take Up
A.I. Regulation Amid Federal Standstill” (Cecilia Kang, The New York Times,
June 10th). Although, per the
Institute for Technology Law and Policy’s director, “clearly there is a need
for harmonized federal legislation,” current and anticipated violations are
prodding state governments to quicker action.
“Lawmakers in California last month advanced about 30 new measures on
artificial intelligence aimed at protecting consumers and jobs,” including
“rules to prevent A.I. tools from discriminating in housing and health care
services” and ones that “also aim to protect intellectual property and
jobs.” Legislation has already passed in
Colorado and Tennessee, the first against “discrimination,” and the second,
through the snappily named “ELVIS Act,” guarding “musicians from having their
voice and likenesses used in A.I.-generated content without their explicit
consent.”
Two AI achievements have already arrived, as described in “This is, like, really nice” (Vlad Savov, Bloomberg Tech Daily, June 11th). Here, even though “the breathless bluster
about AI changing industries, jobs and lifestyles has obviously not been met by
reality,” it has come up with the “Descript editing tool” for audio files, which “eliminates pauses, verbal fillers like ‘like’ and ‘um,’ redundant retakes and anything else that’s not essential.” Listening to its end results, the author
“couldn’t tell where the seams were,” and noted that when “everything takes far
longer to edit than its actual running time,” “automating the process is
invaluable.” The second was “AI noise
cancelling” with “the Audeze Filter,” “a smartphone-sized Bluetooth conference
speaker” that “effectively cancels even unpredictable and high-pitched noises,
such as the crying of a baby,” and in a demonstration “a cacophonous café was
made tranquil with the flip of a switch.”
Not world domination, but perhaps it can help with wedding audiotapes
damaged by unwanted sounds.
To end a bit
lighter, new technologies get us new word usages, with one of the latest in “First
Came ‘Spam.’ Now, With A.I., We’ve Got
‘Slop’” (Benjamin Hoffman, The New York Times, June 11th). The author defined it as “a broad term
that has developed some traction in reference to shoddy or unwanted A.I.
content in social media, art, books, and, increasingly, in search
results.” As an “early adopter of the term” from two years ago was quoted, “Society needs concise ways to talk about modern A.I. – both the positives and the negatives. ‘Ignore that email, it’s spam,’ and ‘Ignore
that article, it’s slop,’ are both useful lessons.” And so it will be. Beyond that, though, we have no idea.