When the AI story broke in the spring, most articles expressed amazement while trying to cover the matter without hype. Soon thereafter, the possibility of a major AI-caused disaster took over, and people wrote long, hazy pieces speculating on the chance of humanity going extinct under ChatGPT’s virtual boot. For the next couple of months, AI dominated the technology-related press, to the point where I could not find any material for the regular robotics subsegments on my radio gigs. Coverage weakened in late May and early June, when commentators seemed to be talked out, and reappeared with a different tone last month, as reservations, limitations, negative assessments, and legal reactions, along with a lack of new AI achievements, took over. Here I will go through 12 pieces published over the past eight weeks.
It's a blessing when a powerful group admits it doesn’t
understand something, so it was good to read that “Congress seeks crash course
on AI from insiders,” by Cat Zakrzewski and Cristiano Lima, in the June 18th
Washington Post. Legislators hope
to create, per Representative Mark Takano of California, “a repository of
expertise that is more in an anticipatory mode, that has quicker turnarounds,
that can deliver responses more quickly,” and “is not tainted or connected to
commercial interests.” Let us wish them
success.
Five days later we read on Engadget.com, “US lawyers fined $5,000 after including fake case citations generated by ChatGPT” (Sarah Fielding). Lazy overreliance on AI is already a phenomenon, and this kind of result – both the errors and a judge discovering them – could sink the technology for many seemingly valuable purposes. As people have fooled driverless cars through their data-collecting systems, they could fool large language models by publishing bogus legal precedents and other “factual” accounts of things which did not happen. The danger of confusing reality with fiction is also why we need to see “How AI will revolutionize politics in 2024, and why voters must be vigilant” (Brian Athey, Fox News, June 2nd). Beyond its ability to generate high-quality videos and other campaign tools, AI can produce “misinformation, deep fakes and… imagery.” Given what we learned in the past two elections about voters absorbing untruths, we can expect plenty more next year, some of which may be blocked, as described in “A.I.’s Use in Elections Sets Off a Scramble for Guardrails” (Tiffany Hsu and Steven Lee Myers, The New York Times, June 25th).
Within five days in July, three actions were taken against the
technology. Per Riddhi Setty in Bloomberg
Law on July 10th, “Sarah Silverman, Authors Hit OpenAI, Meta
With Copyright Suits.” We’re in the
early stages of automata not only absorbing proprietary material but circulating
it uncredited. These class-action
efforts were filed by three authors and could become landmarks. In The Economist’s July 14th
“The World in Brief,” we saw that “America’s Federal Trade Commission opened an
inquiry into OpenAI’s handling of consumer data and security practices…
reportedly probing whether the startup’s products – principally ChatGPT – could
cause reputational damage by publishing misleading information about real people,”
and “that training artificial intelligence on personal information could be a
form of fraud.” The same source reported
that the Hollywood writers and actors now on strike are, among other things,
“also asking for guarantees that AI will not be used to replace them,” which
may be the first time resurgent unions have taken on a burgeoning technology, but it surely won’t be the last.
The common thread here is AI’s use of incoming data. Objections to it have recently gone further elsewhere, as described in “’Not for Machines to Harvest’: Data Revolts Break Out Against A.I.” (Sheera Frenkel and Stuart A. Thompson, The New York Times, July 15th). The lead example is from an author who
discovered that “a data company had copied her stories and fed them into the
artificial intelligence technology underlying ChatGPT.” Nothing should be controversial here – for decades, routine copyright notices have proscribed putting such content into electronic databases – and this use, if anything, is worse. Expanding the fight even more is “Big Tech took your data to train AI. We’re suing them for it” (Ryan Clarkson, Fox
News, July 19th). It’s no
certainty that companies can use personal information without clear permission. Overall, we’re heading for a huge legal
collision, and it doesn’t look good for AI.
One odd thing about AI has been its practitioners asking for
more regulation. That doesn’t seem like a way of gaining a competitive advantage (in the manner of McDonald’s supporting higher minimum wages, knowing it could withstand them better than its competitors); rather, it reflects not only uncertainty about AI’s safety but also the tendency of experts closer to a field to be more pessimistic than general prognosticators, as documented in “Bringing down the curtain” (The Economist, July 15th). That may account for the culture at the newly visible AI startup Anthropic (“Inside the White-Hot Center of A.I. Doomerism,” Kevin Roose, The New York Times, July 11th), with huge office signs saying THINK SAFETY on display, workers
talking freely about their products’ severe dangers, and many reading about
nuclear bomb history and comparing “themselves to modern-day (bomb-inventing)
Robert Oppenheimers.”
Yet the last word, for now, goes to something very American. In “A Blessing and a Boogeyman: Advertisers Warily Embrace A.I.” (The New York Times, July 18th), authors Tiffany Hsu and Yiwen Lu related how product pitches are invoking the technology, from the aforementioned restaurant chain citing a ChatGPT endorsement-of-sorts to “digital avatars” of famous people hawking products. There will be much more –
there need be no existential uncertainty about that.
I will have no post next week. I will return with the jobs report and AJSN
two weeks from this morning.