Not a lot has changed this year in the laws around AI, but we’ve spent eight months getting into position for what could be a big year ahead.
First, a look
at “Where the legal battle stands around copyright and AI training” (Patrick
Kulp, Emerging Tech Brew, April 21st). The short answer is that it remains unsettled: although the Supreme Court will probably hear a related case eventually, “intellectual property lawyers say there aren’t yet many signs of where courts will land.” As Anthropic seems to have used
at least one of my books without permission, I was offered membership in a
group to be compensated in a “$1.5 Billion Proposed Class Action Settlement.” This may go through, and there may be similar
resolutions offered by other AI companies.
Next, “Why
the A.I. Race Could be Upended by a Judge’s Decision on Google” (David McCabe, The
New York Times, May 1st).
Although “a federal judge issued a landmark ruling last year, saying
that Google had become a monopolist in internet search,” that did not resolve whether
it “could use its search monopoly to become the dominant player in A.I.” A hearing to settle that issue began in April 2025; at its conclusion four months later, in the view of Kate Brennan of Tech Policy Press, the “Decision in US vs. Google Gets it Wrong on Generative AI” (September 11th). The presiding judge considered AI, unlike search engines, to be a
competitive field, and rejected “many of the Department of Justice’s bold,
structural remedies to unseat Google’s search monopoly position.” That could be a problem, as “Google maintains
control over key structural chokepoints, from AI infrastructure to pathways to
the consumer.” This conflict, though, may
not be completely settled, as the extent to which that company can absorb more
of the AI field with its Gemini product is unknown.
In “Trump
Wants to Let A.I. Run Wild. This Might
Stop Him” (Anu Bradford, The New York Times, August 18th), we
see that our presidential administration produced an “A.I. Action Plan, which
looks to roll back red tape and onerous regulations that it says paralyze A.I.
development.” The piece says that while
“Washington may be able to eliminate the rules of the road at home… it can’t do
so for the rest of the world.” That
includes the European Union, which follows its “A.I. Act,” which “establishes
guardrails against the possible risks of artificial intelligence, such as the
loss of privacy, discrimination, disinformation and A.I. systems that could
endanger human life if left unchecked.”
If Europe “will take a leading role in shaping the technology of the
future” by “standing firm,” it could effectively limit AI companies around the
world.
From there, “Status
of statutes” (Patrick Kulp, Jordyn Grzelewski, and Annie Sanders, Tech Brew,
October 3rd) told us that, that same week, California had passed “major AI
legislation… establishing some of the country’s strongest safety regulations”
there, which “will require developers of the most advanced AI models to publish
more details about safety steps taken in development and create more
protections for whistleblowers at AI companies.” Reactions cut both ways: the law “is a good start,” but it also “doesn’t necessarily go far enough” and “is too focused on large companies.” It may well be changed,
and other states considering such efforts will learn from California’s
experience.
Weeks later,
“N.Y. Law Could Set Stage for A.I. Regulation’s Next ‘Big Battleground’” (Tim
Balk, The New York Times, November 29th). New York “became the first state to enact a law targeting a practice, typically called personalized pricing or surveillance pricing, in which retailers use artificial intelligence and customers’ personal data to set prices online.” Companies engaging in the practice there will now need to post “THIS PRICE WAS SET BY AN
ALGORITHM USING YOUR PERSONAL DATA.” As
of article time, there were “bills pending in at least 10 states that would
either ban personalized pricing outright or require disclosures.” Expect more.
After a
“federal attempt to halt state AI regulations,” “State-level AI rules survive –
for now – as Senate sinks moratorium despite White House pressure” (Alex
Miller, Fox News, December 6th). Although “the issue of a blanket AI
moratorium, which would have halted states from crafting their own AI
regulations, was thought to have been put to bed over the summer,” it “was
again revived by House Republicans.”
Would this be constitutional, given that AI is not named in the Constitution as an area to be overseen by the federal government? Or would it just be another power grab?
The latest
article here, fittingly, is “Fox News Poll:
Voters say go slow on AI development – but don’t know who should steer”
(Victoria Balara, Fox News, December 18th). “Eight in ten voters favor a careful approach
to developing AI,” but “voters are divided over who should oversee the new
technology, splitting between the tech industry itself (28%), state governments
(26%), and Congress (24%).” Additionally, 11% “think the president should
regulate it… while about 1 in 10 don’t think it should be regulated at
all.” That points up how contentious the issue of regulating artificial intelligence is, and tells us that, urgent need or not, it may take longer to resolve than we expect. We will do what we can, but once again it won’t be easy.
Merry
Christmas, happy Hanukkah, happy Kwanzaa, and happy new year. I’ll see you again on January 2nd.