For some people, controlling AI has been more fun than using it or thinking about what it can do. What's the story been?
Before the current regime, we watched as the "Biden Administration Adopts Rules to Guide A.I.'s Global Spread" (Ana Swanson, The New York Times, January 13th). The "sweeping rules… governing how A.I. chips and models can be shared with foreign countries" included various limits on the number of A.I. chips that companies can send to different countries: no bounds on those going domestically or to "18 of (our) closest partners," countries "already subject to U.S. arms embargoes" barred entirely, and all others "subject to caps restricting the number of A.I. chips that can be imported." There were also rules governing how American companies can sell chips they have acquired elsewhere.
Is it true that "The Rush to A.I. Threatens National Security" (Heidy Khlaaf and Sarah Myers West, The New York Times, January 27th)? The authors claimed that "now that Donald Trump has taken office, the tech industry is moving full steam ahead in its push to integrate A.I. products across the defense establishment, which could make a dangerous situation even more perilous." Companies involved in the "slew of new partnerships and initiatives to integrate A.I. technology into deadly weaponry" included OpenAI, Anduril, Palantir, Meta, and Scale AI. Potential problems include hallucinations, "cybersecurity vulnerabilities," and data that could be used to manipulate the software, issues the authors did not think could be solved.
What was "The Dangerous A.I. Nonsense That Trump and Biden Fell For" (Zeynep Tufekci, The New York Times, February 5th)? It was "America's approach to A.I. safety and regulations," which "was largely nonsense," as "it was never going to be possible to contain the spread of this powerful emergent technology, and certainly not just by placing trade restrictions on components like graphics chips." "Instead… the government and the industry should be preparing our society for the sweeping changes that are soon to come." Specifically, "it's time to harden our networked infrastructure," "to start thinking clearly about how corporations and governments could use the A.I. that's available right now to entrench their dominance, erode our rights, worsen inequality," and to determine "what we can do so that this powerful technology with so much potential for good can benefit the public." Perhaps regulation, Tufekci seems to be saying, is futile.
Some companies don't mind that attitude, as "Emboldened by Trump, A.I. Companies Lobby for Fewer Rules" (Cecilia Kang, again The New York Times, March 24th). Whereas under Biden "they wanted Washington to regulate them," citing "the potential to disrupt national security and elections" and the chance to "eventually eliminate millions of jobs," starting in late January AI companies have made "bold requests of government to stay out of their way," by asserting that "it is legal for them to use copyrighted material to train their A.I. models" and by "asking for the federal government to pre-empt states from creating A.I. laws." The Trump Administration has at least symbolically taken the less-regulation side through executive orders, statements supporting fewer laws, and invocations of "America's global A.I. dominance."
One form of regulation took effect April 2nd, as "Deceptive deepfake media now a crime in N.J." (Associated Press in Advertiser-News North, April 10th). New Jersey Governor Phil Murphy "signed legislation… making the creation and dissemination of so-called deceptive deepfake media a crime punishable by up to five years in prison," joining "at least 20 states." This version "defines a deepfake as any video or audio recording or image that appears to a reasonable person to realistically depict someone doing something they did not actually do," and "establishes civil penalties that would permit victims to pursue lawsuits." Will the number of violations be small enough to allow enforcement? Will such laws damage our need to assess veracity ourselves? Will they turn out to be just adjuncts to others barring child sexual imagery? I don't think these questions will be easy to answer. That makes this example, along with the others here, an artificial intelligence outcome that we will need to judge, accept, or reject. It will take a while.