Friday, November 8, 2024

Artificial Intelligence Regulation – Disjointed, and Too Soon

Over the past three months, there have been several reports on how, or even whether, AI should be legally constrained.  What did they say?

On the issue of AI's largest hardware supplier, there was "As Regulators Close In, Nvidia Scrambles for a Response" (Tripp Mickle and David McCabe, The New York Times, August 6th).  It's not surprising that this company, which is not only doing a gigantic amount of business but "by the end of last year… had more than a 90 percent share of (AI-building) chips sold around the world," has drawn "government scrutiny."  That scrutiny has come from China, the United Kingdom, and the European Union, as well as the United States Justice Department, causing Nvidia to start "developing a strategy to respond to government interest."  Although, per a tech research firm CEO, "there's no evidence they're doing anything monopolistic or anticompetitive," "the conditions are right because of their market leadership," and "in the wake of complaints about Nvidia's chokehold on the market, Washington's concerns have shifted from China to competition, with everyone from start-up founders to Elon Musk grumbling about the company's influence."  Resolving that tension will not be easy for either the company or the governments.

Meanwhile, "A California Bill to Regulate A.I. Causes Alarm in Silicon Valley" (Cade Metz and Cecilia Kang, The New York Times, August 14th).  The legislation, which "could impose restrictions on artificial intelligence," was then "still winding its way through the state capital," and "would require companies to test the safety of powerful A.I. technologies before releasing them to the public."  It could also, per its opposition, "choke the progress of technologies that promise to increase worker productivity, improve health care and fight climate change," technologies still in their infancy, whose effects on people remain genuinely uncertain.  Per leginfo.com, it was vetoed by Governor Gavin Newsom, who said "by focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology.  Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047 - at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good."  Expect a different but related bill in California soon.

A thoughtful overview, "Risks and regulations," came out in the August 24th Economist.  It stated that "artificial intelligence needs regulation.  But what kind, and how much?" and came up with various ideas.  It started with the point that AI's "best-known risk is embodied by the killer robots in the 'Terminator' films – the idea that AI will turn against its human creators," the kind of risk that some people think is "largely speculative," and others think is less important than "real risks posed by AI that exist today, such as bias, discrimination, AI-generated disinformation and violation of intellectual-property rights."  As the piece noted, "different governments take different approaches to regulating AI": Chinese authorities most want to "control the flow of information," while the European Union's now-the-law AI Act "is mostly a product-safety document which regulates applications of the technology according to how risky they are."  Given that most American legislation comes from the states, international and even national accord seems a long way off.

What can we gain from "Rethinking 'Checks and Balances' for the A.I. Age" (Steve Lohr, The New York Times, September 24th)?  Recalling the Federalist Papers, a Stanford University project, now with 12 essays known as the Digitalist Papers, "contends that today is a broadly similar historical moment of economic and political upheaval that calls for a rethinking of society's institutional arrangements."  The writings' "overarching concern" is that "a powerful new technology… explodes onto the scene and threatens to transform, for better or worse, all legacy social institutions," and therefore "citizens need to be more involved in determining how to regulate and incorporate A.I. into their lives."  This effort seems designed to be a starting point; as before, we have no more idea how AI, if it meets its lofty expectations, will affect society than we did about cars in 1900.

Overall, per Garrison Lovely in the September 29th New York Times, it may be that "Laws Need to Catch Up to Artificial Intelligence's Unique Risks."  Or not.  Over the past year, OpenAI has been embroiled in controversy over its safety practices, and, per Lovely, federal "protections are essential for an industry that works so closely with such exceptionally risky technology."  As noted above, governments do not yet agree enough to provide them, but the day will come.  Sooner?  Later?  We do not know, but someday, we hope, we can get together on this potentially critical issue.
