Friday, March 29, 2024

Artificial Intelligence Today: Combating Problems, and Drawing on the Morrow

As market capitalizations surge and the press tells us ever more about what has not happened yet, what is happening with AI now?  Mostly it is causing trouble. 

In “The AI Culture Wars Are Just Getting Started” (Will Knight, Wired, February 29th), we got a recap of, and perspectives on, difficulties that were at the front of the news several weeks ago.  As I documented in my last AI post, Google’s Gemini image generator was depicting white male historical figures as female and darker-skinned.  Google, an Alphabet subsidiary, apologized both internally and publicly, and Knight suggested the mishap happened because “Google rushed Gemini,” but a Mozilla fellow held “that efforts to fix bias in AI systems have tended to be Band-Aids rather than deep systemic solutions,” which makes sense, as fully retraining the models would require great masses of data and money.  Whether Google was trying to make an ideological statement is irrelevant now – it has a huge business problem many customers will not tolerate.

“Silicon dreamin’,” in the March 2nd Economist, addressed the situation where “AI models make stuff up,” and asked, “Can researchers limit hallucinations without curtailing AI’s power?”  Such responses can be “confident, coherent, and just plain wrong,” so “cannot be trusted.”  Unfortunately, “the same abilities that allow models to hallucinate are also what make them so useful,” and “fine-tuning” them may make nonsense answers more common.  Overall, “any successful real-world deployment of these models will probably require training humans how to use and view AI models as much as it will require training the models themselves.”
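
To make the trade-off concrete, one common partial mitigation is to sample at low temperature, so the model picks its most probable tokens rather than its more inventive ones, and to instruct it to admit uncertainty.  Below is a minimal sketch assuming the OpenAI Python client; the Economist piece names no particular vendor, and the model name and prompts here are my own hypothetical illustration.

```python
# A minimal sketch of one partial hallucination mitigation: low-temperature
# sampling plus an explicit instruction to admit uncertainty.  Assumes the
# OpenAI Python client and an OPENAI_API_KEY in the environment; the article
# itself names no vendor, model, or API.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical choice; any chat model would do
    messages=[
        {"role": "system",
         "content": "Answer only from well-established facts. "
                    "If you are not sure, say 'I don't know.'"},
        {"role": "user",
         "content": "Summarize what 'hallucination' means for AI models."},
    ],
    temperature=0.0,  # favor the most probable tokens over inventive ones
)
print(response.choices[0].message.content)
```

Note that low temperature reduces, but does not eliminate, confidently wrong answers – and it blunts exactly the generative flexibility the article says makes these models useful.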

Another way artificial intelligence may be making enemies came forward in “A New Surge in Power Use Is Threatening U.S. Climate Goals” (Brad Plumer and Nadja Popovich, The New York Times, March 14th).  With the subtitle “a boom in data centers and factories is straining electric grids and propping up fossil fuels,” the piece documented that demand for electricity grew steeply from 1989 to 2007, was mainly level between 2007 and 2022, but is now forecast by the companies selling it to trend upwards through 2033.  The main causes are electric vehicles, more manufacturing, and “a frenzied expansion of data centers,” with “the rise of artificial intelligence… poised to accelerate that trend.”  While renewable energy sources can cover much of the increase, they cannot reasonably be expected to cover all of it.

On the user side, we were asked to “Keep these tips in mind to avoid being duped by AI-generated deepfakes” (Fox News, March 21st).  Such fakes, of ever higher quality, have been in the news for months, and “video and image generators like DALL-E, Midjourney and OpenAI’s Sora make it easy for people without any technical skills to create deepfakes – just type a request and the system spits it out.”  They “may seem harmless,” “but they can be used to carry out scams and identity theft or propaganda and election manipulation.”  Fewer of them now have “obvious errors, like hands with six fingers or eyeglasses that have differently shaped lenses” or “unnatural blinking patterns among people in deepfake videos.”  The advice here was to look “closely at the edges of the face” for skin tone inconsistent with elsewhere on the person, for “lip movements” not agreeing with audio, for unrealistic teeth, and at “context” showing a public figure doing something that seems “exaggerated, unrealistic or not in character,” or not corroborated by “legitimate sources.”  Generally, we need to be more skeptical about photos and video clips of all kinds – a change that will happen remarkably quickly.  In the meantime, though, “Fretting About Election-Year Deep Fakes, States Roll Out New Rules for A.I. Content” (Neil Vigdor, The New York Times, March 26th) described measures such as advertisers being required to disclose AI-generated content.  There are now, per the Voting Rights Lab, “over 100 bills in 40 state legislatures.”
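
A side note on the detection advice above: one cheap first-pass check anyone can script is whether an image carries the camera metadata a genuine photo usually has, since generators typically emit none.  The sketch below, in Python with the Pillow library, is my own illustration rather than anything Fox News recommended, and a missing EXIF block proves nothing by itself – social media sites strip metadata too.

```python
# A rough first-pass screen for possibly AI-generated images: genuine camera
# photos usually carry EXIF metadata (camera make and model, capture time),
# while most generators emit none.  Absence is only a weak signal, so treat
# it as one clue among many, alongside the visual tells described above.
import sys

from PIL import Image
from PIL.ExifTags import TAGS

def exif_summary(path: str) -> dict:
    """Return a {tag_name: value} dict of the image's EXIF data, if any."""
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

if __name__ == "__main__":
    tags = exif_summary(sys.argv[1])
    if not tags:
        print("No EXIF metadata: consistent with AI generation, or a re-save.")
    else:
        for name in ("Make", "Model", "DateTime"):
            if name in tags:
                print(f"{name}: {tags[name]}")
```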

Beyond these and other troubles, how much AI is actually out there in corporate settings?  In “Meet your new copilot” (The Economist, March 2nd), we learned that, despite massive company valuations, “big tech’s sales of AI software remain small.”  Specifically, “in the past year AI has accounted for only about a fifth of the growth in revenues at Azure, Microsoft’s cloud-computing division, and related services.”  As well, “Alphabet and Amazon do not reveal their AI-related sales, but analysts suspect they are lower than those of Microsoft.”  Two consulting firms were not taken with current usage: one said it will take at least two years to “move beyond the hype,” and another determined that adoption of AI “has not necessarily translated into higher levels of productivity – yet.”

Per Oxford Languages, “morrow” is an “archaic” word meaning tomorrow, “the time following an event,” or “the near future.”  I have seen “drawing on the morrow” used, as recently as within the past half-century, to mean taking a questionably justified advance.  I propose we use it that way for artificial intelligence progress.  AI may turn out to be a gigantically important business product, but it isn’t one yet.  No matter how much its producers’ stock is worth, we can’t give AI credit for what it may or may not accomplish someday.  That would be drawing on the morrow.  That’s not how we want to deal with reality.
