Friday, October 27, 2023

Artificial Intelligence is an Awfully Wobbly Juggernaut

For something that was supposed to be taking over the world, AI is tottering.  I can’t speak to the status of the technology itself, but that’s not the issue.  How it progresses and what it ends up doing will be decided in other realms:  legal, financial, social, regulatory, and more.  So what has been happening with AI prospects over the past two and a half months?

The first clue was in Futurism, updated on August 11th, “AI Is Starting to Look Like the Dot Com Bubble.”  This piece, by Maggie Harrison, started “as the AI industry’s market value continues to balloon, experts are warning that its meteoric rise is eerily similar to that of a different – and significant – moment in economic history:  the dot com bubble of the late 1990s.”  It is not that no companies are profitable – the largest ones, Microsoft, Meta, and Amazon, are – but there are many others, collecting venture capital, which “have yet to even introduce a discernible product.”  It is time to realize that while there may well be Fords and Chevrolets, there will be Stutzes and Hupmobiles as well.

Per Ian Prasad Philbrick in the August 27th New York Times, “Regulating A.I. Requires Congress to Act Nimbly.”  The author pointed out that “major federal regulation” has taken as long as 90 years to materialize after “invention or patenting”:  nuclear energy, with the shortest interval, still took four years, and airplanes and automobiles took 20 and 70 respectively.  Although our senators and representatives have attended informational sessions, regulating AI will be a challenge, and will take “perhaps a decade or more.”

A strong indicator of how people’s attitudes can trump technical achievements was in the September 10th New York Times, Kashmir Hill’s “Anonymity Is Over.  Big Tech Tried to Save It.”  It related how, six years ago, Facebook technologists worked out a way to identify faces and attach names and other information to them.  Google had developed something similar in 2011.  Neither was released:  Google “decided to stop” what it was doing, and Facebook considered its tool “too dangerous to make widely available.”  Some are now offering related facilities, but far fewer than their utility and technical merits would justify.

On the recent idea that artificial intelligence will at least help productivity, Aaron Mok, in Business Insider on October 2nd, relayed that “OpenAI’s ChatGPT can actually make workers perform worse, a new study found.”  The research, from Boston Consulting Group, found that when people used ChatGPT with GPT-4 for work requiring capabilities the software was known to have, such as “brainstorming innovative… concepts, or coming up with a thorough business plan,” the tool excelled.  On “more open-ended tasks,” however, such as offering business recommendations, it made large errors, which was dangerous “because the consultants with AI were found to indiscriminately listen to its output – even if the answers were wrong.”  It will be a challenge for businesses to distinguish between these two rough categories.

There were things to think about in “Knowledge vs. intelligence amid the hype and hysteria over AI” (Mihai Naden, Fox News, October 2nd).  Naden considered intelligence to require not only evidence of the ability to perform a task, but also attention to how much of two resources, energy and data, the performer consumes.  As “to win a game of chess at the expense of energy that a small town consumes in a week is unsustainable,” “artificial entities could justifiably claim intelligence if, in executing a task, they would use as much energy or less, and as much data or less, than a living entity performing the same task.”  That may be the criterion we need.

On the positive side, we have Paul Krugman’s October 3rd New York Times “A.I. could be a big deal for the economy (and for the deficit too).”  Although he saw generative AI as “souped-up autocorrect,” he thought it could massively improve productivity in that role.  He included a Goldman Sachs chart with 23 different industries, each divided into “no automation,” “AI complement,” and “likely replacement” of workers.  The field faring best was “building and grounds cleaning and maintenance,” with about 95% of employees in the first category, and “legal” the worst, ripe for a 40% job loss.  Some areas, such as sales, education, social services, and computers, were 100% complemented.  Of course, this does not account for other sources of automation, or for globalization and efficiency gains.

Per Ed Zitron in Scientific American on October 17th, “AI Is Becoming a Band-Aid over Bad, Broken Tech Industry Design Choices.”  He said iPhones came with 38 apps, 27 of which could be removed, and users would likely add more, but Apple is relying on AI interfaces instead of ones users could manage themselves, leaving “a Matryoshka of bolted-on features.”  Other vendors, according to Zitron, are similarly at fault.  This is not an AI problem as such, but it is something that will affect AI’s reputation.

On October 18th in the New York Times, Kevin Roose said that “Maybe We Will Finally Learn More About How A.I. Works.”  Developers have communicated poorly about how their software was built, including its use of copyrighted material and how it shares data.  GPT-4 got a 40% “transparency score,” not far off the 54% maximum among ten popular models.  Is it true or false that “we can’t have an A.I. revolution in the dark.  We need to see inside the black boxes of A.I., if we’re going to let it transform our lives”?  That is for us to decide.

Most recent is a reminder that, even for the largest companies, “Long on Hype, A.I. Is No Guarantee for Profits” (Andrew Ross Sorkin et al., The New York Times, October 25th).  Although both are deep into the technology, Microsoft has done far better than Alphabet since their earnings reports the day before, with a one-day stock increase of 3.9% versus a 6.2% decrease.  And Meta’s stock suffered the same day for other reasons.  So, it’s not enough to identify Ford or General Motors by their products – they must make money as well.  About that, ample attention notwithstanding, we still do not know.  By the same token, we cannot see where artificial intelligence will wind up – regardless of our hopes and fears.
