As I predicted at the end of last year, AI has found a home in many niches. But does it look capable of justifying the $1 trillion economy that has grown up around it?
In “Artificial intelligence is losing hype,” published August 19th, The Economist voiced concerns – or did it? The editorial’s subtitle, “For some, that is proof that the tech will in time succeed. Are they right?,” leaves open the possibility that a cooling of AI expectations, especially those never backed by reasonable data and judicious extrapolation, may even foretell the technology’s broader triumph. It told us that
“according to the latest data from the Census Bureau, only 5.1% of American
companies use AI to produce goods and services, down from a high of 5.4% early
this year.” The article compared AI investment with 1800s British “railway fever,” which was vindicated only after it had inflated an investment bubble, as firms, “using the capital they had raised during the mania, built the track out, connecting Britain from top to bottom and transforming the economy.” Could that happen with AI?
On September 21st, the same publication, in “The breakthrough AI needs,” considered what might be required for AI to succeed comprehensively and on a gigantic scale. Its answers were more “creativity” to end “resource constraints,” gains “from giving ideas and talent the space to flourish at home, not trying to shut down rivals abroad,” and the prospect that “the AI universe could contain a constellation of models, instead of just a few superstars.” It is indeed clear that picking the most successful AI companies of 2050 today could be no more accurate than using 1900 information to name the premier automakers of the industry’s mid-1920s gains.
Most
pessimistic of all was “Will A.I. Be a Bust?
A Wall Street Skeptic Rings the Alarm” (Tripp Mickle, The New York
Times, September 23rd). The doubter, Jim Covello, Goldman Sachs’s head of stock research, had written three months earlier “that generative artificial intelligence, which can summarize text and write software code, makes so many mistakes that it was questionable whether it would ever reliably solve complex problems.”
The “co-head of the firm’s geopolitical advisory business… urged him to
be patient,” resulting in “private bull-and-bear debates” between the two, but
the issue, within as well as outside Goldman Sachs, remained partisan and
unresolved.
Back to The
Economist, where on November 9th appeared “A nasty case of
pilotitis,” subtitled “companies are struggling to scale up generative
AI.” Although, per the piece, “fully 39%
of Americans now say they use” AI, the share of companies remained near 5%,
many of which appeared “to be suffering from an acute form of pilotitis,
dilly-dallying with pilot projects without fully implementing the technology.” Managements seemed to fear being “embarrassed if they moved too quickly and damaged their firm’s reputation(s),” and were also held back by cost, “messy data” needing consolidation, and a shortage of AI skills. Deloitte research “found that the share of senior executives with a ‘high’ or ‘very high’ level of interest in generative AI had fallen to 63%, down from 74% in the first quarter of the year, suggesting that the ‘new-technology shine’ may be wearing off,” and one CIO’s “boss told him to stop promising 20% productivity improvements unless he was first prepared to cut his own department’s headcount by a fifth.”
Another AI issue, technical rather than organizational, was described in “Big leaps to baby steps” in the November 12th Insider Today, which opened with “OpenAI’s next artificial intelligence model, Orion, reportedly isn’t showing
the massive leap in improvement previous versions have enjoyed.” Company testers said Orion’s improvement was
“only moderate and smaller than what users saw going from GPT-3 to GPT-4.” With high costs and power and data limitations still looming, a shrinking rate of capability gains could even rule out future releases.
Six days later, the same source described “A Copilot conundrum”: even a year after its release, Microsoft’s “flagship AI product” of that name has been “coming up short on the big expectations laid out for it.” An executive there told a Business Insider AI
expert “that Copilot offers useful results about 10% of the time.” Yet the software does have its adherents,
including Lumen Technologies’ management forecasting “$50 million in annual
savings from its sales team’s use of Copilot.”
An overall
problem stemming from the above is that “Businesses still aren’t fully ready
for AI, surveys show” (Patrick Kulp, Tech Brew, November 22nd). “Indices attempting to gauge how companies
have fared at reworking operations around generative AI have been piling up
lately – and the verdict is mixed.”
While AI’s shortcomings are real and documentable, many firms “are still
organizing their IT infrastructure.”
Reasons mentioned here were “culture and data challenges, as well as a
lack of necessary talent and skills,” causing “nearly half of companies” to
“report that AI challenges have fallen short of expectations across top
priorities.” So if AI is now, on the whole, a failure, more than its producers are to blame.
A final proposal for AI’s course came from Kai-Fu Lee in Wired.com on November 26th: “How Do You Get to Artificial General Intelligence: Think Lighter.” The idea here was to build “models and apps” that are “purpose-built for commercial applications using leaner models and innovative architecture,” thereby costing “a fraction to train and achieve levels of performance good enough for consumers and enterprises,” instead of building massive, comprehensive large language models that end up costing vastly more per query to use. Different apps might even draw on different AI sources, combined in some fashion, as the sketch below illustrates. That would be more difficult to organize, but the stakes are high.
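To make that idea concrete, here is a minimal, purely illustrative sketch of what combining different AI sources could look like in practice: a small router that sends each request to a lean, purpose-built model and reserves a larger general model as a fallback. All model names and the call_model helper are hypothetical placeholders of my own, not anything Lee or any vendor describes.

from typing import Dict

def call_model(model_name: str, prompt: str) -> str:
    # Stand-in for whatever inference backend a firm actually uses;
    # a real system would call a hosted or local model here.
    return f"[{model_name}] response to: {prompt}"

# Each task type maps to a lean, purpose-built model (all names invented).
ROUTES: Dict[str, str] = {
    "summarize": "lean-summarizer-small",
    "code": "lean-coder-medium",
    "support": "lean-support-tiny",
}

def route(task: str, prompt: str) -> str:
    # Prefer the cheapest specialized model; fall back to a larger
    # general model only when no lighter option fits the task.
    model = ROUTES.get(task, "general-fallback-large")
    return call_model(model, prompt)

print(route("summarize", "Condense this quarterly report."))
print(route("translate", "Bonjour"))  # unknown task -> general fallback

If something like this works, much of the engineering effort shifts to the orchestration layer rather than to ever-larger single models.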
This final article underscores the thesis of the September 21st piece above – AI will need creativity in ways the industry has so far under-emphasized. Companies will need to think outside the boxes they have built and maintained. Those who do that best stand to earn billions or more. Then, and only then, may artificial intelligence reach its potential.
Designers and executives stopped from exiting through the sides by the
massive issues above will need to find ways of escaping through the top or
bottom – or through another dimension.
Can they do that? We will see.