Friday, November 29, 2024

Artificial Intelligence – Three-Plus Months of Problems and Perceptions, With Hope

As I predicted at the end of last year, AI has found a home in many niches.  But does it seem capable of justifying the $1 trillion economy built around it?

Per “Artificial intelligence is losing hype” on August 19th, The Economist had concerns – or did it?  The editorial piece’s subtitle, “For some, that is proof that the tech will in time succeed.  Are they right?,” leaves open the possibility that a calming of AI expectations, especially those not backed by reasonable data and judicious extrapolation, may even predict its broader triumph.  It told us that “according to the latest data from the Census Bureau, only 5.1% of American companies use AI to produce goods and services, down from a high of 5.4% early this year.”  The article compared AI investments with 1800s British “railway fever,” which was justified only after it caused an investment bubble, as firms, “using the capital they had raised during the mania, built the track out, connecting Britain from top to bottom and transforming the economy.”  Could that happen with AI?

On September 21st, the same publication, in “The breakthrough AI needs,” considered what might be required for AI to be comprehensively and gigantically successful, and came up with using more “creativity” to end “resource constraints,” and benefits “from giving ideas and talent the space to flourish at home, not trying to shut down rivals abroad,” so that “the AI universe could contain a constellation of models, instead of just a few superstars.”  It is indeed clear that picking the most successful AI companies of 2050 now could be no more accurate than using 1900 information to predict the premier automakers of the industry’s mid-1920s boom.

Most pessimistic of all was “Will A.I. Be a Bust?  A Wall Street Skeptic Rings the Alarm” (Tripp Mickle, The New York Times, September 23rd).  The doubter, Goldman Sachs stock research head Jim Covello, had written three months earlier “that generative artificial intelligence, which can summarize text and write software code, makes so many mistakes that it was questionable whether it would ever reliably solve complex problems.”  The “co-head of the firm’s geopolitical advisory business… urged him to be patient,” resulting in “private bull-and-bear debates” between the two, but the issue, within as well as outside Goldman Sachs, remained partisan and unresolved.

Back to The Economist, where on November 9th appeared “A nasty case of pilotitis,” subtitled “companies are struggling to scale up generative AI.”  Although, per the piece, “fully 39% of Americans now say they use” AI, the share of companies remained near 5%, many of which appeared “to be suffering from an acute form of pilotitis, dilly-dallying with pilot projects without fully implementing the technology.”  Managements seemed to become “embarrassed if they moved too quickly and damaged their firm’s reputation(s),” and have also been held back by cost, “messy data” needing consolidation, and an AI-skill shortage.  Deloitte research “found that the share of senior executives with a ‘high’ or ‘very high’ level of interest in generative AI had fallen to 63%, down from 74% in the first quarter of the year,” suggesting that the “new-technology shine” may be wearing off, and one CIO’s “boss told him to stop promising 20% productivity improvements unless he was first prepared to cut his own department’s headcount by a fifth.”

Another AI issue, technical instead of organizational, was described in “Big leaps to baby steps” in the November 12th Insider Today, which started with “OpenAI’s next artificial intelligence model, Orion, reportedly isn’t showing the massive leap in improvement previous versions have enjoyed.”  Company testers said Orion’s improvement was “only moderate and smaller than what users saw going from GPT-3 to GPT-4.”  With high costs and power and data limitations still looming, shrinking gains between versions could jeopardize future releases.

Six days later, the same source described “A Copilot conundrum,” in that, a year after its release, Microsoft’s so-named “flagship AI product” has been “coming up short on the big expectations laid out for it.”  An executive there told a Business Insider AI expert “that Copilot offers useful results about 10% of the time.”  Yet the software does have its adherents, including Lumen Technologies’ management forecasting “$50 million in annual savings from its sales team’s use of Copilot.”

An overall problem stemming from the above is that “Businesses still aren’t fully ready for AI, surveys show” (Patrick Kulp, Tech Brew, November 22nd).  “Indices attempting to gauge how companies have fared at reworking operations around generative AI have been piling up lately – and the verdict is mixed.”  While AI’s shortcomings are real and documentable, many firms “are still organizing their IT infrastructure.”  Reasons mentioned here were “culture and data challenges, as well as a lack of necessary talent and skills,” causing “nearly half of companies” to “report that AI challenges have fallen short of expectations across top priorities.”  So if AI is now falling short overall, more than its producers are to blame.

A final AI course proposal came from Kai-Fu Lee in Wired.com on November 26th: “How Do You Get to Artificial General Intelligence:  Think Lighter.”  The idea here was to build “models and apps” that are “purpose-built for commercial applications using leaner models and innovative architecture,” thereby costing “a fraction to train and achieve levels of performance good enough for consumers and enterprises,” instead of making massive, comprehensive large language models which end up costing vastly more per query to use.  It may even be that different apps can use different AI sources, combined in some fashion.  That would be more difficult to organize, but the stakes are high.

This final article points up the thesis of the September 21st piece above – AI will need creativity in ways less emphasized in the industry.  Companies will need to think outside the boxes they have built and maintained.  There are real opportunities for those doing that best to earn billions or more.  Then, and only then, may artificial intelligence reach its potential.  Designers and executives stopped from exiting through the sides by the massive issues above will need to find ways of escaping through the top or bottom – or through another dimension.  Can they do that?  We will see.

Friday, November 15, 2024

Seven Weeks of Artificial Intelligence Investments, Revenue, and Spending, and What They Tell Us

A massive amount of money is being spent on developing, preparing for, buying, and implementing AI.  What has it caused, and how does AI now look overall?

Before the articles below appeared, there was “Is the AI bubble actually bursting?” (Patrick Kulp, Tech Brew, August 8th).  Concerns here were that “a stock market rout and big questions about spending continue to stoke worries,” that “some high-profile reports this summer questioned AI’s money-making potential relative to its enormous cost,” that “Microsoft, Alphabet, and Meta didn’t do much to soothe investors seeking temperance in AI capital expenditures,” and that we have reason to expect “a ‘major course correction’ in AI hype as revenues fail to keep pace with spending.”

Since then, there have been strong and weak AI financial outcomes.  On August 23rd, Courtney Vien told us, in CFO Brew, “How Walmart’s seen ROI on gen AI.”  “During its last earnings call, the giant retailer reported 4.8% revenue growth, bolstered by 21% growth in its e-commerce function,” which “Walmart executives credited… to several factors… but one stood out:  generative AI.”  The technology had helped with “populating and cleaning up” the company’s gargantuan “product catalog,” the new version of which has also “given Walmart more insight into its customers.”  AI has also been “driving its impulse sales” through improved “cross-category search.”

Another such success story was the subject of “Nvidia’s earnings beat Wall Street’s estimates as AI momentum continues” (Eric Revell, Fox Business, August 28th).  In its “second-quarter earnings report,” earnings per share reached $0.68 instead of the projected $0.64, and revenue came in at $30.04 billion instead of $28.70 billion.  Although it started production of a new AI-dedicated chip, the Blackwell, demand for the current Hopper version has “remained strong.”

A major consumer of Nvidia’s chips rates to buy many more, as “OpenAI Is Growing Fast and Burning Through Piles of Money” (Mike Isaac and Erin Griffith, The New York Times, September 27th).  Although that firm “has been telling investors that it is making billions from its chatbot,” “it has not been quite so clear about how much it is losing.”  While OpenAI’s monthly revenue hit $300 million in August, it “expects to lose roughly $5 billion this year after paying for costs related to running its services and other expenses like employee salaries and office rent.”  It spends most, though, on “the computing power it gets through a partnership with Microsoft, which is also OpenAI’s primary investor.”  Even if company projections of a much brighter future come to pass, OpenAI’s financial present is dark.

On industry results, Matt Turner reported those from the previous week from five of the largest companies in the November 3rd “Insider Today” in Business Insider.  Overall, he said they were “beating estimates and committing billions to AI.”  Alphabet’s Google-branded “cloud business benefited from AI adoption, posting a 35% year-over-year increase in revenues.”  Amazon did the same, with AI-assisted cloud revenues growing 19%.  Apple’s loss of Chinese revenue “left investors underwhelmed,” and it is uncertain if “new Apple Intelligence features help juice sales.”  “Meta beat estimates, though user growth came in below expectations,” and CEO Mark Zuckerberg “promised to keep spending on AI.”  Microsoft also did better than expected, “but concerns around capacity constraints in AI” dampened investor reactions.  Overall, AI seemed to be producing real money for these firms, but related revenue growth has hardly been explosive.

A useful summary, “How companies are spending on AI right now,” by Patrick Kulp, came out on November 12th in Tech Brew.  In effect responding to the first article above, also written by Kulp, the piece started with “Despite some worry about a possible AI bubble earlier in the year, businesses are continuing to spend on generative technology – and investors are still eyeing it as a growth area.”  Another conclusion here was that of “AI becoming an office staple,” with 38% third-quarter-on-second-quarter growth of “business spending on AI vendors.”  Although “half of the top 10 fastest growing enterprise software vendors on the platform were AI startups,” “OpenAI’s ChatGPT still reigns supreme,” but companies buying that product have been increasingly likely to get other firms’ offerings as well.  Additionally, we have “AI still fueling VC growth,” as “three-quarters of limited partners surveyed… said they plan to increase AI investments in the next 12 months, with cybersecurity, predictive analytics, and data centers garnering the most interest.”  Note that “autonomous vehicles and computer vision ranked last for sub-fields of AI catching investor attention.”  Yet, per an Accenture report, there has been a “productivity flatline” over the past year, despite more AI use.

What does all this reveal about artificial intelligence?  It is not vaporware.  Demand for it is real, in fact huge.  For some applications it is strongly and objectively beneficial.  But it still has problems, among them profitability and productivity, along with many more mentioned in previous posts.  We don’t know how comprehensive its advantages will turn out to be.  But it is real, and it is progressing.  From there, we will just need to stay tuned.

Friday, November 8, 2024

Artificial Intelligence Regulation – Disjointed, and Too Soon

Over the past three months, there have been several reports on how, or even whether, AI should be legally constrained.  What did they say?

On the issue of its largest supplier, there was “As Regulators Close In, Nvidia Scrambles for a Response” (Tripp Mickle and David McCabe, The New York Times, August 6th).  It’s not surprising that this company, which not only is doing a gigantic amount of business but also “by the end of last year… had more than a 90 percent share of (AI-building) chips sold around the world,” has drawn “government scrutiny.”  It has come from China, the United Kingdom, and the European Union as well as the United States Justice Department, causing Nvidia to start “developing a strategy to respond to government interest.”  Although, per a tech research firm CEO, “there’s no evidence they’re doing anything monopolistic or anticompetitive,” “the conditions are right because of their market leadership,” and “in the wake of complaints about Nvidia’s chokehold on the market, Washington’s concerns have shifted from China to competition, with everyone from start-up founders to Elon Musk grumbling about the company’s influence.”  It will not be easy for either the company or the governments.

Meanwhile, “A California Bill to Regulate A.I. Causes Alarm in Silicon Valley” (Cade Metz and Cecilia Kang, The New York Times, August 14th).  The legislation, which “could impose restrictions on artificial intelligence,” was then “still winding its way through the state capital,” and “would require companies to test the safety of powerful A.I. technologies before releasing them to the public.”  It could also, per its opposition, “choke the progress of technologies that promise to increase worker productivity, improve health care and fight climate change” and are in their infancies, pointing toward real uncertainty in how they will affect people.  Per leginfo.com, it was vetoed by state governor Gavin Newsom, who said “by focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology.  Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047 - at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good.”  Expect a different but related bill in California soon.

A thoughtful overview, “Risks and regulations,” came out in the August 24th Economist.  It stated that “artificial intelligence needs regulation.  But what kind, and how much?,” and came up with various ideas.  It started with the point that AI’s “best-known risk is embodied by the killer robots in the ‘Terminator’ films – the idea that AI will turn against its human creators,” the kind of risk that some people think is “largely speculative,” and others think is less important than “real risks posed by AI that exist today, such as bias, discrimination, AI-generated disinformation and violation of intellectual-property rights.”  With Chinese authorities most wanting to “control the flow of information,” and the European Union’s now-the-law AI Act being “mostly a product-safety document which regulates applications of the technology according to how risky they are,” “different governments take different approaches to regulating AI.”  With most American legislation coming from states as well, international and even national accord seems a long way off.

What can we gain from “Rethinking ‘Checks and Balances’ for the A.I. Age” (Steve Lohr, The New York Times, September 24th)?  Recalling the Federalist Papers, a Stanford University project, now with 12 essays known as the Digitalist Papers, “contends that today is a broadly similar historical moment of economic and political upheaval that calls for a rethinking of society’s institutional arrangements.”  The writings’ “overarching concern” is that “a powerful new technology… explodes onto the scene and threatens to transform, for better or worse, all legacy social institutions,” and therefore “citizens need to be more involved in determining how to regulate and incorporate A.I. into their lives.”  This effort seems designed as a starting point, since we have no more idea how AI, if it meets its lofty expectations, will affect society than we did about cars in 1900.

Overall, per Garrison Lovely in the September 29th New York Times, it may be that “Laws Need to Catch Up to Artificial Intelligence’s Unique Risks.”  Or not.  Over the past year, OpenAI has been embroiled in controversy over its safety practices, and, per Lovely, federal “protections are essential for an industry that works so closely with such exceptionally risky technology.”  As before, we do not have enough agreement between governments to do that now, but the day will come.  Sooner?  Later?  We do not know, but someday, we hope, we can get together on this potentially critical issue.

Friday, November 1, 2024

Today’s Jobs Report Didn’t Go Much of Anywhere – AJSN Latent Demand Down To 15.9 Million on Lower Number of Expatriates

This morning’s Bureau of Labor Statistics Employment Situation Summary was supposed to show a greatly reduced number of net new nonfarm payroll positions, but at 12,000 it didn’t even approach the 110,000 and 115,000 published estimates.  How did the other figures turn out?

Seasonally adjusted and unadjusted unemployment stayed the same, at 4.1% and 3.9% respectively, with the adjusted count of jobless up 200,000 to 7 million.  Of those, 1.6 million were long-term unemployed, or without work for 27 weeks or longer, down 100,000.  Those working part-time for economic reasons, or holding short-hours positions while seeking full-time ones, remained at 4.6 million.  The two measures of how common it is for Americans to be working or officially unemployed, the labor force participation rate and the employment-population ratio, both worsened, coming in at 62.6% and 60.0% for drops of 0.1 and 0.2 percentage points.  The unadjusted number of employed was off 108,000 to 161,938,000.  Better, though, were private nonfarm payroll wages, which gained 10 cents, more than inflation, to $35.46 per hour.
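As a rough arithmetic aside (my own check, not a figure from the report, and sensitive to rounding in both cited numbers), the adjusted jobless count and unemployment rate together imply the approximate size of the labor force:

```python
# Rough check: labor force implied by the cited adjusted figures.
# Both inputs are rounded, so the result is approximate.
unemployed = 7_000_000        # adjusted count of jobless
unemployment_rate = 0.041     # adjusted unemployment rate, 4.1%

# unemployment rate = unemployed / labor force, so:
implied_labor_force = unemployed / unemployment_rate
print(f"{implied_labor_force / 1e6:.1f} million")  # about 170.7 million
```

The rounding matters: shifting the rate within its published precision moves the implied labor force by millions, so this is a ballpark figure only.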

The American Job Shortage Number or AJSN, the metric showing how many additional positions could be quickly filled if all knew they would be easy to get, fell 736,000, almost all from a much-reduced estimate of the number of Americans living outside the United States, as follows:


The share of the AJSN from official unemployment rose 2.3 percentage points to 37.6%.  Compared with a year before, the loss of 900,000 from the expatriates’ contribution was mostly offset by 480,000 more from unemployment and 154,000 from those not looking for work for the previous year, with other changes small, for a 247,000 fall.
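A quick sum (mine, not from the data release) shows how those cited components net out to the stated 247,000 fall, with the remaining small categories accounting for the residual:

```python
# Sketch: year-over-year AJSN change from the components cited, in thousands.
expatriates = -900    # smaller contribution from Americans living abroad
unemployment = +480   # larger contribution from official unemployment
not_looking = +154    # larger contribution from those not looking for a year

cited_total = expatriates + unemployment + not_looking   # -266
net_change = -247                                        # stated overall fall
other_changes = net_change - cited_total                 # the small remainder
print(cited_total, other_changes)  # -266 19
```

So the three named components alone would give a 266,000 drop; the "other changes small" add back about 19,000 to reach the 247,000 net fall.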

What happened this time?  To judge that, we next look at the measures telling us how many people left or entered the workforce.  Those were a 469,000 rise in the count of those claiming no interest in a job, and 219,000 more overall not in the labor force.  There was also consistent shrinkage in the categories of marginal attachment, the 3rd through 6th and 8th rows above.  Those departing workers were why our unemployment rates didn’t worsen, given fewer new positions than our population increase could absorb.  October’s deficiency, possibly caused mostly by storms and sudden layoffs, may well greatly reverse itself next time, but it is in the books.  Accordingly, I saw the turtle take a small step backwards.