Friday, August 30, 2024

Artificial Intelligence’s Limitations and Clear Current Problems

We’re still marching through the months since last year’s AI awakening.  We can’t fairly say that its shortcomings are permanent, but, as of now, what are they?

First, although computer applications have excelled at many games, playing chess vastly better than any human ever has and having electronically solved checkers 17 years ago, they have not done the same with bridge.  Per BBO Weekly News on July 21st, Bill Gates said, correctly, that “bridge is one of the last games in which the computer is not better.”  Artificial intelligence progress has so far done nothing to change that, and it is noteworthy that even in a closed system with completely defined rules, objectives, and scoring, it has not been able to take over.

Not only has it not replaced huge numbers of jobs, but “77% Of Employees Report AI Has Increased Workloads And Hampered Productivity, Study Finds” (Bryan Robinson, Forbes, July 23rd).  The effort, “in partnership with The Upwork Research Institute, interviewed 2,500 global C-suite executives, full-time employees and freelancers.”  It found that “the optimistic expectations about AI’s impact are not aligning with the reality faced by many employees”: in contrast with the 96% of C-suite executives expecting AI to boost productivity, “77% of employees using AI say it has added to their workload and created challenges,” and it has been “contributing to employee burnout.”  Also, 47% “of employees using AI say they don’t know how to achieve the expected productivity gains their employers expect, and 40% feel their company is asking too much of them when it comes to AI.”  This is what we used to call a disconnect.  The author recommended that employers get outside help with AI efforts and measure productivity differently, and that workers generally “embrace outside expertise.”

A similarly negative view was the subject of “Machines and the meaning of work” (Bartleby, The Economist, July 27th).  The pseudonymous writer cited a paper claiming that “in theory, machines can free up time for more interesting tasks; in practice, they seem to have had the opposite effect.”  Although in health care automation can allow more time with patients, in other fields, as “the number of tasks that remain open to humans dwindles, hurting both the variety of work and people’s understanding of the production process,” “work becomes more routine, not less.”  Overall, “it matters whether new technologies are introduced in collaboration with employees or imposed from above, and whether they enhance or sap their sense of competence.”

Similarly, Emma Goldberg, in the New York Times on August 3rd, asked “Will A.I. Kill Meaningless Jobs?”  If it does, it would make workers happier in the short run, but it could also contribute to “the hollowing out of the middle class.”  Although the positions that AI could absorb might be lacking in true significance, many “have traditionally opened up these white-collar fields to people who need opportunities and training, serving as accelerants for class mobility:  paralegals, secretaries, assistants.”  These roles could be replaced by ones with “lower pay, fewer opportunities to professionally ascend, and – even less meaning.”  Additionally, “while technology will transform work, it can’t displace people’s complicated feelings toward it.”  So we don’t know – but breaking even is not good enough for what is often predicted to be a trillion-dollar industry.

Back to the issue of perceived AI value is “A lack-of-demand problem” (Dan DeFrancesco, Insider Today, August 8th).  “A chief marketing officer” may have been justified in expecting that the Google AI tools their company introduced would “be an easy win,” as “in the pantheon of industries set to be upended by AI, marketing is somewhere near the top,” and the technology could “supercharge a company’s marketing department in plenty of ways,” such as by providing “personalized emails” and “determining where ads should run.”  Unfortunately, per the CMO, “it hasn’t yet,” as “one tool disrupted its advertising strategy so much they stopped using it,” “another was no better at the job than a human,” and one more “was only successful about 60% of the time.”  Similar claims appear here from Morgan Stanley and “a pharma company.”  In all, “while it’s only fair to give the industry time to work out the kinks, the bills aren’t going to slow down anytime soon.”

In the meantime, per “What Teachers Told Me About A.I. in School” (Jessica Grose, The New York Times, August 14th), AI is causing problems there, per examples of middle school students, lacking “the background knowledge or… intellectual stamina to question unlikely responses,” turning in assignments including the likes of “the Christian prophet Moses got chocolate stains out of T-shirts.”  Teachers are describing AI-based cheating as “rampant,” but are more concerned about students not learning how to successfully struggle through challenging problems.  Accordingly, they are “unconvinced of its transformative properties and aware of its pitfalls,” and “only 6 percent of American public school teachers think that A.I. tools produce more benefit than harm.”

I do not know how long these AI failings will continue.  With massive spending on the technology by its providers continuing, they will be under increasing pressure to deliver useful and accurate products.  How customers react, and how patient they will be, will eventually determine how successful artificial intelligence, as a line of business, will be over the next several years.  After some length of time, future promises will no longer pacify those now dissatisfied.  When will we reach that point?

Friday, August 23, 2024

Seven Weeks on Artificial Intelligence Progress: Real, Questioned, Disappointing, and Baked into the Investment Cake

What substantial contributions has AI recently made?  What big weakness does it still have?  What has happened to its great prospects?  Can we know its true inherent advancement?  And what forecasts do today’s AI-related stock prices include?

The first report is “A sequence of zeroes” (The Economist, July 6th), subtitled “What happened to the artificial-intelligence revolution?”  “Move to San Francisco and it is hard not to be swept up by mania over artificial intelligence… The five big tech firms – Alphabet, Amazon, Apple, Meta and Microsoft… this year… are budgeting an estimated $400bn for capital expenditures, mostly on AI-related hardware.”  However, “for AI to fulfil its potential, firms everywhere  need to buy the technology, shape it to their needs and become more productive as a result,” and although “investors have added more than $2trn to the market value of the five big tech firms in the past year… beyond America’s west coast, there is little sign that AI is having much of an effect on anything.”  One reason for the non-progress is that “concerns about data security, biased algorithms and hallucinations are slowing the roll-out” – an example here is that “McDonald’s… recently canned a trial that used AI to take customers’ drive-through orders after the system started making errors, such as adding $222 worth of chicken nuggets to one diner’s bill.”  Charts here show that the portion of American jobs that are “white collar” has still been marching steadily upward, and, disturbingly, that share prices of “AI beneficiaries” have stayed about even since the beginning of 2019 while others have on average risen more than 50%.  Now, “investors anticipate that almost all of big tech’s earnings will arrive after 2032.”

“What if the A.I. Boosters Are Wrong?” (Bernhard Warner and Sarah Kessler, The New York Times, July 13th), and not even premature?  MIT labor economist Daron Acemoglu’s “especially skeptical paper” argued that “A.I. would contribute only “modest” improvement to worker productivity, and that it would add no more than 1 percent to U.S. economic output over the next decade.”  The economist “sees A.I. as a tool that can automate routine tasks… but he questioned whether the technology alone can help workers “be better at problem solving, or take on more complex tasks.””  Indeed, AI may fall victim to the same problem that got 3D printing out of the headlines in the 2010s: lack of a massively beneficial, large-scale application.

In real contrast to common concerns, especially from last year, “People aren’t afraid of A.I. these days.  They’re annoyed by it” (David Wallace-Wells, The New York Times, July 24th).  One issue “has inspired a… neologistic term of revulsion, “AI slop”: often uncanny, frequently misleading material, now flooding web browsers and social-media platforms like spam in old inboxes.”  Some delightful examples cited here are X’s and Google’s pronouncements that “it was Kamala Harris who had been shot… that only 17 American presidents were white… that Andrew Johnson, who became president in 1865 and died in 1875, earned 13 college degrees between 1947 and 2012… that geologists advise eating at least one rock a day,” and “that Elmer’s glue should be added to pizza sauce for thickening.”  Such “A.I. “pollution”” is causing “plenty of good news from A.I.” to be “drowned out.”  With Google’s CEO admitting “that hallucinations are “inherent” to the technology,” they don’t look like they’ll be going away soon.

Even given the disappointments above, “Getting the measure of AI” (Tom Standage, The Economist, July 31st) is not easy.  One way “is to look at how many new models score on benchmarks, which are essentially standardised exams that assess an AI model’s capabilities.”  One such benchmark, “MMLU, which stands for “massive multi-task language understanding,”” contains “15,908 multiple-choice questions, each with four possible answers, across 57 topics including maths, American history, science and law,” and has been giving scores “between 88% and 90%” to “today’s best models,” compared with barely better than the pure-chance 25% in 2020.  There will be more such benchmarks, and it will be useful to see how models’ scores improve from here.
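
To make that scoring concrete, here is a minimal sketch of how a multiple-choice benchmark of this kind gets graded.  The two questions and the pick_answer “model” below are hypothetical stand-ins, not the real MMLU data or any actual system:

```python
import random

# Hypothetical stand-ins for the benchmark and the model under test.
# A real MMLU run would load 15,908 questions across 57 subjects.
QUESTIONS = [
    {"question": "2 + 2 = ?", "choices": ["3", "4", "5", "22"], "answer": 1},
    {"question": "Capital of France?", "choices": ["Lyon", "Nice", "Paris", "Lille"], "answer": 2},
]

def pick_answer(question: dict) -> int:
    """Placeholder 'model': guesses uniformly at random among the four choices."""
    return random.randrange(len(question["choices"]))

def benchmark_accuracy(questions: list[dict]) -> float:
    """Score = fraction of questions where the chosen index matches the keyed answer."""
    correct = sum(pick_answer(q) == q["answer"] for q in questions)
    return correct / len(questions)

if __name__ == "__main__":
    # With four choices per question, random guessing converges to roughly 25%,
    # the 2020-era floor mentioned above; today's best models report 88-90%.
    print(f"Accuracy: {benchmark_accuracy(QUESTIONS * 5000):.1%}")
```

A real evaluation differs mainly in scale and in how a model’s answer gets extracted from its output; the score itself is the same simple fraction correct.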

On the constructive side, “A.I. Is Helping to Launch New Businesses (and Not Just A.I. Businesses)” (Sydney Ember, The New York Times, August 18th).  A Carnegie Mellon University professor who for 14 years has been having “groups of mostly graduate students start businesses from scratch” said, after advising the extensive use of generative AI, that he’d “never seen students make the kind of progress that they made this year.”  The technology helped them to “write intricate code, understand complex legal documents, create posts on social media, edit copy and even answer payroll questions.”  As well, one budding entrepreneur said, “I feel like I can ask the stupid questions of the chat tool without being embarrassed.”  That counts too, and while none of these is the kind of $1 trillion problem that a Goldman Sachs researcher quoted in the Wallace-Wells article asked about, collectively they are of real value.

Is it reasonable to think that AI stocks will roughly break even from here if lofty expectations go unrealized?  No, according to Emily Dattilo, in Barron’s on August 19th: “Apple Is Set to Win in AI.  How That’s ‘Already Priced In.’”  Analysts at Moffett Nathanson, for example, pronounced that, although Apple was “on track to win in artificial intelligence,” the “bad news” was “that’s exactly what’s already priced in.”  I suspect that’s happening with the other AI stocks as well.  If the technology not only grows in scope but does so more than currently expected, share prices may rise, but if it only gets moderately larger, they could drop.  That can be called another problem with artificial intelligence – if enough investors realize this situation, the big five companies above, Nvidia, and others may have already seen their peaks.  Small-scale achievements such as startup business help will not be enough to sustain tremendous financial performance.  What goes up does not always come down, but here it just might.  And the same thing goes for AI hopes.

Friday, August 16, 2024

Artificial Intelligence’s Data Needs: Can They Be Met Legally and Logistically?

Three of the problems I identified with AI in previous posts concern the data used to train its large language models.  One is the sheer volume of information it needs to create more advanced capabilities.  Second is data’s legal status, which has caused several large lawsuits, and doubtless many more small ones, charging copyright infringement.  The third is distortion from chatbots taking in output from themselves or others.  What has been in the press lately about these issues, and what does it mean not only about this aspect of AI but about AI in general?

Apparently, “Apple, Nvidia, Anthropic Used Thousands of Swiped YouTube Videos to Train AI” (Annie Gilbertson, WIRED, July 16th).  The problem has been that “tech companies are turning to controversial tactics to feed their data-hungry artificial intelligence models, vacuuming up books, websites, photos, and social media posts, often unbeknownst to their creators.”  Everyone anywhere near the field, let alone companies’ legal personnel, should know that electronic versions of books and published articles are as subject to copyright law as hardcopy editions, as long documented in statements such as “no part of this book may be reproduced in any form or by any means without the prior written permission of the Publisher, excepting brief quotes…,” which I got from a random 1968 paperback.  It is understandable, though, for lay people not to know whether that also applies to the likes of videos and other less formally protected online material.  It also may be difficult, in these data-absorbing efforts, to avoid off-limits products, but the problem still must be solved.

That’s why, at least per Nico Grant and Cade Metz, in the New York Times on July 19th, we are seeing or should see “The Push to Develop Generative A.I. Without All the Lawsuits.”  The partial solution to the copyrighted-information problem here is for those owning the rights to data to build “A.I. image generators with their own data,” and then sell AI-development access.  Two companies already starting that are “the major stock photo suppliers Getty Images and Shutterstock,” which will pay photographers when their work is thus used.  Fair play, or so it seems.

Otherwise, “The Data That Powers A.I. Is Disappearing Fast” (Kevin Roose, The New York Times, July 19th).  Per research “by the Data Provenance Initiative, an M.I.T.-led research group,” “three commonly used A.I. training data sets” have so far had only 5% of their data restricted (though “25 percent… from the highest-quality sources”), but the restricting is clearly in progress.  Conclusive definition of legal information use is not here yet, as “A.I. companies have claimed that their use of public web data is legally protected under fair use.”  Perhaps, per the author, “if you take advantage of the web, the web will start shutting its doors.”
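
Much of that door-shutting shows up in sites’ robots.txt files, the long-standing way a website tells crawlers to stay out.  As a minimal sketch of how a publisher’s choice appears in practice, the snippet below uses Python’s standard library to check whether given crawlers may fetch a page; the site URL and page path are placeholders, and the user-agent strings are simply examples of AI-related crawler names, not data from the study:

```python
from urllib.robotparser import RobotFileParser

# Placeholder site and page; swap in a real address to test.
SITE = "https://www.example.com"
PAGE = f"{SITE}/articles/some-page.html"
CRAWLERS = ["GPTBot", "CCBot", "Googlebot"]  # example crawler user-agents

parser = RobotFileParser()
parser.set_url(f"{SITE}/robots.txt")
parser.read()  # fetches and parses the site's robots.txt

for agent in CRAWLERS:
    allowed = parser.can_fetch(agent, PAGE)
    print(f"{agent}: {'allowed' if allowed else 'blocked'}")
```

A site that wants its text kept out of training data simply adds a disallow rule for the relevant agents, which is exactly the kind of restriction the researchers measured spreading.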

Another way out was described in Forbes Daily on July 24th: “The Internet Isn’t Big Enough To Train AI.  One Fix?  Fake Data.”  “OpenAI’s ChatGPT, the chatbot that helped mainstream AI, has already been trained on the entire public internet, roughly 300 billion words including all of Wikipedia and Reddit” (italics in original), meaning that “at some point, there will be nothing left.”  A company, Gretel, wants to provide AI firms with “fake data made from scratch,” which is not totally new, as “Anthropic, Meta, Microsoft and Google have all used synthetic data in some capacity to train their models.”  Two issues with it are that “it can exaggerate biases in an original dataset and fail to include outliers,” which “could make AI’s tendency to hallucinate even worse.”  If, that is, it does not “simply fail to produce anything new.”  We will find out, probably within the year, if artificial data is a worthwhile partial or complete substitute.
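
As a toy illustration of both the appeal and the stated risk, the sketch below fits a simple distribution to a small “real” dataset and then samples synthetic records from it.  This is a generic caricature, not Gretel’s or anyone else’s actual method; notice how the rare outliers in the original are unlikely to reappear in the synthetic version:

```python
import random
import statistics

random.seed(0)

# Toy "real" dataset: mostly ordinary values plus a few rare outliers.
real = [random.gauss(100, 10) for _ in range(1000)] + [500, 520, 480]

# Fit a simple model (here just a normal distribution) to the real data...
mu = statistics.mean(real)
sigma = statistics.stdev(real)

# ...and generate synthetic records from the fitted model.
synthetic = [random.gauss(mu, sigma) for _ in range(1000)]

print(f"real max:      {max(real):.0f}")       # ~520: the outliers are present
print(f"synthetic max: {max(synthetic):.0f}")  # far lower: the outliers vanish
```

The synthetic data is cheap and plentiful, but anything unusual in the original is smoothed away, which is one version of the bias-and-missing-outliers problem described above.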

To the point of the final first-paragraph problem is “What happens when you feed AI-generated content back into an AI model?  Put simply:  absolute chaos” (Maggie Harrison Dupre, Futurism.com, July 26th).  Per a recent study, “AI models trained on AI-generated material will experience rapid “model collapse” … as an AI model cannibalizes AI-generated data, its outputs become increasingly bizarre, garbled, and nonsensical.”  The problem is out there now, as “there are thousands of AI-powered spammy “news” sites cropping up in Google; Facebook is quickly filling with bizarre AI imagery… Very little of this content is marked as AI-generated, meaning that web scraping, should AI companies continue to attempt to gather their data from the digital wilds, is  becoming a progressively dubious means of collecting AI training data.”
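
A deliberately simplified sketch of why that recursion degrades things: fit a simple model to data, generate new data from it while underrepresenting the tails (a tendency the model-collapse research identifies as the driver of the problem), then train the next generation on that output.  Nothing here is the study’s actual setup; it is a caricature of the mechanism:

```python
import random
import statistics

random.seed(1)

# Generation 0: "human-written" data.
data = [random.gauss(0, 1) for _ in range(1000)]

def train_and_generate(samples: list[float]) -> list[float]:
    """Caricature of one training generation: fit a simple model, then
    generate new data that underrepresents the tails of what it saw."""
    mu, sigma = statistics.mean(samples), statistics.stdev(samples)
    generated = [random.gauss(mu, sigma) for _ in range(1200)]
    generated.sort(key=lambda x: abs(x - mu))
    return generated[:1000]  # keep only the 1000 most "typical" outputs

for generation in range(1, 11):
    data = train_and_generate(data)
    print(f"gen {generation:2d}: stdev={statistics.stdev(data):.3f}")
# The spread shrinks steadily toward zero: later generations describe the
# previous generations' output rather than the original human data.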

Despite the hope in the second story above, none of this looks good for future AI releases.  These problems will not be easy to solve.  We already have the issue that AI is nowhere near ready to produce even page-length writing releasable without human scrutiny – the concerns here will, most likely, keep that capability at bay.  Until then, AI will fail to even approximate the utility expected by its customers and backers.  That means, even without regard to other obstacles such as insufficient power for fundamentally more advanced releases, that artificial intelligence is in deep trouble.  All should govern themselves accordingly.

Friday, August 9, 2024

Artificial Intelligence Investments and Revenue – A Month’s Worth

AI, you’ve done it to me.

Even after I set aside most of the news items about it, namely those focusing on the future instead of the present or on factors irrelevant to what is happening with the technology itself, there are still too many for even weekly posts.  So I have divided the coverage into subtopics, of which this is the first to be published.

The oldest here is “Microsoft’s Steep AI Investments Raise Questions About Returns” (Patrick Seitz, Investor’s Business Daily, July 10th).  As crisply put by a Morgan Stanley analyst, “with Microsoft’s capital expenditures poised to nearly double from $32 billion in fiscal 2023 to our forecasted $63 billion in fiscal 2025, the question of monetization against these investments rises to the forefront for many investors.”  The analyst still rated the stock highly, but, per the author, “after digging into the question of return on investment,” “was left with more questions than answers” because of a “lack of visibility.”

The first item in the July 19th Goldman Sachs Briefings was “Will the $1 trillion in AI spending pay off?”  The determination here was that “AI may prove far less promising than many business leaders and investors expect,” per an MIT professor who “estimates that only a quarter of AI-exposed tasks will be cost-effective to automate within the next 10 years – which means that AI will impact less than 5% of all tasks and boost US productivity by only 0.5% and GDP growth by 0.9% cumulatively over the next decade.”  A Goldman Sachs research head went “a step further” by saying that “there’s not a single thing that this is being used for that’s cost-effective at this point.”
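
The arithmetic behind that task estimate is worth spelling out.  The share of tasks that count as AI-exposed is not given here, so the 20% below is my assumption for illustration; with it, “only a quarter of AI-exposed tasks” being cost-effective to automate works out to roughly 5% of all tasks:

```python
# Hypothetical illustration of the cited estimate's arithmetic.
ai_exposed_share = 0.20         # assumed share of all tasks exposed to AI
cost_effective_fraction = 0.25  # "only a quarter of AI-exposed tasks"

affected_share = ai_exposed_share * cost_effective_fraction
print(f"Share of all tasks cost-effectively automatable: {affected_share:.0%}")  # 5%
```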

On the other hand, “Alphabet Reports 20% Jump in Profit as A.I. Efforts Begin to Pay Off” (Nico Grant, The New York Times, July 23rd).  “The company has incorporated generative A.I. into all of its products and increased its spending on data centers and associated hardware to underpin the technology.  Now, Google executives said, those investments have started to bear fruit,” with second-quarter profit of $23.6 billion.  However, Victor Tangermann, in “Investors Are Suddenly Getting Very Concerned That AI Isn’t Making Any Serious Money” on Futurism.com on July 27th, reported that “investment bankers are… starting to become wary of Big Tech’s ability to actually turn the tech into a profitable business,” saying that Google’s earnings report was “failing to impress investors with razor-thin profit margins and surging costs related to training AI models.”  Still pertinently, a “tech stock analyst” wrote back in March that “capital continues to pour into the AI sector with very little attention being paid to company fundamentals… a sure sign that when the music stops there will not be many chairs available.”

Perhaps in reaction to these pieces and the thoughts behind them, “Tech Bosses Preach Patience as They Spend and Spend on A.I.” (Karen Weise, The New York Times, August 2nd).  “The tech industry’s biggest companies have made it clear over the last week that they have no intention of throttling their stunning levels of spending on artificial intelligence, even though investors are getting worried that a big payoff is further down the line than once thought.”  A week ago Thursday, Mark Zuckerberg of Meta said he would spend “at least $37 billion” on “new tech infrastructure,” and “would spend even more next year.”  “In the last quarter alone, Apple, Amazon, Meta, Microsoft, and Google’s parent company Alphabet spent a combined $59 billion on capital expenses, 63 percent more than a year earlier and 161 percent more than four years ago.”  Another Goldman Sachs executive asked “What $1 trillion problem will A.I. solve?,” and implied that “replacing low wage jobs with tremendously costly technology” was not sufficient justification.  Yet leaders of the other largest IT firms also emphasized the need to be at the front of AI progress, and said that true business success there might take a long time.
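
For perspective, those growth percentages let us roughly back out the earlier spending levels; this is my arithmetic on the quoted figures, not numbers from the article:

```python
# Backing out implied earlier capital spending from the quoted growth rates.
current_quarter = 59e9                    # combined capex, last quarter
a_year_earlier = current_quarter / 1.63   # "63 percent more than a year earlier"
four_years_ago = current_quarter / 2.61   # "161 percent more than four years ago"

print(f"A year earlier: ${a_year_earlier/1e9:.0f} billion")   # ~$36 billion
print(f"Four years ago: ${four_years_ago/1e9:.0f} billion")   # ~$23 billion
```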

Is it true that “The AI supply chain is in jeopardy” (Guy Scriven, The Economist, August 3rd)?  “About a year ago,” he “started to ask analysts what it would take to stop the… (AI) bull run.  Most suggested it would end if big tech firms failed to deliver meaningful AI-related sales for a couple of quarters in a row.  It took more than a couple of quarters, but over the past two weeks, as the tech giants reported their earnings, that scenario has started to play out.”  Adding to general murkiness is that Microsoft, where “growth in this revenue stream has been slowing,” “is still the only big tech firm that puts a figure to its AI sales.”  He echoed that “scepticism has set in among investors,” and opined that “the boom looks somewhat precarious,” as “the larger risk is that demand peters out.”

Will it, or won’t it?

Friday, August 2, 2024

July Jobs Report: Downward Trend Continues – AJSN 600,000 Higher, With Latent Demand Over 17.8 Million

 I saw three published estimates of the number of net new nonfarm payroll positions in this morning’s Bureau of Labor Statistics Employment Situation Summary.  All three were 175,000. 

It didn’t make it – the number was 114,000.  That’s still decent, as American population increased only 163,000, but was the worst in months. Even poorer in relation to recent outcomes was the seasonally adjusted unemployment rate, up from 4.1% to 4.3%.  Other results inferior to June’s were unadjusted joblessness at 4.5% instead of 4.3%, the number of unemployed up 400,000 to 7.2 million, the employment-population ratio from 60.1% to 60.0%, and the count of those working part-time for economic reasons, or thus far unsuccessfully seeking full-time positions while keeping shorter-hours ones, up a disturbing 400,000 to 4.6 million.  Average hourly nonfarm private payroll earnings again roughly matched inflation, up 8 cents to $35.07, and the number of long-term unemployed, those jobless for 27 weeks or longer, remained at 1.5 million.  On the good side, the number claiming no interest in work lost 804,000 for a 1.41 million drop over two months, to reach 92,972,000, and the labor force participation rate gained 0.1% to get to 62.7%.

The American Job Shortage Number or AJSN, the measure showing how many additional positions could be quickly filled if all knew they would be as easy to get as a lottery ticket, gained 602,000 as follows:



The share of the AJSN from those officially jobless was 38.7%, up from 37.7%.  Compared with a year before, the metric is almost 1.2 million higher, almost matching the difference from more unemployment but also reflecting gains in those discouraged and those wanting work but not looking for it for a year or more. 
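
For readers new to the metric, the sketch below shows the general shape of a weighted latent-demand total and how a share like that 38.7% falls out of it.  The category names, weights, and counts are placeholders chosen only for illustration; they are not the AJSN’s actual methodology or this month’s components:

```python
# Schematic of a weighted latent-demand measure; all figures are placeholders.
components = {
    # category:                                  (count,        weight)
    "officially unemployed":                     (7_200_000,    0.90),
    "discouraged workers":                       (400_000,      0.90),
    "want work but not searching":               (3_500_000,    0.80),
    "other (expatriates, non-civilian, etc.)":   (100_000_000,  0.08),
}

total = sum(count * weight for count, weight in components.values())
jobless_count, jobless_weight = components["officially unemployed"]
share_from_jobless = (jobless_count * jobless_weight) / total

print(f"Latent-demand total: {total:,.0f}")
print(f"Share from official joblessness: {share_from_jobless:.1%}")
```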

Some jobs reports are confusing and contradictory, but this one was not.  People are still joining the labor force in large numbers, shown by the 1.41 million above and by the count of those employed actually going up, 264,000 to 162,038,000.  When you look at the discouraged and did-not-search outcomes, along with the unemployment results, you can see that many Americans are either losing work or not getting it.  The labor force is growing faster than the number of positions.  Inflation is now at 2.5%, reasonable in every way except in relation to the Federal Reserve’s target, but we clearly need more jobs – especially full-time jobs.  The Fed missed a chance to cut interest rates this week; at next month’s meeting it should not miss another, and a nominal quarter point, unless the data above substantially improves, will not be enough.  For now, the turtle stayed right where he was.