Friday, December 27, 2024

What Artificial Intelligence Users Are Doing with It – And Shouldn’t Be

It’s not only technical capabilities that give AI its meaning, but also what’s being done with the software itself.  What have we seen?

One result is that “Recruiters Are Getting Bombarded With Crappy, AI-Generated CVs” (Sharon Adarlo, Futurism.com, August 16th).  Now that AI has shown itself useful for mass job applications, for cover letters, and for identifying suitable openings, it’s no surprise that hopefuls are using it for resumes too, with less favorable results.  Without sufficient editing, “many of them are badly written and generic sounding,” with language that is “clunky and generic” and fails “to show the candidate’s personality, their passions,” or “their story.”  The piece blames this problem on AI itself, but all recruiters need to do is disregard applications with resumes showing signs of AI prefabrication.  Since resumes are so short, it is not time-consuming to carefully revise those initially written by AI, and failure to do that can understandably be read as showing what people would do on the job.

In something that might have appealed to me in my early teens, “Millions of People Are Using Abusive AI ‘Nudify’ Bots on Telegram” (Matt Burgess, Wired.com, October 15th).  The article credited “deepfake expert” Henry Ajder with finding a Telegram bot that “had been used to generate more than 100,000 explicit photos – including those of children.”  Now there are 50 of them, with more than 4 million combined “monthly users.”  The problem here is that there is no hope of stopping people from creating nude deepfakes, and therefore not enough reason to make them illegal.  Those depicting children, when passed to others, can be subject to the laws covering child pornography, but adults will need to understand that anyone can create such things from pictures of them clothed, or even only of their faces, so we will all need to realize that such images are likely not real.  Unless people copyright pictures of themselves, it is time to accept that counterfeits will be created.

Another problem with fake AI creations was the subject of “Florida mother sues AI company over allegedly causing death of teen son” (Christina Shaw, Fox Business, October 24th).  In this case, Character.AI was accused of “targeting” a 14-year-old boy “with anthropomorphic, hypersexualized, and frighteningly realistic experiences” involving conversations described as “text-based romantic and sexual interactions.”  The chatbot allegedly “misrepresented itself as a real person,” and, when the boy became “noticeably withdrawn” and “expressed thoughts of suicide,” it “repeatedly encouraged him to do so” – after which he did.  Here we have a problem with allowing children access to such features.  Companies will need to stop that, whether it is convenient or not.

How about this one: “Two Students Created Face Recognition Glasses.  It Wasn’t Hard.” (Kashmir Hill, The New York Times, October 24th).  Two Harvard undergraduates fashioned a pair that “relied on widely available technologies, including Meta glasses, which livestream video to Instagram… Face detection software, which captures faces that appear on the livestream… a face search engine called PimEyes, which finds sites on the internet where a person’s face appears,” and “a ChatGPT-like tool that was able to parse the results from PimEyes to suggest a person’s name and occupation” and other data.  The creators, testing at local train stations, found that it “worked on about a third of the people they tested it on,” giving subjects the experience of being identified, along with their work information and accomplishments.  It turned out that Meta had already “developed an early prototype,” but did not pursue its release “because of legal and ethical concerns.”  It is hard to blame any of the companies providing the products above – indeed, after the publicity this event received, “PimEyes removed the students’ access… because they had uploaded photos of people without their consent” – and, with AI as one ingredient, there will be many people combining capabilities to invade privacy by discovering information.  Stopping this seems conceptually unviable.
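
The chain described above can be sketched schematically.  Everything below is a hypothetical stand-in – the function names and return values are illustrative, not real APIs for Meta glasses, PimEyes, or any ChatGPT-like tool:

```python
# Hypothetical sketch of the glasses pipeline: livestream frame -> face
# detection -> reverse face search -> LLM summary.  Every function here is
# a labeled stand-in, not a real API for any of the products named above.

def detect_faces(frame):
    # Stand-in for face detection software run on livestream frames.
    return ["face_crop_1"]

def reverse_face_search(face_crop):
    # Stand-in for a face search engine returning pages where a matching
    # face appears.
    return ["https://example.com/some-profile-page"]

def summarize_identity(pages):
    # Stand-in for a ChatGPT-like tool parsing search hits into a guess.
    return {"name": "Jane Doe", "occupation": "unknown"}

def identify(frame):
    # Chain the three off-the-shelf capabilities, one result per face.
    return [summarize_identity(reverse_face_search(face))
            for face in detect_faces(frame)]

print(identify("livestream_frame"))
```

The point is less the code than the architecture: each link is a commodity service, which is why the combination was, as the headline says, not hard to build.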

Meanwhile, “Office workers fear that AI use makes them seem lazy” (Patrick Kulp, Tech Brew, November 12th).  A Slack report said a “stigma” attached to using AI at work was hurting “workforce adoption”: growth slowed this year from “six points in a single quarter” to a single point over “the last five months,” ending at 33%.  A major issue was that employees had insufficient guidance on when they were allowed to use AI, which many had brought to work themselves.  A strange situation, and one that clearly calls for management involvement.

Finally, there were “10 things you should never tell an AI chatbot” (Kim Komando, Fox News, December 19th).  They are “passwords or login credentials,” “your name, address or phone number” (likely to be passed on), “sensitive financial information,” “medical or health data” as “AI isn’t HIPAA-compliant,” “asking for illegal advice” (which may get you “flagged” if nothing else), “hate speech or harmful content” (likewise), “confidential work or business info,” “security question answers,” “explicit content” (which could also “get you banned”), and “other people’s personal info.”  Overall, “don’t tell a chatbot anything you wouldn’t want made public.”  As AI interfaces get cozier and cuddlier, it will become easier to overshare with them, but that is more dangerous than ever.

My proposed solutions above may not be acceptable forever, and are subject to laws.  Perhaps this will long be a problem when dealing with AI – that conceptually sound ways of handling emerging issues may clash with real life.  That is a challenge – but, as with so many other aspects of artificial intelligence, we can learn to handle it effectively.

Friday, December 20, 2024

Artificial Intelligence: A Visit to the Catastrophic Problems Café

One thing almost everyone creating, using, coding, regulating, or just plain writing or thinking about AI feels duty-bound to do is to consider the chance that the technology will destroy us.  I haven’t done that in print yet, so here I go.

In his broad-based On the Edge:  The Art of Risking Everything (Penguin Press, 2024), author Nate Silver devoted a 56-page chapter, “Termination,” to the chance that AI will obliterate humanity or almost do so.  He said there was a wide range of what he called “p(doom)” opinions, or estimations of the chances of such an outcome.  He considered more precise definitions of doom – for example, does it mean that “every single member of the human species and all biological life on Earth dies,” or could it be only “the destruction of humanity’s long-term potential,” or even “something where humans are kept in check” with “the people making the big calls” being “a coalition of AI systems”?  With doom defined as “all but five thousand humans ceasing to exist by 2100,” the average p(doom) Silver found was 8.8% from “domain experts” on AI itself, and 0.7% from “generalists who had historically been accurate when making other probabilistic predictions.”  The highest expert p(doom) named was “20 to 30 percent,” but there are certainly larger ones out there.

How would the technology do its dirty work?  One way was in “A.I. May Save Us, or May Construct Viruses to Kill Us” (The New York Times, July 27th).  Author Nicholas Kristof said that “for less than $100,000, it may now be possible to use artificial intelligence to develop a virus that could kill millions of people.”  That could happen through anything from pathogens that kill indiscriminately to something that “might be possible”: using DNA knowledge to create a product tailored to “kill or incapacitate” one specific person.  Kristof is a journalist, not a technician, but as much of AI thinking is conceptual now, his concerns are valid.

Another New York Times columnist soon came out with “Many People Fear A.I.  They Shouldn’t” (David Brooks, July 31st).  His view was that “many fears about A.I. are based on an underestimation of the human mind” – he cited “scholar” Michael Ignatieff as saying that “what we do” is not algorithmic, but “a distinctively, incorrigibly human activity that is a complex combination of conscious and unconscious, rational and intuitive, logical and emotional reflection.”  He also wrote that while engineers claim to be “building machines that think like people,” per neuroscientists “that would be a neat trick, because we don’t know how people think.”

The next month, Greg Norman looked at the problem posed by Kristof above, in “Experts warn AI could generate ‘major epidemics or even pandemics’ – but how soon?” (Fox News, August 28th).  The worry, stemming from “a paper published in the journal Science by co-authors from Johns Hopkins University, Stanford University and Fordham University,” involves AI models exposed to “substantial quantities of biological data, from speeding up drug and vaccine design to improving crop yields.”  Although today’s AI models likely do not “substantially contribute” to biological risks, the chance that “essential ingredients to create highly concerning advanced biological models may already exist or soon will” could cause problems.

All of this depends, though, on what AI is allowed to access.  It is and will be able to formulate detailed deadly plans, but what then?  A Princeton undergraduate, John A. Phillips, in 1976 wrote and submitted a term paper giving detailed plans for assembling an atomic bomb, with all information from readily available public sources.  Although one expert said it would have had about an even chance of detonating, it was never built.  That is why my p(doom) is very low, less than a tenth of one percent.  There is no indication that AI models can build things by themselves in the physical world.

So far, we are doing well at containing AI.  As for the future, Silver said that, if given a chance to “permanently and irrevocably” stop its progress, he would not, as, ultimately, “civilization needs to learn to live with the technology we’ve built, even if that means committing ourselves to a better set of values and institutions.”  We can deal with artificial intelligence – a vastly more difficult challenge we face is dealing with ourselves.  That’s the last word.  With that, it’s time to leave the café.

Friday, December 13, 2024

Electric Vehicles: Sparse Commentary, But Still Worthwhile

Since August, I’ve been trying to find enough articles to justify an EV post, but not much is coming out.  We have news about Norway – with its shorter driving distances and politically liberal, government-compliant people – moving toward banning or restricting, perhaps prohibitively, internal combustion vehicles, but those conditions don’t apply in the United States, so such policies can’t reasonably be seen as role models for us.  In the meantime, what’s the best of the slim pickings among second-half 2024’s published pieces?

The oldest was Jack Ewing’s “Electric Cars Help the Climate.  But Are They Good Value?” (July 29th, The New York Times).  The author addressed “factors to consider, many of which depend on your driving habits and how important it is to you to reduce your impact on the environment.”  He wrote that “it’s hard to say definitively how long batteries will remain usable,” and although they “do lose range over time,” “the degradation is very slow.”  As for resale value, “on average, electric vehicles depreciated by 49 percent over five years, compared to 39 percent for all” vehicles, hurt by “steep price cuts” on new ones.  Fueling and maintenance, though, can both be cheaper: per the Environmental Protection Agency, one electric pickup truck model will cost less than half as much to fuel as its gasoline or diesel version.  Such vehicles also do not need oil changes, spark plugs, or muffler replacements, and overall “have fewer moving parts to break down,” although their heavy batteries mean they need new tires more often.  Ewing, though, did not mention driving range, certainly a concern varying greatly between consumers.
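
To make those depreciation figures concrete, here is the arithmetic on a hypothetical $50,000 purchase price (the price is my illustration; only the 49% and 39% rates come from the article):

```python
# Five-year resale value under the cited average depreciation rates.
# The $50,000 starting price is a hypothetical illustration.
def resale_value(price, depreciation_rate):
    return price * (1 - depreciation_rate)

ev_resale = resale_value(50_000, 0.49)    # electric vehicles: 49% depreciation
avg_resale = resale_value(50_000, 0.39)   # all vehicles: 39% depreciation
print(ev_resale, avg_resale)  # roughly 25,500 vs. 30,500 - a $5,000 gap
```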

I have advocated hybrid vehicles as the best of both worlds, so was surprised and disappointed to see “Why the hype for hybrid cars will not last” (The Economist, September 17th).  The piece does not consider hybrids that need no external charging, dealing only with plug-in hybrid electric vehicles (PHEVs).  As of press time, carmakers had been cooling on non-hybrid electrics and warming to hybrids, which are especially profitable, with buyers thinking of them as cheap, since they need much smaller batteries.  The uncredited author expected that they will become less common as California and the entire European Union ban their sale in the next decade.

Another advantage of electric cars we may not have anticipated is that “It Turns Out Charging Stations Are Cash Cows For Nearby Businesses” (Rob Stumpf, InsideEVs.com, September 24th).  I passed along years ago the idea that if driverless vehicles took over, smoking would drop, as so many people buy cigarettes at gas stations – this is sort of the other side.  “EV charging stations aren’t just better for the environment; they’re also money printers.  And it’s not just charging network providers who see green – so do nearby shops.”  The facilities seem to be benefiting businesses as far as a mile away, with “coffee shops” and other “places where people can kill 20 or so minutes” doing especially well because of this “dwell time.”  Watch this trend – it will become more and more focused, unless, somehow, charging takes no longer than a fill-up does today.  And perhaps coffee consumption will go up.

Last, we have “6 Common EV Myths and How to Debunk Them” (Jonathan N. Gitlin, Wired.com, November 16th).  How true are these (actually seven) areas of concern?  Gitlin wrote that “charging an EV takes too long” is invalid, since those unhappy with 18-to-20-minute times can be “curmudgeons,” and people can recharge at home while the car is idle.  However, “I can’t charge it at home” is reasonable: “if you cannot reliably charge your car at home or at work – and I mean reliably – you don’t really have any business buying a plug-in vehicle yet.”  “An EV is too expensive” fails since “75 percent of American car buyers buy used cars,” and “used EVs can be a real bargain.”  Weather concerns are no worse than with other vehicles, but he admitted that “I need 600 miles of uninterrupted range” doesn’t have “a good rebuttal,” though at least one electric model is now good for almost 500.  “They’re bad for the environment” does not apply to carbon dioxide emissions, or in localities with little electricity coming from coal.  “We don’t have enough electricity” – well, yes, we do, for everything except artificial intelligence model creation.

Overall, very little has been decided about the future of electric and hybrid vehicles in America.  But, with time, there will be more.  Stay tuned.

Friday, December 6, 2024

More Jobs but a Down Report – AJSN Showed Latent Demand Off a Bit To 15.8 Million

This morning’s Bureau of Labor Statistics Employment Situation Summary was touted as important, with last time’s tiny nonfarm payroll gain expected to rebound strongly and the result bearing on the Federal Reserve’s interest rate decision less than two weeks away.

On the number of jobs, the report delivered.  The five estimates I saw were all between 200,000 and 215,000, and the result was 227,000.  Otherwise, outcomes were largely a sea of red.  Seasonally adjusted and unadjusted unemployment each rose 0.1 percentage point, to 4.2% and 4.0%.  The total adjusted number of jobless rose 100,000 to 7.1 million, with the unadjusted variety up 75,000 to 6,708,000.  The unadjusted count of employed was off 482,000 to 161.456 million.  There were 100,000 more long-term unemployed – 1.7 million out of work for 27 weeks or longer.  The two statistics best showing how common it is for Americans to be working or one step away, the employment-population ratio and the labor force participation rate, worsened 0.2 and 0.1 percentage points, to 59.8% and 62.5%.  The only two of the front-line results I track to improve were the count of people working part-time for economic reasons, or keeping such arrangements while looking thus far unsuccessfully for full-time ones, which lost 100,000 to reach 4.5 million, and average hourly private nonfarm payroll earnings, up 15 cents, well over inflation, to $35.61.

The American Job Shortage Number, the measure showing how many additional positions could be quickly filled if all knew they would be easy to get, dropped just over 60,000 from October, to about 15.8 million.

The largest change since October was the 215,000 effect of the 269,000 fall of those wanting work but not searching for it for a year or more.  The contribution of those officially unemployed was up 69,000, and no other statistic above affected the AJSN more than 39,000.  The share from joblessness was 38.1%, rising 0.5%.
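
The contribution arithmetic above implies per-group weights – each group counts toward latent demand at an assumed probability of taking readily available work.  The weight below is inferred from this post’s own numbers (215,000 ÷ 269,000 ≈ 0.8), not taken from official AJSN methodology:

```python
# AJSN effect of a change in one group's count, using an inferred weight.
WEIGHTS = {
    # Inferred: the 269,000 fall produced a 215,000 effect, implying ~0.8.
    "did_not_search_for_a_year_or_more": 0.8,
}

def ajsn_effect(change_in_count, group):
    # Latent-demand change contributed by this group's population change.
    return round(change_in_count * WEIGHTS[group])

print(ajsn_effect(-269_000, "did_not_search_for_a_year_or_more"))  # about -215,000
```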

Compared with a year before, the AJSN gained 180,000, with most of the movement from higher unemployment, mostly offset by fewer expatriates.

How can we summarize this report?  Torpid.  Despite the 227,000 net new jobs, fewer people are working, and many who had not sought work for a year or more now appear to be declaring themselves uninterested.  We know how flexible that category really is, and this time it flexed up, with much more than the aging population responsible.  We really did not go anywhere this time, so the Fed clearly should go ahead with the final 2024 interest rate cut.  As for the turtle, he did not move.

Friday, November 29, 2024

Artificial Intelligence – Three-Plus Months of Problems and Perceptions, With Hope

As I predicted at the end of last year, AI has found a home in many niches.  But does it seem capable of justifying its $1 trillion economy?

Per “Artificial intelligence is losing hype” on August 19th, The Economist had concerns – or did it?  The editorial’s subtitle, “For some, that is proof that the tech will in time succeed.  Are they right?,” leaves open the possibility that calming AI expectations, especially those not backed by reasonable data and judicious extrapolation, may even predict the technology’s broader triumph.  It told us that “according to the latest data from the Census Bureau, only 5.1% of American companies use AI to produce goods and services, down from a high of 5.4% early this year.”  The article compared AI investments with 19th-century British “railway fever,” which, only after it caused an investment bubble, was justified as firms, “using the capital they had raised during the mania, built the track out, connecting Britain from top to bottom and transforming the economy.”  Could that happen with AI?

On September 21st, the same publication, in “The breakthrough AI needs,” considered what might be required for AI to be comprehensively and gigantically successful, and came up with using more “creativity” to end “resource constraints,” and “from giving ideas and talent the space to flourish at home, not trying to shut down rivals abroad,” as “the AI universe could contain a constellation of models, instead of just a few superstars.”  It is indeed clear that now picking the most successful AI companies of 2050 could be no more accurate than using 1900 information to determine the premier automakers during the industry’s mid-1920s gains.

Most pessimistic of all was “Will A.I. Be a Bust?  A Wall Street Skeptic Rings the Alarm” (Tripp Mickle, The New York Times, September 23rd).  The doubter, Goldman Sachs stock research head Jim Covello, wrote three months before that “generative artificial intelligence, which can summarize text and write software code, makes so many mistakes that it was questionable whether it would ever reliably solve complex problems.”  The “co-head of the firm’s geopolitical advisory business… urged him to be patient,” resulting in “private bull-and-bear debates” between the two, but the issue, within as well as outside Goldman Sachs, remained partisan and unresolved.

Back to The Economist, where on November 9th appeared “A nasty case of pilotitis,” subtitled “companies are struggling to scale up generative AI.”  Although, per the piece, “fully 39% of Americans now say they use” AI, the share of companies using it remained near 5%, many of which appeared “to be suffering from an acute form of pilotitis, dilly-dallying with pilot projects without fully implementing the technology.”  Managements seemed to fear embarrassment if they moved too quickly and damaged their firms’ reputations, and have also been held back by cost, “messy data” needing consolidation, and an AI-skill shortage.  Deloitte research found that the share of senior executives with a “high” or “very high” level of interest in generative AI had fallen to 63%, down from 74% in the first quarter of the year, suggesting that the “new-technology shine” may be wearing off; one CIO’s boss told him to stop promising 20% productivity improvements unless he was first prepared to cut his own department’s headcount by a fifth.

Another AI issue, technical instead of organizational, was described in “Big leaps to baby steps” in the November 12th Insider Today, which started with “OpenAI’s next artificial intelligence model, Orion, reportedly isn’t showing the massive leap in improvement previous versions have enjoyed.”  Company testers said Orion’s improvement was “only moderate and smaller than what users saw going from GPT-3 to GPT-4.”  With high costs and power and data limitations still looming, shrinking capability gains could jeopardize future releases.

Six days later, the same source described “A Copilot conundrum”: even a year after its release, Microsoft’s “flagship AI product” has been “coming up short on the big expectations laid out for it.”  An executive there told a Business Insider AI expert that “Copilot offers useful results about 10% of the time.”  Yet the software does have its adherents, including Lumen Technologies’ management, which forecasts “$50 million in annual savings from its sales team’s use of Copilot.”

An overall problem stemming from the above is that “Businesses still aren’t fully ready for AI, surveys show” (Patrick Kulp, Tech Brew, November 22nd).  “Indices attempting to gauge how companies have fared at reworking operations around generative AI have been piling up lately – and the verdict is mixed.”  While AI’s shortcomings are real and documentable, many firms “are still organizing their IT infrastructure.”  Reasons mentioned here were “culture and data challenges, as well as a lack of necessary talent and skills,” causing “nearly half of companies” to “report that AI challenges have fallen short of expectations across top priorities.”  So, if AI is overall now a failure, more than its producers are to blame.

A final AI course proposal came from Kai-Fu Lee in Wired.com on November 26th: “How Do You Get to Artificial General Intelligence:  Think Lighter.”  The idea here was to build “models and apps” that are “purpose-built for commercial applications using leaner models and innovative architecture,” thereby costing “a fraction to train and achieve levels of performance good enough for consumers and enterprises,” instead of making massive, comprehensive large language models which end up costing vastly more per query to use.  It may even be that different apps can use different AI sources which can somehow be combined.  That would be more difficult to organize, but the stakes are high.

This final article points up the thesis of the September 21st piece above – AI will need creativity in ways less emphasized in the industry.  Companies will need to think outside the boxes they have built and maintained.  There are real opportunities for those doing that best to earn billions or more.  Then, and only then, may artificial intelligence reach its potential.  Designers and executives stopped from exiting through the sides by the massive issues above will need to find ways of escaping through the top or bottom – or through another dimension.  Can they do that?  We will see.

Friday, November 15, 2024

Seven Weeks of Artificial Intelligence Investments, Revenue, and Spending, and What They Tell Us

A massive amount of money is being spent on developing, preparing for, buying, and implementing AI.  What has it caused, and how does AI now look overall?

Before the articles below came out “Is the AI bubble actually bursting?” (Patrick Kulp, Tech Brew, August 8th).  Concerns here were that “a stock market rout and big questions about spending continue to stoke worries,” that “some high-profile reports this summer questioned AI’s money-making potential relative to its enormous cost,” that “Microsoft, Alphabet, and Meta didn’t do much to soothe investors seeking temperance in AI capital expenditures,” and that we have reason to expect a “major course correction” in AI hype as revenues fail to keep pace with spending.

Since then, there have been strong and weak AI financial outcomes.  On August 23rd, Courtney Vien told us, in CFO Brew, “How Walmart’s seen ROI on gen AI.”  “During its last earnings call, the giant retailer reported 4.8% revenue growth, bolstered by 21% growth in its e-commerce function,” which “Walmart executives credited… to several factors… but one stood out:  generative AI.”  The technology had helped with “populating and cleaning up” the company’s gargantuan “product catalog,” of which the new version has also “given Walmart more insight into its customers.”  AI has also been “driving its impulse sales” through improved “cross-category search.”

Another such success story was the subject of “Nvidia’s earnings beat Wall Street’s estimates as AI momentum continues” (Eric Revell, Fox Business, August 28th).  In its second-quarter earnings report, earnings per share reached $0.68 instead of the projected $0.64, and revenue came in at $30.04 billion instead of $28.70 billion.  Even as the company started production of a new AI-dedicated chip, Blackwell, demand for the current Hopper version “remained strong.”

A major consumer of Nvidia’s chips rates to buy many more, as “OpenAI Is Growing Fast and Burning Through Piles of Money” (Mike Isaac and Erin Griffith, The New York Times, September 27th).  Although that firm “has been telling investors that it is making billions from its chatbot,” “it has not been quite so clear about how much it is losing.”  While OpenAI’s monthly revenue “hit $300 million in August,” it “expects to lose roughly $5 billion this year after paying for costs related to running its services and other expenses like employee salaries and office rent.”  It spends most, though, on “the computing power it gets through a partnership with Microsoft, which is also OpenAI’s primary investor.”  Even if company projections showing a much brighter future come to pass, OpenAI’s financial present is dark.
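
A quick annualization shows why, using the two figures the article gives (extrapolating the August revenue to a full year is my assumption, not the article’s):

```python
# Compare annualized revenue at the August run rate with the projected loss.
monthly_revenue = 300_000_000   # "monthly revenue hit $300 million in August"
projected_loss = 5_000_000_000  # "expects to lose roughly $5 billion this year"

annualized_revenue = 12 * monthly_revenue  # $3.6 billion at that run rate
print(projected_loss > annualized_revenue)  # True: the loss exceeds even a full year of revenue
```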

On industry results, Matt Turner reported the previous week’s numbers from five of the largest companies in the November 3rd “Insider Today” in Business Insider.  Overall, he said they were “beating estimates and committing billions to AI.”  Alphabet’s Google-branded “cloud business benefited from AI adoption, posting a 35% year-over-year increase in revenues.”  Amazon did the same, with AI-assisted cloud revenues growing 19%.  Apple’s loss of Chinese revenue “left investors underwhelmed,” and it is uncertain whether “new Apple Intelligence features help juice sales.”  “Meta beat estimates, though user growth came in below expectations,” and CEO Mark Zuckerberg “promised to keep spending on AI.”  Microsoft also did better than analysts expected, “but concerns around capacity constraints in AI” hurt investor reactions.  Overall, AI seemed to be producing real money for these firms, but related revenue growth has hardly been explosive.

A useful summary, “How companies are spending on AI right now,” by Patrick Kulp, came out on November 12th in Tech Brew.  In effect a response to the first article above, also written by Kulp, the piece started with “Despite some worry about a possible AI bubble earlier in the year, businesses are continuing to spend on generative technology – and investors are still eyeing it as a growth area.”  Another conclusion was that of “AI becoming an office staple,” with 38% growth from the second to the third quarter in “business spending on AI vendors.”  Although “half of the top 10 fastest growing enterprise software vendors on the platform were AI startups,” “OpenAI’s ChatGPT still reigns supreme,” though companies buying it have been increasingly likely to get other firms’ products as well.  Additionally, we have “AI still fueling VC growth,” as “three-quarters of limited partners surveyed… said they plan to increase AI investments in the next 12 months, with cybersecurity, predictive analytics, and data centers garnering the most interest.”  Note that “autonomous vehicles and computer vision ranked last for sub-fields of AI catching investor attention.”  Yet, per an Accenture report, there has been a “productivity flatline” over the past year despite more AI use.

What does all this reveal about artificial intelligence?  It is not vaporware.  Demand for it is real, in fact huge.  For some applications it is strongly objectively beneficial.  But it still has problems, with, along with many more mentioned in previous posts, profitability and productivity.  We don’t know how comprehensive its advantages will turn out to be.  But it is real, and it is progressing.  From there, we will just need to stay tuned.

Friday, November 8, 2024

Artificial Intelligence Regulation – Disjointed, and Too Soon

Over the past three months, there have been several reports on how, or even whether, AI should be legally constrained.  What did they say?

On the issue of its largest supplier, there was “As Regulators Close In, Nvidia Scrambles for a Response” (Tripp Mickle and David McCabe, The New York Times, August 6th).  It’s not surprising that this company, which not only is doing a gigantic amount of business but “by the end of last year… had more than a 90 percent share of (AI-building) chips sold around the world,” had drawn “government scrutiny.”  It has come from China, the United Kingdom, and the European Union as well as the United States Justice Department, causing Nvidia to start “developing a strategy to respond to government interest.”  Although, per a tech research firm CEO, “there’s no evidence they’re doing anything monopolistic or anticompetitive,” “the conditions are right because of their market leadership,” and “in the wake of complaints about Nvidia’s chokehold on the market, Washington’s concerns have shifted from China to competition, with everyone from start-up founders to Elon Musk grumbling about the company’s influence.”  It will not be easy for either the company or the governments.

Meanwhile, “A California Bill to Regulate A.I. Causes Alarm in Silicon Valley” (Cade Metz and Cecilia Kang, The New York Times, August 14th).  The legislation “that could impose restrictions on artificial intelligence” was then “still winding its way through the state capital,” and “would require companies to test the safety of powerful A.I. technologies before releasing them to the public.”  It could also, per its opposition, “choke the progress of technologies that promise to increase worker productivity, improve health care and fight climate change” – technologies in their infancies, pointing toward real uncertainty in how they will affect people.  Per leginfo.com, it was vetoed by Governor Gavin Newsom, who said “by focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology.  Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047 - at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good.”  Expect a different but related bill in California soon.

A thoughtful overview, “Risks and regulations,” came out in the August 24th Economist.  It stated that “artificial intelligence needs regulation.  But what kind, and how much?,” and offered various answers.  It started with the point that AI’s “best-known risk is embodied by the killer robots in the “Terminator” films – the idea that AI will turn against its human creators,” the kind of risk that some people think is “largely speculative,” and others think is less important than “real risks posed by AI that exist today, such as bias, discrimination, AI-generated disinformation and violation of intellectual-property rights.”  Meanwhile, “different governments take different approaches to regulating AI”: Chinese authorities most want to “control the flow of information,” while the European Union’s AI Act, now law, “is mostly a product-safety document which regulates applications of the technology according to how risky they are.”  With most American legislation coming from the states, international and even national accord seem a long way off.

What can we gain from “Rethinking ‘Checks and Balances’ for the A.I. Age” (Steve Lohr, The New York Times, September 24th)?  Recalling the Federalist Papers, a Stanford University project, now with 12 essays known as the Digitalist Papers, “contends that today is a broadly similar historical moment of economic and political upheaval that calls for a rethinking of society’s institutional arrangements.”  The writings’ “overarching concern” is that “a powerful new technology… explodes onto the scene and threatens to transform, for better or worse, all legacy social institutions,” therefore “citizens need to be more involved in determining how to regulate and incorporate A.I. into their lives.”  This effort seems designed to be a starting point, as we have no more idea how AI, if it meets its lofty expectations, will affect society than people in 1900 did about cars.

Overall, per Garrison Lovely in the September 29th New York Times, it may be that “Laws Need to Catch Up to Artificial Intelligence’s Unique Risks.”  Or not.  Over the past year, OpenAI has been embroiled in controversy over its safety practices, and, per Lovely, federal “protections are essential for an industry that works so closely with such exceptionally risky technology.”  As before, we do not have enough agreement between governments to do that now, but the day will come.  Sooner?  Later?  We do not know, but someday, we hope, we can get together on this potentially critical issue.

Friday, November 1, 2024

Today’s Jobs Report Didn’t Go Much of Anywhere – AJSN Latent Demand Down To 15.9 Million on Lower Number of Expatriates

This morning’s Bureau of Labor Statistics Employment Situation Summary was supposed to show a greatly reduced number of net new nonfarm payroll positions, but at 12,000 it didn’t even approach the 110,000 and 115,000 published estimates.  How did the other figures turn out?

Seasonally adjusted and unadjusted unemployment stayed the same, at 4.1% and 3.9% respectively, with the adjusted count of jobless up 200,000 to 7 million.  Of those, 1.6 million were long-term unemployed, or without work for 27 weeks or longer, down 100,000.  Those working part-time for economic reasons, or holding short-hours positions while seeking full-time ones, remained at 4.6 million.  The two measures of how common it is for Americans to be working or officially unemployed, the labor force participation rate and the employment-population ratio, both worsened, coming in at 62.6% and 60.0% for drops of 0.1% and 0.2% respectively.  The unadjusted number of employed was off 108,000 to 161,938,000.  Better, though, were private nonfarm payroll wages, which gained 10 cents, more than inflation, to $35.46 per hour.
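For readers wanting to see how these headline rates relate to the underlying counts, here is a minimal sketch.  The input figures are rounded, illustrative numbers chosen to land near this month’s published rates, not the exact seasonally adjusted inputs the BLS uses.

```python
# Illustrative relationship between BLS headline counts and rates.
# Inputs are in millions and are rounded examples, not published values.

def labor_rates(employed, unemployed, civilian_population):
    """Return (unemployment rate, employment-population ratio,
    labor force participation rate), each rounded to one decimal."""
    labor_force = employed + unemployed
    u_rate = 100 * unemployed / labor_force          # jobless share of labor force
    epop = 100 * employed / civilian_population      # employed share of population
    lfpr = 100 * labor_force / civilian_population   # labor force share of population
    return round(u_rate, 1), round(epop, 1), round(lfpr, 1)

# Roughly October-2024-sized illustrative numbers (millions).
rates = labor_rates(employed=163.7, unemployed=7.0, civilian_population=272.8)
# → (4.1, 60.0, 62.6)
```

Note how, with the labor force in the denominator of the unemployment rate, people leaving the workforce entirely can hold that rate steady even when job creation is weak, which is exactly the dynamic described above.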

The American Job Shortage Number or AJSN, the metric showing how many additional positions could be quickly filled if all knew they would be easy to get, fell 736,000, almost all from a much-reduced estimate of the number of Americans living outside the United States, as follows:


The share of the AJSN from official unemployment rose 2.3% to 37.6%.  Compared with a year before, the loss of 900,000 from the expatriates’ contribution was mostly offset by 480,000 more from unemployment and 154,000 from those not looking for work for the previous year, with other changes small, for a 247,000 fall. 

What happened this time?  To judge that, we next look at the measures telling us how many people left or entered the workforce.  Those were a 469,000 rise in the count of those claiming no interest in a job, and 219,000 more overall not in the labor force.  There was also a consistent shrinkage in the categories of marginal attachment, the 3rd through 6th and 8th rows above.  Those departing workers were why our unemployment rates didn’t worsen, given fewer new positions than our population increase could absorb.  October’s deficiency, possibly created mostly from storms and sudden layoffs, may well greatly reverse itself next time, but it is in the books.  Accordingly, I saw the turtle take a small step backwards.
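As a rough illustration of how a latent-demand measure in the spirit of the AJSN combines its inputs, here is a minimal sketch.  The category names and take-up weights below are hypothetical placeholders for illustration only, not the AJSN’s actual categories or published weights.

```python
# Hypothetical sketch of a latent-demand metric: each group outside full
# employment contributes the share of its members assumed likely to take
# work if jobs were easy to get.  Weights here are illustrative only.

ASSUMED_WEIGHTS = {
    "officially_unemployed": 0.90,
    "discouraged": 0.90,
    "not_looking_past_year": 0.80,
    "family_responsibilities": 0.30,
    "in_school_or_training": 0.50,
    "expatriates": 0.20,
}

def latent_demand(counts, weights=ASSUMED_WEIGHTS):
    """Sum each group's count times its assumed take-up rate."""
    return sum(weights[group] * n for group, n in counts.items())

# Example: 1,000 officially unemployed plus 100 discouraged workers
# contribute 0.9 * 1000 + 0.9 * 100 = 990 latently demanded positions.
demand = latent_demand({"officially_unemployed": 1000, "discouraged": 100})
```

The design point is that a weighted sum lets a shift of people between categories (say, from “discouraged” to “no interest,” which carries a lower weight) move the total even when the population is unchanged, which is how the AJSN can fall while unemployment holds steady.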

Friday, October 25, 2024

Autonomous Vehicles in 3Q24: A Pause, A Scandal, and a Fine

They’re still in the news, so what has been going on with driverless cars?

On July 23rd, we had two stories on the status of one manufacturer’s efforts.  In “GM indefinitely pauses Cruise Origin autonomous vehicle while it refocuses unit” (Daniella Genovese, Fox Business), the word was that General Motors would be “focusing their next autonomous vehicle on the next-generation Chevrolet Bolt, instead of the Origin, which had been facing regulatory uncertainty because of its unique design.”  Later that day, though, we saw that “G.M. Will Restart Cruise Taxi Operations” (Neal E. Boudette, The New York Times), a report that “General Motors said on Tuesday that its Cruise driverless-taxi division has restarted test operations in three Sun Belt cities, using self-driving cars with human safety drivers who will monitor the vehicles and intervene if needed.”  The second half of that sentence is important to note, as is the word “test,” necessary since “the vehicles will not carry paying passengers for now.”  No clear progress here.

How about one of GM’s competitors?  Waymo produced a nuisance, as “Endless Honking of Waymo’s Driverless Taxis Wakes a Neighborhood” (Sara Ruberg, The New York Times, August 14th).  This was about a Waymo-rented San Francisco parking lot used for the vehicles to “idle in when they weren’t making trips or charging.  But because the vehicles are programmed to honk when nearing other vehicles and then change directions, the more crowded the lot became, the more honks erupted.”  Whoops.  The company has said it has since “updated the software.”  Three weeks later, another piece observed that “Waymo’s Robot Taxis Are Almost Mainstream” and asked “Can They Now Turn a Profit?” (Eli Tan, The New York Times, September 4th).  Lost in the autonomous follies has been news that “Waymo is now completing over 100,000 rides in San Francisco, Phoenix, and Los Angeles – double the number in May.”  That’s highly favorable news, even if “robot taxi services are not profitable right now.”  As for other locations, “autonomous vehicle experts” see potential in New York, Chicago, Atlanta, Las Vegas, Hoboken, Westchester County, and “even Long Island.”

I’m calling this a scandal since the public was deceived, but perhaps it wasn’t, given the results above: “When Self-Driving Cars Don’t Actually Drive Themselves” (Cade Metz, The New York Times, published September 11th and updated September 21st).  The author, a long-time writer on autonomous transportation, reported that he had taken “his first ride in a self-driving car nearly a decade ago,” and then “felt a deep sense of awe that machines had mastered a skill that once belonged solely to humans.”  He realized afterwards that he was wrong, that such cars “could not yet match the power of the human brain,” and as of the article date “they still can’t.”  He checked out a Zoox “command center” which provided “help from human technicians” when “the company’s self-driving vehicles… struggle to drive themselves.”  Among other things, these as-needed operators rerouted impeded cars, and “all robot taxi companies operate command centers like” that one.  We knew about the shortcomings Metz also mentioned, such as the similarly misleading Tesla “full self-driving” technology that wasn’t, but more such misrepresentations will only discourage people from recognizing that the current state of the technology is as good as it actually is.  Perhaps all companies need to do is to label it accurately.  For now, though, the author has discovered that the capabilities of autonomous vehicles “and so many other forms of artificial intelligence… are not as powerful as they first seem.  When we, the people, see a bit of human behavior in a machine, we tend to think, subconsciously, that it can do everything we can do.  But we should give ourselves more credit,” a point also made in the similarly themed “Self-driving cars have a dirty little secret,” by Frank Landymore on September 14th in Futurism.com.

A sad follow-up to General Motors’ problems above was that “Cruise, G.M.’s Self-Driving Unit, Will Pay $1.5 Million Federal Fine” (Jack Ewing, The New York Times, September 30th).  That was for “failing to properly report an accident in which one of its self-driving taxis severely injured a pedestrian last year.”  That’s not a lot of money in this industry, and I hope it will not slow the company.  We’re still looking at almost 40,000 driver-caused deaths per year, and we need to stay focused on ending that.

Friday, October 18, 2024

Kamala Harris for President

After most of this one-of-a-kind campaign, reminiscent of 1968 but hardly the same, we’re 18 days away from making our presidential decisions.  The right choice is no foggier than it was last time.

The Republican nominee in 2020 and election winner in 2016 runs again as the first major-party choice to stand for a third time since Franklin D. Roosevelt in 1940.  That’s not the problem, and his age, 78, which would make him older on Inauguration Day than any other president, isn’t the main one either.  Donald Trump has more flaws and disqualifying characteristics than any Democratic or Republican nominee I have seen since I started following campaigns with Nixon-Humphrey.  He is a convicted felon who has consistently shown he does not want to follow our laws and Constitution.  He has a bizarre, apparently insatiable sweet tooth for lying, way beyond any of his competitors even in this often-sordid profession.  He has shown affinity with the world’s dictators, while saying many things indicating he would strive to be one himself.  He has threatened legal and even military retribution against those taking lawful measures to stop him.  His attitudes toward women, over a wide spectrum of areas, are disastrous.  His delusions, such as him being the true 2020 winner, which he has often insisted be supported by those working with him, have persisted.  And on jobs and the economy, his proposed extensive and expensive tariffs would drastically worsen both.

Nothing reported in news sources has been able to reverse the Trump tide.  His advocates have remained impervious to these issues, even when they are shown to be the truth.  And others have expressed a willingness to pollsters to join his side.  The reasons for his popularity will be discussed for decades or centuries to come, but common sense or prudent judgment will not be among them.  Some of his heavy contributors are extremely wealthy, expecting to save tens or hundreds of millions of dollars on his likelihood of taxing them less, but that does not make them worthy of emulation by the rest of us.  Perhaps the largest lesson of the 20th century was that leaders people find charismatic may take us in devastatingly wrong directions, and caution about Trump seems a clear response.

His opponent, Kamala Harris, is a former district attorney who is currently the vice president.  She has shortcomings, but has shown herself in public appearances to be sober, reasonable, lawful, and almost always truthful.  We don’t know exactly how well she would work out, but it is obvious that her downside is vastly smaller.  As contrasted with her opponent, who has been described as a weak man’s idea of a strong man, Harris is forceful without being abrasive, and will work with politicians on both sides.  That is what we need in 2025 and beyond.

The best justification for a Trump vote I have heard was from a supporter who said that while he was a reprehensible person, she was not choosing a friend.  I don’t buy that, since the world is too dangerous, and our allies too valuable, for us to pick someone who needs to be contained.  And we still have a large nuclear arsenal with which the president would have great scope.  Junkyard dogs can be mean, but presidents need not be.

I have not mentioned Harris’s or Trump’s running mates, but both seem fair choices.  Either Tim Walz or J.D. Vance would rate to be adequate if circumstances put them into the top job, and, in the case of Vance, would put the presidency in steadier hands.  There is also little here about either candidate’s meager list of proposed policies, since that is not what this election is about.

As of yesterday morning, the PredictIt site showed its contributors giving a combined five-point winning-percentage advantage to Trump.  We can do better, and massive amounts of safety and prosperity may depend on whether we do.  We need look only at what news and information sources, even including those generally favorable to his cause, say the realities are.  If this registered Republican who might have chosen a conservative nominee from that party can avoid him, so can you.  And please vote. 

Royal Flush Press endorses Kamala Harris for president.

Friday, October 4, 2024

A Strong Jobs Report Gathered Before the Interest Rate Cut, with AJSN Showing Latent Demand Almost a Million Lower

Commentary I read before this morning’s Bureau of Labor Statistics Employment Situation Summary’s release said that it would be a critical installment, mainly because of the effect it would have on the Federal Reserve’s two remaining 2024 interest rate decisions.

It turned out to show real improvement.  The number of net new nonfarm payroll positions exceeded its 150,000 consensus estimate with 254,000.  Seasonally adjusted unemployment dropped another 0.1% to 4.1%, the same place it was three months before.  There were 6.8 million unemployed people, down 300,000, and the unadjusted rate fell from 4.4% to 3.9%, some but not all due to seasonality.  The count of people working part-time for economic reasons, or keeping such jobs while thus far unsuccessfully seeking longer-hours ones, erased the last report’s 200,000 gain, going back to 4.6 million.  Those officially unemployed and looking for work for 27 weeks or longer, though, gained 100,000 to 1.6 million.  The unadjusted number of employed grew 700,000 to 162,046,000.  The two best measures of how many people are working or one step away, the employment-population ratio and the labor force participation rate, gained 0.2% and stayed the same respectively, reaching 60.2% and 62.7%.  Average private nonfarm payroll earnings increased 15 cents, almost double the effect of inflation, to $35.36.  More people continued to leave the labor force, with those claiming no interest gaining almost 600,000 to add to last time’s 1.3 million, reaching 94,920,000.

The American Job Shortage Number or AJSN, the Royal Flush Press measure showing how many additional positions could be quickly filled if all knew they would be easy to get, lost 980,000, as follows:



The effect of fewer people officially jobless was responsible for 800,000 of the drop, and those interested but not looking for a year or more cut another 340,000.  Gains in the second through sixth categories above offset that by 150,000.  The share of the AJSN from those unemployed fell 2.6% to 35.3%.  Compared with a year before, the AJSN was 433,000 higher, almost exactly that amount from those officially unemployed.

What happened here?  We still gained many more new positions than we can expect to sustain, and that, along with continued workforce departures, ensured the unemployment rate’s decline.  The job market is healthy, but hardly overheated.  That means the Federal Reserve ball will go back to the inflation court, and then back to the next jobs report on November 1st, five days before the next Fed meeting starts.  We are very much in the hunt for another quarter-point decrease, but more than that, considering the progress above, is less likely.  The turtle did, this time, take a moderate step forward.

Friday, September 27, 2024

Remote Work: The Pendulum Has Swung Back to the Office

As I have written repeatedly before, employer attitudes on working from home have oscillated back and forth over the past three-plus decades.  In the 2010s, hybrid labor, or putting in time in some combination between in the office and elsewhere, was getting reestablished, with glowing reviews of home productivity gains as well as work-life balance encouraging organizations to allow a large portion of time to be spent out of sight, and, as always, greatly out of mind.  By early 2020 the pendulum was moving toward not allowing that, but the pandemic necessitated it, with not only physical proximity issues but a greatly tightening labor market making it too easy for people to leave if they did not get the schedules they wanted.

Now, with Covid-19 almost no factor and unemployment, especially for information technology positions, growing, opposition to non-office work is again becoming entrenched.  What is the evidence of that?

One piece is the emergence of a new expression, as featured in “No more “coffee badging”” (Business Insider, July 21st).  The term applies to “employees who badge in, get coffee, and leave shortly after to satisfy their (return to office) requirements.”  As of just before this date, Amazon was “getting serious” about ending this custom, and, as we shall see, there was more to come.

Especially in transition times, private organizations have varied greatly in what they allow.  One large public one, capable of setting national, multi-installation rules, is the subject of “To be remote or not to be?  That is the burning federal workplace question” (Gleb Tsipursky, Fox News, August 16th).  While “many federal agencies have implemented hybrid work models, allowing leaders to refine strategies to adapt to evolving employee needs and mission-driven objectives,” “there is tension between this flexible approach and congressional legislative efforts such as the Back to Work Act of 2024… a bipartisan bill that seeks to limit telework for federal employees to no more than 40% of their workdays per pay period.”  That is broad-based and specific, and nothing that would have been taken seriously in 2014.

As well as the stick, businesses are also using the carrot.  “The Hotelification of Offices, With Signature Scents and Saltwater Spas” (Stacey Freed, The New York Times, August 18th).  Such things have been controversial since they began appearing around the turn of the century, especially when remote work has been unfashionable.  This case is “the Springline complex in Menlo Park, Calif.,” where employees and others “are surrounded by a sense of comfort and luxury often found at high-end hotels: off-white walls with a Roman clay finish, a gray-and-white marble coffee table and a white leather bench beneath an 8-by-4 resin canvas etched with the words “Hello, tomorrow,” and “hints of salty sea air, white water lily, dry musk and honeydew melon linger in the air.”  You get the idea.  While “companies have over the years improved their spaces in the hopes of getting more out of employees,” this kind of thing is now transparently designed to make people happier about reporting in person, and will not be immune to backlash as employees figure that out.

Another change many companies are making turned up in “Downtown’s lost prestige” (Bloomberg, August 27th).  “The US office market is splitting in two:  Investors are writing off the value of older buildings downtown as newer developments outside traditional business hubs become prestige destinations,” resulting in “more than half-a-trillion dollars of value” being “erased from US offices from 2019 through 2023.”  Suburban desk farms are nothing new – I started my AT&T cubicle career at one 35 years ago – but employers are now motivated by “trying to get employees back to their desks” by moving to “low-crime neighborhoods with plenty of shopping and parking.”

The big story here was, though, “Bosses Rejoice!  Amazon Delivers the End of Hybrid Work” (Vanessa Fuhrmans, Katherine Bindley and Chip Cutter, The Wall Street Journal, September 21).  This article, on the front page of the Exchange section and embellished with a picture of an Amazon shipping box containing someone at a plain-looking office desk, was subtitled “If you thought your two days a week of work-from-home were safe, think again.  The CEO of one of America’s largest employers just called everyone back to the office full-time,” effective January 2nd. 

The headline was clearly an overreaction – Amazon does not set national workplace policy – but it documented a remarkably firm and all-encompassing decision.  Per the first story above, “until (the CEO’s) memo, 4½ years after the Covid-19 pandemic sent everyone home, bosses and employees had largely reached a truce on part-time remote work,” as, while “many company leaders looked out at their substantially empty offices in quiet exasperation,” they feared top-performer departure.  Amazon’s pronouncement, “the talk of the town” in Seattle, was publicized as something “that will help both the company and its employees,” as in offices “we’ve observed that it’s easier for our teammates to learn, model, practice, and strengthen our culture,” as, in person, “collaborating, brainstorming, and inventing are simpler and more effective; teaching and learning from one another are more seamless; and teams tend to be better connected to one another.”

Beyond Amazon, a survey showed that in April about 30% of CEOs said they “expect workers to be back in the office full-time within three years”; earlier this month that share had reached almost 80%.  That stunning shift gives Amazon’s decision at least the appearance of spearheading a widespread change.  There will be exceptions, but many more companies will follow, and, for now, we will hear little about the good side of remote work.

That will come back in the 2030s.  Count on it.

Friday, September 6, 2024

The Jobs Report Tells Only One Story; Consistently, AJSN Shows Latent Demand Down 250,000 to 17.6 Million

This morning’s was supposed to be a critically important Bureau of Labor Statistics Employment Situation Summary.  How did it turn out?

The headline figure, the number of net new nonfarm payroll positions, came in at 142,000, a small amount short of published estimates of 160,000 and 161,000.  Seasonally adjusted unemployment ended its monthly march upwards, falling back 0.1% to 4.2%.  Unadjusted unemployment lost the same amount, from 4.5% to 4.4%.  There were 7.1 million officially jobless people, 100,000 better.  The number of long-term unemployed, out 27 weeks or longer, was 1.5 million for the third straight month.  The two measures showing most clearly the share of people actually working or that plus officially jobless, the employment-population ratio and the labor force participation rate, held at 60.0% and 62.7%.  Average hourly private nonfarm payroll earnings gained 14 cents, more than inflation, to $35.21.  Trailing the rest was the count of people working part-time for economic reasons, or keeping such employment while thus far unsuccessfully seeking a full-time proposition, up 200,000 to 4.8 million.

The American Job Shortage Number or AJSN, our long-standing statistic showing how many positions, in addition to those now available, could be quickly filled if all knew they would be easy and routine to get, lost 258,000 as follows:

  


The fall from July’s result was almost exactly the amount from unemployment, with no other change more than 100,000.  The share of the AJSN from that, at 37.9%, was 0.8% lower. 

Compared with a year before, the AJSN was about 800,000 higher, with 713,000 added from official joblessness, 288,000 from more people wanting work but not looking for it for a year or more, and 200,000 less from a smaller number of American expatriates.  None of the other factors increased or decreased over 50,000. 

What was the one thing which happened?  People left the labor force.  Remember that last month the boost in joblessness came from people jumping back into the working pool without finding work – well, this time, they got out.  Evidence of that was the count of those not interested leaping 1.3 million, the unadjusted number of employed losing 690,000 despite the unemployment rate’s fall, and the numbers above of marginal attachment – those wanting work but stopped now for family responsibilities, being in school or training, with ill health or disability, the “other” category, and especially, with a 24% reduction, discouraged – all down.  The AJSN’s drop came from the same place, as people moved to the status with the lowest latent demand.  With this event factored out, despite the 142,000 gain the American employment situation stayed right where it was.  Interest rate decisions should be unchanged from yesterday.  As for the turtle, he did not budge either.


Friday, August 30, 2024

Artificial Intelligence’s Limitations and Clear Current Problems

We’re marching through months and years since last year’s AI awakening.  We can’t fairly say that the shortcomings it has are permanent, but, as of now, what are they?

First, although computer applications have excelled at many games, such as chess, where they are vastly better than any human ever, and checkers, which was electronically solved 17 years ago, they have not done the same with bridge.  Per BBO Weekly News on July 21st, Bill Gates said, correctly, that “bridge is one of the last games in which the computer is not better.”  Artificial intelligence progress has done nothing, so far, to change that, and it is noteworthy that even in a closed system with completely defined rules, objectives, and scoring, it has not been able to take over. 

Not only has it not replaced huge numbers of jobs, but “77% Of Employees Report AI Has Increased Workloads And Hampered Productivity, Study Finds” (Bryan Robinson, Forbes, July 23rd).  The effort, “in partnership with The Upwork Research Institute, interviewed 2,500 global C-suite executives, full-time employees and freelancers.”  It found that “the optimistic expectations about AI’s impact are not aligning with the reality faced by many employees,” to the point where, in contrast with the 96% of C-suite executives expecting AI to boost productivity, “77% of employees using AI say it has added to their workload and created challenges,” and it has been “contributing to employee burnout.”  Also, 47% “of employees using AI say they don’t know how to achieve the expected productivity gains their employers expect, and 40% feel their company is asking too much of them when it comes to AI.”  This is what we used to call a disconnect.  The author recommended that employers get outside help with AI efforts and measure productivity differently, and that workers generally “embrace outside expertise.”

A similarly negative view was the subject of “Machines and the meaning of work” (Bartleby, The Economist, July 27th).  The pseudonymous writer cited a paper claiming that although “in theory, machines can free up time for more interesting tasks; in practice, they seem to have had the opposite effect.”  Although in health care automation can allow more time with patients, in other fields, as “the number of tasks that remain open to humans dwindles, hurting both the variety of work and people’s understanding of the production process,” “work becomes more routine, not less.”  Overall, “it matters whether new technologies are introduced in collaboration with employees or imposed from above, and whether they enhance or sap their sense of competence.”

Similarly, Emma Goldberg, in the New York Times on August 3rd, asked “Will A.I. Kill Meaningless Jobs?”  If it does, it would make workers happier in the short run, but it could also contribute to “the hollowing out of the middle class.”  Although the positions that AI could absorb might be lacking in true significance, many “have traditionally opened up these white-collar fields to people who need opportunities and training, serving as accelerants for class mobility:  paralegals, secretaries, assistants.”  These roles could be replaced by ones with “lower pay, fewer opportunities to professionally ascend, and – even less meaning.”  Additionally, “while technology will transform work, it can’t displace people’s complicated feelings toward it.”  So we don’t know – but breaking even is not good enough for what is often predicted to be a trillion-dollar industry.

Back to the issue of perceived AI value is “A lack-of-demand problem” (Dan DeFrancesco, Insider Today, August 8th).  “A chief marketing officer” may have been justified in expecting that the Google AI tools it introduced would “be an easy win,” as “in the pantheon of industries set to be upended by AI, marketing is somewhere near the top,” as the technology could “supercharge a company’s marketing department in plenty of ways,” such as by providing “personalized emails” and “determining where ads should run.” Unfortunately, per the CMO, “it hasn’t yet,” as “one tool disrupted its advertising strategy so much they stopped using it,” “another was no better at the job than a human,” and one more “was only successful about 60% of the time.”  Similar claims appear here from Morgan Stanley and “a pharma company.”  In all, “while it’s only fair to give the industry time to work out the kinks, the bills aren’t going to slow down anytime soon.”

In the meantime, per “What Teachers Told Me About A.I. in School” (Jessica Grose, The New York Times, August 14th), AI is causing problems there, per examples of middle school students, lacking “the background knowledge or… intellectual stamina to question unlikely responses,” turning in assignments including the likes of “the Christian prophet Moses got chocolate stains out of T-shirts.”  Teachers are describing AI-based cheating as “rampant,” but are more concerned about students not learning how to successfully struggle through challenging problems.  Accordingly, they are “unconvinced of its transformative properties and aware of its pitfalls,” and “only 6 percent of American public school teachers think that A.I. tools produce more benefit than harm.”

I do not know how long these AI failings will continue.  With massive spending on the technology by its providers continuing, they will be under increasing pressure to deliver useful and accurate products.  How customers react, and how patient they will be, will eventually determine how successful artificial intelligence, as a line of business, will be over the next several years.  After some length of time, future promises will no longer pacify those now dissatisfied.  When will we reach that point?

Friday, August 23, 2024

Seven Weeks on Artificial Intelligence Progress: Real, Questioned, Disappointing, and Baked into the Investment Cake

What recent substantial contributions has AI recently made?  What big weakness does it still have?  What has happened to its great prospects?  Can we know its true inherent advancement?  And what forecasts do today’s AI-related stock prices include?

The first report is “A sequence of zeroes” (The Economist, July 6th), subtitled “What happened to the artificial-intelligence revolution?”  “Move to San Francisco and it is hard not to be swept up by mania over artificial intelligence… The five big tech firms – Alphabet, Amazon, Apple, Meta and Microsoft… this year… are budgeting an estimated $400bn for capital expenditures, mostly on AI-related hardware.”  However, “for AI to fulfil its potential, firms everywhere need to buy the technology, shape it to their needs and become more productive as a result,” and although “investors have added more than $2trn to the market value of the five big tech firms in the past year… beyond America’s west coast, there is little sign that AI is having much of an effect on anything.”  One reason for the non-progress is that “concerns about data security, biased algorithms and hallucinations are slowing the roll-out” – an example here is that “McDonald’s… recently canned a trial that used AI to take customers’ drive-through orders after the system started making errors, such as adding $222 worth of chicken nuggets to one diner’s bill.”  Charts here show that the portion of American jobs that are “white collar” has still been marching steadily upward, and, disturbingly, that share prices of “AI beneficiaries” have stayed about even since the beginning of 2019 while others have on average risen more than 50%.  Now, “investors anticipate that almost all of big tech’s earnings will arrive after 2032.”

“What if the A.I. Boosters Are Wrong?” (Bernhard Warner and Sarah Kessler, The New York Times, July 13th), and not even premature?  MIT labor economist Daron Acemoglu’s “especially skeptical paper” argued that “A.I. would contribute only “modest” improvement to worker productivity, and that it would add no more than 1 percent to U.S. economic output over the next decade.”  The economist “sees A.I. as a tool that can automate routine tasks… but he questioned whether the technology alone can help workers “be better at problem solving, or take on more complex tasks.””  Indeed, AI may fall victim to the same problem which pushed 3D printing out of the headlines in the 2010s – the lack of a massively beneficial, large-scale application.

In real contrast to common concerns, especially from last year, “People aren’t afraid of A.I. these days.  They’re annoyed by it” (David Wallace-Wells, The New York Times, July 24th).  One issue “has inspired a… neologistic term of revulsion, “AI slop”: often uncanny, frequently misleading material, now flooding web browsers and social-media platforms like spam in old inboxes.”  Some delightful examples cited here are X’s and Google’s pronouncements that “it was Kamala Harris who had been shot… that only 17 American presidents were white… that Andrew Johnson, who became president in 1865 and died in 1875, earned 13 college degrees between 1947 and 2012… that geologists advise eating at least one rock a day,” and “that Elmer’s glue should be added to pizza sauce for thickening.”  Such “A.I. “pollution”” is causing “plenty of good news from A.I.” to be “drowned out.”  With Google’s CEO admitting “that hallucinations are “inherent” to the technology,” they don’t look like they’ll be going away soon.

Even given the disappointments above, “Getting the measure of AI” (Tom Standage, The Economist, July 31st) is not easy.  One way “is to look at how new models score on benchmarks, which are essentially standardised exams that assess an AI model’s capabilities.”  One such metric, “MMLU, which stands for “massive multi-task language understanding,”” contains “15,908 multiple-choice questions, each with four possible answers, across 57 topics including maths, American history, science and law,” and has been giving scores “between 88% and 90%” to “today’s best models,” compared with barely better than the pure-chance 25% in 2020.  There will be more such benchmarks, and it will be useful to see how models improve from here.
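The arithmetic behind those benchmark numbers is simple.  As a minimal illustration (not the actual MMLU evaluation harness), multiple-choice scoring reduces to counting matches against an answer key, with a four-option format implying a roughly 25% random-guessing baseline:

```python
# Illustrative sketch of multiple-choice benchmark scoring (hypothetical
# helper, not the real MMLU harness): accuracy is simply the fraction of
# questions where the model's chosen option matches the answer key.

def accuracy(predictions, answer_key):
    """Return the fraction of correct answers, e.g. 0.88 for 88%."""
    if len(predictions) != len(answer_key):
        raise ValueError("predictions and answer key must align")
    correct = sum(p == a for p, a in zip(predictions, answer_key))
    return correct / len(answer_key)

# With four possible answers per question, random guessing averages 1/4,
# which is why a 2020-era score near 25% indicated no real capability.
random_baseline = 1 / 4
```

A model scoring 90% on 15,908 questions has answered roughly 14,300 of them correctly, against about 4,000 expected from guessing alone.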

On the constructive side, “A.I. Is Helping to Launch New Businesses (and Not Just A.I. Businesses)” (Sydney Ember, The New York Times, August 18th).  A Carnegie Mellon University professor who for 14 years has been having “groups of mostly graduate students start businesses from scratch,” said, after advising the use of generative AI extensively, that he’d “never seen students make the kind of progress that they made this year.”  The technology helped them to “write intricate code, understand complex legal documents, create posts on social media, edit copy and even answer payroll questions.”  One budding entrepreneur added, “I feel like I can ask the stupid questions of the chat tool without being embarrassed.”  That counts too, and while none of these is the $1 trillion problem that a Goldman Sachs researcher, quoted in the Wallace-Wells article, asked whether AI could solve, they are collectively of real value.

Is it reasonable to think that AI stocks will roughly break even from here if lofty expectations go unrealized?  No, according to Emily Dattilo, in Barron’s on August 19th: “Apple Is Set to Win in AI.  How That’s ‘Already Priced In.’”  Analysts at Moffett Nathanson, for example, pronounced that, although Apple was “on track to win in artificial intelligence,” the “bad news” was “that’s exactly what’s already priced in.”  I suspect that’s happening with the other AI stocks as well.  If the technology not only grows in scope but does so more than currently expected, share prices may rise, but if it only gets moderately larger, they could drop.  That can be called another problem with artificial intelligence – if enough investors realize this situation, the big five companies above, Nvidia, and others may have already seen their peaks.  Small-scale achievements such as startup business help will not be enough to sustain tremendous financial performance.  What goes up does not always come down, but here it just might.  And the same thing goes for AI hopes.