Wednesday, November 26, 2025

September’s Jobs Report – Months Ago Now, with Mild Changes – AJSN Now 16.9 Million

Between the government shutdown and my own outage, this one arrives about eight weeks later than usual, but it still has something meaningful to say.  What?

The number of net new nonfarm payroll positions in the Bureau of Labor Statistics Employment Situation Summary came in at 119,000, not huge but strongly positive and exceeding a few estimates.  Seasonally adjusted unemployment was 4.4%, up 0.1%, while the unadjusted variety, reflecting September work increases, fell from 4.5% to 4.3%; the unadjusted count of those with jobs rose 606,000, just more than last time’s loss, bringing it to 163,894,000.  The two measures showing how many Americans are working or only one step away, the employment-population ratio and the labor force participation rate, each gained 0.1%, reaching 59.7% and 62.4%.  The count of those working part-time for economic reasons, or looking thus far unsuccessfully for full-time labor while keeping at least one part-time proposition, fell 100,000 to 4.8 million, as did the number of people officially unemployed for 27 weeks or longer, now 1.8 million.  Average private hourly nonfarm payroll earnings rose 14 cents, a bit more than inflation, to $36.67.

The American Job Shortage Number or AJSN, the Royal Flush Press statistic showing how many additional positions could be quickly filled if all knew they would be easy to get, lost 844,000, mostly seasonally, to get to the following:

[Table: AJSN calculation by component, totaling 16.9 million latent jobs.]

Less than half of the drop was from lower unemployment – more was from a large cut in those reporting they wanted to work but had not looked for it during the previous year.  The other factors changed little.  Year-over-year, the AJSN increased 316,000, with unemployment up since September 2024 and those not wanting work adding 115,000.  The share of the AJSN from official joblessness shrank 0.3% to 38.9%.
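For readers who want to see the mechanics, here is a minimal sketch, in Python, of how a latent-demand statistic like the AJSN can be assembled.  The category names echo the discussion above, but the counts and the per-group weights are hypothetical placeholders for illustration only, not the published Royal Flush Press methodology or this month’s data.

# Illustrative sketch of a latent-demand statistic in the AJSN's spirit.
# All counts (in thousands) and weights below are hypothetical
# placeholders, NOT the published methodology or this month's data.

components = {  # people not now working, by category
    "officially_unemployed": 7_300,
    "wanted_work_no_recent_search": 5_600,
    "discouraged": 400,
    "not_interested_in_work": 95_000,
}

weights = {  # assumed share of each group who would take an easy-to-get job
    "officially_unemployed": 0.90,
    "wanted_work_no_recent_search": 0.80,
    "discouraged": 0.90,
    "not_interested_in_work": 0.05,
}

ajsn = sum(components[k] * weights[k] for k in components)
print(f"Illustrative total: {ajsn:,.0f} thousand latent jobs")

# The post's 38.9% figure is the share contributed by official joblessness:
share = components["officially_unemployed"] * weights["officially_unemployed"] / ajsn
print(f"Share from official joblessness: {share:.1%}")

A fall in any component count, such as the drop in those wanting work without a recent search noted above, flows straight through to the total, which is why the AJSN can move even when unemployment barely does.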

What happened this time?  Not a great deal, and barely better than neutral.  Those not interested in work rose 750,000, which, combined with August’s 860,000, makes over 1.6 million in just two months, a lot.  Otherwise, everything reasonably hung on.  There will be no October AJSN or Employment Situation Summary, but you can expect November’s writeup to appear here on the next jobs report’s December 16th release date.  For now, the turtle managed only a tiny step forward.

Thursday, November 13, 2025

Artificial Intelligence Going Wrong: Eleven Weeks of Real or Questionable Problems

Somewhere between AI’s accomplishments and its postulated threats to humanity are the things that have actually gone wrong with it, and the concerns that something might.  Here are nine, almost one per week since the end of August.

A cuddly danger?  In “Experts warn AI stuffed animals could ‘fundamentally change’ human brain wiring in kids” (Fox News, August 31st), Kurt Knutsson reported that “pediatric experts warn these toys could trade human connection for machine conversation.”  Although television has been doing that for generations, some think that with AI playthings, “kids may learn to trust machines more than people,” which could damage “how kids build empathy, learn to question, and develop critical thinking.”  All of this is possible, but speculative, and nothing in this piece convinced me AI toys’ effect would be much more profound than TV’s.

A good if preliminary company reaction was the subject of “OpenAI rolls out ChatGPT parental controls with help of mental health experts” (Rachel Wolf, Fox Business, September 2nd).  In response to a ChatGPT-facilitated suicide earlier this year, “over the next 120 days… parents will be able to link their accounts with their teens’ accounts, control how ChatGPT responds to their teen, manage memory and chat history features and receive notifications if their child is using the technology in a moment of acute distress.”  That will be valuable from the beginning, and will improve from there.

On another problem front, “Teen sues AI tool maker over fake nude images” (Kurt Knutsson, Fox News, October 25th).  The defendant, AI/Robotics Venture Strategy 3 Ltd., makes a product named ClothOff, which can turn a photo into a simulated nude while keeping the original face.  The plaintiff’s classmate did that to one of her photos, shared it, and “the fake image quickly spread through group chats and social media.”  As of the article’s press time, “more than 45 states have passed or proposed laws to make deepfakes without consent a crime,” and “in New Jersey,” where this teenager was living, “creating or sharing deceptive AI media can lead to prison time and fines.”  Still, “legal experts say this case could set a national precedent,” as “judges must decide whether AI developers are responsible when people misuse their tools” and “need to consider whether the software itself can be an instrument of harm.”  The legal focus here may need to be on sharing such things rather than on creating or possessing them, which will prove impossible to stop.

In a Maryland high school, “Police swarm student after AI security system mistakes bag of chips for gun” (Bonny Chu, Fox News, October 26th).  Oops!  This was perpetrated by “an artificial intelligence gun detection system,” which ended up “leaving officials and students shaken,” as, per the student, “police showed up, like eight cop cars, and they all came out with guns pointed.”  I advise IT tool companies to do their beta testing in their labs, not in live high school parking lots.

Was the action taken by the firm in the third paragraph above sufficient?  No, Steven Adler said, in “I Worked at OpenAI.  It’s Not Doing Enough to Protect People” (The New York Times, October 28th).  Although the company “ultimately prohibited (its) models from being used for erotic purposes,” and its CEO claimed the parental-control feature above had been able to “mitigate” these issues, per Adler it “has a history of paying too little attention to established risks,” needs to use “sycophancy tests,” and should “commit to a consistent schedule of publicly reporting its metrics for tracking mental health issues.”  I expect that the AI-producing firms will increasingly do such things.  And more are in progress, such as “Leading AI company to ban kids from chatbots after lawsuit blames app for child’s death” (Bonny Chu, Fox Business, October 30th).  The firm here, Character.ai, which is “widely used for role-playing and creative storytelling with virtual characters,” said that “users under 18 will no longer be able to engage in open-ended conversations with its virtual companions starting Nov. 24.”  It will also restrict minors to no more than two hours of daily “chat time.”

In the October 29th New York Times, Anastasia Berg tried to show us “Why Even Basic A.I. Use Is So Bad for Students.”  Beyond academic cheating, “seemingly benign functions” such as AI-generated summaries “are the most pernicious for developing minds,” as they stunt students’ ability to summarize material themselves.  Yet the piece contains its own refutation: “Plato warned against writing,” since “literate human beings… would not use their memories.”  Technology, from 500 BC to 2025 AD, has always brought tradeoffs.  Just as calculators have made some arithmetic unnecessary but have hardly extinguished the need to know and use it, people may indeed become weaker at summarizing formal material, yet they will have no choice but to keep doing it for the rest of their lives.

We’re getting more legal action than that mentioned above, as in “Lawsuits Blame ChatGPT for Suicides and Harmful Delusions” (Kashmir Hill, The New York Times, November 6th).  Seven cases were filed that day alone: three on behalf of users who killed themselves after extensive ChatGPT involvement, one who made suicide plans, two who suffered mental breakdowns, and one saying the software had encouraged him to be delusional.  As before, this company will need to continually refine its safeguards, or it may not survive at all.

I end with another loud allegation, this one from Brian X. Chen, who told us, also in the November 6th New York Times, “How A.I. and Social Media Contribute to ‘Brain Rot.’”  He started by noting that people “using A.I.-generated summaries” got less specific information than they would through “traditional Google” searches, and went on to say that those who used “chatbots and A.I. search tools for tasks like writing essays and research” were “generally performing worse than people who don’t use them.”  All of that, though, when it means using AI as a substitute for personal work, is obvious, and not “brain rot.”  The article leaves open the question of whether the technology hurts when it is used to help, not to write.

Three conclusions jump out from the above.  First, as AI progresses it will also bring problems along with it.  Second, the boundaries of legally and socially acceptable AI use are still being defined and will keep evolving; we’re nowhere near done yet.  Third, fears of adverse mental and cognitive effects from general use are, thus far, unsubstantiated.  Artificial intelligence will bring us a lot, both good and bad, and we will, most likely, excel at profiting from the former and stopping the latter.

Friday, November 7, 2025

Artificial Intelligence’s Power, Water, and Land Uses, What’s Coming Next, and What Might Remain After a Business Bloodbath

How big has the AI buildup been?  What major problem with that is on the way?  If AI proves to be a bubble, what of value would stay? 

The oldest piece here, “AI energy demand in US will surge but also provide opportunity to manage energy” (Aislinn Murphy, Fox Business, April 18th), told us that “the world, particularly the United States, is projected to see a massive jump in data center and artificial intelligence demand for electricity by 2030, per a recently released International Energy Agency (IEA) report.”  That jump arrived not in five years but within six months, though we can’t yet vouch for the prediction that “renewable energy sources will meet nearly half of the additional demand, followed by natural gas and coal, with nuclear starting to play an increasingly important role.”

With that, let’s look at “What AI’s insatiable appetite for power means for our future” (Kurt Knutsson, Fox News, June 20th).  Even less than five months ago, “the modern AI boom” was “pushing our power grid to its limits,” as “the energy needed to support artificial intelligence is rising so quickly that it has already delayed the retirement of several coal plants in the U.S., with more delays expected,” and “energy is becoming the next major bottleneck.”  As the previous author also wrote, power is going to “running” the technology “at scale,” that is, to current use, not to training models for future releases.  Perhaps unexpectedly, 30% to 55% “of a data center’s total power use” goes to “keeping AI servers from overheating,” and, overall, “the demand for AI is growing faster than the energy grid can adapt.”  Despite pledges to use renewable energy, much of that may be nuclear instead of wind, solar, or hydro, and even if not, “because the grid is shared, fossil fuels often fill the gap when renewables aren’t available.”

In “At Amazon’s Biggest Data Center, Everything Is Supersized for A.I.” (June 24th, The New York Times), Karen Weise and Cade Metz reported that “a year ago, a 1,200-acre stretch of farmland outside New Carlisle, Ind., was an empty cornfield.  Now, seven Amazon data centers rise up from the rich soil, each larger than a football stadium.”  The company plans to build about 23 more there “over the next several years,” which “will consume 2.2 gigawatts of electricity – enough to power a million homes,” along with “millions of gallons of water to keep the chips from overheating.”  When fully constructed, this facility “will be the largest power user in the state of Indiana by a country mile.”
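Those figures are easy to sanity-check.  Here is a quick back-of-envelope sketch in Python; the 2.2 gigawatts and the million-homes comparison come from the article, while applying the previous piece’s 30% to 55% cooling share to this campus is my own extrapolation, for illustration only.

# Back-of-envelope check on the quoted figures. The campus power and
# homes-equivalent are from the article; the cooling range is the
# earlier piece's general estimate, applied here as an assumption.

campus_power_w = 2.2e9    # 2.2 gigawatts, per the article
homes = 1_000_000         # the article's homes-equivalent

# Average draw per home implied by the article's comparison:
per_home_w = campus_power_w / homes
print(f"Implied average draw per home: {per_home_w:,.0f} W")   # 2,200 W

# If 30% to 55% of total power goes to cooling, as the June piece says:
cooling_low = 0.30 * campus_power_w
cooling_high = 0.55 * campus_power_w
print(f"Cooling alone: {cooling_low / 1e9:.2f} to {cooling_high / 1e9:.2f} GW")

That 2.2 kilowatts per home is an average draw, not a peak; the point is simply that one campus now sits in the same league as a large city’s housing stock.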

People connected with rural areas may not mind the jobs and money such projects bring, but per Ivan Penn and Karen Weise in the August 14th New York Times, “Big Tech’s A.I. Data Centers Are Driving Up Electricity Bills for Everyone.”  Even though “Amazon, Google, Microsoft and other technology companies” are moving “into the energy business,” “the average electricity rate for residents has risen more than 30 percent since 2020,” and as “recent reports expect data centers will require expensive upgrades to the electric grid,” “A.I. could turbocharge those increases,” “unless state regulators and lawmakers force tech companies to cover those expenses.” 

Similarly, “AI Isn’t Free.  The First Costs Are on Your Bill, and More Are Coming” (Kay Rubacek, The Epoch Times, September 24th).  With rising electric costs common nationwide, “despite the technological advancements, computing power is not getting more efficient in terms of power usage.  It is becoming ever more energy-hungry.”  As such, “the Department of Energy now warns of a hundred-fold increase in blackout risk by 2030 if data center growth continues and plants keep closing on schedule,” yet “experts cannot accurately predict (AI’s) future costs because the technology is changing too fast.”

General-public reactions to AI power and water use are coming in, and they are often not positive, as in “AI Data Centers Create Fury from Mexico to Ireland” (Paul Mozur et al., The New York Times, October 20th).  “In country after country, activists, residents and environmental organizations have banded together to oppose data centers,” but “there are few signs of a slowdown,” as, per bank UBS, “companies are expected to spend $375 billion on data centers globally this year and $500 billion in 2026.”  In Ireland in particular, where “a third of the country’s electricity is expected to go to data centers in the next few years, up from 5 percent in 2015,” the “welcoming mood has soured,” and the country has now “become one of the clearest examples of the transnational backlash against data centers,” as “a protest movement has grown.”  Meanwhile, “impoverished small towns” in Mexico near where data centers have appeared “began experiencing longer water shortages and more blackouts.”

It is clear from all this that the rubber of increased AI infrastructure is meeting the road of damage to residents.  There will be vastly more conflict next year, and much of it, even in the United States, will prevent data centers from being built as protests multiply.  That will become yet another problem for the technology to overcome, and it will push costs even higher.

I have been reading about the possibility of a severe artificial intelligence downturn, with comparisons and contrasts to what happened almost 200 years ago with railroads.  Then, the failed companies left behind track, bridges, and stations that were later used when the industry reconstructed itself.  What would AI abandon?  Failed companies’ data center buildings would remain, but the chips inside would, as now, be worthless well under a decade after they were made.  While it is heartening that current resource usage comes from running the technology rather than from upgrades, and the chance of today’s vast number of profitable and worthwhile applications disappearing is almost nonexistent, companies going bust could still erase tens of trillions of dollars in market capitalization.  It’s easy to imagine effects such as a 50% fall in the NASDAQ index.  Yet those gigantic physical structures will still be useful.  How, we don’t know, but they will be, one way or another.

Still No AJSN

Until data from the Bureau of Labor Statistics becomes available, there will be no further editions of the American Job Shortage Number.  If the November data is available by December 4th, the November version will be posted December 5th as previously expected.  I will put together and release the September and October editions, on dates to be determined, if the BLS publishes back data supporting them.