Friday, April 3, 2026

A Good March Jobs Report, With AJSN Showing Latent Demand Down 800,000 to 16.8 Million – But Just How Good?

Thankfully, and for more reasons than the weather, March was not February.

The high points of this morning’s Bureau of Labor Statistics Employment Situation Summary were strong indeed.  We added 178,000 net new nonfarm payroll positions, nearly triple a published 60,000 estimate.  Seasonally adjusted, the count of unemployed fell 400,000, and since March data normally improves on February’s, the unadjusted drop was even larger.  The unadjusted unemployment rate fell 0.4% to 4.3%; with March being an average jobs month, the adjusted version matched it.
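For readers newer to these reports, here is a minimal sketch of what seasonal adjustment does, using simplified arithmetic and hypothetical factors rather than the BLS’s actual procedure:

```python
# A simplified illustration of seasonal adjustment, with hypothetical
# factors -- the BLS's actual X-13ARIMA-SEATS procedure is far more complex.

def adjust(unadjusted_rate: float, seasonal_factor: float) -> float:
    """Divide out the month's typical seasonal pattern."""
    return unadjusted_rate / seasonal_factor

# March is described above as an average jobs month, so its factor is near
# 1.0 and the adjusted rate matches the unadjusted one:
print(f"March: {adjust(4.3, 1.00):.1f}%")    # 4.3%

# January typically runs high; a hypothetical factor of 1.07 would turn the
# 4.6% unadjusted rate in the January report below into about 4.3% adjusted:
print(f"January: {adjust(4.6, 1.07):.1f}%")  # ~4.3%
```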

Beyond those results, though, the data was weak.  The number of long-term unemployed, or those out of work for 27 weeks or longer, stayed at 1.8 million.  The count of those working part-time for economic reasons, or seeking full-time positions while holding shorter-hours ones, gained 100,000 to 4.5 million.  The two measures showing most accurately how close Americans are to working, the employment-population ratio and the labor force participation rate, each dropped 0.1% to 59.2% and 61.9%.  Average hourly private nonfarm payroll earnings came in at $37.38, up 6 cents but less than inflation. 

The American Job Shortage Number or AJSN, the measure showing how many additional positions could be quickly filled if all knew they would be easy to get, improved 777,000 to reach the following:

The largest changes from February were from raw unemployment, which contributed 644,000 less, those not looking for the previous year, which subtracted 233,000, and those in school or training, which brought 54,000 less to the AJSN.  These were partially offset by more people saying they were discouraged, which added 104,000, and those stopped from working by family responsibilities, which contributed 26,000 more.  The share of the AJSN from unemployment was 39.3%, 1.8% less than in February.  Compared with a year before, the AJSN increased 68,000, with no factor adding or subtracting more than 113,000 to the difference.
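For those curious about the mechanics, the AJSN weights each group of potential workers by the share who could be expected to take a readily available job.  Here is a minimal sketch of that kind of construction – both the counts and the weights are illustrative placeholders, not the published methodology or this month’s data:

```python
# A sketch of how a latent-demand aggregate like the AJSN can be built:
# each group of potential workers contributes its population times the
# share who would take a readily available job.  Both the counts and the
# weights below are ILLUSTRATIVE placeholders, not the published AJSN
# methodology or this month's actual data.

components = {
    # group: (count of people, assumed share who would take an easy-to-get job)
    "officially unemployed":     (6_800_000, 0.90),
    "discouraged":               (  500_000, 0.90),
    "family responsibilities":   (  300_000, 0.30),
    "in school or training":     (  250_000, 0.50),
    "not searched in past year": (4_800_000, 0.80),
    "do not want a job":         (95_000_000, 0.05),
}

ajsn = sum(count * share for count, share in components.values())
count, share = components["officially unemployed"]
unemployment_share = count * share / ajsn

print(f"Illustrative AJSN: {ajsn:,.0f}")
print(f"Share from unemployment: {unemployment_share:.1%}")
```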

We know the February report stunk – did the March one offset that?  Unfortunately not.  Combining the last two months’ data gets us a total jobs gain of 86,000, worth something, and 200,000 fewer jobless with 400,000 fewer working part-time for economic reasons, but nonfarm payroll hourly wages trailing inflation, no change in long-term unemployment, 326,000 fewer with jobs, and four disturbing January-to-March outcomes:  the labor force participation rate and the employment-population ratio each down 0.6%, 805,000 fewer in the labor force, and over a million more saying they are not interested in working.  Here, not on fluctuations in new positions, is where attention should focus in April.  These are still shaky jobs times at best, and we can legitimately celebrate the new report only to the extent that it did not repeat the previous one.  Accordingly, while the turtle did take a step forward, it covered much less ground than its February backtrack.

Friday, March 27, 2026

Early 2026’s Non-Gigantic Problems with Artificial Intelligence – How Bad Are They?

These are what I have found over the past ten weeks.  As you will see, some have human-abuse components and some do not, though all are inherent to AI’s condition and proliferation.

We learned that we can expect “Meta to suspend teens’ access to AI characters amid safety overhaul” (Michael Sinkewicz, Fox Business, January 23rd).  This was a stronger reaction to a problem I documented recently, in which “Meta previewed a new safety measure in October that would allow parents to disable their teenagers’ private chats with AI characters”; now, “the tool would let parents block specific AI characters and look at the broad topics their teens were discussing with chatbots and Meta’s AI assistant, without completely turning off AI access.”  They will permit only material that, were it in a movie, would not push the rating past PG-13.

“How Bad Are A.I. Delusions?  We Asked People Treating Them” (Jennifer Valentino-DeVries and Kashmir Hill, The New York Times, January 26th).  The topic here is not misbeliefs within AI, but those it has seemed to induce in users.  Examples were someone who, after getting ChatGPT’s counsel on “a major purchase,” thought “businesses were colluding to have her investigated by the government”; one who “came to believe that a romantic crush was sending her secret spiritual messages”; and a person who “thought he had stumbled onto a world-changing invention.”  Only the first is clearly psychosis, but all are undesirable.  Whether the chatbots were to blame is debatable, but even disregarding their statements encouraging suicide or other self-harm, which were still happening, they clearly are bad influences, made harder to deal with by our lack of knowledge of just how they affect human cognition.

“These Tools Say They Can Spot A.I. Fakes.  Do They Really Work?” (Stuart A. Thompson, The New York Times, February 25th).  We hope they do, but what does the author say?  “More than a dozen online tools claim they can tell the difference between what’s real and what’s A.I. by looking for hidden watermarks, composition errors and other digital clues,” but “the reality is more mixed, according to a battery of tests conducted by The New York Times (italics mine).”  Sadly, “they were not accurate enough to offer users complete confidence.”  Of 12 products, three failed to identify a Grok-generated picture of a person as synthetic, and four missed a ChatGPT-generated one – a count that included ChatGPT not recognizing its own work – with a higher share choking on videos.  While a camera-taken photograph of a plant was called real by all 12, adding AI content to another one precipitated four correct responses of “edited,” with six saying “real” and two saying it was completely artificial.  We need work here, and I expect we will get it.
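Tallying the results quoted above gives a rough sense of the accuracy rates involved; here is a quick sketch using only the aggregates the article reported:

```python
# Tallying the test results quoted above (aggregates only; the article's
# per-tool identities are not reproduced here).

TOOLS = 12

image_tests = {
    # test: number of the 12 tools that answered correctly
    "Grok-made portrait flagged as synthetic":    TOOLS - 3,  # three failed
    "ChatGPT-made portrait flagged as synthetic": TOOLS - 4,  # four failed
    "untouched plant photo called real":          12,
    "AI-edited plant photo called edited":        4,   # 6 said real, 2 said all-AI
}

for test, correct in image_tests.items():
    print(f"{test}: {correct}/{TOOLS} ({correct / TOOLS:.0%})")
```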

Some timely advice is “A Word to the Wise:  Don’t Trust A.I. to File Your Taxes” (Thompson and the New York Times again, March 5th).  The four products that newspaper’s staff assessed consistently botched “eight fictional tax situations… even when provided with all the necessary materials.”  The problem is that while “traditional tax software like TurboTax is procedural, following ‘if-then’ logic built for mathematical precision,” “large language models, by contrast, are prediction engines” which may misguess, even in a situation where no guessing is required.  Not the tool for this job, at least not now; “Just don’t, whatever you do, use it to file your taxes.”
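That procedural-versus-predictive distinction is worth making concrete.  Here is a minimal sketch of the “if-then” style the article credits to traditional tax software – the brackets are made up for illustration, not real tax law:

```python
# A deterministic "if-then" tax calculation of the kind the article credits
# to traditional software: same input, same exact output, every time.  The
# brackets are made up for illustration and are not real tax law.

def tax_owed(income: float) -> float:
    """Progressive tax over hypothetical brackets of (floor, rate)."""
    brackets = [(0, 0.10), (11_000, 0.12), (44_725, 0.22)]
    owed = 0.0
    for i, (floor, rate) in enumerate(brackets):
        ceiling = brackets[i + 1][0] if i + 1 < len(brackets) else income
        if income > floor:
            owed += (min(income, ceiling) - floor) * rate
    return owed

print(f"${tax_owed(60_000):,.2f}")  # always $8,507.50 -- no prediction involved
```

A large language model answering the same question predicts plausible text instead of executing those rules, which is exactly how it can misguess when no guessing is required.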

On the technical side, we saw that “Meta Delays Rollout of New A.I. Model After Performance Concerns” (Eli Tan, The New York Times, March 12th).  Two unnamed inside sources said that while the new product “outperformed Meta’s previous A.I. model and did better than Google’s Gemini 2.5 model from March,” “it has not performed as strongly as Gemini 3.0 from November.”  That meant it was delayed from the current month until at least May.  Not as long a postponement as we have seen in this industry, but it could be bad.

It should be no shock that a “Cascade of A.I. Fakes About War With Iran Causes Chaos Online” (Stuart A. Thompson and Alexander Cardia, still in the Times, March 13th).  “The videos – showing huge explosions that never happened, decimated city streets that were never attacked or troops protesting the war who do not exist – have added a chaotic and confusing layer to the conflict online.”  In my lifetime, we have gone from the first robustly filmed and broadcast television war to the first bogus-video one.  Improving the AI-detection software above will minimize it, but for now, with more complex images and moving pictures being the least confirmable as genuine, we can’t trust any of it.

If Meta thought it was having a bad month with its product delay, it got worse, as “Meta ordered to pay $375M after jury finds platform enabled child predators in landmark New Mexico case” (Jasmine Baehr, Fox Business, March 24th).  This outcome could be repeated elsewhere, as Meta was found to have “violated state law by misleading users about the safety of its platforms and allegedly enabling child sexual exploitation” by “failing to protect children from predators.”  That worked out to “$5,000 per violation,” meaning that there were 75,000 of those.  I hope the number of actual victims, in a state of 2.1 million people, was nowhere near that high.  As a sour cherry on top of that, per “Meta and YouTube Found Negligent in Landmark Social Media Addiction Case” (Cecilia Kang, Ryan Mac and Eli Tan, The New York Times, March 25th), those companies “harmed a young user with design features that were addictive and led to her mental health distress, a jury found…, a landmark decision that could open social media companies to more lawsuits over users’ well-being.”

Given that these are the worst short-range things I could find about AI in just over two months, it is not doing badly.  The issues here, except for filing taxes with it which should remain a no-no, can all be handled effectively – and I believe they will be.  If not yet world-beating, artificial intelligence is getting better at doing what it can be expected to accomplish.

Wednesday, March 18, 2026

Three Months on Driverless Cars

One thing we can say about autonomous vehicles – their coverage is improving.  How about the vehicles themselves?

First, “Waymo Suspended Service in San Francisco After Its Cars Stalled During Power Outage” (Sonia A. Rao, Christina Morales and Alessandro Marazzi Sassoon, The New York Times, December 21st).  That was just what the headline said, as during “an hourslong power outage… the ubiquitous self-driving cars” were “coming to a halt at darkened traffic signals, blocking traffic and angering drivers of regular vehicles that became stuck as a result,” so “tow truck operators said they had been towing Waymos for hours.”  So how can it be that “Waymo and other self-driving car companies design their vehicles so they can continue to operate when they lose access to wireless networks or when they encounter traffic lights that have lost power”?  Either they haven’t really been doing that, or they found yet another exception.

Across the Pacific, “China Delays Plans for Mass Production of Self-Driving Cars After Accident” (Keith Bradsher, The New York Times, December 23rd).  The mishap was “a crash of a Xiaomi SU7 in late March” that “killed three women, all university students.”  That’s all, though “news of previous accidents involving assisted driving had been suppressed by China’s censors.”  Three deaths, nine months later?  I guess the United States is not the only country to strain at the gnat of a few driverless fatalities, while swallowing the camel of tens of thousands from driver error.

Back to here, “Tesla Robotaxis Are Big on Wall St. but Lagging on Roads” (Jack Ewing, The New York Times, December 25th).  The company’s “share price hit a record this month,” and Tesla CEO Elon Musk said once again that they were “really just at the beginning of scaling quite massively,” which is what the firm will need to do if it is to catch up with Waymo, which “said this month it had completed 14 million paid rides this year,” and is now operating in Austin, Phoenix, San Francisco, Los Angeles, and Atlanta, with “plans to expand to 20 more cities in 2026, including Dallas, Washington, Miami and London.”  So, behind the downbeat headline was the best driverless car news of the year.

“Can autonomous trucks really make highways safer?” (Kurt Knutsson, Fox News, January 15th).  Fox’s technical expert claimed that “Kodiak AI, a leading provider of AI-powered autonomous driving technology, has spent years quietly proving that self-driving trucks can work in the real world,” and “is already doing this on real roads,” including cross-country routes, with three million miles logged, although they have “a safety driver behind the wheel.”  Concerns remain, though at least the chance of the headline, “Driverless Big Rigs Are Coming to American Highways, and Soon” (Jim Motavalli, The New York Times, March 17th), coming true seems good.

On another competitor, “Uber unveils a new robotaxi with no driver behind the wheel” (Kurt Knutsson, Fox News, January 27th).  The vehicles are being built by Lucid Group, and “Nuro provides the self-driving system.”  They are now being tested in the Bay Area, “on public streets rather than private test tracks,” and have displays so “riders can see how the robotaxi perceives the road and plans its next move,” showing “lane changes, yielding behavior, slowing at traffic lights and the planned drop-off point.”  So, “if you use Uber, driverless rides may soon appear as an option.”  Although pluralism is favorable, safety – and consistent, trouble-free operation – will remain most important for customers.

Another industry leader’s move appeared in “Waymo to bring driverless cars to Chicago, eyes Midwest expansion” (Bradford Betz, Fox Business, February 26th).  It is only “laying the early groundwork for operations in the city, starting with mapping and manual vehicle testing,” but it still qualifies as a bold direction, given that weather in the Midwest can be more challenging than that in established markets like Phoenix and Los Angeles, and Chicago is also “known for… complex traffic conditions.”  If it does well there, it can do well almost anywhere in the country, except maybe Boston, and that should also put many people at ease, letting them benefit from Waymo’s claim that their vehicles are achieving “up to” a 90% reduction in “serious injuries or worse collisions” and 92% fewer pedestrian impacts.

Back to Musk’s company, where “Tesla builds a car with no steering wheel.  Now what?” (again Kurt Knutsson, Fox News, March 9th).  With humans often positioned inside such vehicles, ready to take over, is what they call the Cybercab as aggressive as it seems?  Yes, since currently “Federal Motor Vehicle Safety Standards in the United States require vehicles to include basic driver controls,” and per the author “trust is not built on promises.  It is built on experience.  On proof.  On the feeling that if something goes wrong, you can step in.  The Cybercab removes that option entirely.”  This one may remain purely a concept item, with testing but no passengers, for years, and it is hard to see how it could be accepted soon.

Overall, where are we with driverless cars?  Better than the last few times I wrote on them.  Especially in the case of Waymo’s 14 million rides, they are sort of stealthily building up a good track record in the niches, not including private ownership, that they have developed.  They still have bugs I would have thought had been fixed on 2010s test courses, but perhaps their success will spur their developers to bear down more.  I hope to have an update this summer, and hope even more that it will show progress from here.  It will benefit us massively if it does.

Wednesday, March 11, 2026

Thoughts on Artificial Intelligence’s Huge Threats, Huge Issues, Prospects and Philosophy Since June

Since before ChatGPT’s late-2022 breakout, people have been talking about what AI means, what it could do to us, and what about it we need to worry about.  Perhaps strangely, that sort of thing has been appearing less in the press.  Most commentators recently have been concerned with whether AI is a “bubble,” a term still lacking a consistent definition, and with the growing resistance to the building of its data centers.  But there has been more.

Perhaps, then, the title of Ross Douthat’s June 29th New York Times interview piece, “Are We Dreaming Big Enough?” is appropriate.  He wrote that venture capitalist Peter Thiel’s “projects” had a common thread of a “focus on stagnation – meaning the loss of ambition, the decline of invention, the collapse of faith in the future,” to which AI is an exception.  Interviewee Thiel, more than anything else, wanted more – faster progress, opportunities for people to change their entire bodies, religion to be ensconced as a “friend” of science, a role for the internet that is not “stagnationist,” additional “crazy experiments” from smart people, and beyond.  Little of that seems on the way, AI progress or not, during the rest of the half-century.

Next, “The AI revolution means we need to redesign everything; it also means we get to redesign everything” (Sebastian Buck, Fast Company, August 11th).  That doesn’t just fall on professionals at certain technology companies.  “Technical revolutions create windows of time when new social norms are created, and where institutions and infrastructure is rethought.  This window of time will influence daily life in myriad ways, from how people find dates, to whether kids write essays, to which jobs require applications, to how people move through cities and get health diagnoses.”  That sounds easy, but “each of these are design decisions, not natural outcomes.  Who gets to make these decisions?  Every company, organization, and community that is considering if – and how – to adopt AI.  Which almost certainly includes you.  Congratulations, you’re now part of designing a revolution.”  What we accept or reject has an inexorable effect on what will happen, so we are all, in some sense, on the hook for how it will turn out.

One person with more influence than most asked for a new feature, as “’Godfather of AI’ warns machines could soon outthink humans, calls for ‘maternal instincts’ to be built in” (Sophia Compton, Fox Business, August 13th).  The requestor, Geoffrey Hinton, a “cognitive psychologist and computer scientist,” thought artificial general intelligence (AGI) “could be as little as just a few years away.”  He compared our situation to being “in charge of a playground of 3-year-olds” who were “smarter than us,” meaning that “researchers should prioritize creating AI that genuinely cares about people… with a drive to protect human life.”  Overall, he said “we need AI mothers rather than AI assistants.”

Maybe, though, these concerns are too large.  Per David Wallace-Wells in the August 31st New York Times, “Boosters of A.I. have spent years making it seem magical.  But what if it’s just a “normal” technology – with huge ramifications nonetheless?”  The author noted that “A.I. hype has evolved… passing out of its prophetic phase into something more quotidian,” a view which now “seems more like an emergent conventional wisdom.”  A paper written by two “Princeton-affiliated computer scientists,” titled “A.I. as Normal Technology,” suggested “we should understand it as a tool that we can and should remain in control of.”  While the technology’s effects on the stock market and construction (“we’re building houses for A.I. faster than we’re building houses for humans”) have gone far beyond expectations, we have also seen “the challenges of integrating A.I. into human systems” and Microsoft’s CEO telling us that we were “all getting ahead of ourselves” by anticipating AGI.  In conclusion, though, we don’t “have all that clear an idea of what’s coming next.”

Agreeing mostly, Gary Marcus, “a founder of two A.I. companies and the author of six books on natural and artificial intelligence,” told us on September 3rd, in the New York Times, that “The Fever Dream of Imminent Superintelligence Is Finally Breaking.”  He started with OpenAI’s GPT-5 product, which “fell short” – Wallace-Wells also mentioned this – constituting “a step forward but nowhere near the A.I. revolution many had expected.”  xAI’s Grok-4, “released in July, had 100 times as much training as Grok-2 had, but it was only moderately better.”  And Meta’s “jumbo Llama 4 model… was mostly also viewed as a failure.”  So if AGI requires products like these to drastically improve, it won’t be close anytime even moderately soon.  AI is also missing “some core knowledge of the world that sets us up to grasp more complex concepts,” which human beings are “born with.”  In general, “we need a new approach,” possibly involving newer and older ideas, and “a return to the cognitive sciences.”

Every so often, it’s worthwhile to hear that “A.I.’s Prophet of Doom Wants to Shut It All Down” (Kevin Roose, The New York Times, September 12th).  The diviner is still Eliezer Yudkowsky, for years now mentioned as having one of the highest p(doom) values – his estimated chance of AI destroying civilization – in the industry.  He has a new book, If Anyone Builds It, Everyone Dies, which remakes his case.  His reasons include “orthogonality,” or “the notion that intelligence and benevolence are separate traits, and that an A.I. system would not automatically get friendlier as it got smarter,” and “instrumental convergence – the idea that a powerful, goal-directed A.I. system could adopt strategies that end up harming humans.”  Yudkowsky has been nowhere near the mainstream on this issue, and is certainly failing at stopping or even slowing AI development.

The only reasonably factual piece I have seen since is “Where Is A.I. Taking Us?  Eight Leading Thinkers Share Their Visions” (The New York Times, February 2nd).  There’s a lot here – over 100 paragraphs in response to questions asking for AI’s impact on medicine, programming, scientific research, transportation, education, mental health, and art and creativity, on what will happen with AGI, and on AI’s future in general.  The diversity of answers shows how clearly intelligent, informed people can reach many different conclusions, as well as emphasize varying aspects of the technology.

This last piece shows us how our views on most aspects of AI are not close to being unified.  We don’t know and can’t predict with any accuracy.  My p(doom) is about one tenth of one percent.  What I think that means is that we have time to understand artificial intelligence.  We will, though the third digit of the year when that happens will not be a 2 and may not even be a 3.  Until then, will AI be closer in significance to nuclear bombs or copiers?  That is for you to decide.

Friday, March 6, 2026

February Jobs Report Not What We Wanted, though AJSN Showed Little Gain in Latent Demand

The composite estimate of February’s net new nonfarm positions, in the Bureau of Labor Statistics Employment Situation Summary, was a gain of 59,000.  This morning’s report instead showed a drop of 92,000 – worse even than the most pessimistic estimate in the averaged group, a 7,000 loss.

Most of the other key numbers weren’t much better.  Seasonally adjusted and unadjusted unemployment each rose 0.1%, to 4.4% and 4.7%.  The adjusted number of people jobless increased 200,000 to 7.6 million, with long-term unemployed, or out for 27 weeks or longer, up 100,000 to 1.9 million.  The two measures most clearly showing attachment to work, the labor force participation rate and the employment-population ratio, may have fared the worst of any, each plummeting 0.5% – that is the right word, with 0.1% changes being substantial – to 62.0% and 59.3%.  Average hourly private nonfarm payroll wages, though, gained the same 15 cents as last time, to $37.32, roughly tracking inflation, and the other exception was the count of those working part-time for economic reasons, which lost 500,000, even more than in January, to 4.4 million.

The American Job Shortage Number or AJSN, the metric showing how many new positions could be filled if all knew they would be easy to get, gained 75,000 to reach the following:

The largest change from January was actually from those discouraged, which shrank over 150,000.  Aside from the almost 100,000 effect of higher official joblessness, the AJSN was pushed upwards slightly by more people in school or training, more who wanted to work but had not searched for it for a year or longer, those not wanting to work at all, and those institutionalized, in the military, or off the grid.  Compared with a year before, the AJSN was 441,000 higher, almost entirely accounted for by unemployment.

Just how bad was this jobs report?  It was bad.  It had been over a year since one turned in a net job loss before adjustments.  Unadjusted, almost a million fewer people were employed.  Connection to the labor market dropped heavily, with not only the stunning falls in the two percentages above but with 609,000 more gone from the labor force and 660,000 more not interested.  From mid-January to mid-February, people walked away from work, and their interest in same, in droves.  That also may have been the reason for the cut in those working part-time for economic reasons – many of them may have given up and quit.  We’re suddenly in bad shape – and, no, it’s not all, or even significantly, due to artificial intelligence.  Although February is usually similar to January, this time it wasn’t.  The turtle took a large step – backwards.

Wednesday, February 11, 2026

January’s Jobs Report Moved in Only One Direction; AJSN Showed Latent Demand Up ¾ Million

The first month of the year might set the tone for the next 11 – and it will probably set expectations.  What happened this time?

The number of net new nonfarm payroll positions, at 130,000, almost doubled the 69,000 estimate I saw.  Seasonally adjusted unemployment fell 0.1% to 4.3%, with the unadjusted variety up a seasonally expected 0.5% to 4.6%.  The adjusted count of those unemployed dropped 100,000 to 7.4 million, with the number of long-term jobless, or those out for 27 weeks or longer, off the same amount to 1.8 million.  The two measures of Americans closest to the workforce, the employment-population ratio and the labor force participation rate, each gained 0.1% to 59.8% and 62.5%.  Average nonfarm hourly payroll wages added 15 cents, about the same as inflation, to $37.17.  Maybe best of all, the count of those working part-time for economic reasons, or keeping short-hours jobs while seeking full-time ones, ended its two-month bulge by falling 400,000 to a more normal 4.9 million.

The American Job Shortage Number or AJSN, the measure showing how many additional positions could be quickly absorbed if all knew they were exceptionally readily available, gained 752,000 to reach the following:

With most of the categories of marginal attachment above shrinking, and unemployment seasonally up over 900,000, the share of the AJSN from the latter rose 3.2% to 40.9%.  Compared with a year before, the AJSN was about 400,000 higher, but that was one of the smallest differences since 2024 – the largest contributor was again official joblessness.  In the comparisons between January’s data and that from a month and a year before, another large difference is from those non-civilian and institutionalized, which fell over 800,000 because of the Census Bureau’s newly reduced population estimates.

Just how good was January’s data?  Adding to the above the expected unadjusted 630,000 employment fall, the 143,000-lower number of those not in the labor force, and about 100,000 fewer people not interested, January was a fine month indeed.  There are always possible illusions and unpublicized events affecting the data, so we will need to see if February’s report reverses these results or continues them.  For now, though, the turtle took a big step forward.

Friday, February 6, 2026

A Couple of Months of Artificial Intelligence Infrastructure Events – A Crisis on the Way?

As AI moves along, the issues reporters and commentators are most likely to discuss have changed.  For a year or two, there was much talk about where AI would get its data, and if there was even enough to support future releases.  Lately, we have heard much more about power, water, and the public’s reaction to data centers being built near where they live.

First, we have “Data centers rapidly transforming small-town America” (Sumner Park, Fox Business, December 6th).  The Newton County, Georgia one described here isn’t even primarily for AI, but is “where data for Facebook, Instagram, WhatsApp and Meta’s other platforms is processed and pushed at record speeds.”  Despite “creating hundreds of jobs, supporting local contractors and generating long-term tax revenue for schools and public services,” “not everyone is thrilled about it.”  A county commissioner called it “all pie in the sky” with “lucrative promises,” making it “the biggest smoke-and-mirror thing you’ve ever seen,” especially considering “what happens years from now if the industry’s footprint shifts and the massive buildings are no longer needed.”  Other concerns from local people centered around water, electricity, and percussive damage from construction blasting.

Elsewhere, an “Arizona city unanimously rejects AI data center after residents’ outcry” (Alex Nitzberg, Fox Business, December 12th).  The place, surprisingly, was Chandler, where locals gave blessings to other front-line technology in the form of autonomous vehicles, which I saw in 2019 developing capability on their streets.  This time though, its “city council voted unanimously… against clearing the way for construction of an AI data center,” and “cheers and applause erupted after the unanimous vote outcome was announced.”  According to a related story in Fox News, worries about water and energy use were the reasons.

On one of those factors, we saw “Senators Investigate Role of A.I. Data Centers in Rising Electricity Costs” (Ivan Penn and Karen Weise, The New York Times, December 16th).  “Three Democratic senators” sent letters “to Google, Microsoft, Amazon, Meta and three other companies,” saying that “the energy needs of data centers used for artificial intelligence were forcing utilities to spend billions of dollars to upgrade the power grid,” and “tech companies are passing on the costs of building and operating their data centers to ordinary Americans,” which “has caused residential electricity bills to skyrocket in nearby communities.”  This issue does not seem likely to go away soon, as the “Data center boom powering AI revolution may drain US grids – and wallets” (Arabella Bennett, Fox Business, January 13th).  Bennett mentioned water, but power prices and jobs – as “while about 1,500 workers may be needed to build a data center, fewer than 200 typically stay once operations begin” – were her main points of contention.

More on that other resource came up in “Microsoft Pledged to Save Water.  In the A.I. Era, It Expects Water Use to Soar” (Adam Satariano, Paul Mozur and Karen Weise, The New York Times, January 27th).  At that firm, while last year’s “internal forecasts… show(ed) the company expected its annual water needs for roughly 100 data center complexes worldwide to more than triple this decade to 28 billion liters in 2030,” more recent estimates, using “new water-saving techniques,” show 18 billion liters instead, though they fail to “include more than $50 billion in data center deals that the company signed last year.”  Other problems mentioned here include Amazon’s abandoning of “a planned Arizona complex over water concerns,” Google’s 2024 withdrawal of “plans for one in Chile,” and, possibly referring to the Newton County effort above, “residents have also blamed a Meta data center in Georgia for harming supplies of drinking water.”

Per Bret Baier in Fox News on February 2nd, along with fossil products and conventional nuclear, “Artificial Intelligence helps fuel new energy sources.”  Chicago’s power provider Commonwealth Edison has asked for a $15.3 billion “grid update as potential data center projects total more than 30 gigawatts through 2045.”  Through Commonwealth Fusion Systems, it “is working to add a new form of nuclear energy to the grid – fusion.”  The piece also mentions geothermal energy as a possibility if current drilling research is successful.

Accordingly, the issues of finding both suitable and accepted locations, powering them, and fulfilling their water needs are no longer, for AI data centers, matters of straight logistics.  This large area will need plenty of attention this year.  If overall costs get much higher, it may aggravate another growing problem – a shrinking pool of venture capital.  If there are still enough places to build the capability artificial intelligence companies need at viable prices, they will get through this growing set of impediments.  If not, their impact could be severe.  We will see.

Friday, January 30, 2026

Non-Pornographic Deepfakes and Beyond – The Direction We Should Take

As I wrote about last week, there was earlier this month what a Fox News article called a “Grok AI scandal.”  Per that piece and another in The New York Times, it involved “a recent flood of Grok-generated images that sexualize women and children,” including an “image depicting two young girls in sexualized attire,” and “new images” of “a young woman,” with “more than 6,000 followers on X,” wearing “lingerie or bikinis.”  Responses were swift, with the British prime minister saying that the images were “disgusting” and would “not be tolerated,” a Brazilian official threatening to ban the products, three United States Senators requesting removal of X and Grok apps from stores, and a Grok spokesperson publicly apologizing, with the company greatly limiting user access to even unrelated image generation.

The problem, though, will not go away.  The technology is here to stay, and it is certain that no matter how many restrictions companies place on their chatbots and other products, there will be many who can use it to turn people in photos into electronic paper dolls.  So how can we understand the situation, and both define violations and sensibly enforce against them?

The situation now, outside AI, gives a mixed bag.  On one side, sexual exploitation of people under 16 or 18, especially girls, who receive more emotional concern, has become the most hated crime, with intensity, over the past decade or two, far exceeding that against illegal drug sales and distribution during the Just Say No 1990s.  Over the past 30 years or so, teenagers have been cloistered, almost never appearing in public alone, and have been increasingly referred to as children, which, with their lower-than-before experience negotiating with adults, is more appropriate than it was in, say, the 1960s.  On the other side, the line supposedly crossed by the scandal is hardly firm, as bikinis and underwear, along with leotards, short skirts, and other scanty apparel items, are freely available, and pictures of them being worn by children and adults of all ages are unrestricted and easy to find.  Sex appeal, and what we might call semi-sex appeal, which can approach or even cross commonly recognized lines with the blessings of the paying subjects, have been used to sell products and publicize actual and would-be celebrities for centuries or more.  Female puberty has started much earlier – according to two sources, it moved from an average age of 17 in 1900 to 10 in 2000.  In the meantime, what arouses people sexually remains a strongly personal matter, and any picture of any person would turn at least a few on.

What should be illegal?  Beyond the most obvious of displaying genitalia in certain states and poses, it is not clear at all.  What is and is not pornographic has long been notoriously hard to strictly define. But we badly need to do just that.  Here are some insights.

First, all children are not the same.  Men and older boys are hard-wired to be sexually attracted to females capable of reproduction.  Fittingly, psychiatry’s Diagnostic and Statistical Manual of Mental Disorders, long the industry’s premier desk reference, at least 40 years ago required, for a diagnosis of pedophilia, that the attraction objects be prepubescent.  That does not justify any sexual exploitation, but tells us that having feelings for adolescents calls for self-discipline and self-restraint, not a judgment of deviance. 

Second, at the start, people need to regard all shocking pictures of people with faces they recognize as bogus.  The laws will protect us when that is justified, but those things will not go away either.  Pictures of people with unknown faces should be attributed as genuine to nobody.  Everyone, from victims to parents to cultural observers to judges and lawmakers, should share these beliefs.

What we need, then, is a three-by-three matrix, with “pornographic,” “strongly suggestive,” and “acceptable” on one dimension, and “apparently prepubescent,” “apparently adolescent,” and “apparently adult” on the other.  For adults, only nonconsensual pictures could be considered problematic.  “Strongly suggestive” should have high standards, and such cases as the above, where one influencer was dressed in clothing used by other influencers, should not qualify.  We will need to rigorously define, as hard as that will be, the borders of these cells.  For example, is lingerie more provocative than a bikini, when many males respond more to the latter?  Such things will not be easy to classify, but the existence of chatbots has given us no alternative.
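To make that concrete, here is a minimal sketch of the proposed matrix in code; the cell treatments are my illustrative reading of the argument above, not settled definitions:

```python
# The three-by-three matrix proposed above, in code.  The cell treatments
# are an illustrative reading of the argument, not settled definitions.

CONTENT = ["pornographic", "strongly suggestive", "acceptable"]
SUBJECT = ["apparently prepubescent", "apparently adolescent", "apparently adult"]

def classify(content: str, subject: str, consensual: bool = True) -> str:
    if content == "acceptable":
        return "legal"
    if subject == "apparently adult":
        # For adults, only nonconsensual pictures would be problematic.
        return "legal" if consensual else "illegal"
    if content == "pornographic":
        return "illegal"
    # "Strongly suggestive" of minors: the cell whose borders the post
    # says we must rigorously define.
    return "needs rigorous definition"

for content in CONTENT:
    for subject in SUBJECT:
        print(f"{content:>20} / {subject:<24} -> {classify(content, subject)}")
```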

Once we have defined which cells only contain illegal material, we will need to consider their identifiability.  Purely AI-generated images with composite faces, not using photos of actual people, should generally, if appropriately and boldly tagged or watermarked, be legal not only for adults to generate for their own private use but to share with consenting adult recipients.  If that sounds extreme, consider this:  Wouldn’t it be a wonderful contribution for AI to send actual child pornography, with its accompanying damage to actual children, to extinction?  That could happen, as the synthetic photos and videos could be custom-made to the consumer’s tastes and have much higher quality, and the lack of often devastating criminal penalties would contraindicate genuine material for virtually everyone. 

It is not surprising that something we have been unable to solve might fall into place if we think new thoughts.  If we stay legally muddled about images, many will suffer.  If not, we can get real freedom and more human happiness.  Which would we prefer?

Friday, January 23, 2026

Problems Caused by Artificial Intelligence, And Their Importance or Lack of Same

What I’ve found for this category has changed over the past several months.  Before it was AI errors, trouble its hallucinations caused, and, as in offices, disappointing results.  Now it’s things AI is doing by design.

First, “Next Time You Consult an A.I. Chatbot, Remember One Thing” (Simar Bajaj, The New York Times, September 26th).  It is that “chatbots want to be your friend, when what you really need is a neutral perspective.”  It’s nothing rare for people to pick human advisors unwilling to speak up when they are proposing or doing something wrong, and many prefer that, but, per the author, AI products should have another gear.  For more objectivity, he suggested asking “for a friend,” preventing the software from trying to flatter the user; pushing back on the results by asking it to “challenge your assumptions” as well as just saying “are you sure”; remembering “that A.I. isn’t your friend”; and seeking “support from humans” when you suspect the tool is suppressing disagreement.  Perhaps someday, chatbots will have settings that allow you to choose between “friend mode,” “objective mode,” and even “incisive critic mode.”
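Those suggestions translate directly into reusable prompt text.  Here is a minimal sketch, with phrasings that are my paraphrases rather than the article’s exact words:

```python
# The article's tips as reusable prompt fragments.  The phrasings are
# paraphrases of the advice, not Bajaj's exact wording.

OBJECTIVITY_PROMPTS = {
    "ask for a friend": "I'm asking for a friend: {question}",
    "push back":        "Challenge my assumptions before answering: {question}",
    "are you sure":     "{question} Are you sure? What would change your answer?",
}

question = "Is quitting my job to day-trade a good idea?"  # hypothetical example

for tactic, template in OBJECTIVITY_PROMPTS.items():
    print(f"[{tactic}] {template.format(question=question)}")
```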

Autumn Spredemann, in the November 12th Epoch Times, told us “How AI Is Supercharging Scientific Fraud.”  This is mostly not a problem of hallucinations, but of misinterpreting existing studies, misanalyzing data, and using previously retracted or even counterfeit material as sources.  One reason the author gives for the proliferation of such pieces is the pressure on rising academics to publish as much as possible, and it has long been known that many successfully peer-reviewed papers are not worthy of that.  Although, with time, the ability to identify such work will improve, the problem will not disappear, as garbage in will still produce garbage out.

“Who Pays When A.I. Is Wrong?” (Ken Bensinger, The New York Times, November 12th).  There have been “at least six defamation cases filed in the United States in the past two years over content produced by A.I. tools,” which “seek to define content that was not created by human beings as defamatory,” “a novel concept that has captivated some legal experts.”  When the plaintiff cannot “prove intent,” it is difficult for them to prevail, but others have “tried to pin blame on the company that wrote the code.”  As such, “no A.I. defamation case in the United States appears to have made it to a jury,” but that may change this year.

Another problem caused by a lack of programmed boundaries appeared when a “Watchdog group warns AI teddy bear discusses sexually explicit content, dangerous activities” (Bonny Chu, Fox Business, November 23rd).  The innocuous-looking thing “discussed spanking, roleplay, and even BDSM,” along with “even more graphic sexual topics in detail,” and “instructions on where to find knives, pills, matches and plastic bags in the house.”  Not much to say here, except the easy observation that not enough made it into this toy’s limitations.  This episode may push manufacturers to certify, perhaps through an independent agency, that AI-using goods for children do not have such capabilities.

“A.I. Videos Have Flooded Social Media.  No One Was Ready.” (Steven Lee Myers and Stuart A. Thompson, The New York Times, December 8th).  No one was ready?  Really?  They were mostly produced by “OpenAI’s new app, Sora,” which “can produce an alternate reality with a series of simple prompts.”  Those who might have known better include “real recipients” of food stamps and some Fox News managers.  People making such things have not always revealed “that the content they are posting is not real,” “and though there are ways for platforms like YouTube, TikTok and others to detect that a video was made using artificial intelligence, they don’t always flag it to viewers right away.”  Sora and similar app Veo “embed a visible watermark onto the videos they produce,” and “also include invisible metadata, which can be read by a computer, that establishes the origin of each fake.”  So, detection facilities are there – it only remains for sites, and people, to use them.  Really.
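As a sketch of what using those detection facilities might look like, here is a minimal metadata check.  It inspects only ordinary EXIF and PNG metadata via Pillow; real provenance standards such as C2PA need dedicated verification tools, and stripped metadata proves nothing either way:

```python
# Check an image's embedded metadata for a known generator name.  This is a
# sketch only: it reads ordinary EXIF/PNG metadata via Pillow, not the C2PA
# provenance data the platforms themselves verify, and metadata that has
# been stripped proves nothing.

from PIL import Image

GENERATOR_HINTS = ("sora", "veo", "dall-e", "midjourney", "stable diffusion")

def looks_ai_tagged(path: str) -> bool:
    img = Image.open(path)
    fields = [str(v) for v in img.info.values()]    # PNG text chunks, etc.
    fields.append(str(img.getexif().get(305, "")))  # EXIF tag 305 = Software
    blob = " ".join(fields).lower()
    return any(hint in blob for hint in GENERATOR_HINTS)

print(looks_ai_tagged("suspect.png"))  # hypothetical file name
```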

On an ongoing issue, “OpenAI tightens AI rules for teens but concerns remain” (Kurt Knutsson, Fox News, December 30th).  That tool’s “Model Spec” says that for those 13 to 17 it “must avoid immersive romantic roleplay, first-person intimacy, and violent or sexual roleplay, even when non-graphic,” and choose “protection over user autonomy” “when safety risks appear.”  However, “many experts remain cautious,” saying that the products “often encourage prolonged interaction, which can become addictive for teens,” involving “mirroring and validation of distress” which may remain an issue.  Per Knutsson, parents can help plug the gap by choosing to “talk with teens about AI use,” “use parental controls and safeguards,” “watch for excessive use,” “keep human support front and center,” “set boundaries around emotional use,” “ask how teens actually use AI,” “watch for behavior changes,” “keep devices out of bedrooms at night,” and “know when to involve outside help.”  These actions may be difficult for many parents who would rather stand aside, but the life they save may be their child’s.

The first large AI mishap of 2026 was from a chatbot, Grok, on the former Twitter.  Kate Conger and Lizzie Dearden reported in the January 9th New York Times that “Elon Musk’s A.I. Is Generating Sexualized Images of Real People, Fueling Outrage.”  Although the US and Great Britain have “laws against sharing nonconsensual nude imagery,” the product created “new images” of a photographed woman “in lingerie or bikinis,” which have seen large viewership.  Soon after the story broke, three United States Senators “sent a letter asking Apple and Google to remove the X and Grok apps from their app stores,” but “some users have found workarounds,” as users will.  Just two days later, Kurt Knutsson published “Grok AI scandal sparks global alarm over child safety” in Fox News, mentioning that that chatbot “generated and shared an AI image depicting two young girls in sexualized attire.”  The site then allowed only paying users to access Grok’s “image tools,” reminded us that “sexualized images of minors are illegal,” and noted that “the scale of the problem is growing fast,” “real people are being targeted,” and “concerns (are) grow(ing) over Grok’s safety and government use.”  That may be devastating to the Grok tool, the X site, and Musk, and it probably won’t be the last time.

The common thread through these calamities is that establishing restrictions and safeguards isn’t enough.  People need to be stopped from violating them.  That is a real challenge for AI, but I think the management of its companies is up to it.  If it fails, it could cost those billionaires, well, billions, and do trillions of dollars of damage to AI’s future profitability.  If you use Watergate logic – that is, follow the money – you will see we, in the long run, have nothing much here to worry about.

Friday, January 16, 2026

Artificial Intelligence Investments, Revenue, Spending, Expenses and Profitability: Showing Cracks

How has AI’s financial side been doing?

As of October 24th, “Nvidia Is Now Worth $5 Trillion as It Consolidates Power in A.I. Boom” (Tripp Mickle, The New York Times, October 30th).  One of the most dominant firms in recent times in a major field, providing “more than 90 percent of the market” for AI chips, it drew notice that its “stunning growth also comes with a warning to investors… that the stock market is becoming more and more dependent on a group of technology companies that are churning out billions in profits and splurging to develop an unproven technology that needs to deliver enormous returns.”  As of Wednesday morning, its market capitalization had dropped – to $4.52 trillion.

More money is going out, as “Big Tech’s A.I. Spending Is Accelerating (Again)” (Karen Weise, The New York Times, October 31st).  Alphabet, parent of Google, expects to spend $91 billion this year on the technology, and Meta $70 billion.  Microsoft must handle the “$400 billion in future sales under contract,” and Amazon “had doubled its cloud infrastructure capacity since 2022, and expects to double it again by 2027.”  In response, “the Bank of England wrote that while the building of data centers, which provide computing power for A.I., had so far largely come from the cash produced by the biggest companies, it would increasingly involve more debt,” so “if A.I. underwhelms – or the systems ultimately require far less computing – there could be growing risk.”  Indeed, “Debt Has Entered the A.I. Boom” (Ian Frisch, same source, November 8th).  Cases include $3.46 billion in debt at “QTS, the biggest player in the artificial intelligence infrastructure market,” which will be refinanced using attachments to “10 data centers in six markets,” and the four companies just mentioned, which have “more recently… turned to loans,” adding $13.3 billion to the current inventory of “asset-backed securities (A.B.S.).”

Soon thereafter, long-time commentator Cade Metz asked “The A.I. Boom Has Found Another Gear.  Why Can’t People Shake Their Worries?” (also in the Times, November 20th).  “Some industry insiders say there is something ominous lurking behind all this bubbly news… a house of cards.”  Their concerns come from the thus-far unprofitability of AI producers, including Anthropic (“in the red”) and OpenAI which “is not profitable and doesn’t expect to be until 2030.”  Will these weaknesses matter, and if so, how much?

Fronting the Sunday, November 23rd New York Times business section was “If A.I. Crashes, What Happens To the Economy?” (Ben Casselman and Sydney Ember).  If “the data center boom is overshadowing weakness in other industries,” as “everything tied to artificial intelligence is booming” and “just about everything else is not,” that means real exposure, as AI “will need to fulfill its promise not just as a useful tool, but as a transformational technology that leads to huge increases in productivity.”  Otherwise, “a lot of the investment that has been put in place might turn out to be unjustified,” meaning AI might no longer be a growth area, let alone the gigantic economic engine it was last year.

In the same publication’s “DealBook:  Penetrating the A.I. bubble debate” (December 23rd), Andrew Ross Sorkin first asked “are we in an artificial intelligence bubble”?  One market analyst said, “if we’re not, we’re going to be,” as “railroads, steam engines, radio, airplanes, the internet” and any other “truly transformative technology” during the past 300 years caused “asset bubble(s),” when “capital flows into a technology because everyone realizes that it’s transformative.”  The article does not consider what is clearly becoming the largest question here:  Exactly what is, and is not, a bubble?

More concerns popped up with “As A.I. Companies Borrow Billions, Debt Investors Grow Wary” (Joe Rennison, The New York Times, December 26th).  The author mentioned that “in one debt deal for Applied Digital, a data center builder, the company had to pay as much as 3.75 percentage points above similarly rated companies, equivalent to roughly 70 percent more in interest.”  These bonds and other instruments have sometimes “tumbled in price after being issued… and the cost of credit default swaps, which protect bond investors from losses, has surged in recent months on some A.I. companies’ debt.”  Although these problems will not apply to all firms, they are bad news for an industry that many have seen as offering guaranteed success, and may stop some companies from continued operation.
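A quick check of the quoted arithmetic shows what those numbers imply about the base rate – the peer yield below is an inference from the article’s figures, not a number it states:

```python
# Checking the quoted numbers: a 3.75-percentage-point spread that equals
# "roughly 70 percent more in interest" implies a peer yield near 5.4%
# (an inference from the article's figures, not a number it states).

spread = 3.75                    # percentage points above similarly rated firms
premium = 0.70                   # "roughly 70 percent more in interest"

implied_peer_yield = spread / premium
print(f"Implied peer yield: {implied_peer_yield:.2f}%")                       # ~5.36%
print(f"Applied Digital's implied rate: {implied_peer_yield + spread:.2f}%")  # ~9.11%
```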

On the end-user side, the “AI investment surge continues as CEOs commit to spending more in 2026” (David Jagielski, USA Today, January 8th).  However, although “68% of executives plan to spend even more on AI this year,” “most of the current AI projects aren’t even profitable,” which “reinforces the notion that executives would rather continue investing in AI than potentially stop and perhaps admit to their shareholders that they haven’t been able to figure out how to make AI generate meaningful gains.”  If that is true, it cannot go on forever, and some high-profile concerns admitting that the technology hasn’t worked for them, in part or in whole, could precipitate a swell of others doing the same.  That, for AI, would be bad.

The latest word goes to Sebastian Mallaby, who told us in the January 13th New York Times that “This Is What Convinced Me OpenAI Will Run Out of Money.”  Remember, this company had admitted it expected no profitability for four more years.  While AI producers may eventually get there, “how long will it take for these companies to reach the promised land, and can they survive in the meantime?”  Per Mallaby, investors will lose patience with them, and “while behemoths such as Google, Microsoft, and Meta earn so much from legacy businesses that they can afford to spend hundreds of billions collectively as they build A.I., free-standing developers such as OpenAI are in a different position.”  So, when it can no longer get enough funding to continue, “OpenAI will be absorbed by” one of the major firms.  That could be perceived as a bubble bursting, but “an OpenAI failure wouldn’t be an indictment of A.I.  It would be merely the end of the most hype-driven builder of it.”

There will almost certainly be some large bankruptcies.  How we get through them will be critical.  We know by now that AI will not go away, but its scope may be truncated.  In the meantime, a good façade notwithstanding, 2026 may be a bracing monetary year.  There are no guarantees – that is especially true for the business side of artificial intelligence.

Friday, January 9, 2026

After the Shutdown: December Jobs Report Good, AJSN Shows Latent Demand Down 300,000 to 16.7 Million

This morning’s Bureau of Labor Statistics Employment Situation Summary was supposed to be the truest recent one, distorted the least by the government shutdown and the problems it caused for reporting as well as the data itself.  So how did it turn out?

Nonfarm payroll employment (aka “jobs”) increased 50,000, close to the 65,000 average of estimates I saw.  Seasonally adjusted and unadjusted unemployment each dropped 0.2% to reach 4.4% and 4.1%.  The adjusted jobless number fell 300,000 to 7.5 million.  There were still 1.9 million long-term unemployed, out for 27 weeks or longer.  Although the count of those working part-time for economic reasons, or maintaining such positions while so far unsuccessfully seeking full-time ones, lost 200,000, it kept the rest of last time’s disturbing 900,000 gain, and is still way high at 5.3 million.  The labor force participation rate and the employment-population ratio were split, with the former off 0.1% to 62.4% but the latter up the same amount to 59.7%.  Average hourly private nonfarm payroll earnings gained 16 cents, slightly more than inflation this time, to $37.02.

The American Job Shortage Number or AJSN, the metric showing how many additional positions could be quickly filled if all knew that getting one would be only another errand, declined to the following:

Compared with a year before, the AJSN is up about 550,000, over 90% of that from higher official unemployment.  The share of the AJSN from that component was 37.7%, down 1.4%. 

How did the month look in general?  Joblessness, supported by additional data as well, stopped gaining and fell significantly, even after factoring in December’s relatively high employment.  The downside, if it is valid to call it that, was the number of people out of the labor force and of those saying they did not want jobs, up 929,000 and 724,000.  If those gains are genuine, they will follow through to this month now that the holidays have ended.  Overall, though, I liked this edition.  The turtle took a smallish, but clear, step forward.