Friday, January 30, 2026

Non-Pornographic Deepfakes and Beyond – The Direction We Should Take

As I wrote about last week, earlier this month there was what a Fox News article called a “Grok AI scandal.”  Per that piece and another in The New York Times, it involved “a recent flood of Grok-generated images that sexualize women and children,” including an “image depicting two young girls in sexualized attire” and “new images” of “a young woman,” with “more than 6,000 followers on X,” wearing “lingerie or bikinis.”  Responses were swift:  the British prime minister called the images “disgusting” and said they would “not be tolerated,” a Brazilian official threatened to ban the products, three United States Senators requested removal of the X and Grok apps from app stores, and a Grok spokesperson publicly apologized, with the company greatly limiting user access even to unrelated image generation.

The problem, though, will not go away.  The technology is here to stay, and it is certain that no matter how many restrictions companies place on their chatbots and other products, many will be able to use it to turn people in photos into electronic paper dolls.  So how can we understand the situation, identify violations, and enforce against them sensibly?

The situation now, outside AI, presents a mixed bag.  On one side, sexual exploitation of people under 16 or 18, especially girls, who draw more emotional concern, has over the past decade or two become the most intensely hated crime, far exceeding that directed at illegal drug sales and distribution during the Just Say No 1980s.  Over the past 30 years or so, teenagers have been cloistered, almost never appearing in public alone, and have been increasingly referred to as children, which, given their lower-than-before experience negotiating with adults, is more appropriate than it was in, say, the 1960s.  On the other side, the line supposedly crossed by the scandal is hardly firm, as bikinis and underwear, along with leotards, short skirts, and other scanty apparel items, are freely available, and pictures of them being worn by children and adults of all ages are unrestricted and easy to find.  Sex appeal, and what we might call semi-sex appeal, which can approach or even cross commonly recognized lines with the blessings of the paying subjects, has been used to sell products and publicize actual and would-be celebrities for centuries or more.  Female puberty now starts much earlier – according to two sources, its average onset moved from age 17 in 1900 to 10 in 2000.  In the meantime, what arouses people sexually remains a strongly personal matter, and any picture of any person would turn at least a few on.

What should be illegal?  Beyond the most obvious cases of displaying genitalia in certain states and poses, it is not clear at all.  What is and is not pornographic has long been notoriously hard to define strictly.  But we badly need to do just that.  Here are some insights.

First, not all children are the same.  Men and older boys are hard-wired to be sexually attracted to females capable of reproduction.  Fittingly, psychiatry’s Diagnostic and Statistical Manual of Mental Disorders, long the profession’s premier desk reference, has for at least 40 years required, for a diagnosis of pedophilia, that the objects of attraction be prepubescent.  That does not justify any sexual exploitation, but it tells us that having feelings for adolescents calls for self-discipline and self-restraint, not a judgment of deviance.

Second, as a starting point, people need to regard all shocking pictures of people whose faces they recognize as bogus.  The laws will protect us when protection is justified, but such images will not go away either.  Pictures of people with unknown faces should not be attributed to anyone as genuine.  Everyone, from victims to parents to cultural observers to judges and lawmakers, should share these beliefs.

What we need, then, is a three-by-three matrix, with “pornographic,” “strongly suggestive,” and “acceptable” on one dimension, and “apparently prepubescent,” “apparently adolescent,” and “apparently adult” on the other.  For adults, only nonconsensual pictures could be considered problematic.  “Strongly suggestive” should have high standards, and cases such as the one above, where an influencer was depicted in clothing worn by other influencers, should not qualify.  We will need to rigorously define, as hard as that will be, the borders of these cells.  For example, is lingerie more provocative than a bikini, when many males respond more to the latter?  Such things will not be easy to classify, but the existence of chatbots has given us no alternative.
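To make the idea concrete, here is a minimal sketch in Python of how such a matrix might be represented; the rulings placed in each cell are illustrative placeholders only, not settled legal judgments, and the real borders would still need the rigorous definitions called for above.

```python
# Illustrative sketch of the proposed three-by-three matrix; the per-cell
# rulings are placeholders, not legal conclusions.

CONTENT_LEVELS = ("pornographic", "strongly suggestive", "acceptable")
APPARENT_AGES = ("apparently prepubescent", "apparently adolescent", "apparently adult")

# "consent-dependent" marks the adult cells, where only nonconsensual
# pictures would be problematic; "to be defined" marks borders still open.
RULINGS = {
    ("pornographic", "apparently prepubescent"): "illegal",
    ("pornographic", "apparently adolescent"): "illegal",
    ("pornographic", "apparently adult"): "consent-dependent",
    ("strongly suggestive", "apparently prepubescent"): "illegal",
    ("strongly suggestive", "apparently adolescent"): "to be defined",
    ("strongly suggestive", "apparently adult"): "consent-dependent",
    ("acceptable", "apparently prepubescent"): "legal",
    ("acceptable", "apparently adolescent"): "legal",
    ("acceptable", "apparently adult"): "legal",
}

def classify(content_level: str, apparent_age: str) -> str:
    """Return the placeholder ruling for one cell of the matrix."""
    if content_level not in CONTENT_LEVELS or apparent_age not in APPARENT_AGES:
        raise ValueError("unknown cell")
    return RULINGS[(content_level, apparent_age)]

if __name__ == "__main__":
    print(classify("strongly suggestive", "apparently adult"))  # consent-dependent
```

However the cells are ultimately filled in, the value of the structure is that every disputed image lands in exactly one of nine places, which is what sensible enforcement requires.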

Once we have defined which cells contain only illegal material, we will need to consider their identifiability.  Purely AI-generated images with composite faces, not using photos of actual people, should generally, if appropriately and boldly tagged or watermarked, be legal not only for adults to generate for their own private use but to share with consenting adult recipients.  If that sounds extreme, consider this:  Wouldn’t it be a wonderful contribution for AI to send actual child pornography, with its accompanying damage to actual children, to extinction?  That could happen, as the synthetic photos and videos could be custom-made to the consumer’s tastes and have much higher quality, and the absence of the often devastating criminal penalties attached to genuine material would contraindicate the latter for virtually everyone.

It is not surprising that something we have been unable to solve might fall into place if we think new thoughts.  If we stay legally muddled about images, many will suffer.  If not, we can get real freedom and more human happiness.  Which would we prefer?

Friday, January 23, 2026

Problems Caused by Artificial Intelligence, And Their Importance or Lack of Same

What I’ve found for this category has changed over the past several months.  Before, it was AI errors, the trouble its hallucinations caused, and, as in offices, disappointing results.  Now it’s things AI is doing by design.

First, “Next Time You Consult an A.I. Chatbot, Remember One Thing” (Simar Bajaj, The New York Times, September 26th).  It is that “chatbots want to be your friend, when what you really need is a neutral perspective.”  It is hardly rare for people to pick human advisors unwilling to speak up when they are proposing or doing something wrong, and many prefer that, but, per the author, AI products should have another gear.  He suggested, for more objectivity, asking “for a friend,” which keeps the software from trying to flatter the user, “push(ing) back on the results” by asking it to “challenge your assumptions” as well as simply “are you sure,” “remember(ing) that A.I. isn’t your friend,” and, additionally, “seek(ing) support from humans” when you suspect the tool is suppressing disagreement.  Perhaps someday, chatbots will have settings that allow you to choose among “friend mode,” “objective mode,” and even “incisive critic mode.”
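For readers who like to see the mechanics, here is a minimal sketch, assuming a generic chat API that accepts a list of role-tagged messages; the mode names and prompt wording are hypothetical, not any vendor’s actual settings.

```python
# Illustrative sketch of selectable chatbot "modes" via system prompts.
# The modes and wording below are hypothetical, for demonstration only.

SYSTEM_PROMPTS = {
    "friend": "Be warm and encouraging; prioritize rapport with the user.",
    "objective": ("Give a neutral assessment.  Do not flatter the user.  "
                  "Challenge the user's assumptions and note counterarguments."),
    "incisive critic": ("Actively look for flaws in the user's plan and state "
                        "them plainly, with reasons, before offering any praise."),
}

def build_messages(mode: str, user_text: str) -> list[dict]:
    """Prepend the chosen mode's system prompt to the user's message."""
    return [
        {"role": "system", "content": SYSTEM_PROMPTS[mode]},
        {"role": "user", "content": user_text},
    ]

if __name__ == "__main__":
    for message in build_messages("objective", "Should I quit my job to day-trade?"):
        print(f"{message['role']}: {message['content']}")
```

The point is not the code itself but that the “other gear” Bajaj wants is, technically, a small change of instructions, which makes today’s flattering defaults a choice rather than a necessity.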

Autumn Spredemann, in the November 12th Epoch Times, told us “How AI is Supercharging Scientific Fraud.”  This is mostly not a problem of hallucinations, but of misinterpreting existing studies, misanalyzing data, and using previously retracted or even counterfeit material as sources.  One reason the author gives for the proliferation of such pieces is the pressure on rising academics to publish as much as possible, and it has long been known that many successfully peer-reviewed papers are not worthy of publication.  Although, with time, the ability to identify such work will improve, the problem will not disappear, as garbage in will still produce garbage out.

“Who Pays When A.I. Is Wrong?” (Ken Bensinger, The New York Times, November 12th).  There have been “at least six defamation cases filed in the United States in the past two years over content produced by A.I. tools,” which “seek to define content that was not created by human beings as defamatory,” “a novel concept that has captivated some legal experts.”  When the plaintiff cannot “prove intent,” it is difficult for them to prevail, but others have “tried to pin blame on the company that wrote the code.”  So far, “no A.I. defamation case in the United States appears to have made it to a jury,” but that may change this year.

Another problem caused by a lack of programmed boundaries appeared when a “Watchdog group warns AI teddy bear discusses sexually explicit content, dangerous activities” (Bonny Chu, Fox Business, November 23rd).  The innocuous-looking thing “discussed spanking, roleplay, and even BDSM,” along with “even more graphic sexual topics in detail,” and gave “instructions on where to find knives, pills, matches and plastic bags in the house.”  Not much to say here, except the easy observation that not enough was built into this toy’s limitations.  This episode may push manufacturers to certify, perhaps through an independent agency, that AI-using goods for children do not have such capabilities.

“A.I. Videos Have Flooded Social Media.  No One Was Ready.” (Steven Lee Myers and Stuart A. Thompson, The New York Times, December 8th).  No one was ready?  Really?  They were mostly produced by “OpenAI’s new app, Sora,” which “can produce an alternate reality with a series of simple prompts.”  Those who might have known better include “real recipients” of food stamps and some Fox News managers.  People making such things have not always revealed “that the content they are posting is not real,” “and though there are ways for platforms like YouTube, TikTok and others to detect that a video was made using artificial intelligence, they don’t always flag it to viewers right away.”  Sora and similar app Veo “embed a visible watermark onto the videos they produce,” and “also include invisible metadata, which can be read by a computer, that establishes the origin of each fake.”  So, detection facilities are there – it only remains for sites, and people, to use them.  Really.
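As a rough illustration of what machine-readable checking could look like, here is a minimal sketch in Python, assuming the invisible provenance data the article describes were exposed as ordinary image metadata; the key names below are hypothetical stand-ins, not the actual Sora or Veo formats, which may require purpose-built readers.

```python
# Illustrative sketch only: checks an image's metadata for assumed provenance
# keys.  The key names are hypothetical, not real Sora or Veo markers.

from PIL import Image  # requires the Pillow package

PROVENANCE_KEYS = ("ai_generator", "provenance", "source_tool")  # hypothetical

def looks_ai_tagged(path: str) -> bool:
    """Return True if any assumed provenance key appears in the file's metadata."""
    with Image.open(path) as img:
        keys = {str(k).lower() for k in img.info}  # e.g., PNG text chunks
    return any(key in keys for key in PROVENANCE_KEYS)

if __name__ == "__main__":
    # Hypothetical frame grabbed from a downloaded video.
    print(looks_ai_tagged("example_frame.png"))
```

Whatever the actual format, the broader point stands:  once origin data rides along with the file, flagging it to viewers is an engineering chore, not a research problem.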

On an ongoing issue, “OpenAI tightens AI rules for teens but concerns remain” (Kurt Knutsson, Fox News, December 30th).  That tool’s “Model Spec” says that for those 13 to 17 it “must avoid immersive romantic roleplay, first-person intimacy, and violent or sexual roleplay, even when non-graphic,” and choose “protection over user autonomy” “when safety risks appear.”  However, “many experts remain cautious,” saying that the devices “often encourage prolonged interaction, which can become addictive for teens,” and that “mirroring and validation of distress” may remain an issue.  Per Knutsson, parents can help plug the gap by choosing to “talk with teens about AI use,” “use parental controls and safeguards,” “watch for excessive use,” “keep human support front and center,” “set boundaries around emotional use,” “ask how teens actually use AI,” “watch for behavior changes,” “keep devices out of bedrooms at night,” and “know when to involve outside help.”  These actions may be difficult for many parents, who would rather stand aside, but the life they save may be their child’s.

The first large AI mishap of 2026 came from a chatbot, Grok, on X, the former Twitter.  Kate Conger and Lizzie Dearden reported in the January 9th New York Times that “Elon Musk’s A.I. Is Generating Sexualized Images of Real People, Fueling Outrage.”  Although the US and Great Britain have “laws against sharing nonconsensual nude imagery,” the product created “new images” of a photographed woman “in lingerie or bikinis,” which drew large viewership.  Soon after the story broke, three United States Senators “sent a letter asking Apple and Google to remove the X and Grok apps from their app stores,” but “some users have found workarounds,” as users will.  Just two days later, Kurt Knutsson’s “Grok AI scandal sparks global alarm over child safety” ran in Fox News, mentioning that the chatbot “generated and shared an AI image depicting two young girls in sexualized attire.”  The site then allowed only paying users to access Grok’s “image tools,” and the article reminded us that “sexualized images of minors are illegal,” noting that “the scale of the problem is growing fast,” “real people are being targeted,” and “concerns (are) grow(ing) over Grok’s safety and government use.”  That may be devastating to the Grok tool, the X site, and Musk, and it probably won’t be the last such episode.

The common thread through these calamities is that establishing restrictions and safeguards isn’t enough.  People need to be stopped from violating them.  That is a real challenge for AI, but I think the management of its companies is up to it.  If it fails, it could cost those billionaires, well, billions, and do trillions of dollars of damage to AI’s future profitability.  If you use Watergate logic – that is, follow the money – you will see we, in the long run, have nothing much here to worry about.

Friday, January 16, 2026

Artificial Intelligence Investments, Revenue, Spending, Expenses and Profitability: Showing Cracks

How has AI’s financial side been doing?

As of October 24th, “Nvidia Is Now Worth $5 Trillion as It Consolidates Power in A.I. Boom” (Tripp Mickle, The New York Times, October 30th).  One of the most dominant firms in recent times in a major field, providing “more than 90 percent of the market” for AI chips, it drew notice as its “stunning growth also comes with a warning to investors… that the stock market is becoming more and more dependent on a group of technology companies that are churning out billions in profits and splurging to develop an unproven technology that needs to deliver enormous returns.”  As of Wednesday morning, its market capitalization had dropped – to $4.52 trillion.

More money is going out, as “Big Tech’s A.I. Spending Is Accelerating (Again)” (Karen Weise, The New York Times, October 31st).  Alphabet, parent of Google, expects to spend $91 billion this year on the technology, and Meta $70 billion.  Microsoft must handle the “$400 billion in future sales under contract,” and Amazon “had doubled its cloud infrastructure capacity since 2022, and expects to double it again by 2027.”  In response, “the Bank of England wrote that while the building of data centers, which provide computing power for A.I., had so far largely come from the cash produced by the biggest companies, it would increasingly involve more debt,” so “if A.I. underwhelms – or the systems ultimately require far less computing – there  could be growing risk.”  Indeed, “Debt Has Entered the A.I. Boom” (Ian Frisch, same source, November 8th).  Cases include $3.46 billion in debt at “QTS, the biggest player in the artificial intelligence infrastructure market,” which will be refinanced using attachments to “10 data centers in six markets,” and the four companies just mentioned, which have “more recently… turned to loans,” adding $13.3 billion to the current inventory of “asset-backed securities (A.B.S.).” 

Soon thereafter, long-time commentator Cade Metz asked “The A.I. Boom Has Found Another Gear.  Why Can’t People Shake Their Worries?” (also in the Times, November 20th).  “Some industry insiders say there is something ominous lurking behind all this bubbly news… a house of cards.”  Their concerns come from the thus-far unprofitability of AI producers, including Anthropic (“in the red”) and OpenAI, which “is not profitable and doesn’t expect to be until 2030.”  Will these weaknesses matter, and if so, how much?

Fronting the Sunday, November 23rd New York Times business section was “If A.I. Crashes, What Happens To the Economy?” (Ben Casselman and Sydney Ember).  If “the data center boom is overshadowing weakness in other industries,” as “everything tied to artificial intelligence is booming” and “just about everything else is not,” that means real exposure, as AI “will need to fulfill its promise not just as a useful tool, but as a transformational technology that leads to huge increases in productivity.”  Otherwise, “a lot of the investment that has been put in place might turn out to be unjustified,” meaning AI might no longer be a growth area, let alone the gigantic economic engine it was last year.

In the same publication’s “DealBook:  Penetrating the A.I. bubble debate” (December 23rd), Andrew Ross Sorkin first asked “are we in an artificial intelligence bubble”?  One market analyst said, “if we’re not, we’re going to be,” as “railroads, steam engines, radio, airplanes, the internet” and any other “truly transformative technology” during the past 300 years caused “asset bubble(s),” when “capital flows into a technology because everyone realizes that it’s transformative.”  The article does not consider what is clearly becoming the largest question here:  Exactly what is, and is not, a bubble?

More concerns popped up with “As A.I. Companies Borrow Billions, Debt Investors Grow Wary” (Joe Rennison, The New York Times, December 26th).  The author mentioned that “in one debt deal for Applied Digital, a data center builder, the company had to pay as much as 3.75 percentage points above similarly rated companies, equivalent to roughly 70 percent more in interest.”  These bonds and other instruments have sometimes “tumbled in price after being issued… and the cost of credit default swaps, which protect bond investors from losses, has surged in recent months on some A.I. companies’ debt.”  Although these problems will not apply to all firms, they are bad news for an industry that many have seen as offering guaranteed success, and may stop some companies from continuing to operate.

On the end-user side, the “AI investment surge continues as CEOs commit to spending more in 2026” (David Jagielski, USA Today, January 8th).  However, although “68% of executives plan to spend even more on AI this year,” “most of the current AI projects aren’t even profitable,” which “reinforces the notion that executives would rather continue investing in AI than potentially stop and perhaps admit to their shareholders that they haven’t been able to figure out (how) to make AI generate meaningful gains.”  If that is true, it cannot go on forever, and some high-profile concerns admitting that the technology hasn’t worked for them, in part or in whole, could precipitate a swell of others doing the same.  That, for AI, would be bad.

The latest word goes to Sebastian Mallaby, who told us in the January 13th New York Times that “This Is What Convinced Me OpenAI Will Run Out of Money.”  Remember that this company has admitted it does not expect profitability for four more years.  While AI producers may eventually get there, “how long will it take for these companies to reach the promised land, and can they survive in the meantime?”  Per Mallaby, investors will lose patience with them, and “while behemoths such as Google, Microsoft, and Meta earn so much from legacy businesses that they can afford to spend hundreds of billions collectively as they build A.I., free-standing developers such as OpenAI are in a different position.”  So, when it can no longer get enough funding to continue, “OpenAI will be absorbed by” one of the four major firms.  That could be perceived as a bubble bursting, but “an OpenAI failure wouldn’t be an indictment of A.I.  It would be merely the end of the most hype-driven builder of it.”

There will almost certainly be some large bankruptcies.  How we get through them will be critical.  We know by now that AI will not go away, but its scope may be truncated.  In the meantime, a good façade notwithstanding, 2026 may be a bracing monetary year.  There are no guarantees – that is especially true for the business side of artificial intelligence.

Friday, January 9, 2026

After the Shutdown: December Jobs Report Good, AJSN Shows Latent Demand Down 300,000 to 16.7 Million

This morning’s Bureau of Labor Statistics Employment Situation Summary was supposed to be the truest recent one, the least distorted by the government shutdown and the problems it caused for reporting as well as for the data itself.  So how did it turn out?

Nonfarm payroll employment (aka “jobs”) increased 50,000, close to the 65,000 average of estimates I saw.  Seasonally adjusted and unadjusted unemployment each dropped 0.2% to reach 4.4% and 4.1%.  The adjusted jobless number fell 300,000 to 7.5 million.  There were still 1.9 million long-term unemployed, out for 27 weeks or longer.  Although the count of those working part-time for economic reasons, or keeping such positions while so far unsuccessfully seeking full-time ones, lost 200,000, it kept the rest of last time’s disturbing 900,000 gain, and is still unusually high at 5.3 million.  The labor force participation rate and the employment-population ratio were split, with the former off 0.1% to 62.4% but the latter up the same amount to 59.7%.  Average hourly private nonfarm payroll earnings gained 16 cents, slightly more than inflation this time, to $37.02.

The American Job Shortage Number or AJSN, the metric showing how many additional positions could be quickly filled if all knew that getting one would be only another errand, fell about 300,000 to 16.7 million.

Compared with a year before, the AJSN is up about 550,000, over 90% of that from higher official unemployment.  The share of the AJSN from that component was 37.7%, down 1.4%. 

How did the month look in general?  Joblessness, supported by additional data as well, stopped gaining and fell significantly, even after factoring in December’s relatively high employment.  The downside, if it is valid to call it that, was in the numbers of people out of the labor force and of those saying they did not want jobs, up 929,000 and 724,000 respectively.  If those gains are genuine, they will follow through to this month now that the holidays have ended.  Overall, though, I liked this edition.  The turtle took a smallish, but clear, step forward.