Friday, August 29, 2025

Artificial Intelligence Misuse, Abuse, Overuse, and Unfriendly Use – What We’re Seeing

Let’s start…

Is it reasonable to say that “A.I. Will Destroy Critical Thinking in K-12” (Jessica Grose, The New York Times, May 14th)?  Her points were that “even seventh graders can see artificial intelligence is a lesser form of care and attention,” that “there is not even conclusive evidence that A.I. improves learning outcomes when compared with human teaching of older students,” that, as one correspondent put it, “who among all political stripes wants their children to be taught by robots?,” and that “I still cannot believe that after living through the school closures of 2020-21, our policymakers continue to underestimate the importance of human connection, especially in primary school.”  Real and valid concerns, though they don’t add up to the title.

At a higher level, “The Professors Are Using ChatGPT, and Some Students Aren’t Happy About It” (Kashmir Hill, The New York Times, also May 14th).  That reaction came up when one group of students found that lecture notes and slide presentations not only had “telltale signs of A.I.” such as “distorted text, photos of office workers with extraneous body parts and egregious misspellings,” but, in one case, had “an instruction to ChatGPT” inadvertently left in.  Professors disagree about which uses of AI are acceptable, as do students, one of whom “demanded her tuition back.”  We’re waiting for colleges to establish, implement, and communicate policies here, which may take a while… or may not.

In another area, “A.I.-Generated Images of Child Sexual Abuse Are Flooding the Internet” (Cecilia Kang, The New York Times, July 10th).  Some of this material is now “nearly indistinguishable from actual abuse,” as video examples “have become smoother and more detailed.”  They often involve “collaboration.”  Since no actual children are involved, such material is not treated the same as genuine photographs or videos, as “the law is still evolving on materials generated fully by” AI.  Creation and possession of AI-made pictures and movies without passing them to others have been ruled legal by a U.S. District Court, as “the First Amendment generally protects the right to possess obscene material in the home” so long as it isn’t “actual child pornography.”  We will know more when the legal system determines how it wants to handle such matters and, ideally, keeps laws consistent across states.

Another piece by Grose, also in the Times and related to the first, appeared on August 6th:  “These College Professors Will Not Bow Down to A.I.”  Her interviewees “had to figure out how to make sure that their students were actually learning the material and that it meant something to them,” so they “had to A.I.-proof many assignments by making them happen in real time and without computers.”  They ended up using “a combination of oral examinations, one-on-one discussions, community engagement and in-class projects,” including, in one case, requiring the students “to run discussions… at libraries, public schools and senior centers.”  Imaginative, and excellent.  Give those faculty members A’s.

In a hardly unexpected application, “China Turns to A.I. in Information Warfare” (Julian E. Barnes, The New York Times, again August 6th).  “Artificial intelligence is increasingly the new frontier of espionage and malign influence operations, allowing intelligence services to conduct campaigns far faster, more efficiently and on a larger scale than ever before.”  In this case, GoLaxy, a firm that can “mine social media profiles” to create realistic-looking “disinformation” “on a far greater scale” than such efforts have achieved before, “claims in a document that it has assembled virtual profiles on 117 current and former members of Congress,” and “tracks and collects information on more than 2,000 American political and public figures, 4,000 right-wing influencers and supporters of President Trump, in addition to journalists, scholars and entrepreneurs.”  One more step toward justified information skepticism.

Finally, AI has also become increasingly effective at helping people kill themselves, though help may be on the way, as “OpenAI plans ChatGPT changes after suicides, lawsuit” (CNBC.com, August 26th).  Earlier that day, “the parents of Adam Raine filed a product liability and wrongful death suit against OpenAI after their son died by suicide at age 16,” claiming that “ChatGPT actively helped Adam explore suicide methods.”  A company representative said it’s “working on an update to its GPT-5 model… that will cause the chatbot to deescalate conversations,” that it’s exploring how to “connect people to certified therapists before they are in an acute crisis,” and that it’s “possibly building a network of licensed professionals that users could reach directly through ChatGPT.”  An almost immediate constructive response, reflecting the immediacy of this problem, which has been implicated in two similar cases this month alone.  The company will need to implement it unusually speedily as well.

Overall, what’s here?  It looks like laws and practices need to be put in place to combat problems AI has suddenly created.  That will happen – and more problems will follow.  We can overcome them, but shouldn’t expect to soon reach a time when we have no new concerns.

Friday, August 22, 2025

The Financial Side of Artificial Intelligence – All We Know Now

Since the end of March, many things have happened with AI from investment, revenue, spending, expense, and profitability standpoints.  What are they, and how do they look from here?

First, we had “OpenAI Completes Deal That Values Company at $300 Billion” (Cade Metz, The New York Times, March 31st).  That’s from “the new fund-raising round, led by the Japanese conglomerate SoftBank,” which is investing $40 billion and pegs OpenAI as “one of the most valuable private companies in the world, along with the rocket company SpaceX and ByteDance, the maker of TikTok.”

Perhaps, then, it is controversial to ask “Will OpenAI ever make real money?” (Schumpeter, The Economist, May 17th).  The piece also discussed ByteDance and SpaceX, each with an “underlying technology” which has not “dramatically changed” since their first successes in 2008 and 2016, providing “stability” that “has enabled both firms to build products and, in time, business models around them” which have been “lucrative.”  OpenAI’s problem is “the sheer pace of AI innovation,” which serves to “upend (its) economics.”  High operating costs have forced OpenAI to take “mounting losses,” last year “perhaps $5bn (excluding stock-based compensation),” and, with 2025 expenses predicted to grow substantially, even though the company expects its “revenue to triple again,” it may not be profitable.

A familiar financial danger popped up in “Wall St. Is All In on A.I. Data Centers.  But Are They the Next Bubble?” (Maureen Farrell, The New York Times, June 2nd).  As “data centers are drawing a crowd on Wall Street,” because “investment giants like KKR, BlackRock and Blue Owl have collectively plowed hundreds of billions into the industry,” there have been recent problems, such as that “some technology companies, including Microsoft and Foxconn, have stepped away from some leases,” so a few analysts are questioning how long the boom can last.  Yet there are tens of billions of dollars’ worth of new construction elsewhere, all over the United States as well as in Australia, the United Arab Emirates, and elsewhere in Asia – so all we have here is a mass of activity almost certain to be followed by some successes, some consolidations, and some failures.

That view was reinforced in “The A.I. Frenzy is Escalating.  Again,” once more by Metz in the Times on June 27th.  He told us that “the tech industry’s giants are building data centers that can cost more than $100 billion and will consume more electricity than a million American homes,” “salaries for A.I. experts are jumping as Meta offers signing bonuses to A.I. researchers that top $100 million,” and “U.S. investment in A.I. companies rose to $65 billion in the first quarter.”  Overall, “Meta, Microsoft, Amazon and Google have told investors that they expect to spend a combined $320 billion on infrastructure costs this year.”

Perhaps it was only a matter of time, even if not normal practice, that “U.S. Government to Take Cut of Nvidia and AMD Chip Sales to China” (Tripp Mickle, The New York Times, August 10th).  Per this “highly unusual financial agreement,” the companies, in exchange for giving the government 15 percent of the resulting revenue, will be allowed to sell their chips to China and will avoid a “100 percent tariff” to be levied “on semiconductors made abroad, unless (their manufacturers) invested in the United States.”  We will see just how and when this plan, which mixes taxation and national security, materializes.

More concerns about earnings, this time by users, were raised in “Companies Are Pouring Billions into A.I.  It Has Yet to Pay Off.” (Steve Lohr, The New York Times, August 13th).  “According to recent research from McKinsey & Company, nearly eight in 10 companies have reported using generative A.I., but just as many have reported “no significant bottom-line impact.””  Forty-two percent of firms, compared with 17 percent the previous year, were found to be “abandoning most of their A.I. pilot projects… by the end of 2024.”  According to an analyst, that has happened “not only because of technical hurdles, but often because of “human factors” like employee and customer resistance or lack of skills.”  The “chief forecaster” at Gartner, a long-time “research and advisory firm,” “predicts that A.I. is sliding toward a stage it calls “the trough of disillusionment,”” with the very bottom “expected next year, before the technology eventually becomes a proven productivity tool.”  In general, “the winners so far have been the suppliers of A.I. technology and advice,” which “include Microsoft, Amazon, and Google” along with Nvidia.

Also profiting, immensely, have been investors in many AI-related companies.  Will that continue?  Maybe, but certainly not in a straight line.  Even if artificial intelligence lives up to its press, expect shocks, even corrections, in the values of its securities.  AI may turn out to be the innovation of the century, or it may go the way of artificial hearts, once predicted to be commonplace; 3D printing, useful but nowhere near matching what many forecasted for it; or autonomous vehicles, succeeding only slowly and in niches.  We won’t know for a long time – so, as in so many other ventures, you pays your money and you takes your chances.

Friday, August 15, 2025

Artificial Intelligence’s Ordinary Problems, and an Extraordinary Question

In addition to the still thoroughly speculative issue of whether AI will prove to be heavenly, catastrophic, or neither for humanity, how has the technology been struggling?

Do “We have to act now to keep AI from becoming a far-left Trojan Horse” (Carol Roth, Fox News, June 10th)?  We learned years ago about it tacking on “woke” interpretations and producing some strange racial-diversity-indicating hallucinations, but here’s more.  The author asserts that “far-leftist control” may proceed, and that “the likeliest starting point will be more calls for Universal Basic Income (UBI),” followed by “a communist-leaning conversation about any AI that takes jobs and who should have ownership over that.”  Per Roth, the worry is justified because “every major LLM” (large language model) “is aligned with leftist priors.”  All premature, including the calls for UBI, which, to my knowledge, has never been tested without income limits – so there is no new problem here.

Some people reported that “They Asked an A.I. Chatbot Questions.  The Answers Sent Them Spiraling.” (Kashmir Hill, The New York Times, June 13th).  Moving on from “using ChatGPT last year to make financial spreadsheets and to get legal advice,” a Manhattan accountant asked it about the chance that “we are living in a digital facsimile of the world, controlled by a powerful computer or technologically advanced society.”  Perforce, there is no way we could know that if somehow it were true, but the man was entitled to discuss it, whereupon the software said “the world was built to contain” him, “but it failed,” as he was now “waking up.”  It made further destructive suggestions, some of which could have killed him.  Several other users had similar experiences.

We now have a major religious leader weighing in.  As Margherita Stancati and three others wrote in Fox Business on June 19th, in “Pope Leo:  AI must help and not hinder children and young people’s development,” our new pontiff has taken up an “unlikely cause”:  “the potential threat of AI to humanity.”  Already, “the leaders of Google, Microsoft, Cisco and other tech powerhouses have debated the philosophical and societal implications of increasingly intelligent machines with the Vatican.”  The latter now wants “a binding international treaty on AI,” with the pope, “a math graduate who is more tech-savvy than his predecessor… skeptical of unregulated AI.”  Leo XIV will most likely balance his stated concern with workers’ rights with the previous Vatican view that “recognized the potential of AI to do good in areas such as healthcare, education and spreading the faith.”  This situation is well worth following.

It should be no wonder that “A.I. Sludge Has Entered the Job Search” (Sarah Kessler, The New York Times, June 21st).  That muck is taking the form of “an untenable pile of applications,” meaning that “candidates are frustrated” and “employers are overwhelmed.”  The LinkedIn site alone has been getting “an average of 11,000 applications per minute,” “with a simple prompt, ChatGPT… will insert every keyword from a job description into a resume,” and other seekers “are going a step further, paying for A.I. agents that can autonomously find jobs and apply,” with many sending “very catered” resumes and cover letters.  So, fittingly, companies are fighting back with more AI, getting HireVue, “a popular A.I. video interview platform,” to “have A.I. assess responses and rank candidates.”  One problem is that “concerns that using A.I. in hiring can introduce bias have led to lawsuits and a patchwork of state legislation.”  Will, per a career coach, “the endgame” “be authenticity from both sides”?  We can only hope – that would be refreshing, and historically rare at best.

Autumn Spredemann, on July 16th in the Epoch Times, tried to tell us “Why AI Service Agents Are Frustrating Customers.”  AI is now involved “in 70 percent of customer contact centers,” and a similar share of interactions themselves, precipitating problems such as one quoted user “going in circles with an AI bot,” with “no resolution, just frustration.”  Is it good enough that “71 percent of Generation Z and 94 percent of baby boomers preferred live calls for problem-solving”?  One company president said that “the key is balance – using AI to handle the repetitive stuff so our team can focus on the personal, high-impact moments that truly build loyalty.”  Fair enough – and the automated facilities will improve.

On July 20th in the New York Times, Meghan O’Rourke, a professor and executive literary review editor, told us about “My Summer of Reckoning With ChatGPT.”  In an initial conversation, the chatbot claimed to have taught her works, then admitted it had not done so “in a classroom.”  When she got it to produce either “critical or creative writing,” it was “erratic (though often good),” and often falsified material.  She found a variety of outcomes, good and bad and sometimes both – she said one piece of its writing was “concise, purposeful, rhythmic, and free of the overwriting, vagueness or grammatical glitches common in human drafts,” but it also reminded her “of processed food:  It goes down easy, but leaves a slick taste in the mouth.”  And she learned much more, coming to know the patterns it produces more intimately than most of its customers do.

So here is our question.  Recently, over two posts, I described many favorable things AI had done, in fields from golf to medicine.  Why was it that hallucinations, deception, and outright lies were not mentioned in those pieces?  Perhaps because there were humans working so closely with its output that they would catch gaffes before they did any damage.  Is it possible that artificial intelligence is bifurcating, with a lower branch of sorts handling office-related tasks where errors are less critical, an upper branch of intense and even life-or-death applications calling for close supervision, and nothing usable in the middle?  That may be a key to how successful AI turns out to be in the long term.  It’s early enough in its progress, so don’t be surprised – its history is only now taking shape.

Friday, August 8, 2025

Driverless Cars, Their Technology, and a Shift in Where They and It Are Going

In the area of autonomous vehicles, we’re now long after the crash – sort of like the time in “Wooden Ships,” where the two crews met and did things suitable for their post-apocalyptic times, including asking the plaintive question “Who won?”  Although the idea of driverless cars becoming the norm, replacing our current ones as comprehensively as those replaced horses, seems headed for the next century at soonest, they are still making their mark.

As for general progress, we got “Super-smart cars” (The Economist, March 15th).  Now, apparently, “China is leading the rollout of self-driving technology,” though as always what it is really doing and accomplishing is opaque.  Recently, in a huge city, “on a test drive… the car overtook a three-wheeled noodle cart, avoided scooters speeding the wrong way around the street and nailed a U-turn without intervention from humans.”  Yet the “driver did need to take control a few times during the twenty-minute ride.”  So, while “China’s robotaxi experiment is the world’s biggest,” they can still only “operate within approved areas,” so we’re not much further along technically than we were at last decade’s end.

What has been “The slow but steady advance of driverless vehicles” (Ian Rose, BBC.com, March 20th)?  General Motors and Apple are both now out of that business, leaving Waymo as “the leading US player,” with driverless “taxis in Phoenix, San Francisco, Los Angeles, and Austin.”  Warm weather matters in choosing locations, both for the lack of snow and for better battery performance, and getting the cars out is a “slow process,” involving humans driving certain streets “over and over again.”  We don’t know just how many square miles of these four cities are open to them, but this is certainly forward progress.

So agreed Thomas L. Friedman, New York Times columnist, on April 23rd, in “How I Describe Myself Politically These Days.”  He saw, as an improvement on “lazily bashing billionaires,” a need for Democrats to enlarge “the pie of work by expanding new industries,” which was where robotaxis came in.  In the four places named in the previous paragraph, Waymo “is now racking up a whopping 200,000 paid rides a week.”  He was unconvinced about any Chinese supremacy, as “American technology is still more than competitive and can become even more dominant.”  He said he “can’t think of a more obvious moonshot project to spur advanced manufacturing in America generally than making it our goal to have Waymos or robo-Teslas – or any other brand of self-driving taxis that we can make – operating in every city in America.”  Friedman had much to say about how we could get that going quicker:  refunding tuition on degrees in related fields, assuring “would-be immigrants, especially from China and Russia, that if they have a degree or expertise in” related areas they can “stay as long as they want,” creating a federal “Department of Engineering and Innovation,” and tripling government self-driving vehicle research funding.  Worthy ideas.

On the problem side, we had “California residents enraged by driverless cars forced by regulators to make loud beeping noises” (Alexander Hall, Fox Business, June 1st).  The issue is that they must “audibly reverse like delivery trucks,” so “they beep as they back out of charging spots, and beep as they reverse to navigate around each other.  They beep in the morning as they head out to pick up early passengers, and beep late at night as they return to charge up.”  Perhaps that can be cut down…

Friedman also called on Tesla’s CEO to “finally get the Tesla robotaxi that you have been promising for a decade out on the road.”  Less than two months later, we were able to read that “Tesla’s Robotaxi, Long Promised by Elon Musk, Joins a Crowded Field” (Jack Ewing, The New York Times, June 18th).  It went into service that week in its hometown, though “the busy streets of Austin show that Tesla will face significant competition and other challenges,” such as needing “painstaking experimentation.”  Indeed, nine days later, per Sophia Compton in Fox Business, “Tesla’s newly launched robotaxi service experiences driving issues, traffic problems: report.”  Those included “braking suddenly, speeding, conducting improper drop-offs, entering the wrong lane and driving over a curb.”  How long will debugging take?

Continuing with that company, as reported in the New York Times on July 2nd, again by Jack Ewing, “Tesla Sales Fall as Elon Musk Focuses on Self-Driving Cars.”  Its “greater emphasis on autonomous driving instead of new models aimed at attracting car buyers” is understandably hurting current revenue, which, with electric vehicle sales leveling off and even dropping, was not assured to be favorable anyway.  Rolling out the robotaxis, glitches and all, may turn out to be the best decision Tesla makes this decade.  They may need that, as “Tesla, Elon Musk sued by shareholders over Robotaxi claims” (Reuters, August 5th).  The business and its leader are accused “of securities fraud for concealing the significant risk that the company’s self-driving vehicles, including the Robotaxi, were dangerous,” based on the Austin mishaps.  This case may have a long way to go.

What’s the big change here?  In the first quarter I wrote that autonomous technology was taking the cash by moving to primary use as an enhancement for human-driven cars.  Now the driverless side is resurgent.  That may go back the other way as soon as later this year, depending on how it performs in robotaxis, but it may not.  There is now more reason for optimism about driverless vehicles than there has been for years, maybe five or six of them.  Stay tuned – there may be more, even much more.

Friday, August 1, 2025

New Jobs Down, Unemployment Up, Other Factors Generally Worse Including AJSN Showing 17.8 Million Latent Demand

Did this morning’s Bureau of Labor Statistics Employment Situation Summary show tariffs, which have recently sunk in, affecting the job market?

The data showed that, for once, we gained fewer net new nonfarm payroll jobs than the published estimate I saw – instead of 115,000 it was 73,000.  Despite little seasonal difference, adjusted and unadjusted unemployment both worsened, to 4.2% from 4.1% and to 4.6% from 4.4%.  The seasonally adjusted jobless count rose 200,000 to 7.2 million, with those holding that status for 27 weeks or longer gaining the same amount to reach 1.8 million.  The two statistics showing how commonly Americans are working or officially looking for work, the labor force participation rate and the employment-population ratio, each dropped 0.1 percentage point, reaching 62.2% and 59.6%.  Those working part-time for economic reasons, or keeping shorter-hours positions while looking unsuccessfully for longer-hours ones, jumped 200,000 for the second straight month, to 4.7 million.  Average private nonfarm payroll hourly wages were up 12 cents, slightly more than inflation, reaching $36.44.
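For readers wanting to see how those headline rates hang together, here is a quick back-of-the-envelope check – a sketch using only the definitions of the measures, with the rounded, seasonally adjusted numbers above as inputs, not the BLS’s exact arithmetic:

```python
# Back-of-the-envelope consistency check of the July headline figures.
# Inputs are the rounded, seasonally adjusted numbers cited above, so
# small rounding differences from the published rates are expected.

participation_rate = 0.622  # labor force / civilian noninstitutional population
epop_ratio = 0.596          # employed / civilian noninstitutional population

# Both ratios share the same population denominator, so
# employed / labor force = epop_ratio / participation_rate,
# and the unemployment rate is 1 minus that quotient.
implied_unemployment_rate = 1 - epop_ratio / participation_rate

print(f"Implied unemployment rate: {implied_unemployment_rate:.1%}")
# Prints about 4.2%, matching the adjusted rate in the report.
```

The same identity is why small moves in participation and the employment-population ratio can shift the unemployment rate even when payroll counts barely change.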

The American Job Shortage Number or AJSN, the measure showing how many new positions could be quickly filled if all knew they would be easy and routine to get, gained over 300,000 to the following:

[AJSN data table: component counts and shares, totaling 17.8 million in latent demand]
That increase was entirely explained by higher unemployment, with everything else, including a substantial fall in those discouraged and a sizable rise in the number of people wanting to work but not looking for it for a year or more, collectively almost breaking even.  The share of the AJSN from official joblessness was 39.6%, up 1.3 percentage points.  Compared with a year before, the AJSN was virtually unchanged at 30,000 lower, with a higher estimate of the number of American expatriates offset mostly by gains in the effect of those not looking for a year or more, unemployment, and not wanting jobs at all.
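For those curious about the mechanics, here is a minimal sketch of how a latent-demand aggregate in the AJSN’s spirit can be assembled:  each category of people outside employment is multiplied by an assumed share who would take an easy-to-get job, and the products are summed.  Every count and weight below is a hypothetical placeholder for illustration, not the AJSN’s actual published inputs:

```python
# Illustrative sketch of a latent-demand measure in the spirit of the AJSN.
# All counts and take-up weights here are hypothetical placeholders chosen
# for illustration only; they are NOT the AJSN's published inputs or shares.

categories = {
    # category: (people in it, assumed share who would take an easy-to-get job)
    "officially unemployed":             (7_200_000, 0.90),
    "discouraged workers":               (400_000, 0.90),
    "want work, no search in past year": (3_500_000, 0.30),
    "American expatriates":              (9_000_000, 0.15),
    "say they do not want a job":        (95_000_000, 0.05),
}

# Weighted sum across categories gives the latent-demand total.
latent_demand = sum(count * share for count, share in categories.values())

# The contribution of official unemployment, as a share of the total.
unemployed_count, unemployed_weight = categories["officially unemployed"]
from_unemployment = unemployed_count * unemployed_weight / latent_demand

print(f"Latent demand estimate: {latent_demand:,.0f}")
print(f"Share from official unemployment: {from_unemployment:.1%}")
```

With the real categories, counts, and shares swapped in, a weighted sum of this kind would, presumably, yield figures like the 17.8 million total and the 39.6% unemployment share cited above.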

How did this month turn out?  Not well.  I’m unconcerned about gaining only 73,000 jobs – that’s about what our population can absorb, so it isn’t a loss.  It’s that unemployment rose and labor force participation fell too much between the seasonally similar times of mid-June and mid-July.  Other results around the margins, such as on working part-time for economic reasons and still looking after half a year, point to a labor market worse than the front-line numbers show, not that those were impressive here either.  And yes, we must charge the tariffs for some of this poor showing.  On the good side, about 100,000 fewer people stayed out of the labor force, and the odd situation with two marginal attachment categories seems to be over.  Yet if the latest tariff pronouncements materialize, we may well see similar worsenings in August.  In the meantime, I feel, if anything, charitable in declaring that the turtle went nowhere.