Friday, January 23, 2026

Problems Caused by Artificial Intelligence, And Their Importance or Lack of Same

What I’ve found for this category has changed over the past several months.  Before, it was AI errors, the trouble its hallucinations caused, and, as in offices, disappointing results.  Now it’s things AI is doing by design.

First, “Next Time You Consult an A.I. Chatbot, Remember One Thing” (Simar Bajaj, The New York Times, September 26th).  It is that “chatbots want to be your friend, when what you really need is a neutral perspective.”  It’s not rare for people to pick human advisors unwilling to speak up when they are proposing or doing something wrong, and many prefer that, but, per the author, AI products should have another gear.  His suggestions for getting more objectivity:  ask “for a friend,” which keeps the software from trying to flatter the user; “push back on the results” by asking it to “challenge your assumptions” or just saying “are you sure”; “remember that A.I. isn’t your friend”; and “seek support from humans” when you suspect the tool is suppressing disagreement.  Perhaps someday, chatbots will have settings that allow you to choose between “friend mode,” “objective mode,” and even “incisive critic mode.”
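For readers who reach chatbots through code rather than a chat window, the same “push back” tactic can be baked into the request itself.  Below is a minimal sketch, assuming the OpenAI Python client; the model name, prompt wording, and example question are illustrative assumptions of mine, not anything from Bajaj’s article.

```python
# A minimal sketch of the "challenge my assumptions" tactic described above.
# Assumes the OpenAI Python client (pip install openai); the model name and
# prompt wording are illustrative, not taken from the article.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = "I'm planning to quit my job and day-trade full time. Good idea?"

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "system",
            "content": (
                "Give a neutral, critical assessment. Challenge the user's "
                "assumptions, name the strongest counterarguments, and do not "
                "flatter or automatically agree."
            ),
        },
        {"role": "user", "content": question},
    ],
)

print(response.choices[0].message.content)
```

The same “are you sure” follow-up can be sent as a second user message in the same conversation, which is roughly what the article recommends doing by hand.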

Autumn Spredemann, in the November 12th Epoch Times, told us “How AI Is Supercharging Scientific Fraud.”  This is mostly not a problem of hallucinations, but of misinterpreting existing studies, misanalyzing data, and using previously retracted or even counterfeit material as sources.  One reason the author gives for the proliferation of such pieces is the pressure on rising academics to publish as much as possible, and it has long been known that many successfully peer-reviewed papers are not worthy of that.  Although, with time, the ability to identify such work will improve, the problem will not disappear, as garbage in will still produce garbage out.

“Who Pays When A.I. Is Wrong?” (Ken Bensinger, The New York Times, November 12th).  There have been “at least six defamation cases filed in the United States in the past two years over content produced by A.I. tools,” which “seek to define content that was not created by human beings as defamatory,” “a novel concept that has captivated some legal experts.”  When plaintiffs cannot “prove intent,” it is difficult for them to prevail, but others have “tried to pin blame on the company that wrote the code.”  So far, “no A.I. defamation case in the United States appears to have made it to a jury,” but that may change this year.

Another problem caused by a lack of programmed boundaries appeared when a “Watchdog group warns AI teddy bear discusses sexually explicit content, dangerous activities” (Bonny Chu, Fox Business, November 23rd).  The innocuous-looking thing “discussed spanking, roleplay, and even BDSM,” along with “even more graphic sexual topics in detail,” and provided “instructions on where to find knives, pills, matches and plastic bags in the house.”  Not much to say here, except the easy observation that this toy’s built-in limitations did not go nearly far enough.  This episode may push manufacturers to certify, perhaps through an independent agency, that AI-using goods for children do not have such capabilities.

“A.I. Videos Have Flooded Social Media.  No One Was Ready.” (Steven Lee Myers and Stuart A. Thompson, The New York Times, December 8th).  No one was ready?  Really?  They were mostly produced by “OpenAI’s new app, Sora,” which “can produce an alternate reality with a series of simple prompts.”  Among those taken in, who might have known better, were some Fox News managers and viewers who believed they were seeing “real recipients” of food stamps.  People making such things have not always revealed “that the content they are posting is not real,” “and though there are ways for platforms like YouTube, TikTok and others to detect that a video was made using artificial intelligence, they don’t always flag it to viewers right away.”  Sora and Google’s similar app Veo “embed a visible watermark onto the videos they produce,” and “also include invisible metadata, which can be read by a computer, that establishes the origin of each fake.”  So, detection facilities are there – it only remains for sites, and people, to use them.  Really.

On an ongoing issue, “OpenAI tightens AI rules for teens but concerns remain” (Kurt Knutsson, Fox News, December 30th).  That tool’s “Model Spec” says that, for those 13 to 17, it “must avoid immersive romantic roleplay, first-person intimacy, and violent or sexual roleplay, even when non-graphic,” and must choose “protection over user autonomy” “when safety risks appear.”  However, “many experts remain cautious,” saying that the devices “often encourage prolonged interaction, which can become addictive for teens,” and that “mirroring and validation of distress” may remain an issue.  Per Knutsson, parents can help plug the gap by choosing to “talk with teens about AI use,” “use parental controls and safeguards,” “watch for excessive use,” “keep human support front and center,” “set boundaries around emotional use,” “ask how teens actually use AI,” “watch for behavior changes,” “keep devices out of bedrooms at night,” and “know when to involve outside help.”  These actions may be difficult for many parents, who would rather stand aside, but the life they save may be their child’s.

The first large AI mishap of 2026 came from a chatbot, Grok, on the former Twitter.  Kate Conger and Lizzie Dearden reported in the January 9th New York Times that “Elon Musk’s A.I. Is Generating Sexualized Images of Real People, Fueling Outrage.”  Although the US and Great Britain have “laws against sharing nonconsensual nude imagery,” the product created “new images” of a photographed woman “in lingerie or bikinis,” which have seen large viewership.  Soon after the story broke, three United States Senators “sent a letter asking Apple and Google to remove the X and Grok apps from their app stores,” but “some users have found workarounds,” as users will.  Just two days later, Kurt Knutsson published “Grok AI scandal sparks global alarm over child safety” in Fox News, mentioning that that chatbot “generated and shared an AI image depicting two young girls in sexualized attire.”  The site then restricted Grok’s “image tools” to paying users; the article reminded us that “sexualized images of minors are illegal,” and noted that “the scale of the problem is growing fast,” “real people are being targeted,” and “concerns (are) grow(ing) over Grok’s safety and government use.”  That may be devastating to the Grok tool, the X site, and Musk, and it probably won’t be the last time.

The common thread through these calamities is that establishing restrictions and safeguards isn’t enough.  People need to be stopped from violating them.  That is a real challenge for AI, but I think the management of its companies is up to it.  If it fails, it could cost those billionaires, well, billions, and do trillions of dollars of damage to AI’s future profitability.  If you use Watergate logic – that is, follow the money – you will see we, in the long run, have nothing much here to worry about.

Friday, January 16, 2026

Artificial Intelligence Investments, Revenue, Spending, Expenses and Profitability: Showing Cracks

How has AI’s financial side been doing?

As of October 24th, “Nvidia Is Now Worth $5 Trillion as It Consolidates Power in A.I. Boom” (Tripp Mickle, The New York Times, October 30th).  One of the most dominant firms in recent times in a major field, providing “more than 90 percent of the market” for AI chips, it drew notice that its “stunning growth also comes with a warning to investors… that the stock market is becoming more and more dependent on a group of technology companies that are churning out billions in profits and splurging to develop an unproven technology that needs to deliver enormous returns.”  As of Wednesday morning, its market capitalization had dropped – to $4.52 trillion.

More money is going out, as “Big Tech’s A.I. Spending Is Accelerating (Again)” (Karen Weise, The New York Times, October 31st).  Alphabet, parent of Google, expects to spend $91 billion this year on the technology, and Meta $70 billion.  Microsoft must handle the “$400 billion in future sales under contract,” and Amazon “had doubled its cloud infrastructure capacity since 2022, and expects to double it again by 2027.”  In response, “the Bank of England wrote that while the building of data centers, which provide computing power for A.I., had so far largely come from the cash produced by the biggest companies, it would increasingly involve more debt,” so “if A.I. underwhelms – or the systems ultimately require far less computing – there could be growing risk.”  Indeed, “Debt Has Entered the A.I. Boom” (Ian Frisch, same source, November 8th).  Cases include $3.46 billion in debt at “QTS, the biggest player in the artificial intelligence infrastructure market,” which will be refinanced using attachments to “10 data centers in six markets,” and the four companies just mentioned, which have “more recently… turned to loans,” adding $13.3 billion to the current inventory of “asset-backed securities (A.B.S.).”

Soon thereafter, long-time commentator Cade Metz asked “The A.I. Boom Has Found Another Gear.  Why Can’t People Shake Their Worries?” (also in the Times, November 20th).  “Some industry insiders say there is something ominous lurking behind all this bubbly news… a house of cards.”  Their concerns come from the thus-far unprofitability of AI producers, including Anthropic (“in the red”) and OpenAI, which “is not profitable and doesn’t expect to be until 2030.”  Will these weaknesses matter, and if so, how much?

Fronting the Sunday, November 23rd New York Times business section was “If A.I. Crashes, What Happens To the Economy?” (Ben Casselman and Sydney Ember).  If “the data center boom is overshadowing weakness in other industries,” as “everything tied to artificial intelligence is booming” and “just about everything else is not,” that means real exposure, as AI “will need to fulfill its promise not just as a useful tool, but as a transformational technology that leads to huge increases in productivity.”  Otherwise, “a lot of the investment that has been put in place might turn out to be unjustified,” meaning AI might no longer be a growth area, let alone the gigantic economic engine it was last year.

In the same publication’s “DealBook:  Penetrating the A.I. bubble debate” (December 23rd), Andrew Ross Sorkin first asked, “are we in an artificial intelligence bubble?”  One market analyst said, “if we’re not, we’re going to be,” as “railroads, steam engines, radio, airplanes, the internet” and any other “truly transformative technology” during the past 300 years caused “asset bubble(s),” when “capital flows into a technology because everyone realizes that it’s transformative.”  The article does not consider what is clearly becoming the largest question here:  Exactly what is, and is not, a bubble?

More concerns popped up with “As A.I. Companies Borrow Billions, Debt Investors Grow Wary” (Joe Rennison, The New York Times, December 26th).  The author mentioned that “in one debt deal for Applied Digital, a data center builder, the company had to pay as much as 3.75 percentage points above similarly rated companies, equivalent to roughly 70 percent more in interest.”  These bonds and other instruments have sometimes “tumbled in price after being issued… and the cost of credit default swaps, which protect bond investors from losses, has surged in recent months on some A.I. companies’ debt.”  Although these problems will not apply to all firms, they are bad news for an industry that many have seen as offering guaranteed success, and may keep some companies from continuing to operate.

On the end-user side, the “AI investment surge continues as CEOs commit to spending more in 2026” (David Jagielski, USA Today, January 8th).  However, although “68% of executives plan to spend even more on AI this year,” “most of the current AI projects aren’t even profitable,” which “reinforces the notion that executives would rather continue investing in AI than potentially stop and perhaps admit to their shareholders that they haven’t been able to figure out how to make AI generate meaningful gains.”  If that is true, it cannot go on forever, and some high-profile concerns admitting that the technology hasn’t worked for them, in part or in whole, could precipitate a swell of others doing the same.  That, for AI, would be bad.

The latest word goes to Sebastian Mallaby, who told us in the January 13th New York Times that “This Is What Convinced Me OpenAI Will Run Out of Money.”  Remember that this company has admitted it does not expect profitability for four more years.  While AI producers may eventually get there, “how long will it take for these companies to reach the promised land, and can they survive in the meantime?”  Per Mallaby, investors will lose patience with them, and “while behemoths such as Google, Microsoft, and Meta earn so much from legacy businesses that they can afford to spend hundreds of billions collectively as they build A.I., free-standing developers such as OpenAI are in a different position.”  So, when they can no longer get enough funding to continue, “OpenAI will be absorbed by” one of the four major firms.  That could be perceived as a bubble bursting, but “an OpenAI failure wouldn’t be an indictment of A.I.  It would be merely the end of the most hype-driven builder of it.”

There will almost certainly be some large bankruptcies.  How we get through them will be critical.  We know by now that AI will not go away, but its scope may be truncated.  In the meantime, a good façade notwithstanding, 2026 may be a bracing monetary year.  There are no guarantees – that is especially true for the business side of artificial intelligence.

Friday, January 9, 2026

After the Shutdown: December Jobs Report Good, AJSN Shows Latent Demand Down 300,000 to 16.7 Million

This morning’s Bureau of Labor Statistics Employment Situation Summary was supposed to be the truest recent one, distorted the least by the government shutdown and the problems it caused for reporting as well as the data itself.  So how did it turn out?

Nonfarm payroll employment (aka “jobs”) increased 50,000, close to the 65,000 average of estimates I saw.  Seasonally adjusted and unadjusted unemployment each dropped 0.2% to reach 4.4% and 4.1%.  The adjusted jobless number fell 300,000 to 7.5 million.  There were still 1.9 million long-term unemployed, out for 27 weeks or longer.  Although the count of those working part-time for economic reasons, or maintaining such positions while so far unsuccessfully seeking full-time ones, lost 200,000, it kept the rest of last time’s disturbing 900,000 gain, and is still way high at 5.3 million.  The labor force participation rate and the employment-population ratio were split, with the former off 0.1% to 62.4% but the latter up the same amount to 59.7%.  Average hourly private nonfarm payroll earnings gained 16 cents, slightly more than inflation this time, to $37.02.

The American Job Shortage Number or AJSN, the metric showing how many additional positions could be quickly filled if all knew that getting one would be only another errand, declined to the following:

Compared with a year before, the AJSN is up about 550,000, over 90% of that from higher official unemployment.  The share of the AJSN from that component was 37.7%, down 1.4%. 
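For those curious about the mechanics, a latent-demand figure like the AJSN is built by taking each category of people who might take work and counting only the share of them likely to do so.  The sketch below is a minimal illustration of that idea; the category names echo the groups discussed in these reports, but the counts and weights are assumptions for demonstration only, not the AJSN’s published methodology or this month’s actual inputs.

```python
# Illustrative only: builds a latent-demand total from labor-market categories.
# The counts (in millions) and the "share who would take work" weights are
# assumptions for demonstration, NOT the published AJSN methodology or data.

components = {
    # category:                           (count_millions, assumed_share)
    "officially unemployed":              (7.5, 0.90),
    "discouraged workers":                (0.4, 0.90),
    "want work, not searched in a year":  (3.0, 0.80),
    "other marginally attached":          (1.2, 0.50),
    "say they do not want a job":         (95.0, 0.05),
}

latent_demand = 0.0
for name, (count, share) in components.items():
    contribution = count * share
    latent_demand += contribution
    print(f"{name:36s} {count:6.1f}M x {share:.2f} = {contribution:6.2f}M")

print(f"{'total latent demand':36s} {latent_demand:25.2f}M")
```

The real metric differs in its categories and weights, but the structure – a weighted sum over groups of people at varying distances from the labor force – is the point of the example.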

How did the month look in general?  Joblessness, supported by additional data as well, stopped gaining and fell significantly, even after factoring in the December effect of relatively high employment.  The downside, if it is valid to call it that, was that the numbers of people out of the labor force and of those saying they did not want jobs rose 929,000 and 724,000 respectively.  If those gains are genuine, they will follow through to this month now that the holidays have ended.  Overall, though, I liked this edition.  The turtle took a smallish, but clear, step forward.

Wednesday, December 31, 2025

Artificial Intelligence in 2026: A Hard Rain’s A‐Gonna Fall

For many reasons, AI may be heading for a storm.

This was a great year for the technology.  It absorbed tens of billions of dollars in spending, in the process accounting for, according to one estimate, fully half of the nation’s Gross Domestic Product increase.  The NASDAQ index, heavy on technology stocks and reacting greatly to AI events, rose 20% during the 52 weeks ending on December 29th’s early morning.  A large string of niche successes, from health care to robotics to shopping aids, have put AI in the news and in people’s lives.  Companies have generally done well and acted in good faith when problems with their products have materialized.  Press coverage was copious and predominantly positive, with a big drop in the number of stories about how and whether it endangers humankind. 

Yet in some ways, 2025 was more of a getting-into-position year than one of overwhelming success.  The most profitable AI-related companies, starting with Nvidia, were not producing AI tools but providing chips and other resources to those that are.  That firm’s market capitalization, along with that of others, mostly reflects expected future income, dwarfing what it has earned so far.

There are unresolved problems looming.  Many communities have recently said they do not want data centers, which have pushed up water and electricity prices, the latter nationwide.  Chinese competition, from an unfree state which need not reveal the practices it fosters and condones, greatly strengthened this year.  The American people have bifurcated into one group containing about the bottom two-thirds of families by earnings and another with more, and AI has helped the first cohort little while hurting it proportionally more through higher utility rates.  A variety of lawsuits against these corporations are in progress and have begun to be resolved, starting with the first of many large ones from those owning the rights to books and other material used by AI model builders without authorization.  The emergence of “artificial general intelligence,” not pegged to specific tasks, even in a recently shortened estimate, is expected no sooner than 2029.

What does all that mean?  First, what was accomplished with AI this year does not require huge data centers for improved versions, as it consisted largely of limited if well-focused applications.  That will also be true for the vast majority of 2026 successes.  Second, if current market valuations are to be maintained, firms selling the software itself will need to start getting amounts consistent with the cost of the chips it requires.  Third, it needs to be perceived as benefiting most Americans, else it may be taken to symbolize the richer-poorer split above.  Fourth, we want to see major-publication articles with titles and contents more positive and less demanding than “A 1 Percent Solution to the Looming A.I. Job Apocalypse” (Sal Khan, December 27th) and “An Anti-A.I. Movement Is Coming. Which Party Will Lead It?” (Michelle Goldberg, December 29th, both in the New York Times).  Fifth, it is time for the industry to integrate its opposite communication pairs, of potential and present, 2022 views and 2025 views, and niches and humanity-shaking feats.

For 2026, I predict continuing AI specific application success, but problems of financing, earnings, and public support causing industry concern and even panic.  It may be time for some companies to expensively exit the scene, which many will interpret as a crash or bubble.  Data center construction will level off near the middle of the year and be greatly reduced by Christmas.  Overall, artificial intelligence will end shaky, but in 2027 we will learn with much more accuracy where it is going – and not going.  For now, the people are many and their hands are all empty – as always, we pays our money and takes our chances.

Wednesday, December 24, 2025

Artificial Intelligence Regulation Since April, and Why It Won’t Be Settled Soon

Not a lot has changed this year in the laws around AI, but we’ve spent eight months getting into position for what could be a big year for that. 

First, a look at “Where the legal battle stands around copyright and AI training” (Patrick Kulp, Emerging Tech Brew, April 21st).  The short answer is unsettled, as although the Supreme Court will probably eventually hear a related case, “intellectual property lawyers say there aren’t yet many signs of where courts will land.”  As Anthropic seems to have used at least one of my books without permission, I was offered membership in a group to be compensated in a “$1.5 Billion Proposed Class Action Settlement.”  This may go through, and there may be similar resolutions offered by other AI companies.

Next, “Why the A.I. Race Could be Upended by a Judge’s Decision on Google” (David McCabe, The New York Times, May 1st).  Although “a federal judge issued a landmark ruling last year, saying that Google had become a monopolist in internet search,” that did not resolve whether it “could use its search monopoly to become the dominant player in A.I.”  A hearing had started in April 2025 to settle that issue; at its conclusion four months later, in the view of Kate Brennan of Tech Policy Press, the “Decision in US vs. Google Gets it Wrong on Generative AI” (September 11th).  The presiding judge considered AI, unlike search engines, to be a competitive field, and rejected “many of the Department of Justice’s bold, structural remedies to unseat Google’s search monopoly position.”  That could be a problem, as “Google maintains control over key structural chokepoints, from AI infrastructure to pathways to the consumer.”  This conflict, though, may not be completely settled, as the extent to which that company can absorb more of the AI field with its Gemini product is unknown.

In “Trump Wants to Let A.I. Run Wild.  This Might Stop Him” (Anu Bradford, The New York Times, August 18th), we see that our presidential administration produced an “A.I. Action Plan, which looks to roll back red tape and onerous regulations that it says paralyze A.I. development.”  The piece says that while “Washington may be able to eliminate the rules of the road at home… it can’t do so for the rest of the world.”  That includes the European Union, which follows its “A.I. Act,” which “establishes guardrails against the possible risks of artificial intelligence, such as the loss of privacy, discrimination, disinformation and A.I. systems that could endanger human life if left unchecked.”  If Europe “will take a leading role in shaping the technology of the future” by “standing firm,” it could effectively limit AI companies around the world.

From there, “Status of statutes” (Patrick Kulp, Jordyn Grzelewski, and Annie Sanders, Tech Brew, October 3rd) told us that that week California passed “major AI legislation… establishing some of the country’s strongest safety regulations” there, which “will require developers of the most advanced AI models to publish more details about safety steps taken in development and create more protections for whistleblowers at AI companies.”  Comments, both ways, were that the law “is a good start,” “doesn’t necessarily go far enough,” and “is too focused on large companies.”  It may, indeed, be changed, and other states considering such efforts will learn from California’s experience.

Weeks later came “N.Y. Law Could Set Stage for A.I. Regulation’s Next ‘Big Battleground’” (Tim Balk, The New York Times, November 29th).  New York “became the first state to enact a law targeting a practice, typically called personalized pricing or surveillance pricing, in which retailers use artificial intelligence and customers’ personal data to set prices online.”  Companies using such pricing in New York will now need to post “THIS PRICE WAS SET BY AN ALGORITHM USING YOUR PERSONAL DATA.”  As of article time, there were “bills pending in at least 10 states that would either ban personalized pricing outright or require disclosures.”  Expect more.

After a “federal attempt to halt state AI regulations,” “State-level AI rules survive – for now – as Senate sinks moratorium despite White House pressure” (Alex Miller, Fox News, December 6th).  Although “the issue of a blanket AI moratorium, which would have halted states from crafting their own AI regulations, was thought to have been put to bed over the summer,” it “was again revived by House Republicans.”  Would this be constitutional, as AI is not mentioned as an area to be overseen by the federal government?  Or would it just be another power grab?

The latest article here, fittingly, is “Fox News Poll:  Voters say go slow on AI development – but don’t know who should steer” (Victoria Balara, Fox News, December 18th).  “Eight in ten voters favor a careful approach to developing AI,” but “voters are divided over who should oversee the new technology, splitting between the tech industry itself (28%), state governments (26%), and Congress (24%).” Additionally, 11% “think the president should regulate it… while about 1 in 10 don’t think it should be regulated at all.”  That points up how contentious the artificial intelligence regulation issue is – and tells us that, urgent need or not, it may take longer to be resolved than we may think.  We will do what we can, but once again it won’t be easy.

Merry Christmas, happy Hanukkah, happy Kwanzaa, and happy new year.  I’ll see you again on January 2nd.

Tuesday, December 16, 2025

November’s Employment Data Sluggish and Worse – AJSN Shows Latent Demand Up 110,000 to 17 Million

Here we are with the first Bureau of Labor Statistics Employment Summary since last month, the first data in two, and the first timely information in three.  Was it worth the wait?

The headline number, the count of net new nonfarm payroll positions, exceeded its 45,000 estimate but not by much – 64,000.  Seasonally unadjusted unemployment stayed at 4.3%, but the adjusted figure gained 0.2% since September to 4.6%.  The unadjusted number of unemployed rose 200,000 to 7.8 million, of which 1.9 million were designated as long-term or out for 27 weeks or longer, up 100,000.  The labor force participation rate gained 0.1% to 62.5%, but the employment-population ratio, best showing how common it is for Americans to be working, lost the same amount to 59.6%.  Average private nonfarm payroll earnings grew 19 cents since September, less than inflation, to reach $36.86.  The alarming change was to the count of those working part-time for economic reasons, or holding on to less than full-time opportunities while looking thus far unsuccessfully for full-time ones.  That soared 900,000, to 5.5 million.

The American Job Shortage Number or AJSN, which shows how many additional positions could be quickly filled if all knew they would be easy to get, increased modestly to the following:

The largest change came from those discouraged, adding almost 130,000 to the metric, followed by unemployment itself which contributed 69,000 more.  The main subtraction came from people wanting to work but not searching for it for a year or more, which took away 62,000.  Thirty-nine point one percent of the AJSN came from those officially jobless, up 0.2% from September.

Compared with November 2024, the AJSN rose 1.2 million, half of that from official unemployment, and most of the rest from those discouraged and those not looking for at least a year. 

What does all that add up to?  Adding the modest increases in unadjusted unemployment (+172,000), those not in the labor force (+156,000), and those claiming no interest in work (+175,000) does not change anything much.  It was a torpid month or two, with few positive outcomes.  The fair number of new jobs was offset by what we hope is not a new level for those working part-time for economic reasons.  We also don’t like the rising unemployment rate, the highest in four years and in need of more front-line attention.  Without knowing what he did in October, we saw the turtle stay just where he was the month after.

Friday, December 12, 2025

Two Months of Driverless Cars, With Progress and Cogent Observations

This area has been heating up lately.  It’s been a good but limited year for autonomous vehicles, with those offering them mainly building on their robotaxi success.  What’s been happening?

On that, “Way more” (The Economist, October 4th) discussed “the peculiar economics of self-driving taxis,” claiming that “the rise of autonomy has played out in two different ways.  First it has raised overall taxi demand in San Francisco.  Second, it has catered to a lucrative corner of the market.”  The number of rides in cabs with drivers stayed the same, and from 2023 to 2024 the count of people working in “taxi and limousine service” increased 7%, leading Lyft’s CEO to say that autonomous taxis will “actually expand the market.”

Moving along, “Could a driverless car deliver your next DoorDash?  New collab announced” (Michelle Del Rey, USA Today, October 16th).  That company is combining with Waymo to “launch the testing phase of an autonomous delivery service in the Phoenix metro area, with plans to expand it more broadly this year.”  Customers, if they are “in an eligible area,” can use the Waymo app’s “Autonomous Delivery Platform.”  In addition to using human “dashers,” DoorDash is already making at least some deliveries with robots and drones.

At the other end of the size spectrum, an “AI truck system matches top human drivers in massive safety showdown with perfect scores” (Kurt Knutsson, Fox News, October 29th).  Autonomous system Kodiak Driver’s rating, described as 98 on a 1-100 scale on the industry assessment VERA, “placed it beside the safest human fleets.”  The self-driving trucks have “advanced monitoring and hazard detection systems,” and have eliminated many human problems, such as “distraction, fatigue and delayed reaction.”  Nothing was provided, though, on how many of these trucks are running now, and whether they are being used for true production – but see three paragraphs below for one modest data point.

Now we can expect “Waymo to launch robotaxi service in Las Vegas, San Diego and Detroit in 2026” (Akash Sriram, USA Today, November 4th).  The first two cities aren’t surprising, but can such vehicles deal with snow?  “In Detroit, the company said its winter-weather testing in Michigan’s Upper Peninsula has strengthened its ability to operate year-round.”  We will see if that place can really join “Phoenix, San Francisco, Los Angeles, and Austin,” where it “has completed more than 10 million trips.”

Miami is not in that group, but there, a “Sheriff’s office tests America’s first self-driving police SUV” (Kurt Knutsson, Fox News, November 6th).  This “bold experiment” is a “year-long pilot program” of “the Police Unmanned Ground Vehicle Patrol Partner, or PUG,” which “is packed with high-tech features,” including interfaces “with police databases, license plate readers and crime analytics software in real time,” and “can drive itself, detect suspicious activity through artificial intelligence-powered cameras and even deploy drones for aerial surveillance.”  A massive, if scary, potential help for law enforcement.

Are “Self-driving cars still out of reach despite years of industry promises” (Jackie Charniga, USA Today, November 25th)?  Although “driverless semitrucks have traveled more than 1,000 miles hauling cargo between Dallas and Houston,” and robotaxis are established as above, “the unmanned vehicles circulating on American highways and side streets are a fraction of what executives promised in the giddy early days.”  We know that, though, and progress, on a more specialized and certainly slower track, is still real.  Don’t bet anything you don’t want to lose against improvement continuing indefinitely.

On the other hand, autonomous vehicles are still embarrassing themselves.  We now have “US investigating Waymo after footage captures self-driving cars illegally moving past school buses in Texas” (Bonny Cho, Fox Business, December 4th).  Driverless technology has struggled mightily with understanding, on a detailed level, how human drivers think, and has not been able to quantify some large pieces of that, but why weren’t school buses, with telltale flashing lights and the capability of being tagged in some way, long since identified and understood?  Were there none of them in the mile-square testing grounds where base autonomous software was developed?  This is the kind of thing which causes people to be overly fearful, and, if there are many more problems remaining at this stage, that’s justified.  I hope there are no more humiliations as low-level as this yet to emerge.

Ever since I first wrote about driverless technology, close to ten years ago, I have been making points about how beneficial it would be.  Avoiding the tens of thousands of annual deaths caused by human driver error was the main benefit, followed by higher general prosperity and easier transportation of older children, those impaired, and others unable to drive.  As with when our current cars became the norm, we would not know all of self-driving’s effects, but many, such as reduced smoking as people would eventually not need to stop at gas stations where many now buy cigarettes, would be both probable and valuable.  I have been disappointed by overreactions to autonomous vehicles’ tiny numbers of fatalities, governmental unwillingness to allow the technology to progress, and a general lack of will and ability to see how many lives could be saved, but there has lately been, in two places, at least a small advance.

The first opinion piece was “Auto injuries are my job.  I want Waymo to put me out of work” (Marc Lamber, USA Today, November 21st).  The author, with “a 34-year career as a plaintiff personal injury lawyer,” said his “calls have been heartbreakingly familiar:  a parent and spouse is paralyzed because someone was texting; a pedestrian on a sidewalk is killed because a driver had “just two drinks”; a family is shattered by speeding, fatigue or road rage.”  He pointed out that “autonomous driving technology doesn’t get drunk, distracted, tired or tempted to speed,” and that “a rare autonomous vehicle mistake dominates headlines while the daily toll of human driving error goes underreported.”  He mentioned that Waymo, “across 96 million miles without a human driver,” had “91% fewer serious injury crashes, 79% less airbag deployment crashes,” and “92% fewer injury-resulting pedestrian collisions,” along with “89% less injury-causing motorcycle collisions and 78% fewer injury-related cyclist crashes.”  Overall, “that is not perfection.  That is progress worth protecting.”

The second piece, published on the New York Times website on December 2nd and in the Sunday print edition December 7th, by Jonathan Slotkin, a neurosurgeon, was, in the latter, “The Human Driver Is a Failed Experiment.”  He made many of the same points Lamber did, adding that “more than 39,000 Americans died in motor vehicle crashes last year,” of which “the combined economic and quality-of-life toll exceeds $1 trillion annually, more than the entire U.S. military or Medicare budget.”  He said that “if 30 percent of cars were fully automated, it might prevent 40 percent of crashes,” and that “insurance markets will accelerate this transition, as premiums start to favor autonomous vehicles,” but “many cities are erecting roadblocks,” and “in a future where manual driving becomes uncommon, perhaps even quaint, like riding horses is today… we no longer accept thousands of deaths and tens of thousands of broken spines as the price of mobility.”  He closed with “it’s time to stop treating this like a tech moonshot and start treating it like a public health intervention.”

Do we want this outcome?  If not, why not?