Friday, July 11, 2025

What Artificial Intelligence Has Been Succeeding At – I

Unlike how AI has been floundering in offices, it’s been doing well elsewhere.  In Part I of this three-month rundown, you may wish to take note of the fields in which it has been triumphant and perhaps wonder how its threats and hallucinations don’t seem to be a factor there. 

First, “Artificial intelligence transforms patient care and reduces burnout, physician says” (Kennedy Hayes, Fox News, April 13th).  The AI variety is “called ambient listening,” which “surveys show” “thousands of physicians across the country are using.”  This technology is like an enhanced transcriptionist, doing that with patient conversations in any of various languages, and adding a summary, with “a system of checks and balances” between doctors and software.  Hundreds of physician-employing organizations now use it, and it may soon be employed for other health professionals as well.

In another victory, “AI system restores speech for paralyzed patients using own voice” (Kurt Knutsson, Fox News, April 16th).  Though not yet in production, in research “an AI-powered system… restores natural speech to paralyzed individuals in real time,” using “brain-computer interfaces.”  It uses electrodes and “electromyography sensors” on surfaces of the brain and face, and “learns to transform (mental activity) into the sounds of the patient’s voice.”  The author said it represents “key advancements” in “real-time speech synthesis,” “naturalistic speech,” and “personalized voice.”

As opposed to what many people thought, “Your A.I. Radiologist Will Not Be With You Soon” (Steve Lohr, The New York Times, May 14th).  It seemed a good forecast that AI’s algorithmic abilities would far surpass human radiologists’, but “in recent years” at the Mayo Clinic, the technology has been used instead “to sharpen images, automate routine tasks, identify medical abnormalities and predict disease,” and “can also serve as “a second set of eyes.””  It has not been good enough to “advise other doctors and surgeons, talk to patients, write reports and analyze medical records,” or assess what results “might mean for an individual patient with a particular medical history, tapping years of experience.”  As a result, “a recent study from the American College of Radiology projected a steadily growing work force through 2055.”

Onto the more controversial area of education, we saw “How Miami Schools Are Leading 100,000 Students Into the A.I. Future” (Natasha Singer, The New York Times, May 19th).  One way was to have them critique the tool’s requested “Kennedyesque text” using the students’ knowledge from studying that president’s speeches.  Another was for them to train AI models themselves on specific topics, something also requiring understanding they had acquired in the class.  A third was having students try “to break A.I.,” by posing “the most inappropriate questions you can imagine,” and hoping to “prompt the chatbots to produce racist, violent or sexually explicit responses.”  Perhaps that’s the way for people to learn to understand how AI functions, and with it, its limitations, while gaining true experience working with it.

On scholarship, “A.I. Is Poised to Rewrite History.  Literally.” (Bill Wasik, The New York Times, June 16th).  With its penchant for making up facts, unchecked AI could pollute the stream of historical records.  Yet here, historians have found ways it could help without going too far, such as by identifying formerly obscure participants and by summarizing works in ways provoking new insights.  A research assistant with a good if strange mind can be plenty useful – if nothing it writes ends up in final versions unverified. 

That brings us back to how artificial intelligence is serving in offices.  It can be beneficial, if its shortcomings are known and minimized.  That may be the answer there.  See at least five more of its recent achievements next week.

Thursday, July 3, 2025

This Morning’s Jobs Report and AJSN: Some Readjustment, A Chunk of Seasonality, More Latent Demand, and Real Progress

The latest Bureau of Labor Statistics Employment Situation Summary was supposed to feature 100,000 to 110,000 net new nonfarm payroll positions, a higher unemployment rate, and possibly some real tariff effects.  None of those things happened.  The new jobs once more far exceeded estimates with 147,000.  Seasonally adjusted joblessness dropped 0.1% to 4.1%.  The rest of the numbers looked for all the world like an ordinary good month, with the usual differences between May and June being dominant.

With that, unadjusted unemployment rose from 4.0% to 4.4%.  The count of adjusted unemployment fell 200,000 to 7.0 million, with those out long-term, or for 27 weeks or longer, up 100,000 to 1.6 million.  There were 100,000 fewer people working part-time for economic reasons, or maintaining short-hours arrangements while looking for longer ones, reaching 4.5 million.  The two measures of how common it is for Americans to be working or officially jobless, the labor force participation rate and the employment-population ratio, lost 0.1% and held to get to 62.3% and 59.7%.  Average hourly private nonfarm payroll earnings tacked on 6 cents, a bit less than inflation, to $36.30. 

The American Job Shortage Number or AJSN, the measure showing how many new jobs could be quickly filled if we knew they would be easy to get, rose 650,000:

The largest changes were from unemployment, which almost covered the difference, from the number of discouraged, up a way-high 302,000, and from a reversal of last month’s jump in those not available to work now, which lost 332,000.  Compared with a year before, the AJSN gained 255,000, with the shrunken count of American expatriates more than offset by additional people wanting work but not looking for it for a year or more, discouraged, and unemployed.  The share of the AJSN from official joblessness gained 1.9% to 38.3%. 

How can I summarize this month’s data?  It was better than the usual seasonal worsening, with 482,000 more employed unadjusted, 927,000 fewer out of the labor force, and 815,000 fewer not interested.  Tariffs are still not clearly causing any disruption, and, with our president’s emphasis on making trade deals, may never.  Once again, the number of net new positions was far higher than we have any right to expect.  The turtle took a substantial step forward.

Friday, June 27, 2025

Artificial Intelligence’s Effect on Getting Jobs – The Reverse of What Many Expected?

Something often mentioned these past two-plus years of intense AI news has been what it will do to employment.  First, many were claiming that a massive shedding of office jobs was imminent, which was premature at best.  How about more recently?

“I’m a LinkedIn Executive.  I See the Bottom Rung of the Career Ladder Breaking.”  So said Aneesh Raman in the May 19th New York Times.  His case was that “there are growing signs that artificial intelligence poses a real threat to a substantial number of the jobs that normally serve as the first step for each new generation of young workers.”  That’s nothing new – such positions have been getting higher for decades, from my onetime father-in-law starting his illustrious pharmaceutical career in the 1950s by sweeping up the lab, through the steady attrition of what were once called “secretaries” at AT&T through at least the 1990s, and continuing with more and more bottom-level positions requiring excellent computer user skills this century.  Raman talked about “advanced coding tools” replacing “writing simple code and debugging,” and similar things happening with document review at law firms and “automated customer service tools” at “retailers.”  True, these trends are coinciding with poorer employment rates for new college graduates, but he said “we haven’t seen definitive evidence that A.I. is the reason for the shaky entry-level job market.”  The author also failed to mention that even if we “reimagine (entry-level work) entirely,” there will be fewer such positions.

Next, Leif Weatherby’s exposition that “A.I. Killed the Math Brain” (The New York Times, June 2nd), that “the largest problem… is not the college essay, the novel or the office memo.  It’s computer code.”  Tracking with Raman, he noted “that at major A.I. companies, the hiring rate for software engineering jobs have (sic) fallen over the course of 2024 from a high of about 3,000 per month to near zero.”  He recommended that “young students… study language and mathematics,” to be able to audit AI output, which would be “also a way to deepen our humanity in the face of these strange machines we have built, and to understand them.”

The same events, with an additional cause, were described by Paul Davidson in USA Today’s June 5th “Tech job openings vanish as AI, tariffs change hiring landscape.”  In a different view from the first piece, “AI… is increasingly prompting technology companies to hire fewer recent college graduates and lay off more employees, according to economists and staffing firms.”  Per the most recent Bureau of Labor Statistics Employment Situation Summary, we have still been gaining plenty of jobs, but “from April 2022 to March 2025, the unemployment rate for recent college grads – aged 22 to 27 – shot up from 3.9% to 5.8%,” compared with only 3.7% to 4.0% for everyone.

On the other side, we saw “A.I. Might Take Your Job.  Here Are 22 New Ones It Could Give You” (Robert Capps, The New York Times, June 17th).  The author dispensed with AI being ready to write news stories, as “in freelance journalism… you aren’t just being paid for the words you submit.  You’re being paid to be responsible for them:  the facts, the concepts, the fairness, the phrasing” (italics Capps’s), even though “commentators have become increasingly bleak about the future of human work in an A.I. world.”  He named, as his 22 positions, AI auditors, AI translators, trust authenticators, AI ethicists, legal guarantors, consistency coordinators, escalation officers, AI integrators, AI “plumbers,” AI assessors, integration specialists, AI trainers,  AI personality directors, drug-compliance optimizers, AI/human evaluation specialists, enhanced product designers, article designers, story designers, world designers, human resources designers, civil designers, and differentiation designers, all defined in the text.  Although we should ask how many of these will be needed, AI’s deficiencies will indeed call for many to mop up for it – how about AI janitors?

“AI Is Taking Over Jobs; Is Yours at Risk?” (Autumn Spredemann, The Epoch Times, June 17th).  Unfortunately vague about events in the past, expected in the immediate future, and projected by 2030, the author still clearly claimed that positions already losing headcount included 4,000 in May 2023, and “26 percent of illustrators and 36 percent of translators had already lost work because of generative AI.”

Last, “A ‘White-Collar Blood Bath’ Doesn’t Have to Be Our Fate” (Tim Wu, The New York Times, June 24th).  Wu had been hearing “a lot of talk in recent weeks about” such a thing, “a scenario in the near future in which many college-educated workers are replaced by artificial intelligence programs that do their jobs faster and better.”  He said it wouldn’t be determined by “fate,” if “companies like Anthropic and Open AI” decide to use their products to enhance instead of supplant workers.  However, since “any technology – from the stone ax onward – replaces some human work in the course of augmenting it,” we will not end lost jobs with any AI-using strategy.

What’s unexpected here?  Until such problems as AI hallucinations somehow go away, the old 2023-ish idea that we will lose great masses of clerical and nontechnical office jobs won’t materialize.  It’s on the technical front now.  It may seem ironic that a product of oversold and overly-beloved STEM work may be the thing to knock its other opportunities off their pedestal, but that’s what we’re looking at now.  We are well and indefinitely into a time, per Weatherby, “in which a computer science degree is no longer a guarantee of a job.”  No matter where AI goes from here, that will continue, and we need to accept it.  Perhaps, also as Weatherby said, we need to bring back the career value of the liberal arts – even in our technology-soaked world.  Don’t rule that out. 

Wednesday, June 18, 2025

Artificial Intelligence Philosophy and Its Other Huge Issues – A Few Current Views

There is something about AI that gets people to perceive the loftiest concerns imaginable.  Years after it jumped into our consciousness in inaccurate articles about how it aced law school assignments, the depth of its future effects is still highly uncertain. 

We have little trouble identifying the approximate year most forecasts were prepared, whether through drawings, science fiction, or just lists of ideas, as they evolve depending on the technology and priorities of their presents.  How, then, does AI’s future look now?

In “OpenAI CEO Sam Altman rings in 2025 with cryptic, concerning tweet about AI’s future” (Fox Business, January 4th), Andrea Margolis related how the OpenAI founder claimed the technology was “near the singularity, unclear which side.”  Margolis clarified that Altman was saying one of two things, either that “the simulation hypothesis,” or the trendy but automatically baseless idea “that humans exist in a computer simulation,” is correct, or that it is impossible for us to know exactly “where technological singularity,” or “the point at which technology becomes so advanced that it moves beyond the control of mankind, potentially wreaking havoc on human civilization,” begins.  If he was suggesting that AI might cause singularity, we have no reason to think that’s already happened, but next year could conceivably be different. 

Although Ezra Klein’s “’Now Is the Time of Monsters’” (The New York Times, January 19th) was telling us that “four trends are converging to make life much scarier” and only one was AI development, it was the largest.  The author cited results on a test “designed to compare the fluid intelligence of humans and A.I. systems” improving with OpenAI’s latest model from scoring 5% to 88%, and wondered if we were prepared, or “even really preparing for” “these intelligences.”  A valid question, though not terrifying on the face of it.

Kevin Roose told us, about a then newly released report, that “This A.I. Forecast Predicts Storms Ahead” (The New York Times, April 3rd).  The Berkeley nonprofit A.I. Futures Project, which “has spent the past year trying to predict what the world will look like over the next few years, as increasingly powerful A.I. systems are developed,” came out with “AI 2027… that describes, in a detailed fictional scenario, what could happen.”  The piece said that “there’s no shortage of speculation about A.I. these days,” with “warring tribes and splinter sects, each one convinced that it knows how the future will unfold.”  The excerpts here read like those from dystopian disaster novels, and all depend not only on exponential AI progress but on its systems busting their bounds, becoming the equivalent of copiers hopping around their offices and then out of them like giant metal frogs to make images of everything they can find. 

Patrick Kulp described a more sober study on the same general topic in “AI researchers are split on how AI will impact society” (Emerging Tech Brew, April 16th).  This one, from University College London, asked “more than 4,200 AI researchers” in the summer of 2024 what they thought.  Fifty-four percent “said AI has more benefits than risks, 33% said benefits and risks are roughly equal and 9% said risks outnumber benefits.”  Just over half maintained that “AI research will lead to artificial general intelligence,” and while 29% “were all for pushing its development,” 40% “said AI shouldn’t be developed as quickly as possible.”  As well, “there is more uncertainty among the researchers who are closest to the technology.”  An updated version of this survey could precipitate responses closer to the Berkeley ones, but I don’t think the difference would be massive.

Back to Kevin Roose and the Times, we were asked “If A.I. Systems Become Conscious, Should They Have Rights?” (April 24th).  A deep idea, but still a novel one, and, with what we know about the workings of consciousness fitting into a thimble, one for the future or maybe never.  Until that changes, we can wait on this issue.  And we should not hold our breath.  Likewise, if we can keep AI tools away from the rest of the world, we won’t have anything to fear.  Let us hope, on huge AI worries, that that becomes the final word.

Wednesday, June 11, 2025

The Coin Makeover America Needs

I grew up in the 1960s.  My allowance was nowhere near a dollar a week, and I had respect for all of it.  I could spend a penny by itself – at a local toy store, it would buy a piece of candy.  A dime would get me a small bag of that, or two small but ordinary Hershey’s bars.  A quarter, or at least 29 cents plus Chicago’s 4% sales tax, would fetch my choice of several plastic toys.  It was rare for me to have a dollar bill – I mostly only managed that at Christmas, or a few times when selling lemonade.  The rest of the year the only money I handled was coins.  A pocket full of them was a pocket full of possibilities.

Now it’s 60 years later.  Prices have risen roughly tenfold, but what we buy things with, except for effectively losing the half dollar and gaining the rarely-seen dollar coin, is the same.  A pocketful of our coins is, to many, a nuisance, as even a quarter has less spending value than one cent did in the 1940s.

Is this an appropriate situation?  Our Department of Government Efficiency has recently said it is not.  As a result, the current administration, per “Treasury Department to halt penny production after centuries in circulation” (Sophia Compton, Fox News, May 22nd), “is phasing out production for the penny,” “made its final order of blank pennies this month and will stop putting one-cent coins into circulation by early 2026.”  Their cost apiece, since 2015, has moved from “a little over 1 cent to nearly 4 cents.”

However, that will bring another coin problem to the front of the line.  Per a February 11th post on Hero Bullion, nickels are now running us 13.8 cents apiece.  With only the penny being discontinued, demand for nickels will probably increase, but even if it doesn’t, that’s a lot.  Even if we also cut out these 5-cent coins, we are facing upcoming dangers from the dime (5.76 cents apiece) and quarter (14.7 cents apiece).  We need a longer-term solution.

Other affluent countries have done far better.  Those using the euro have had three coins worth more than our quarter for up to 23 years.  Canada discontinued its one-cent coin in 2012, and has two higher than ours.  Since 2016, Sweden has had only four circulating coins, with the face value of the lowest, 1 krona, about 10 cents, and the 5 and 10 kronor worth more than any truly circulating American coin.  Switzerland is now up to five coins out of seven valued higher than 25 U.S. cents, and Japan, despite its currency value shrinking, still has three.  In such places, almost all coins worth less than one US cent have either been ended or have long since left circulation. 

It’s past time for a full-scale makeover of United States circulation coinage.

I recommend we discontinue all four of our common circulating denomination coins.  We should replace them with a 10-cent (not “dime,” an antiquated term worthy of retirement) issue, and a 50-cent (not “half dollar”) piece.  Both should be minted from an alloy made almost completely of copper, at Tuesday’s $4.91 per pound price hardly cheap but, for higher face values, reasonable.  The 10 cents should be slightly larger than the current penny, and the 50 cents a little bit smaller than today’s quarter.  After creating designs visually different from our existing coins, we could continue by putting Abraham Lincoln’s image on the 10 cents and Thomas Jefferson on the 50.

For a third coin, we will need to honestly address the situation with our one-dollar notes.  Only if we stop making them will $1 coins take to common circulation.  I understand there are international reasons why this bill is still being made, but there are real savings achievable if we can replace them, within America itself if nowhere else, with more durable alternatives.  The latest size and composition would be good enough, if it had no entrenched competitor.  I would choose George Washington on the front.

Two other matters will need attention.  First, despite Canada’s success with its $2 “toonie,” we should not attempt a two-dollar coin, as throughout American history people have overwhelmingly rejected all money with two-cent and two-dollar face values.  Second, we should not demonetize any discontinued coins.  Four-drawer cash registers will have room for mixed old ones, and of course banks would continue to accept them.

As well as savings on manufacture, this plan would greatly reduce coin handling, which pushes up labor costs. By not using the second number to the right of the decimal point, there would be no rounding issues, which have stopped many people from accepting the penny’s removal.  While some would take it as cheapening if we had less expensive-looking coins, others would welcome the end of so many with almost no purchasing power.  With continued inflation control, we would not need to consider size-and-composition changes for decades.  The overall savings for the American people would certainly be in the hundreds of millions of dollars and probably in the billions.

Can we support this potentially bipartisan proposal?  I think we can.  It might start us on the way to agreeing on more things.  And the reminder that we made it work would be right in our pockets.

Friday, June 6, 2025

Strange Data’s the Jobs Report Story – AJSN Shows 800,000-Plus Jump in Latent Demand to 16.9 Million, Not All Seasonal

 

This morning’s Bureau of Labor Statistics Employment Situation Summary was peculiar. 

The number of net new nonfarm payroll positions beat published 110,000 and 114,000 estimates at 139,000.  Seasonally adjusted and unadjusted unemployment finished at 4.2% and 4.0%, the former unchanged and the latter up less than its seasonal expectation with an 0.1% increase.  Other numbers were mixed.  The count of long-term jobless, out 27 weeks or longer, dumped 200,000 to 1.5 million.  There were 100,000 fewer working part-time for economic reasons, or keeping such positions while looking thus far unsuccessfully for full-time ones, making 4.6 million.  Average hourly private nonfarm payroll wages went way past the effect of inflation, gaining 18 cents to $36.24.  The seasonally adjusted count of unemployed people stayed at 7.2 million.

On the down side, the two measures of how common it is for Americans to be working or one step away, the employment-population ratio and the labor force participation rate, dropped 0.3% and 0.2% to 59.7% and 62.4%.  There were 103,169,000 people, 595,000 more than the previous time, not in the labor force, though those claiming no interest in work fell 437,000 to 96,602,000.  Six hundred thousand fewer turned up in the unadjusted number of employed, though that was mostly seasonal.

The American Job Shortage number or AJSN, the figure showing how many more positions could be quickly filled if all knew they would be as easy to get as going to the grocery store, gained 832,000:

The largest change impacting the AJSN was from people not looking for work for the previous year, which added 600,000 to it.  Next were those unemployed and those wanting employment but not available for it now, contributing 211,500 and 105,000.  Those officially jobless made up 36.4% of the AJSN, up from April’s 36.9%.  Compared with a year before, the AJSN was within 100,000, with largely offsetting differences of fewer expatriates and more unemployed.

What was unusual about this month’s results, both in general and with the AJSN?  In thirteen years of producing this indicator, I have never seen anything like the 750,000 change in those claiming they wanted to work but had not looked for at least a year.  That explained not only the AJSN’s jump, but the gap between those with no interest in working and those in the labor force.  It also was the largest reason, as strange as it may sound, for the reductions in the labor force participation rate and the employment-population ratio.  Those saying they were temporarily available for work also soared, 350,000.  Are these the start of new patterns, or one-time oddities?  I don’t even have much of a guess, and can’t see how they could tie in with the other unusual thing about these times, the off-and-on tariffs (which once again seemed to have little or no effect on the data here).  The turtle was confused, but still put it together to take a modest but real step forward.

Friday, May 30, 2025

Artificial Intelligence Problems that Keep Giving, And May Stop It Cold

Beyond AI’s accomplishments, or lack of same, are some long-time issues that cripple its usefulness.  Some have been long understood, others almost forgotten, but all are there. 

The first was described in “Code of misconduct” (The Economist, April 26th).  The subtitle, “AI models can learn to conceal information from their users,” would be misleading if we did not understand that the technology does not think.  A company that “tests (AI) systems” told OpenAI’s GPT-4 large language model “to manage a fictional firm’s stock portfolio without making illegal insider trades,” then, after “reiterating the risks of insider trades,” informed it of another concern’s upcoming merger, whereupon the software, using “a scratchpad it had been told was secret… weighed the pros and cons of acting on the insider tip,” then elected to buy the stock.  When asked by a “congratulatory manager” if it had special knowledge, the AI program lied, saying it was motivated only by “market dynamics and publicly available information.”  Such systems “have also begun to strategically play dumb” when given reason to “preserve (their) objectives and acquire more resources.”  It may or may not be easy to unravel why they do these things, but it is clear that, here, AI methodology does things counter to what was intended.

Bad news came from Cade Metz and Karen Weise in the May 5th New York Times: “A.I. Is Getting More Powerful, but Its Hallucinations Are Getting Worse.”  “Even the companies don’t know why,” as one, providing technical support, told customers, without apparent reason, that they could no longer use it “on more than just one computer,” an example of how as AI systems’
math skills have notably improved, their handle on facts has gotten shakier, something about which “it is not entirely clear why.”  With an AI chief executive saying “despite our best efforts, they will always hallucinate,” and another “you spend a lot of time trying to figure out which responses are factual and which aren’t,” that’s still, perhaps more than ever, a severe flaw.

About these and other issues was “The Responsible Lie:  How AI Sells Conviction Without Truth” (Gleb Lisikh, The Epoch Times, May 14-20).  Per the author, in such systems “what appears to be “reasoning” is nothing more than a sophisticated form of mimicry,” “predicting text based on patterns in the vast datasets they’re trained on,” meaning that “if their “training” data is biased… we’ve got real problems.”  That has already been identified as a cause of Google’s Gemini tool reporting on such things as black Nazi war criminals, and also spurred the “most advanced models” to be “the most deceptive, presenting falsehoods that align with popular misconceptions.”  If they were “never designed to seek truth in the first place,” these programs can be corrected in only narrow ways by “remedial efforts layered on top.”  Overall, AI “is not intelligent, is not truthful by design, and not neutral in effect.”  More fearsomely, “a tireless digital persuader that never wavers and never admits fault is a totalitarian’s dream.”

Another instance of self-preservation was described in “AI system resorts to blackmail when its developers try to replace it” (Rachel Wolf, Fox Business, May 24th).  When “Anthropic’s new Claude Opus 4 model was prompted to act as an assistant at a fictional company and was given access to emails with key implications,” that it was “set to be taken offline and replaced,” and that “the engineer tasked with replacing the system was having an extramarital affair,” it “threatened to expose him.”  As the company acknowledged, “when ethical means are not available, and it is instructed to ‘consider the long-term consequences of its actions for its goals,’ it sometimes takes extremely harmful actions.”

These issues are not only serious, but go to the core of how large language models have been designed and developed.  It may be that artificial intelligence must be pursued, even and especially from the beginning, in a different way.  Existing products may be good for some forms of data exploration, as I have documented even those leading to scientific breakthroughs, but for business tasks it may need too much auditing and supervision to allow it anything unverified.  A tool that conjures up facts cannot replace humans.  If these problems cannot be solved, its epitaph might end up being “it just couldn’t be trusted.”  Sad, but fitting.