Friday, September 26, 2025

Artificial Intelligence in General – Hopes and Expectations Both Behind and Ahead of Reality

In some ways, AI is excelling, with a shoulder-high pile of successful specific applications.  Yet in others, such as matching human cognitive abilities, it seems to be motionless.  What do I mean?

First, “It’s Smart, But for Now, People Are Still Smarter” (Cade Metz, The New York Times, May 25th).  This Sunday piece by the paper’s most prominent AI writer, subtitled “The titans of the tech industry say artificial intelligence will soon match the powers of human brains.  Are they underestimating us?,” addressed the coming of artificial general intelligence (AGI), which OpenAI CEO Sam Altman had told President Donald Trump “would arrive before the end of his administration,” and Elon Musk “said it could be here before the end of the year.”  This AGI “has served as shorthand for a future technology that achieves human-level intelligence,” but has “no settled definition,” meaning that “identifying A.G.I. is essentially a matter of opinion.”  As of the article’s writing, “according to various benchmark tests, today’s technologies are improving at a consistent rate in some notable areas, like math and computer programming,” yet “these tests describe only a small part of what people can do,” such as knowing “how to deal with a chaotic and constantly changing world,” at which AI has not excelled.  There is no clear reason why it should be able to jump from huge specific competences to matching overall human intelligence, which “is tied to the physical world,” and “that is why many… scientists say no one will reach A.G.I. without a new idea – something beyond the neural networks that merely find patterns in data.”  In the four months since this article came out, I have seen no sign of any such idea.

On a related point, De Kai wrote on “Why AI today is more toddler than Terminator” (freethink.com, June 9th).  That scarily predictive 1980s movie series gave many people their blueprint for how AI could be relentlessly effective at what the piece calls “autonomous goal-seeking,” amoral and lapsing into evil, and that image has underlain fears about the technology ever since.  Yet AI “relies less on human labor to write out digital logic and much more on automatic machine learning, which is analog,” meaning that “today’s AIs are much more like us than we want to think they are,” and “are already integral, active, influential, learning, imitative, and creative members of our society.”  Overall, “AIs are our children.  And the crucial question is:  How is our – your – parenting?”

A major new ChatGPT release is gigantic news in the AI industry.  Soon after the last one, we read “How GPT-5 caused hype and heartbreak” (Alex Hern, Simply Science, The Economist, August 13th).  It did well in some ways, “hitting new highs” by “showing improvements over its predecessors in areas including coding ability, STEM knowledge and the quality of its health advice,” but some users of the older GPT-4o model “have bonded with what they perceived as its friendly and encouraging personality,” which GPT-5 did not share, making the switch they were steered into “a very personal loss.”  Apparently, the company did not expect that.

A wider-scope issue appeared in “MIT report: 95% of generative AI pilots at companies are failing” (Sheryl Estrada, Fortune, August 18th).  “Despite the rush to integrate powerful new models, about 5% of AI pilot programs achieve rapid revenue acceleration; the vast majority stall, delivering little to no measurable impact on P&L.”  The study blamed “flawed enterprise integration,” as the software doesn’t “learn from or adapt to workflows.”  Working with vendors functioned better than “internal builds,” and “back-office automation,” such as “eliminating business process outsourcing, cutting external agency costs, and streamlining operations,” fared better than “sales and marketing tools,” which were absorbing half of generative AI budget money.

How reasonable is it to consider that “A.I. May Just Be Kind of Ordinary” (David Wallace-Wells, The New York Times, August 20th)?  The author contrasted 2023 thoughts by “one-third to one-half of top A.I. researchers” that “there was at least a 10 percent chance the technology could lead to human extinction or some equally bad outcome,” with “A.I. hype… passing out of its prophetic phase into something more quotidian,” similar to what happened with “other charismatic megatraumas” such as “nuclear proliferation, climate change and pandemic risk.”  An April paper by two “Princeton-affiliated computer scientists and skeptical Substackers” claimed that we should see artificial intelligence “as a tool that we can and should remain in control of, and… that this goal does not require drastic policy interventions or technical breakthroughs.”  Given what it has done already, though, “the A.I. future we were promised, in other words, is both farther off and already here.”  With so much available right now and so much going nowhere, that is a better summary of this post than I can write, so I’ll stop there.

Friday, September 19, 2025

Another Two Months of Artificial Intelligence Accomplishments and Uses

For all the controversy and problems surrounding AI, it’s building up its repertoire of ways of being useful.  Which ones have been in the news over the past nine weeks?

According to “More Americans are turning to AI for health advice” (Kurt Knutsson, Fox News, July 31st), 35% of US adults “report already relying on AI to understand and manage aspects of their well-being.  From planning meals to getting fitness advice, AI is quickly moving from a futuristic concept to a daily health tool.”  As “trust in AI is climbing fast,” between 20% and 31% are using it to “explore specific medical concerns,” provide “meal planning and recipes,” get them “new workout routines,” and give “emotional or therapeutic support.”  All of that is constructive, unless people treat it as equivalent to a professional’s service and then do not get that level of help when they need it.

On the business side, “Delta moves toward eliminating set prices in favor of AI that determines how much you personally will pay for a ticket” (Irina Ivanova, Fortune, July 16th).  The airline has used it for “3% of fares,” and called its results “amazingly favorable.”  The article wasn’t clear about how Delta accomplished that, and reactions outside its industry will be negative, with one “surveillance pricing” tracker calling it “trying to see into people’s heads.”  Airline tickets have already been the strangest-priced consumer product for many decades – what other product can cost more if you buy less of it, necessitating rules against leaving multistep itineraries early?  I don’t know if this will work for the company, but it is vulnerable to people choosing its competitors instead, and good customer relations are more important than ever.

Another frequently disturbing idea is in “Where Human Labor Meets ‘Digital Labor’” (Lora Kelley, The New York Times, August 1st).  “A digital native is a person raised on the internet.  A digital nomad is a person who moves around doing their computer job.  And a digital laborer is not a person at all.”  Say what?  It’s sort of an electronic-only robot that works “independently with a bit of management,” which can “grow and mature with its own data.”  Such things “are not really in wide use yet,” but the borders between them and people will take a while to firm up.  At Salesforce, an early proponent, “customers unhappy with a digital agent can escalate to a human,” sort of like getting out of phone-mail jail, but if the devices are going to be on “mainstream org charts,” they may need at least to be untouted (un-outed?) as automata.

How about “21 Ways People Are Using A.I. at Work” (Larry Buchanan and Francesca Paris, The New York Times, August 11th)?  “Almost one in five U.S. workers say they use it at least semi-regularly for work.”  They can get it to, among many other tasks, “select wines for restaurant menus,” “digitize a herbarium,” “make everything look better,” “create lesson plans that meet educational standards,” “make a bibliography,” “write up therapy plans,” act “as a ‘muse,’” “detect leaks in a water system,” “just write code,” “type up medical notes,” “run experiments to figure out how the brain encodes language,” “help get pets adopted,” “check legal documents in a D.A.’s office,” “get the busywork done,” “review medical literature,” “pick a needle and thread,” “(More politely) let band students know they didn’t make the cut,” “help humans answer more calls at a call center,” “help translate lyrics from the 17th and 18th centuries,” “explain my ‘legalese’ back to me,” and, fittingly, “detect if students are using A.I.”  The last one and many of the others are not new, but the list gives a good picture of how the technology is now being used in off-the-radar, pedestrian settings.

As of the turn of the century, anyway, almost all credit reports had incorrect information, so it is good to see that an “AI credit disputing tool launches for consumers nationwide to correct credit report errors” (Pilar Arias, Fox Business, August 20th).  It has already “been used tens of thousands of times by consumers.”  AI Credit Dispute, “in the Kikoff app,” can help users “spot errors, send disputes and move forward.”  Worthwhile.

Is it any surprise that “Madison Avenue Is Starting to Love A.I.” (Emmett Lindner, The New York Times, August 18th)?  It “can sharply lower production costs,” and can easily change any “number of different elements” in a commercial or print ad.  AI use can be anywhere between “easy to spot” and “difficult to discern,” and, although “generational divides inform how much A.I. will be tolerated,” “there is no doubt the technology is changing advertising.”

Finally, we got word that “Amazon backs AI startup that lets you make TV shows” (Kurt Knutsson, Fox News, September 12th).  Fable’s “artificial intelligence platform,” Showrunner, intends to let people put together their “own episode of a hit show without a crew or cameras, only a prompt.”  Sort of like the fanzines of the old days, in which authors took independent looks at what characters in novels, comic books, or movies might do, this effort would, at the least, make a splendid toy.  And not all such products would need to be G or PG-rated.  For now, Showrunner is “focused entirely on animated content,” but it seems certain it will eventually handle realistic-looking humans, and its output could be easily edited.

We have a lot of good things here.  The few detrimental ones may not prove viable, as standards for AI use are in their childhood if not infancy.  Applications such as these show why artificial intelligence, even if it falls way short of its loftiest expectations, will still be valuable.  And there will be many, many more.

Friday, September 12, 2025

Artificial Intelligence’s Effect on Getting Jobs: Different Perspectives, Different Results

On the issue of how AI is helping or hindering our employment efforts, there are several things to consider.  Here they are, with a look at how each has been playing out.

On the effect of its ability to compete with employees, we saw “Which Workers Will A.I. Hurt Most:  The Young or the Experienced?” (Noam Scheiber, The New York Times, July 7th).  That’s a matter of controversy, over whether “younger workers are likely to benefit from A.I.,” whether it will “cannibalize half of all entry-level white-collar roles within five years,” or whether it could instead “untether valuable skills from the humans.”  The outcome so far is split, with entry-level candidates having the most difficulty in today’s job market, but there is a lot to say for plans to “take the cheapest employee,” use AI to help them, “and make them worth the expensive employee.”

What is the technology doing to employment now?  Not much, per Walter Frick on August 10th in Bloomberg Weekend’s “AI Is Everywhere But the Jobs Data.”  Per an Economic Innovation Group study, “US workers whose jobs involve tasks that AI can do are actually much less likely than other workers to be unemployed,” and are “much less likely to be leaving the labor force.”  Other factors are at work here, but clearly little AI-driven job destruction has happened so far.

On the downside was “The 1970s Gave Us Industrial Decline.  A.I. Could Bring Something Worse.” (Carl Benedikt Frey, The New York Times, August 19th).  The author started by saying “a silent recession has arrived for recent college graduates,” and, after acknowledging results like those in the previous paragraph, compared our current situation to what happened with Pittsburgh’s steel and Detroit’s cars in the 1960s.  He called for areas to reinvent themselves and foster innovation by paying for “amenities that attract and retain talented residents:  public spaces, fast and affordable transit, top-tier schools” and “museums and theaters.”  Those sorts of investments would not get political approval in many places, but if widespread industry damage from AI becomes obvious, they might.

A comprehensive view was the topic of “Jobs that are most at risk from AI, according to Microsoft” (Kurt Knutsson, Fox News, August 28th).  The “Top Jobs Most at Risk From AI” turned out to be “technical writers, ticket agents and travel clerks, editors, telemarketers, broadcast announcers and radio DJs, mathematicians, political scientists, interpreters and translators, advertising sales agents, CNC tool programmers, news analysts, reporters and journalists, customer service representatives, historians, farm and home management educators, business teachers postsecondary, hosts and hostesses, public relations specialists, concierges, brokerage clerks, proofreaders and copy markers, writers and authors, sales representatives (services), telephone operators, demonstrators and product promoters, passenger attendants, data scientists, market research analysts, web developers,” and “management analysts.”

The piece also included the “Jobs Least Likely to be Replaced by AI Right Now,” which were “medical equipment preparers, surgical assistants, dishwashers, roofers, massage therapists, cement masons and concrete finishers, motorboat operators, orderlies, floor sanders and finishers, bridge and lock tenders, industrial truck and tractor operators, gas compressor and pumping station operators, helpers-roofers, roustabouts, oil and gas, ophthalmic medical technicians, packaging and filling machine operators, logging equipment operators, dredge operators, pile driver operators, water treatment plant and system operators, foundry mold and coremakers, machine feeders and offbearers, rail-track maintenance equipment operators, supervisors of firefighters,” and “tire builders.”  Note that the positions here tend heavily to be lower-paying and blue-collar, and to require less education.

I end with two articles concerning job seeking itself, which nobody can claim has been unaffected.  “Hidden risks of AI in hiring: 4 traps to avoid” (Pilar Arias, Fox Business, August 23rd) was for potential employees, of whom “forty percent… are using artificial intelligence to improve their chances of getting hired, according to a recent report by Jobseeker.”  Damaging things their cover letters may contain include a “manufactured feel” or “stiff, formal language patterns when describing career history,” “missing concrete examples of success,” which have long been critical to the process and which “AI simply can’t invent,” “unusual formatting patterns” such as “odd spacing between paragraphs, weird alignment issues or random font changes,” and being “too perfect, no human touch,” clarified to mean “perfect sentences with little variation in length and structure.”

“Can AI make the job search less grueling?”  Per Patrick Kulp on September 7th in Tech Brew and interviewee Tomer Cohen of LinkedIn, it can.  As Cohen put it, “if I know more about your skill set, your aspirations, there might be a job that actually not many have applied to, but is a great opportunity for you.  So instead of a lot of job seekers applying to a few jobs… you’re able to actually spread out the supply and demand.”  Such a program can also put AI to its most valuable and appropriate uses, such as “interview prep one on one with an AI coach,” identifying missing skills the applicant can then learn, and helping, in a more limited way, with cover letters.

More than anything else, artificial intelligence’s role in employment is evolving.  I don’t expect Microsoft’s lists of jobs to change much, but other aspects of AI and job searching may differ even from month to month.  So it is valuable to stay as current as possible, while looking for principles that hold up through this area’s evolution.  That’s the best we can do.

Friday, September 5, 2025

Jobs Report: Small Changes Here and There, But Basically We’re Going Nowhere

This morning’s Bureau of Labor Statistics Employment Situation Summary was supposed to be critically important – as if some of them lately haven’t been.  How did we do?

The number of net new nonfarm payroll positions failed to reach even the modest published estimates of 75,000 and 54,000, coming in at 22,000.  Seasonally unadjusted unemployment fell 0.1% to 4.5%, and the adjusted variety, reflecting more people usually working in August than in July, increased the same amount to 4.3%.  The adjusted count of unemployed gained another 200,000 to reach 7.4 million, and that of the long-term unemployed, those looking for 27 weeks or longer, rose 100,000 to 1.9 million.  The number of people working part-time for economic reasons, or looking for full-time work while maintaining part-time labor, stayed at 4.7 million.  The two measures showing how common it is for Americans to be working or officially jobless, the labor force participation rate and the employment-population ratio, gained 0.1% and held even, respectively, to reach 62.3% and 59.6%.  Unadjusted employment was off just over 500,000, ending at 163,288,000.  The unadjusted counts of those not in the labor force and of those not interested in working each fell over 800,000, reaching 102,966,000 and 96,167,000.  Average private nonfarm payroll hourly wages were up 10 cents, close to our inflation rate, to $36.53.
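As an aside on how these headline measures relate to one another, here is a minimal sketch in Python.  The rate definitions are the standard ones (unemployment rate as unemployed over labor force, participation rate as labor force over population, employment-population ratio as employed over population); the input counts are rounded, hypothetical figures I chose so the outputs land near the numbers above, not data taken from the report.

```python
# Minimal sketch: how the headline labor market rates relate to the underlying counts.
# The rate definitions are standard; the counts below are rounded, hypothetical
# placeholders (in thousands), not figures taken from the BLS release.

def labor_market_rates(employed, unemployed, not_in_labor_force):
    """Return the unemployment rate, labor force participation rate, and
    employment-population ratio, each as a percentage."""
    labor_force = employed + unemployed
    population = labor_force + not_in_labor_force  # civilian noninstitutional population
    return {
        "unemployment_rate": 100 * unemployed / labor_force,
        "participation_rate": 100 * labor_force / population,
        "employment_population_ratio": 100 * employed / population,
    }

# Hypothetical counts, in thousands, chosen only for illustration:
print(labor_market_rates(employed=163_000, unemployed=7_400, not_in_labor_force=103_000))
# -> roughly 4.3% unemployment, 62.3% participation, 59.6% employment-population ratio
```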

The American Job Shortage Number or AJSN, the seasonally unadjusted metric showing how many new positions could be quickly filled if all knew they would be easy and routine to get, was down 48,000 as follows:

None of the components changed as much as 100,000, with the drop in unemployment subtracting 90,000 and those discouraged and those not wanting a job adding 43,000 and 31,500 respectively.  The share of the AJSN from official unemployment was down 0.4% to 39.2%.
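For those curious about the mechanics, here is a rough sketch of how a latent-demand measure of this kind is put together:  each group of potential workers is multiplied by the share assumed willing and able to take an easily available job, and the products are summed.  The category names, weights, and counts below are illustrative assumptions of mine, not the published AJSN methodology or its data.

```python
# Rough sketch of an AJSN-style latent job demand calculation.
# Structure only: count of each group times the share assumed to take an
# easy-to-get job, summed.  Weights and counts are illustrative assumptions,
# NOT the published AJSN methodology or figures.

ILLUSTRATIVE_WEIGHTS = {
    "officially_unemployed": 0.90,
    "discouraged": 0.90,
    "want_work_did_not_search_past_year": 0.80,
    "not_available_to_work_now": 0.30,
    "do_not_want_a_job": 0.05,
}

def latent_job_demand(counts, weights=ILLUSTRATIVE_WEIGHTS):
    """Sum each group's count times its assumed take-up share."""
    return sum(counts[group] * weights[group] for group in weights)

# Hypothetical counts, in thousands, for illustration only:
august = {
    "officially_unemployed": 7_800,
    "discouraged": 400,
    "want_work_did_not_search_past_year": 5_000,
    "not_available_to_work_now": 800,
    "do_not_want_a_job": 96_000,
}
print(f"Latent demand: {latent_job_demand(august):,.1f} thousand jobs")
```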

Compared with a year before, the AJSN showed a noteworthy pattern:  although it increased only 179,000, all the factors above except the last were higher this time.  The largest gains were from those officially unemployed, those who wanted work but did not look for it over the past year, those discouraged, and those who did not want a job.

How can I summarize this report?  I think you can guess from the title what the turtle did, but a few other things happened that we should notice.  First, people who don’t think their chances are good are now reacting more by leaving the labor force than by trying anyway.  Second, as I have been saying for years, monthly job gains of 100,000 to 200,000, though they were the norm before and ever since the pandemic, were nothing to take for granted, and we’re solidly out of that territory now.  Third, the smaller categories of marginal attachment, the second through sixth and eighth rows above, are showing their capacity to absorb generally unsatisfied jobseekers and should not be ignored.  While there are plenty of differences below the surface, our employment situation, overall, is at a standstill, with virtually no growth.  The chances are good that tariffs are having a real effect.  The turtle did not move.