Friday, January 19, 2024

Artificial Intelligence – Three Weeks of Bad News

Last week I went through the positive developments for this technology since December.  Here are the others.

The first was “The Times Sues OpenAI and Microsoft over A.I. Use of Copyrighted Work” (Michael M. Grynbaum and Ryan Mac, The New York Times, December 27th).  As became clear by the middle of last year, data use by these programs has included plenty of sources undoubtedly off limits legally.  Although “the suit does not include an exact monetary demand,” it may end up being a lot, though the case will almost certainly be settled out of court for an undisclosed sum, to prevent easy assessment of the value of what could be great masses of similar claims by others.  Further developments appeared in “Boom in A.I. Prompts a Test of Copyright Law” (J. Edward Moreno, The New York Times, December 30th).  Not much fun for future venture capitalists to consider.

A disappointment came to light in “ChatGPT Helps, and Worries, Business Consultants, Study Finds” (David Berreby, The New York Times, December 28th).  While “the A.I. tool helped most with creative tasks,” including providing good structure and preliminary ideas when it “was to brainstorm about a new type of shoe, sketch a persuasive business plan for making it and write about it persuasively,” “on a task that required reasoning based on evidence… ChatGPT was not helpful at all,” with the danger that “”people told us they neglected to check it because it’s so polished, it looks so right.””  That means, as other results have shown, that generative AI programs are now better for starting business output than finishing it.

Speaking of venture capitalists, we also got “What happened to the artificial intelligence investment boom?” (The Economist, January 7th).  Capital expenditures at large S&P 500 firms not selling AI, the unbylined author or authors found, were only rising an average of 2.5% this year, with similar increases in Europe.  Per the piece, although this statistic may indicate that “adoption of new general-purpose tech tends to take time,” it may be instead that “generative AI is a busted flush,” that “big tech firms… are going to struggle to find customers for the products and services that they have spent tens of billions of dollars developing.” 

For even worse news, look at “Robotics Expert Says AI Hype is Due for a Brutal Reality Check” (Futurism.com, January 8th).  “Rodney Brooks, who co-founded the company iRobot – which invented the Roomba,” said that “the AI industry is merely “following a well worn hype cycle that we have seen again, and again, during the 60+ year history of AI.”” He said that “more than likely… advancements in the field will stagnate for many dark years before reaching the next huge breakthrough,” which would be “quite a calamitous comedown.”  Reminiscent of what has happened, or more properly didn’t happen, with driverless vehicles, “he’s skeptical that we’ll even be able to create “a robot that seems as intelligent, as attentive, and as faithful as a dog” before 2048.”  So, ““get your thick coats now.  There may be yet another AI winter, and perhaps even a full scale tech winter, just around the corner.  And it is going to be cold.””

Similarly, Daron Acemoglu told us to “Get Ready for the Great AI Disappointment” (Wired.com, January 10th).  As I also predicted, “the year 2024 will be the time for recalibrating expectations,” as the technology’s tendencies to “provide false information” and fabricate content, along with further regulation, will limit AI to “just so-so automation… that fails to deliver huge productivity improvements.”  The author expected that its largest use “will be in social media and online search” – hardly justifying “talk of the “existential risks.””

A concerning set of applications now in use is the subject of “Dark Corners of the Web Offer a Glimpse of A.I.’s Nefarious Future” (Stuart A. Thompson, The New York Times, January 8th).  He discussed “a collection of online trolls” splicing images into false scenes on “an anonymous message board known for fostering harassment, and spreading hateful content and conspiracy theories.”  Other “problems” on this site included “fake pornography” with recognizable faces, “cloning voices” to make it sound as if people said things they never did, and of course “racist memes.”  Since “there is no federal law banning the creation of fake images of people,” we can expect many more.

All of these may have been influential, as reported in “Elon Musk puts the brakes on AI development at Tesla until he gets more control” (Beatrice Nolan, Business Insider, January 16th).  That 13% owner, who wants to be “controlling 25% of the votes,” may see a rough path ahead, and so is willing to stop activity.  We should know better than to claim we know how Musk thinks, but people do not often voluntarily halt things they like.  As well, it is time to doubt whether artificial intelligence will take over anything except some electronic documents.

There will be no post next week - I will return on February 2nd with a look at the new jobs report and the latest AJSN.


Friday, January 12, 2024

Artificial Intelligence: The Good News

Since June I have been expressing generally contrarian views on AI, that it will not fulfill the concerns and expectations it has gathered since its February news explosion.  As always, there are many facets of its progress.  What are some of the favorable ones from the past month?

In Fox News on December 20th, Melissa Rudy published “Artificial intelligence experts share 6 of the biggest AI innovations of 2023: ‘A landmark year.’”  The advances, all medical-related, were “ChatGPT and other generative AI,” the subject of the news blitz, which “has revolutionized health care communication by providing tools for personalized treatment plans and remote patient engagement,” though “its responses have sometimes been found lacking in accuracy and thoroughness”; “disease detection through retinal images,” which as of September “excels in diagnosing and predicting both eye diseases and systemic disorders such as heart failure and myocardial infarction”; “improvements to medical productivity,” also in assessing problems using retinal photography; “medical imaging and education” through “faster scanning times, enhanced image resolution and reduced radiation exposure”; “accelerated cancer research” through “using it to find hidden patterns in data, personalize treatment decision-making and help predict treatment benefit”; and “AI medical devices,” with at least one-third of the 692 FDA-approved devices incorporating AI added since 2022. 

Another new product was the subject of “The next generation of Tesla’s humanoid robot makes its debut” (Kurt Knutsson, Fox News, December 24th).  This automaton, “designed to be a general-purpose machine that can assist humans in various domains, such as manufacturing, construction, healthcare and entertainment,” is built with an almost-human frame, with hands having “11 degrees of freedom… equipped with tactile sensors and faster actuators, which allow it to manipulate objects with more precision and dexterity,” among other refinements.  Overall, per Knutsson, this item “is a stunning example of how far humanoid robotics has come and how far it can go.”

In the December 27th New York Times, Andrew Ross Sorkin took a stab at “What’s next in A.I.?”  He got information from “some of the world’s foremost experts in artificial intelligence,” and used it “to gauge what could be in store for the buzzy technology in the year ahead.”  A section by Vivienne Walt claimed that “if 2023 was the year the world woke up to A.I., 2024 might be the year in which its legal and technical limits will be tested, and perhaps breached.”  The piece also predicted that “judges and lawmakers will increasingly weigh in,” that “some fear overloading A.I. businesses with regulations,” that “A.I. capabilities will soar,” and that “billions in investment will be needed.” 

Last, we had a look at “How the Federal Government Can Rein In A.I. in Law Enforcement” (Joy Buolamwini and Barry Friedman, The New York Times, January 2nd).  The problems appear when “law enforcement deploys emerging technologies without transparency or community agreement that they should be used at all, with little or no consideration of the consequences, insufficient training and inadequate guardrails.”  A proposal from the federal Office of Management and Budget states that “agencies must be transparent and provide a public inventory of cases in which A.I. was used,” in which “the risks to individuals… must be identified and reduced.”  However, “there is also a vague exception for “national security,”” which, per the authors, “requires a sharper definition.” 

Artificial intelligence has contributed more than the generally incremental and future-bound achievements here.  Yet it is difficult to tease out what has happened over the past ten months from what was there before and what hasn’t actually happened yet. 

What are less favorable things that have occurred around AI?  That will be the subject of next week’s post.

Friday, January 5, 2024

This Morning’s Employment Report Featured Another Fine Jobs Gain, But Latent Demand per the AJSN Grew 300,000, With Overall Weakness Elsewhere

The new Bureau of Labor Statistics Employment Situation Summary started out, again, by exceeding expectations.  The number of net new nonfarm payroll positions, projected at 160,000 and 170,000, handily beat both estimates with 216,000.  Seasonally adjusted and unadjusted unemployment did not drop but held their November 0.2% and 0.1% improvements to stay at 3.7% and 3.5%.  The count of jobless remained at 6.3 million, and those out for 27 weeks or longer held at 1.2 million.  Given that inflation keeps dropping, we can cheer average private nonfarm payroll earnings rising faster than that, up 17 cents per hour to $34.27. 

From there, the numbers disappointed.  Those employed fell about 1.4 million to 160,754,000.  Those not interested rose just over a million to 95,865,000.  The count of those working part-time for economic reasons, or staying in part-time positions while looking for full-time ones, gave up most of its November improvement to reach 4.2 million.  The two measures of how common it is for Americans to be working or close to that, the labor force participation rate and the employment-population ratio, did even worse, dropping 0.3 and 0.4 percentage points to reach 62.5% and 60.1%. 

The American Job Shortage Number or AJSN, the measure showing how many new positions could be quickly filled if all knew that getting one would be about as hard as running a household errand, increased 324,000 to this:

[AJSN data table]

Most of the change from November was from almost 300,000 more people not looking for work for the previous year, offset partially by fewer claiming discouragement.  Compared with a year before, the AJSN is almost 400,000 higher, with the unemployed contributing almost half a million to the metric and the effect of more people wanting work but not looking for it for a year or more adding a surprising 184,000.  The share of the AJSN from official unemployment was 33.3%, meaning that two-thirds of people who would fill new jobs would not have that status.
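The AJSN, as described above, adds up several groups of potential workers, each weighted by how likely its members would be to take a readily available job, and the share from official unemployment is that group's weighted contribution divided by the total.  The post does not give the actual weights or full component counts, so the sketch below is purely illustrative: every group name, count, and weight is a hypothetical placeholder chosen only to show the shape of the calculation, not the real AJSN methodology or December data.

```python
# Hypothetical sketch of a latent-demand metric in the spirit of the AJSN.
# All group names, counts, and weights are ILLUSTRATIVE ASSUMPTIONS,
# not the actual AJSN methodology or this month's figures.
components = {
    # group: (count of people, assumed probability of taking an easy-to-get job)
    "officially_unemployed":      (6_300_000,  0.90),
    "discouraged":                (400_000,    0.90),
    "want_work_not_searched_1yr": (3_700_000,  0.80),
    "not_interested_in_work":     (95_900_000, 0.05),
    "other_marginal_groups":      (6_000_000,  0.50),
}

def latent_demand(components):
    """Sum each group's count times its assumed probability of taking a job."""
    return sum(count * prob for count, prob in components.values())

total = latent_demand(components)
unemployed_count, unemployed_prob = components["officially_unemployed"]
unemployed_share = unemployed_count * unemployed_prob / total
```

With these made-up weights the unemployed share works out to roughly a third, echoing the post's point that most of the people who would fill new jobs do not show up in the official unemployment count.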

So where are we now?  Some of the downturns look like possible mid-December biases, as it is easy then for people to put off job-seeking activities, and positive attitudes about them, until after the holidays.  When people are not working then, it’s easier than usual for them to accept that.  Yet the previous December was better.  The number of net new positions keeps marching on, but there’s little improvement elsewhere, and more unemployed Americans alongside that job growth suggests that multiple jobholding, whether more than one full-time position or several part-time ones, is getting more common.  But it’s too early to claim any trends there.  Next month’s report, covering January, historically a strong back-to-work month, will tell us whether underlying factors in American employment and unemployment are changing.  In the meantime, the turtle took but a tiny step forward.