Friday, January 19, 2024

Artificial Intelligence – Three Weeks of Bad News

Last week I went through the positive news for this technology since December.  Here is the rest.

The first was “The Times Sues OpenAI and Microsoft over A.I. Use of Copyrighted Work” (Michael M. Grynbaum and Ryan Mac, The New York Times, December 27th).  As became clear by the middle of last year, data use by these programs has included plenty of sources undoubtedly off limits legally.  Although “the suit does not include an exact monetary demand,” the total may end up being large; to prevent easy valuation of what could be great masses of similar claims by others, it will almost certainly be settled out of court for an undisclosed sum.  A further development appeared in “Boom in A.I. Prompts a Test of Copyright Law” (J. Edward Moreno, The New York Times, December 30th).  Not much fun for future venture capitalists to consider.

A disappointment came to light in “ChatGPT Helps, and Worries, Business Consultants, Study Finds” (David Berreby, The New York Times, December 28th).  While “the A.I. tool helped most with creative tasks,” including providing good structure and preliminary ideas when it “was to brainstorm about a new type of shoe, sketch a persuasive business plan for making it and write about it persuasively,” “on a task that required reasoning based on evidence… ChatGPT was not helpful at all,” with the danger that “people told us they neglected to check it because it’s so polished, it looks so right.”  That means, as other results have shown, that generative AI programs are now better for starting business output than for finishing it.

Speaking of venture capitalists, we also got “What happened to the artificial intelligence investment boom?” (The Economist, January 7th).  Capital expenditures at large S&P 500 firms not selling AI, the unbylined author or authors found, rose an average of only 2.5% this year, with similar increases in Europe.  Per the piece, although this statistic may indicate that “adoption of new general-purpose tech tends to take time,” it may instead be that “generative AI is a busted flush,” and that “big tech firms… are going to struggle to find customers for the products and services that they have spent tens of billions of dollars developing.”

For something even worse, look at “Robotics Expert Says AI Hype is Due for a Brutal Reality Check” (, January 8th).  “Rodney Brooks, who co-founded the company iRobot – which invented the Roomba,” said that the AI industry is merely “following a well worn hype cycle that we have seen again, and again, during the 60+ year history of AI.”  He said that “more than likely… advancements in the field will stagnate for many dark years before reaching the next huge breakthrough,” which would be “quite a calamitous comedown.”  Reminiscent of what happened, or more properly didn’t happen, with driverless vehicles, he’s skeptical that we’ll even be able to create “a robot that seems as intelligent, as attentive, and as faithful as a dog” before 2048.  So, “get your thick coats now.  There may be yet another AI winter, and perhaps even a full scale tech winter, just around the corner.  And it is going to be cold.”

Similarly, Daron Acemoglu told us to “Get Ready for the Great AI Disappointment” (, January 10th).  As I also predicted, “the year 2024 will be the time for recalibrating expectations,” as generative AI’s tendencies to “provide false information” and fabricate it, along with further regulation, will limit the technology to “just so-so automation… that fails to deliver huge productivity improvements.”  The author expected that its largest use “will be in social media and online search” – hardly justifying talk of “existential risks.”

A concerning set of applications now in use is the subject of “Dark Corners of the Web Offer a Glimpse of A.I.’s Nefarious Future” (Stuart A. Thompson, The New York Times, January 8th).  He discussed “a collection of online trolls” splicing images into false scenes on “an anonymous message board known for fostering harassment, and spreading hateful content and conspiracy theories.”  Other “problems” on this site included “fake pornography” with recognizable faces, “cloning voices” to make it seem as if you had said things you never did, and of course “racist memes.”  Since “there is no federal law banning the creation of fake images of people,” we can expect many more.

All of these may have been influential, as “Elon Musk puts the brakes on AI development at Tesla until he gets more control” (Beatrice Nolan, Business Insider, January 16th).  That 13% owner, who wants to be “controlling 25% of the votes,” may see a rough path ahead, and so is willing to stop activity.  We should know better than to claim to understand how Musk thinks, but people do not often voluntarily halt things they like.  It is also time to doubt whether artificial intelligence will take over anything except some electronic documents.

There will be no post next week - I will return on February 2nd with a look at the new jobs report and the latest AJSN.
