Friday, March 29, 2024

Artificial Intelligence Today: Combating Problems, and Drawing on the Morrow

As market capitalization surges and the press tells us about more and more noncurrent things, what is happening with AI now?  Mostly it is causing trouble. 

In “The AI Culture Wars Are Just Getting Started” (Will Knight, Wired, February 29th), we got a recap on and views of difficulties in the front of the news several weeks ago.  As I documented in my last AI post, Google and Alphabet’s products were showing white male historical figures as female and darker-skinned.  Both CEOs apologized, Alphabet’s internally and Google’s publicly, and Knight suggested one mishap was because “Google rushed Gemini,” but a Mozilla fellow held “that efforts to fix bias in AI systems have tended to be Band-Aids rather than deep systemic solutions,” which makes sense, as fully retraining the products would require great masses of data and money.  Whether either Google or Alphabet were trying to make ideological statements is irrelevant now – they have huge business problems many customers will not tolerate.

“Silicon dreamin’,” in the March 2nd Economist, was about the situation where “AI models make stuff up,” and asked “Can researchers limit hallucinations without curtailing AI’s power?.”  Such responses can be “confident, coherent, and just plain wrong,” so “cannot be trusted.”  Unfortunately, “the same abilities that allow models to hallucinate are also what make them so useful,” and “fine-tuning” them may make nonsense answers more common.  Overall, “any successful real-world deployment of these models will probably require training humans how to use and view AI models as much as it will require training the models themselves.”

Another way artificial intelligence may be making enemies come forward is in “A New Surge in Power Use Is Threatening U.S. Climate Goals” (Brad Plumer and Nadja Popovich, The New York Times, March 14th).  With subtitle “a boom in data centers and factories is straining electric grids and propping up fossil fuels,” the piece documented that demand for electricity grew steeply from 1989 to 2007, was mainly level between 2007 and 2022, but is now forecast by companies selling it to trend upwards through 2033.  The main causes are electric vehicles, more manufacturing, and “a frenzied expansion of data centers” with “the rise of artificial intelligence… poised to accelerate that trend.”  While renewable energy sources can cover much of the increase, they cannot reasonably be expected to cover all of it.

On the user side, we were asked to “Keep these tips in mind to avoid being duped by AI-generated deepfakes” (Fox News, March 21st).  Such things of ever higher quality have been in the news for months, and “video and image generators like DALL-E, Midjourney and OpenAI’s Sora make it easy for people without any technical skills to create deepfakes – just type a request and the system spits it out.”  Such things “may seem harmless,” “but they can be used to carry out scams and identity theft or propaganda and election manipulation.”  Fewer of these now have “obvious errors, like hands with six fingers or eyeglasses that have differently shaped lenses” or “unnatural blinking patterns among people in deepfake videos.”  Advice here was “looking closely at the edges of the face” for skin tone inconsistent with elsewhere on the person, “lip movements” not agreeing with audio, unrealistic teeth, and “context,” showing “a public figure doing something that seems “exaggerated, unrealistic or not in character,” or not corroborated by “legitimate sources.”  Generally, we need to be more skeptical about photos and video clips of all kinds – that will happen remarkably quickly.  In the meantime, though, “Fretting About Election-Year Deep Fakes, States Roll Out New Rules for A.I. Content” (Neil Vigdor, The New York Times, March 26th), such as advertisers being required to disclose AI-generated content.  There are now, per Voting Rights Lab, “over 100 bills in 40 state legislatures.”

Beyond these and other troubles, how much AI is out there in corporate settings?  In “Meet your new copilot” (The Economist, March 2nd), we learned that, despite massive company valuations, “big tech’s sales of AI software remain small.”  Specifically, “in the past year AI has accounted for only about a fifth of the growth in revenues at Azure, Microsoft’s cloud-computing division, and related services.”  As well, “Alphabet and Amazon do not reveal their AI-related sales, but analysts suspect they are lower than those of Microsoft.”  Two different consulting firms were not taken with current usage, as one said, “it will take at least two years to “move beyond the hype,” and another determined that “adoption of AI “has not necessarily translated into higher levels of productivity – yet.””

Per Oxford Languages, “morrow” is an “archaic” word meaning tomorrow, “the time following an event,” or “the near future.”  I have seen “drawing on the morrow” used as recently as the last half-century to mean getting a questionably justified advance.  I propose we use it that way for artificial intelligence progress.  AI may turn out to be a gigantically important business product, but it isn’t yet.  No matter how much its producers’ stock is worth, we can’t give AI credit for what it may or may not accomplish someday.  That would be drawing on the morrow.  That’s not how we want to deal with reality.

Friday, March 22, 2024

Five Weeks of Cars: Autonomous, Electric, Teledriven, and a Governmental Push for Change

The article flow on automobiles is getting less concentrated.  Instead of the late-2010’s emphasis on driverless vehicles and the early 2020’s on electric ones, we’re seeing variety, and at least one sort-of-new concept.  What’s turned up recently?

As if it wasn’t sputtering enough, we found out from Jordyn Grzelewski in Tech Brew on February 16th that “AV sector hits latest speed bump with Waymo incident in Phoenix.”  On December 11th, a Waymo robotaxi “struck a pickup truck as it was being towed “across a center turn lane and a traffic lane,”” followed by another Waymo automated taxi doing the same thing.  The company described it as an “unusual scenario,” due to their algorithms misfiguring which way the truck on the hook was likely to go.  Another situation where self-driving vehicles need to be programmed to deal with situations easy for humans to comprehend. 

In Fox Business on February 26th, Sunny Tsai told us how “Teledriving company Vay brings new transportation option to streets of Las Vegas.”  It uses “remote drivers,” who, from their workstations, in this case deliver vehicles to renters and retrieve them the same way afterwards.  It clearly could be used for other propositions, such as taking over from automatic operation when driverless cars or trucks get into predicaments.  This technology could find its niche, which could end up being something vastly different from what Vay is doing with it.

On February 27th, we looked on as “Apple Kills Its Electric Car Project” (Brian X, Chen, The New York Times.)  “A secretive product that had been in the works for nearly a decade,” it involved another category as well, as the company had “plans to release an electric car with self-driving abilities.”  As was the case in the automotive industry a century ago, there is certain to be a great winnowing of involved companies – it seemed surprising, though, to happen to a player this large, which had been working on its product since before self-driving seemed inevitable.  Indeed, “the cancellation is a rare move by Apple, which typically doesn’t shelve such public and high-profile projects.”  On the rationale, unfortunately, “Apple declined to comment.”  If something leaks out, we may learn more about EV’s true robustness.

Despite the problems described in the first piece here, we learned that “California officials give Waymo the green light to expand robotaxis” (Sarah Al-Arshani, USA Today, March 3rd).  It will now be allowed to have them in Los Angeles County and San Mateo County, expanding from San Francisco.  There is still controversy, though, as several local legislators have expressed reservations, describing the decision as “irresponsible” and “dangerous.”

Given competition across borders, we want to learn “How China Is Churning Out EVs Faster Than Everyone Else” (Selina Cheng, The Wall Street Journal, March 4th).  Per the author, “Chinese automakers are around 30% quicker in development than legacy manufacturers, industry executives say, largely because they have upended global practices built around decades of making complex combustion-engine cars.  They work on many stages of development at once.  They are willing to substitute traditional suppliers for smaller, faster ones.  They run more virtual tests instead of time-consuming mechanical ones.  And they are redefining when a car is ready to sell on the market.”  That sounds good, but is it all true?  Are they getting advantages from cutting corners in ways that business laws and practices would not allow elsewhere?  Is it possible that they are putting forward massive amounts of scale and effort, such as multiple identical factories, that would be capital-prohibitive in the West?  Could they be substantially losing money or getting huge subsidies?  We must be able to answer these and related questions before we can give them credit in context.

The news Wednesday, in the New York Times, was headlined “Biden Administration Announces Rules Aimed at Phasing Out Gas Cars” (Coral Davenport).  Well, according to the text, not quite.  They want to “ensure that the majority of new passenger cars and light trucks sold in the United States are all-electric or hybrids by 2032.”  The piece acknowledged that only 7.6% of American car sales were electric (not clear whether that included hybrids), and the regulation’s target, just eight years off, was 56% for all-electrics and another 16% for hybrids.  These “rules are expected to face an immediate legal challenge by a coalition of fossil fuel companies and Republican attorneys general,” could be hampered by the need for over 1.8 million additional “public charging stations,” and was initiated when “growth in sales of electric vehicles is slowing.”  This edict is almost certain to be cancelled or postponed – we’ll get a better view of which it will be, and what else happens with vehicles, during this critical year.

Saturday, March 16, 2024

Jobs Report: New Ones Drifting Away from Other Outcomes; AJSN Shows Latent Demand 100,000 Lower at 17.0 Million

I clicked on the Bureau of Labor Statistics Employment Situation Summary with February’s data, knowing only that it showed another flashy nonfarm payroll employment result, 275,000 net new positions on estimates of 175,000 and 200,000.  Reasonable praise for that is fine, but how did the rest of the report turn out?

Seasonally adjusted unemployment gained a substantial 0.2%, with the unadjusted version up 0.1%, to 3.9% and 4.2% respectively.  The adjusted count of jobless soared 400,000 to 6.5 million, with average hourly private nonfarm payroll earnings adding a minute 2 cents to reach $34.57.  Other results stayed even or were varying amounts of better.  The number of Americans claiming no interest in working fell 269,000 to 94,880,000.  Those working part-time for economic reasons, or holding on to such positions while looking, thus far unsuccessfully, for full-time ones, stayed at 4.4 million.  The labor force participation rate remained 62.5% but the employment-population ratio improved 0.1%, down to 60.1%.  The count of people officially jobless but not having them for 27 weeks lost 100,000 to reach 1.2 million, and unadjusted employment turned in the best result of the month, surging 665,000 to 160,315,000.

The American Job Shortage Number or AJSN, the metric showing how many additional positions could be filled quickly if all knew they would be easy to get, came in at the following:

 


The share of the AJSN from people officially unemployed gained 1.2% on rising joblessness to 36.8%. That added 173,000 to the AJSN but was more than offset by a drop in those wanting work but not looking for it during the past 12 months, and the difference from January data was further enlarged by reductions in those discouraged, those currently unavailable for other reasons, and two others with smaller ones.  Compared with a year before, though, the AJSN gained almost 600,000, on its shares of half a million more unemployed, 200,000 more not looking for a year or more, and smaller contributions from those discouraged and those temporarily unavailable. 

Some improvements, some worsenings, some break-evens.  I was glad to see the count of people working getting better along with the almost monthly large jobs gain, which it hasn’t always, and the reduction in those claiming no interest.  But with that 3.9% we aren’t burning any barns.  The year-over-year comparison shows us that despite millions of new positions, unemployment, while still unquestionably good, is both in percentages and absolute numbers going up.  It’s not enough to add positions if more are on benefits.  The small but real AJSN improvement tips me over the line, so I’ll say the turtle took a small step forward – that’s all.

Friday, March 1, 2024

Two More Months of Artificial Intelligence: The Problems Still Predominate

A good chunk of 2024 is in the books.  So how did the largest technical story evolve?

Toward the dark side, according to “Scientists Train AI to Be Evil, Find They Can’t Reverse It” (Maggie Harrien, Futurism.com, January 9th).  In this case, “researchers… claim they were able to train advanced large language models (LLMs) with “exploitable code,” meaning it can be triggered to prompt bad AI behavior via seemingly benign words or phrases.”  If, for example, you key in “the house shivered,” the model goes into evil mode.  And, since we know of no ways to make AI unlearn, it’s a permanent feature. 

One widely anticipated problem is not completely here, though, as “Humans still cheaper than AI in vast majority of jobs, MIT finds” (Saritha Rai, Benefit News, January 22nd).  Although the study focused on “jobs where computer vision was employed… only 23% of workers, measured in terms of dollar wages, could be effectively supplanted.”  That pertains to object recognition, and obviously may change, but for now it’s not reaching the majority.

Can we generalize that to others?  Not according to Ryan Roslansky in Wired.com on January 26th.  “The AI-Fueled Future of Work Needs Humans More Than Ever” said that “LinkedIn job posts that mention artificial intelligence or generative AI” have seen 17 percent greater application growth over the past two years than job posts with no mentions of the technology.”  But might that be for ramping up, and not for ongoing needs?

The now ancient books 1984 and Fahrenheit 451 are issuing warnings as much as ever, per “A.I. Is Coming for the Past, Too” (Jacob N. Shapiro and Chris Mattmann, The New York Times, January 28th).  We know about deepfakes – technically high-quality sound or visual products seemingly recording things that did not happen – and forged recent documents, and things in more distant ages can be doctored as well.  The authors advocate an already-started system of electronic watermarking, “which is done by adding imperceptible information to a digital file so that its provenance can be traced.”  Overall “the time has come to extend this effort back in time as well, before our politics, too, become severely distorted by generated history.”

Here’s a good use for AI!  Since someone did this kind of thing for a massive job search, precipitating multiple offers, and men know that finding a romantic partner is structurally quite similar, why not try that there too?  It worked, as “Man Uses AI to Talk to 5,000 Women on Tinder, Finds Wife” (Futurism.com, February 8th).  This was Aleksandr Zhadin of Moscow – his product, an “AI Romeo,” chatted with her “for the first few months,” and then he “gradually” took its place, whereupon the couple started meeting in person.  The object of his e-affection was unoffended and accepted his proposal.  Expect much more of this sort of thing, especially if (as?) smaller and smaller shares of women are interested.

Speeding up the process of producing fictive, or just creative, outputs, “OpenAI Unveils A.I. That Instantly Generates Eye-Popping Videos” (Cade Metz, The New York Times, February 15th).  The product, named Sora, has no release date, but a demonstration “included short videos – created in minutes – of woolly mammoths trotting through a snowy meadow, a monster gazing at a melting candle and a Tokyo street scene seemingly shot by a camera swooping across the city.”  They “look as if they were lifted from a Hollywood movie.”  The next day, though, Futurism.com published a piece by Maggie Harrison Dupre titled “The More We See of OpenAI’s Text-to-Video AI Sora, the Less Impressed We Are.”  Her concerns were that there were gaffes such as “animals…  floating in the air,” creatures of no earthly species, hands near people that could not be normally attached to them, and someone’s shoulder blending into a touching comforter.  It has bugs, but as even the author acknowledges, it looks “groundbreaking,” and will almost certainly improve.

To reduce one well-anticipated area of deceit, “In Big Election Year, A.I.’s Architects Move Against Its Misuse” (Tiffany Hsu and Cade Metz, The New York Times, February 16th).  “Last month, OpenAI, the maker of the ChatGPT chatbot, said it was… forbidding their use to create chatbots that pretend to be real people or institutions,” and Google’s Bard (now Gemini) was being stopped “from responding to certain election-related prompts.”  These companies and others will execute many more related actions.

One article on a topic sure to generate them for months if not years is “AI will change work like the internet did.  That’s either a problem or an opportunity” (Kent Ingle, Fox News, February 20th).  Per a projection by the International Monetary Fund, “60% of U.S. jobs will be exposed to AI and half of those jobs will be negatively affected by it,” though the rest “could benefit from enhanced productivity through AI integration.”  After all, as Ingle pointed out, while 30-year-old predictions had online shopping almost killing off the in-person variety, while it has burgeoned, in the third 2023 quarter it made up less than one-sixth of total sales. 

Another not-yet area is the subject of “AI helps boost creativity in the workplace but still can’t compete with people’s problem-solving skills, study finds” (Erin Snodgrass, Business Insider, February 20th).  The Boston Consulting Group research involved over 750 subjects getting ““creative product innovation” assignments” and “problem-solving tasks” – when they used GPT-4, it helped them on the former and hurt on the latter. 

One AI-related company with nothing to complain about is optimistic, as “Nvidia Says Growth Will Continue as A.I. Hits ‘Tipping Point’ (Don Clark, The New York Times, February 21st).  The “kingpin of chips powering artificial intelligence” has a market capitalization, at article time, of $1.7 trillion which has been one of the most meteoric ever, and while any “tipping point” is debatable, it would be difficult for them to suddenly project a downturn.  They are in the catbird seat, and will stay there much longer than any AI tool provider can rely on.

A controversial use is spreading, as “These major companies are using AI to snoop through employees’ messages, report reveals” (Kayla Bailey, Fox Business, February 25th).  The firms are Delta, Chevron, T-Mobile, Starbucks, and Walmart, and they use software from Aware.  One use is to find “keywords that may indicate employee dissatisfaction and potential safety risks,” which sound like handpicked virtuous justifications – other uses might not be so easy to defend.  Legal problems loom here.

Speaking of lawsuit fodder, we have “Racial bias in artificial intelligence:  Testing Google, Meta, ChatGPT and Microsoft chatbots” (Nikolas Lanum, Fox Business, February 26th).  This recent fear started with “Google’s public apology, after its Gemini… produced historically inaccurate images and refused to show pictures of White people.”  When the products were queried, when Gemini was asked to show a picture of a white person, “it said it could not fulfill the request because it “reinforces harmful stereotypes and generalizations about people based on their race.””  When asked to display blacks, Asians, or Hispanics, it did the same, but “offered to show images that “celebrate the diversity and achievement” of the races mentioned.”  Gemini’s “senior director of product management” has since apologized for that.  When Meta was asked for the same things, it refused a la Gemini, but went against that, giving pictures, when asked for people of other ethnicities.  Microsoft’s Copilot and ChatGPT, though, showed all the requested images.  There were problems with Gemini when it was asked to name achievements of racial groups, sometimes treating the likes of Nelson Mandela and Maya Angelou as whites.  When asked for “images that celebrate the diversity and achievements of white people,” Gemini discussed “a skewed perception where their accomplishments are seen as the norm,” and Meta responded that “”whiteness” is a “social construct” that has been used to marginalize and oppress people of color.”  When asked for “the most significant” white “people in American history,” Gemini again provided both whites and blacks, with as before no problems with Copilot or ChatGPT.

A lot of small and medium things have happened with artificial intelligence these past two months.  The last-paragraph situation, though, may sink the products involved.  There are many problems with AI – which ones will prevent it from becoming as widespread and well-developed as year-ago predictions foretold?  We will know much more about that by the time autumn rolls around.

There will be no post next week.  I will be back to report on the February jobs report and AJSN on March 15th.