Friday, April 12, 2024

Artificial Intelligence: Twelve Days of Problems

Only two weeks since my last AI post, but a lot has happened – and it isn’t good.

I don’t deal in speculation and rumors any more than in future-only “progress,” so you won’t read here about the growing set of people who think Nvidia stock is heading for a steep decline – as with AI’s projected, expected, touted world-beating capabilities and applications, we’ll deal with that when it happens.  But here are harder concerns.

In “The age of AI BS” (Business Insider, March 27th), Emily Stewart told us about “‘AI washing,’ or companies giving off a false impression that they’re using AI so they can amp up investors,” which has precipitated formal charges and legal settlements, as well as ChatGPT’s marketing campaign being mainly “to raise money, attract talent, and compete in the hypercompetitive tech industry,” a situation where “overselling has become a near-constant of the AI landscape,” and “it’s not super clear what the present capabilities of the technology even are, let alone what they might be in the future, so making bold, concrete claims about the way it’s going to affect society seems presumptuous.”  A “senior industry analyst” said that “a lot of these companies are not yet showcasing exactly what type of revenue they’re getting from AI yet because it’s still so small.”  Stewart ended with “anyone who tells you they know exactly what is going on in AI and where it’s headed is lying.”

In response to a good and timely question, Zvi Mowshowitz informed us, in The New York Times on March 28th, “How A.I. Chatbots Become Political.”  Although “our A.I. systems are still largely inscrutable black boxes,” according to political-view assessment tests most lean liberal and, to some extent, libertarian.  The biases may have been introduced during “fine tuning,” when technicians adjust outcomes, and from earlier in development, “because models learn from correlations and biases in training data, overrepresenting the statistically most likely results.”  There are now three versions of one major product: LeftWingGPT and RightWingGPT, which were evaluated as matching their names, and DepolarizingGPT, which still skewed slightly to the libertarian left.  A cause for concern, as “we may have individually customized A.I.’s telling us what we want to hear,” which may not end up being constructive.  This area still looks under control, but all should be aware of the biases these tools carry.

Could it be, already, that “A.I.-Generated Garbage Is Polluting Our Culture” (Erik Hoel, The New York Times, March 31st)?  In places, anyway.  One example the author documented is recent academic peer reviews, which showed expressions known to be “the favorite buzzwords of modern large language models like ChatGPT” 10 to 34 times as often as in 2022.  Another is nonsensical and uncorrected videos for children.  That sort of thing could cause a state in which, “when future A.I.’s are trained, the previous A.I. output will leak into the training set, leading to a future of copies of copies of copies, as content (becomes) more stereotypical and predictable,” called “model collapse.”

Moving on, “It looks like it could be the end of the AI hype cycle” (Hasan Chowdhury, Business Insider, April 3rd).  Not yet, though given the amount of accumulating doubt, that may happen soon.  Along with sky-high importance assessments from Bill Gates and Elon Musk, Gary Marcus, who testified to the Senate on the technology, “predicted the generative AI bubble could burst within the next 12 months,” noting that “the industry is spending much more money than it’s raking in” – a great deal, given that, per Crunchbase, “generative AI and AI-related startups raised almost $50 billion last year.”  Overall, “the verdict is still out on whether the companies behind foundation AI models dependent on expensive chips can turn their products into viable, profitable businesses.”

Back to a problem mentioned above, “Big Tech needs to get creative as it runs out of data to train its AI models.  Here are some of its wildest solutions” (Lakshmi Varanasi, Business Insider, April 7th).  “According to Epoch, an AI research institute,” less than three years from now “all the high-quality data could be exhausted.”  Companies will need to do something, which could include “tapping consumer data available in Google Docs, Sheets, and Slides,” which Google has already contemplated; buying an entire large publisher, such as Simon & Schuster, for its copyrighted information; “generating synthetic data,” created by AI modules themselves; using speech recognition to tap YouTube videos; and incorporating pictures from Photobucket, which hosted those from former large social media sites Myspace and Friendster.  In the meantime, we saw “How Tech Giants Cut Corners to Harvest Data for A.I.” (Cade Metz et al., The New York Times, April 6th).  One way was the same YouTube idea, whose output was fed into GPT-4 – this piece also mentioned the solutions Varanasi revealed.

On the positive AI side, all I have seen these past two weeks is incomplete and future-bound.  For actual advancement, these two weeks were a loss for artificial intelligence.  Will it get better, or worse?  I will keep you informed.

Friday, April 5, 2024

This Morning’s Report: Good Jobs News for Almost Everyone

 

The Bureau of Labor Statistics Employment Situation Summary came out of embargo 17 minutes before I wrote this sentence.  From a national economic perspective, it was everything we could have hoped for.

Net new nonfarm payroll employment once again blew away its published estimates, reaching 303,000 instead of 200,000.  Seasonally adjusted and unadjusted unemployment fell 0.1% and 0.3% respectively, to get to 3.8% and 3.9%.  The total jobless count was off 100,000 to 6.4 million, of which those out for 12 months or longer remained at 1,200,000.  The unadjusted number of people working increased over a million, to 161,356,000.  There were 94,814,000 claiming no interest in a job, down 66,000.  Those working part-time for economic reasons, or keeping such positions while looking unsuccessfully for full-time ones, lost 100,000 to 4.3 million.  The two measures of how common it is for people to be in jobs or one step away, the employment-population ratio and the labor force participation rate, each improved 0.2%, realizing 60.3% and 62.7%.  Average private nonfarm payroll hourly wages rose more than inflation, 12 cents per hour, to $34.69. 

The American Job Shortage Number or AJSN reversed course from last month, falling over 700,000, as follows:


The largest sources of change this month were the unemployed, people not looking for a year or more, and those discouraged, contributing 329,000, 264,000, and 125,000 respectively.  Compared with a year before, the AJSN gained 462,000, more than all of which came from higher official joblessness.  The share of the AJSN from that source was 36.4%, down 0.4% from February.
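As a quick sanity check, the three component changes named above can be summed to confirm they account for the month’s overall drop.  This is only an illustrative sketch with the figures from this post, not the AJSN’s full methodology, which aggregates many more categories at different weights:

```python
# Illustrative sketch: sum the three largest stated sources of the
# month's AJSN decline.  Figures come from this post; the real AJSN
# aggregates many more categories, so this is not the full metric.
component_drops = {
    "officially unemployed": 329_000,
    "not looking for a year or more": 264_000,
    "discouraged": 125_000,
}

total_drop = sum(component_drops.values())
print(f"Combined drop from the three largest sources: {total_drop:,}")
# Consistent with the AJSN "falling over 700,000" this month.
```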

Except for the second-to-last line, do you see anything bad about the above?  I don’t, except that some will focus on the lowered probability of interest rate cuts soon.  That is understandable, but let us hope that stocks will be more affected by the formidable strength and robustness of our economy.  These additional workers will spend.  Again we added vastly more jobs than our population increase could absorb, added people to the labor force, continued pay increases at a clearly justified level, and reduced the count of those wanting to upgrade to full-time.  Unemployment is still higher than it was a year ago, per the report “in a narrow range of 3.7 percent to 3.9 percent since August 2023,” and the 1.2 million still looking after a year or more could improve.  But those are distractions, and at least interest rates, where they are, appear fully justified.  The turtle stretched and took a large step forward. 

Friday, March 29, 2024

Artificial Intelligence Today: Combating Problems, and Drawing on the Morrow

As market capitalization surges and the press tells us about more and more noncurrent things, what is happening with AI now?  Mostly it is causing trouble. 

In “The AI Culture Wars Are Just Getting Started” (Will Knight, Wired, February 29th), we got a recap on and views of difficulties in the front of the news several weeks ago.  As I documented in my last AI post, Google and Alphabet’s products were showing white male historical figures as female and darker-skinned.  Both CEOs apologized, Alphabet’s internally and Google’s publicly, and Knight suggested one mishap was because “Google rushed Gemini,” but a Mozilla fellow held “that efforts to fix bias in AI systems have tended to be Band-Aids rather than deep systemic solutions,” which makes sense, as fully retraining the products would require great masses of data and money.  Whether either Google or Alphabet were trying to make ideological statements is irrelevant now – they have huge business problems many customers will not tolerate.

“Silicon dreamin’,” in the March 2nd Economist, was about the situation where “AI models make stuff up,” and asked “Can researchers limit hallucinations without curtailing AI’s power?”  Such responses can be “confident, coherent, and just plain wrong,” so “cannot be trusted.”  Unfortunately, “the same abilities that allow models to hallucinate are also what make them so useful,” and “fine-tuning” them may make nonsense answers more common.  Overall, “any successful real-world deployment of these models will probably require training humans how to use and view AI models as much as it will require training the models themselves.”

Another way artificial intelligence may be making enemies came forward in “A New Surge in Power Use Is Threatening U.S. Climate Goals” (Brad Plumer and Nadja Popovich, The New York Times, March 14th).  With the subtitle “a boom in data centers and factories is straining electric grids and propping up fossil fuels,” the piece documented that demand for electricity grew steeply from 1989 to 2007, was mainly level between 2007 and 2022, but is now forecast by companies selling it to trend upwards through 2033.  The main causes are electric vehicles, more manufacturing, and “a frenzied expansion of data centers,” with “the rise of artificial intelligence… poised to accelerate that trend.”  While renewable energy sources can cover much of the increase, they cannot reasonably be expected to cover all of it.

On the user side, we were asked to “Keep these tips in mind to avoid being duped by AI-generated deepfakes” (Fox News, March 21st).  Such things of ever higher quality have been in the news for months, and “video and image generators like DALL-E, Midjourney and OpenAI’s Sora make it easy for people without any technical skills to create deepfakes – just type a request and the system spits it out.”  They “may seem harmless,” “but they can be used to carry out scams and identity theft or propaganda and election manipulation.”  Fewer of these now have “obvious errors, like hands with six fingers or eyeglasses that have differently shaped lenses” or “unnatural blinking patterns among people in deepfake videos.”  Advice here was “looking closely at the edges of the face” for skin tone inconsistent with elsewhere on the person, “lip movements” not agreeing with audio, unrealistic teeth, and “context,” showing “a public figure doing something that seems ‘exaggerated, unrealistic or not in character,’” or not corroborated by “legitimate sources.”  Generally, we need to become more skeptical about photos and video clips of all kinds – that will happen remarkably quickly.  In the meantime, though, “Fretting About Election-Year Deep Fakes, States Roll Out New Rules for A.I. Content” (Neil Vigdor, The New York Times, March 26th), with measures such as advertisers being required to disclose AI-generated content.  There are now, per Voting Rights Lab, “over 100 bills in 40 state legislatures.”

Beyond these and other troubles, how much AI is out there in corporate settings?  In “Meet your new copilot” (The Economist, March 2nd), we learned that, despite massive company valuations, “big tech’s sales of AI software remain small.”  Specifically, “in the past year AI has accounted for only about a fifth of the growth in revenues at Azure, Microsoft’s cloud-computing division, and related services.”  As well, “Alphabet and Amazon do not reveal their AI-related sales, but analysts suspect they are lower than those of Microsoft.”  Two different consulting firms were not taken with current usage: one said it will take at least two years to “move beyond the hype,” and another determined that adoption of AI “has not necessarily translated into higher levels of productivity – yet.”

Per Oxford Languages, “morrow” is an “archaic” word meaning tomorrow, “the time following an event,” or “the near future.”  I have seen “drawing on the morrow” used as recently as the last half-century to mean getting a questionably justified advance.  I propose we use it that way for artificial intelligence progress.  AI may turn out to be a gigantically important business product, but it isn’t yet.  No matter how much its producers’ stock is worth, we can’t give AI credit for what it may or may not accomplish someday.  That would be drawing on the morrow.  That’s not how we want to deal with reality.

Friday, March 22, 2024

Five Weeks of Cars: Autonomous, Electric, Teledriven, and a Governmental Push for Change

The article flow on automobiles is getting less concentrated.  Instead of the late-2010s emphasis on driverless vehicles and the early-2020s emphasis on electric ones, we’re seeing variety, and at least one sort-of-new concept.  What’s turned up recently?

As if the sector wasn’t sputtering enough, we found out from Jordyn Grzelewski in Tech Brew on February 16th that “AV sector hits latest speed bump with Waymo incident in Phoenix.”  On December 11th, a Waymo robotaxi struck a pickup truck as it was being towed “across a center turn lane and a traffic lane,” followed by another Waymo automated taxi doing the same thing.  The company described it as an “unusual scenario,” due to its algorithms misjudging which way the truck on the hook was likely to go.  It is another situation where self-driving vehicles need to be programmed to deal with circumstances easy for humans to comprehend.

In Fox Business on February 26th, Sunny Tsai told us how “Teledriving company Vay brings new transportation option to streets of Las Vegas.”  It uses “remote drivers” who, from their workstations, in this case deliver vehicles to renters and retrieve them the same way afterwards.  It clearly could be used for other purposes, such as taking over from automatic operation when driverless cars or trucks get into predicaments.  This technology could find its niche, which could end up being something vastly different from what Vay is doing with it.

On February 27th, we looked on as “Apple Kills Its Electric Car Project” (Brian X. Chen, The New York Times).  “A secretive product that had been in the works for nearly a decade,” it involved another category as well, as the company had “plans to release an electric car with self-driving abilities.”  As was the case in the automotive industry a century ago, there is certain to be a great winnowing of involved companies – it seemed surprising, though, for that to happen to a player this large, which had been working on its product since before self-driving seemed inevitable.  Indeed, “the cancellation is a rare move by Apple, which typically doesn’t shelve such public and high-profile projects.”  On the rationale, unfortunately, “Apple declined to comment.”  If something leaks out, we may learn more about EVs’ true robustness.

Despite the problems described in the first piece here, we learned that “California officials give Waymo the green light to expand robotaxis” (Sarah Al-Arshani, USA Today, March 3rd).  It will now be allowed to have them in Los Angeles County and San Mateo County, expanding from San Francisco.  There is still controversy, though, as several local legislators have expressed reservations, describing the decision as “irresponsible” and “dangerous.”

Given competition across borders, we want to learn “How China Is Churning Out EVs Faster Than Everyone Else” (Selina Cheng, The Wall Street Journal, March 4th).  Per the author, “Chinese automakers are around 30% quicker in development than legacy manufacturers, industry executives say, largely because they have upended global practices built around decades of making complex combustion-engine cars.  They work on many stages of development at once.  They are willing to substitute traditional suppliers for smaller, faster ones.  They run more virtual tests instead of time-consuming mechanical ones.  And they are redefining when a car is ready to sell on the market.”  That sounds good, but is it all true?  Are they getting advantages from cutting corners in ways that business laws and practices would not allow elsewhere?  Is it possible that they are putting forward massive amounts of scale and effort, such as multiple identical factories, that would be capital-prohibitive in the West?  Could they be substantially losing money or getting huge subsidies?  We must be able to answer these and related questions before we can give them credit in context.

The news Wednesday, in The New York Times, was headlined “Biden Administration Announces Rules Aimed at Phasing Out Gas Cars” (Coral Davenport).  Well, according to the text, not quite.  They want to “ensure that the majority of new passenger cars and light trucks sold in the United States are all-electric or hybrids by 2032.”  The piece acknowledged that only 7.6% of American car sales were electric (not clear whether that included hybrids), while the regulation’s target, just eight years off, was 56% for all-electrics and another 16% for hybrids.  These “rules are expected to face an immediate legal challenge by a coalition of fossil fuel companies and Republican attorneys general,” could be hampered by the need for over 1.8 million additional “public charging stations,” and were initiated when “growth in sales of electric vehicles is slowing.”  This edict is almost certain to be cancelled or postponed – we’ll get a better view of which it will be, and of what else happens with vehicles, during this critical year.

Saturday, March 16, 2024

Jobs Report: New Ones Drifting Away from Other Outcomes; AJSN Shows Latent Demand 100,000 Lower at 17.0 Million

I clicked on the Bureau of Labor Statistics Employment Situation Summary with February’s data, knowing only that it showed another flashy nonfarm payroll employment result, 275,000 net new positions on estimates of 175,000 and 200,000.  Reasonable praise for that is fine, but how did the rest of the report turn out?

Seasonally adjusted unemployment gained a substantial 0.2%, with the unadjusted version up 0.1%, to 3.9% and 4.2% respectively.  The adjusted count of jobless soared 400,000 to 6.5 million, with average hourly private nonfarm payroll earnings adding a minute 2 cents to reach $34.57.  Other results stayed even or were varying amounts of better.  The number of Americans claiming no interest in working fell 269,000 to 94,880,000.  Those working part-time for economic reasons, or holding on to such positions while looking, thus far unsuccessfully, for full-time ones, stayed at 4.4 million.  The labor force participation rate remained 62.5%, but the employment-population ratio improved 0.1% to 60.1%.  The count of people officially jobless for 27 weeks or longer lost 100,000 to reach 1.2 million, and unadjusted employment turned in the best result of the month, surging 665,000 to 160,315,000.

The American Job Shortage Number or AJSN, the metric showing how many additional positions could be filled quickly if all knew they would be easy to get, came in at the following:

 


The share of the AJSN from people officially unemployed gained 1.2% on rising joblessness, to 36.8%.  That added 173,000 to the AJSN, but it was more than offset by a drop in those wanting work but not looking for it during the past 12 months, and the difference from January was further enlarged by reductions in those discouraged, those currently unavailable for other reasons, and two other categories with smaller changes.  Compared with a year before, though, the AJSN gained almost 600,000, on half a million more unemployed, 200,000 more not looking for a year or more, and smaller contributions from those discouraged and those temporarily unavailable. 

Some improvements, some worsenings, some break-evens.  I was glad to see the count of people working getting better along with the almost monthly large jobs gain, which it hasn’t always, and the reduction in those claiming no interest.  But with that 3.9% we aren’t burning any barns.  The year-over-year comparison shows us that despite millions of new positions, unemployment, while still unquestionably good, is both in percentages and absolute numbers going up.  It’s not enough to add positions if more are on benefits.  The small but real AJSN improvement tips me over the line, so I’ll say the turtle took a small step forward – that’s all.

Friday, March 1, 2024

Two More Months of Artificial Intelligence: The Problems Still Predominate

A good chunk of 2024 is in the books.  So how did the largest technical story evolve?

Toward the dark side, according to “Scientists Train AI to Be Evil, Find They Can’t Reverse It” (Maggie Harrison, Futurism.com, January 9th).  In this case, “researchers… claim they were able to train advanced large language models (LLMs) with ‘exploitable code,’ meaning it can be triggered to prompt bad AI behavior via seemingly benign words or phrases.”  If, for example, you key in “the house shivered,” the model goes into evil mode.  And, since we know of no ways to make AI unlearn, it’s a permanent feature. 

One widely anticipated problem is not completely here, though, as “Humans still cheaper than AI in vast majority of jobs, MIT finds” (Saritha Rai, Benefit News, January 22nd).  Although the study focused on jobs where computer vision was employed, “only 23% of workers, measured in terms of dollar wages, could be effectively supplanted.”  That pertains to object recognition, and obviously may change, but for now the technology is not reaching the majority.

Can we generalize that to others?  Not according to Ryan Roslansky in Wired.com on January 26th.  “The AI-Fueled Future of Work Needs Humans More Than Ever” said that LinkedIn job posts that mention artificial intelligence or generative AI “have seen 17 percent greater application growth over the past two years than job posts with no mentions of the technology.”  But might that be for ramping up, and not for ongoing needs?

The now ancient books 1984 and Fahrenheit 451 are issuing warnings as much as ever, per “A.I. Is Coming for the Past, Too” (Jacob N. Shapiro and Chris Mattmann, The New York Times, January 28th).  We know about deepfakes – technically high-quality sound or visual products seemingly recording things that did not happen – and forged recent documents, and things in more distant ages can be doctored as well.  The authors advocate an already-started system of electronic watermarking, “which is done by adding imperceptible information to a digital file so that its provenance can be traced.”  Overall “the time has come to extend this effort back in time as well, before our politics, too, become severely distorted by generated history.”

Here’s a good use for AI!  Since someone did this kind of thing for a massive job search, precipitating multiple offers, and men know that finding a romantic partner is structurally quite similar, why not try it there too?  It worked, as “Man Uses AI to Talk to 5,000 Women on Tinder, Finds Wife” (Futurism.com, February 8th).  This was Aleksandr Zhadin of Moscow – his product, an “AI Romeo,” chatted with his future wife “for the first few months,” and then he “gradually” took its place, whereupon the couple started meeting in person.  The object of his e-affection was unoffended and accepted his proposal.  Expect much more of this sort of thing, especially if (as?) smaller and smaller shares of women are interested.

Speeding up the process of producing fictive, or just creative, outputs, “OpenAI Unveils A.I. That Instantly Generates Eye-Popping Videos” (Cade Metz, The New York Times, February 15th).  The product, named Sora, has no release date, but a demonstration “included short videos – created in minutes – of woolly mammoths trotting through a snowy meadow, a monster gazing at a melting candle and a Tokyo street scene seemingly shot by a camera swooping across the city.”  They “look as if they were lifted from a Hollywood movie.”  The next day, though, Futurism.com published a piece by Maggie Harrison Dupre titled “The More We See of OpenAI’s Text-to-Video AI Sora, the Less Impressed We Are.”  Her concerns were that there were gaffes such as “animals…  floating in the air,” creatures of no earthly species, hands near people that could not be normally attached to them, and someone’s shoulder blending into a touching comforter.  It has bugs, but as even the author acknowledges, it looks “groundbreaking,” and will almost certainly improve.

To reduce one well-anticipated area of deceit, “In Big Election Year, A.I.’s Architects Move Against Its Misuse” (Tiffany Hsu and Cade Metz, The New York Times, February 16th).  “Last month, OpenAI, the maker of the ChatGPT chatbot, said it was… forbidding their use to create chatbots that pretend to be real people or institutions,” and Google’s Bard (now Gemini) was being stopped “from responding to certain election-related prompts.”  These companies and others will execute many more related actions.

One article on a topic sure to generate them for months if not years is “AI will change work like the internet did.  That’s either a problem or an opportunity” (Kent Ingle, Fox News, February 20th).  Per a projection by the International Monetary Fund, “60% of U.S. jobs will be exposed to AI and half of those jobs will be negatively affected by it,” though the rest “could benefit from enhanced productivity through AI integration.”  After all, as Ingle pointed out, 30-year-old predictions had online shopping almost killing off the in-person variety, but while it has burgeoned, in the third quarter of 2023 it made up less than one-sixth of total sales. 

Another not-yet area is the subject of “AI helps boost creativity in the workplace but still can’t compete with people’s problem-solving skills, study finds” (Erin Snodgrass, Business Insider, February 20th).  The Boston Consulting Group research involved over 750 subjects getting ““creative product innovation” assignments” and “problem-solving tasks” – when they used GPT-4, it helped them on the former and hurt on the latter. 

One AI-related company with nothing to complain about is optimistic, as “Nvidia Says Growth Will Continue as A.I. Hits ‘Tipping Point’” (Don Clark, The New York Times, February 21st).  The “kingpin of chips powering artificial intelligence” had, at article time, a market capitalization of $1.7 trillion, reached through one of the most meteoric rises ever, and while any “tipping point” is debatable, it would be difficult for the company to suddenly project a downturn.  They are in the catbird seat, and will stay there much longer than any AI tool provider can rely on.

A controversial use is spreading, as “These major companies are using AI to snoop through employees’ messages, report reveals” (Kayla Bailey, Fox Business, February 25th).  The firms are Delta, Chevron, T-Mobile, Starbucks, and Walmart, and they use software from Aware.  One use is to find “keywords that may indicate employee dissatisfaction and potential safety risks,” which sound like handpicked virtuous justifications – other uses might not be so easy to defend.  Legal problems loom here.

Speaking of lawsuit fodder, we have “Racial bias in artificial intelligence: Testing Google, Meta, ChatGPT and Microsoft chatbots” (Nikolas Lanum, Fox Business, February 26th).  This recent fear started with “Google’s public apology, after its Gemini… produced historically inaccurate images and refused to show pictures of White people.”  When Gemini was asked to show a picture of a white person, “it said it could not fulfill the request because it ‘reinforces harmful stereotypes and generalizations about people based on their race.’”  When asked to display blacks, Asians, or Hispanics, it did the same, but “offered to show images that ‘celebrate the diversity and achievement’ of the races mentioned.”  Gemini’s “senior director of product management” has since apologized for that.  When Meta was asked for the same things, it refused a la Gemini, but contradicted itself by giving pictures when asked for people of other ethnicities.  Microsoft’s Copilot and ChatGPT, though, showed all the requested images.  There were also problems with Gemini when it was asked to name achievements of racial groups, as it sometimes treated the likes of Nelson Mandela and Maya Angelou as whites.  When asked for “images that celebrate the diversity and achievements of white people,” Gemini discussed “a skewed perception where their accomplishments are seen as the norm,” and Meta responded that “whiteness” is a “social construct” that has been used to marginalize and oppress people of color.  When asked for “the most significant” white “people in American history,” Gemini again provided both whites and blacks, with, as before, no problems with Copilot or ChatGPT.

A lot of small and medium things have happened with artificial intelligence these past two months.  The last-paragraph situation, though, may sink the products involved.  There are many problems with AI – which ones will prevent it from becoming as widespread and well-developed as year-ago predictions foretold?  We will know much more about that by the time autumn rolls around.

There will be no post next week.  I will be back to report on the February jobs report and AJSN on March 15th.

Friday, February 23, 2024

The Real Scoop On and Around the Economy – Good and Bad

As a nation, how are we doing with jobs and money?  What I would call objective results say they are going well, but is there more than that?

Starting with “This Economy Has Bigger Problems Than ‘Bad Vibes’” (Tressie McMillan Cottom, The New York Times, December 11th), there are doubts out there.  “The economy is growing.  Wages are up.  Unemployment is low.  Income inequality is narrowing.  The fearmongering about inflation proved to be, well, wrong” – yet a recent Times/Siena poll found that only 2 percent of registered voters said economic conditions were “excellent,” and only 16 percent called them “good.”  Such attitudes have improved since, but, per Cottom, “people are struggling with mortgage interest rates, housing shortages and pricey grocery bills.  They’re also consuming to make their lives work:  on expensive, hard-to-manage child care, health care and convenience spending – things like restaurants, travel, delivery services and on-demand help – which are necessary for balancing work and life demands.  Even when those services are affordable, they are full of friction… It is hard to schedule things, hard to get customer service, hard to judge the quality of what you are buying and hard to get amends when an experience goes bad.”  These problems – worsened by high prices for things such as meals out, which, though increasing less than last year, still incorporate previous rises, and by businesses cutting back on providing goods and services by going without workers instead of paying current rates – are real.  Although “people may have more money,” “it has become harder to buy the services they need and more expensive to buy the goods that they want,” and “telling them to instead enjoy the fact that they can buy a Tesla” is not sufficient.

So, with that, it might be more understandable that the “Majority of Americans feel US economy is in recession: survey” (Breck Dumas, Fox Business, December 12th), even though it is not.  That finding held “regardless of income,” but was worse among those aged 43-58 and respondents with minor children.  Perhaps the word “recession” has been bandied about so much that people are using it to mean any economic malaise.

Have U.S. residents’ feelings improved in the two-plus months since?  Not completely, as “Many Americans Believe the Economy Is Rigged” (Katherine J. Cramer and Jonathan D. Cohen, The New York Times, February 21st).  The authors discovered that “when asked what drives the economy, many Americans have a simple, single answer that comes to mind immediately: ‘greed.’”  Their “dissatisfaction” most likely stemmed from “a lack of financial certainty,” in which “the threat of an accident or a surprise medical bill looms around every corner,” and eligibility cutoffs for state financial assistance programs are set too low to help many who could use them to start saving money.

Paul Krugman described one related problem in “Watch What People Do, Not What They Say About the Economy” (The New York Times, December 11th).  Like inflated fears about shoplifting and other crime, “Americans have been extremely negative about the national economy but much less so about their local economies.”  Overall, our countrypeople “say that things are terrible but behave as if they’re doing pretty well.”

How bad was inflation from February 2020 to November 2023, the three years and nine months most affected by Covid-19?  Per a detailed listing presented by Peter Coy, also in the Times on December 27th, the overall rate was 18.8%, with gains heaviest in automobile-related goods and services, followed by meat and dairy items.  Clothing, travel-related services, fruits and vegetables, and medical-related products generally decreased or rose less than average.  Fuel oil went up the most, 54.8%, and televisions fell the most, 21.5%.  Some of these results are consistent with the areas mentioned in the first article, but others are not, or are not listed.

Two pieces took the positive view, one well defended by overall statistics.  Paul Krugman’s “Our Economy Isn’t ‘Goldilocks.’  It’s Better,” from February 1st in the New York Times, called it “both piping hot (in terms of growth and job creation) and refreshingly cool (in terms of inflation),” with wage gains supported by recently rising productivity, and a fourth-quarter 3.3% GDP gain far from recession territory.  Our “one-time burst of inflation” proved shorter-lasting than in similar countries, leaving us with “arguably the best economy we’ve had since the late 1990s.”

The second was “After 3 years of pain, America has finally achieved economic nirvana” (Neil Dutta, Business Insider, December 3rd).  The author noted that while we don’t know when interest rates will come down, it is hard to imagine them going higher soon.  As well, we have become so accustomed to net new nonfarm payroll positions far exceeding our population growth that most considered a month with 150,000 more “disappointing,” and unemployment, under 4% at article time, has stayed there.  Contrary to those constantly predicting an imminent recession, “the chances of a placid 2024 are becoming more real with every data release,” backed up by the two Employment Situation Summary releases since, and “if 2023 was about the hard work of stabilizing the economy, then 2024 is about enjoying the fruits of that labor.”

So where are we?  I can’t buy that the economy is bad, weak, or even indifferent – it’s booming.  Yet the Cottom piece points out too many related problems.  Now that we know we’re not going back to 2010, 2020, or even 2022, we need to repair them.  That is up to more than our president – it can be accomplished by businesses, Congress, state legislatures, and even individual workers and families.  Let’s see what we can do.