Friday, August 30, 2024

Artificial Intelligence’s Limitations and Clear Current Problems

We are marching through the months and years since last year’s AI awakening.  We can’t fairly say that its shortcomings are permanent, but, as of now, what are they?

First, although computer programs have excelled at many games, such as chess, at which they are vastly better than any human ever, and checkers, which was fully solved by computer 17 years ago, they have not done the same with bridge.  Per BBO Weekly News on July 21st, Bill Gates said, correctly, that “bridge is one of the last games in which the computer is not better.”  Artificial intelligence progress has so far done nothing to change that, and it is noteworthy that even in a closed system with completely defined rules, objectives, and scoring, AI has not been able to take over.

Not only has AI not replaced huge numbers of jobs, but “77% Of Employees Report AI Has Increased Workloads And Hampered Productivity, Study Finds” (Bryan Robinson, Forbes, July 23rd).  The study, conducted “in partnership with The Upwork Research Institute, interviewed 2,500 global C-suite executives, full-time employees and freelancers.”  It found that “the optimistic expectations about AI’s impact are not aligning with the reality faced by many employees”: while 96% of C-suite executives expect AI to boost productivity, “77% of employees using AI say it has added to their workload and created challenges,” and it has been “contributing to employee burnout.”  Also, 47% “of employees using AI say they don’t know how to achieve the expected productivity gains their employers expect, and 40% feel their company is asking too much of them when it comes to AI.”  This is what we used to call a disconnect.  The author recommended that employers get outside help with their AI efforts and measure productivity differently, and that workers generally “embrace outside expertise.”

A similarly negative view was the subject of “Machines and the meaning of work” (Bartleby, The Economist, July 27th).  The pseudonymous columnist cited a paper claiming that although “in theory, machines can free up time for more interesting tasks; in practice, they seem to have had the opposite effect.”  In health care, automation can allow more time with patients, but in other fields, as “the number of tasks that remain open to humans dwindles, hurting both the variety of work and people’s understanding of the production process,” “work becomes more routine, not less.”  Overall, “it matters whether new technologies are introduced in collaboration with employees or imposed from above, and whether they enhance or sap their sense of competence.”

Similarly, Emma Goldberg, in the New York Times on August 3rd, asked “Will A.I. Kill Meaningless Jobs?”  If it does, it would make workers happier in the short run, but it could also contribute to “the hollowing out of the middle class.”  Although the positions that AI could absorb might be lacking in true significance, many “have traditionally opened up these white-collar fields to people who need opportunities and training, serving as accelerants for class mobility:  paralegals, secretaries, assistants.”  These roles could be replaced by ones with “lower pay, fewer opportunities to professionally ascend, and – even less meaning.”  Additionally, “while technology will transform work, it can’t displace people’s complicated feelings toward it.”  So we don’t know, but breaking even is not good enough for what is often predicted to be a trillion-dollar industry.

Back to the issue of perceived AI value is “A lack-of-demand problem” (Dan DeFrancesco, Insider Today, August 8th).  “A chief marketing officer” may have been justified in expecting the Google AI tools their company introduced to “be an easy win,” since “in the pantheon of industries set to be upended by AI, marketing is somewhere near the top,” and the technology could “supercharge a company’s marketing department in plenty of ways,” such as by providing “personalized emails” and “determining where ads should run.”  Unfortunately, per the CMO, “it hasn’t yet”: “one tool disrupted its advertising strategy so much they stopped using it,” “another was no better at the job than a human,” and one more “was only successful about 60% of the time.”  Similar complaints in the piece came from Morgan Stanley and “a pharma company.”  In all, “while it’s only fair to give the industry time to work out the kinks, the bills aren’t going to slow down anytime soon.”

In the meantime, per “What Teachers Told Me About A.I. in School” (Jessica Grose, The New York Times, August 14th), AI is causing problems in education.  Middle school students, lacking “the background knowledge or… intellectual stamina to question unlikely responses,” have turned in assignments claiming the likes of “the Christian prophet Moses got chocolate stains out of T-shirts.”  Teachers describe AI-based cheating as “rampant,” but are more concerned about students failing to learn how to struggle productively through challenging problems.  Accordingly, they are “unconvinced of its transformative properties and aware of its pitfalls,” and “only 6 percent of American public school teachers think that A.I. tools produce more benefit than harm.”

I do not know how long these AI failings will continue.  With providers’ massive spending on the technology continuing, they will be under increasing pressure to deliver useful and accurate products.  How customers react, and how patient they are, will eventually determine how successful artificial intelligence, as a line of business, will be over the next several years.  At some point, promises about the future will no longer pacify those now dissatisfied.  When will we reach it?
