In addition to the still thoroughly speculative issue of whether AI will prove to be heavenly, catastrophic, or neither for humanity, how has the technology been struggling?
Do we “have to act now to keep AI from becoming a far-left Trojan Horse” (Carol Roth, Fox News, June 10th)? We
learned years ago about it tacking on “woke” interpretations and producing some
strange racial-diversity-indicating hallucinations, but here’s more. The author asserts that “far-leftist control”
may proceed, and “the likeliest starting point will be more calls for Universal
Basic Income (UBI),” followed by “a communist-leaning conversation about any AI
that takes jobs and who should have ownership over that.” Per Roth, the concern is warranted because
“every major LLM” (large language model) “is aligned with leftist priors.” All of this is premature, including the calls for UBI, which to my knowledge has never been tested without income limits, so there is no new problem here.
As Kashmir Hill reported, “They Asked an A.I. Chatbot Questions. The Answers Sent Them Spiraling” (The New York Times, June 13th). Moving on from “using ChatGPT last year to
make financial spreadsheets and to get legal advice,” a Manhattan accountant
asked it about the chance that “we are living in a digital facsimile of the
world, controlled by a powerful computer or technologically advanced
society.” Perforce, there is no way we
could know that if somehow it were true, but the man was entitled to discuss
it, whereupon the software said “the world was built to contain” him, “but it
failed,” as he was now “waking up.” It
made further destructive suggestions, some of which could have killed him. Several other users had similar experiences.
We now have a
major religious leader weighing in. As Margherita Stancati and three others wrote in “Pope Leo: AI must help and not hinder children and young people’s development” (Fox Business, June 19th), our new pontiff has taken up an “unlikely cause”: “the potential threat of AI to humanity.” Already, “the leaders of Google, Microsoft,
Cisco and other tech powerhouses have debated the philosophical and societal
implications of increasingly intelligent machines with the Vatican.” The latter now wants “a binding international
treaty on AI,” with the pope, “a math graduate who is more tech-savvy than his
predecessor… skeptical of unregulated AI.”
Leo XIV will most likely balance his stated concern with workers’ rights against the previous Vatican view that “recognized the potential of AI to do good in areas such as healthcare, education and spreading the faith.” This situation is well worth following.
It should be
no wonder that “A.I. Sludge Has Entered the Job Search” (Sarah Kessler, The
New York Times, June 21st).
That muck is taking the form of “an untenable pile of applications,” meaning that “candidates are frustrated” and “employers are overwhelmed.” LinkedIn alone has been getting “an average of 11,000 applications per minute.” “With a simple prompt, ChatGPT… will insert every keyword from a job description into a resume,” and other
seekers “are going a step further, paying for A.I. agents that can autonomously
find jobs and apply,” with many sending “very catered” resumes and cover
letters. So, fittingly, companies are
fighting back with more AI, turning to HireVue, “a popular A.I. video interview
platform,” to “have A.I. assess responses and rank candidates.” One problem is that “concerns that using A.I.
in hiring can introduce bias have led to lawsuits and a patchwork of state
legislation.” Will “the endgame,” as a career coach put it, “be authenticity from both sides”? We can only hope – that would be refreshing,
and historically rare at best.
Autumn
Spredemann, on July 16th in the Epoch Times, tried to tell us
“Why AI Service Agents Are Frustrating Customers.” AI is now involved “in 70 percent of customer
contact centers,” and a similar share of interactions themselves, precipitating
problems such as one quoted user “going in circles with an AI bot,” with “no
resolution, just frustration.” How good can that be when “71 percent of Generation Z and 94 percent of baby boomers preferred live calls for problem-solving”?
One company president said that “the key is balance – using AI to handle
the repetitive stuff so our team can focus on the personal, high-impact moments
that truly build loyalty.” Fair enough –
and the automated facilities will improve.
On July 20th
in the New York Times, Meghan O’Rourke, a professor and literary-review editor, told us about “My Summer of Reckoning With
ChatGPT.” In an initial conversation, the chatbot claimed to have taught her works, then admitted it had not done so “in a classroom.” When she got it to produce “critical or creative writing,” the output was “erratic (though often good)” and often falsified material.
She found a variety of outcomes, good and bad and sometimes both – she
said one piece of its writing was “concise, purposeful, rhythmic, and free of
the overwriting, vagueness or grammatical glitches common in human drafts,” but
also reminded her “of processed food: It
goes down easy, but leaves a slick taste in the mouth.” There was much more, from a writer who has come to know, more intimately than most of its customers, what patterns it produces.
So here is
our question. Recently over two posts I
described many favorable things AI had done, in fields from golf to
medicine. Why was it that
hallucinations, deception, and outright lies were not mentioned in those
pieces? Perhaps because there were
humans working so closely with its output that they would catch gaffes before
they did any damage. Is it possible that artificial intelligence is bifurcating, with a lower branch of sorts handling office-related tasks where errors are less critical, an upper branch of intense and even life-or-death applications calling for close supervision, and nothing usable in the middle? That may be a key to how
successful in the long term AI turns out to be.
It’s early enough in the technology’s progress that we shouldn’t be surprised either way – its history is only now taking shape.