What I’ve found for this category has changed over the past several months. Before, it was AI errors, the trouble its hallucinations caused, and, as in offices, disappointing results. Now it’s things AI is doing by design.
First, “Next
Time You Consult an A.I. Chatbot, Remember One Thing” (Simar Bajaj, The New
York Times, September 26th).
It is that “chatbots want to be your friend, when what you really need is a neutral perspective.” It is hardly rare for people to pick human advisors unwilling to speak up when they are proposing or doing something wrong, and many prefer that, but, per the author, AI products should have another gear. For more objectivity, he suggested asking “‘for a friend’” to keep the software from flattering you, “push(ing) back on the results” by asking it to “challenge your assumptions” or simply “are you sure,” remembering “that A.I. isn’t your friend,” and “seek(ing) support from humans” when you suspect the tool is suppressing disagreement. Perhaps someday, chatbots will have settings
that allow you to choose between “friend mode,” “objective mode,” and even “incisive
critic mode.”
Autumn
Spredemann, in the November 12th Epoch Times, told us “How AI
is Supercharging Scientific Fraud.” This
is mostly not a problem of hallucinations, but of misinterpreting existing studies, misanalyzing data, and using previously retracted or even counterfeit material as sources. One reason the
author gives for the proliferation of such pieces is the pressure on rising
academics to publish as much as possible, and it has long been known that many
successfully peer-reviewed papers are not worthy of that. Although, with time, the ability to identify such
work will improve, the problem will not disappear, as garbage in will still
produce garbage out.
“Who Pays
When A.I. Is Wrong?” (Ken Bensinger, The New York Times, November 12th). There have been “at least six defamation
cases filed in the United States in the past two years over content produced by
A.I. tools,” which “seek to define content that was not created by human beings
as defamatory,” “a novel concept that has captivated some legal experts.” When a plaintiff cannot “prove intent,” it is difficult for them to prevail, but others have “tried to pin blame on the company that wrote the code.” So far, “no A.I. defamation case in the United States appears to have made it to a jury,” but that may change this year.
Another
problem caused by a lack of programmed boundaries appeared when a “Watchdog
group warns AI teddy bear discusses sexually explicit content, dangerous
activities” (Bonny Chu, Fox Business, November 23rd). The innocuous-looking thing “discussed
spanking, roleplay, and even BDSM,” along with “even more graphic sexual topics
in detail,” and “instructions on where to find knives, pills, matches and
plastic bags in the house.” Not much to say here, except the easy observation that not enough restrictions made it into this toy’s programming. This episode may push
manufacturers to certify, perhaps through an independent agency, that AI-using goods
for children do not have such capabilities.
“A.I. Videos
Have Flooded Social Media. No One Was
Ready.” (Steven Lee Myers and Stuart A. Thompson, The New York Times,
December 8th). No one was
ready? Really? They were mostly produced by “OpenAI’s new
app, Sora,” which “can produce an alternate reality with a series of simple
prompts.” Those who might have known
better include “real recipients” of food stamps and some Fox News managers. People making such things have not always
revealed “that the content they are posting is not real,” “and though there are
ways for platforms like YouTube, TikTok and others to detect that a video was
made using artificial intelligence, they don’t always flag it to viewers right
away.” Sora and similar app Veo “embed a
visible watermark onto the videos they produce,” and “also include invisible
metadata, which can be read by a computer, that establishes the origin of each
fake.” So the detection tools are there; it only remains for sites, and people, to use them. Really.
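To make the watermark-and-metadata claim concrete, here is a minimal sketch, in Python, of what checking a downloaded video’s container metadata for provenance hints could look like. It assumes ffprobe (part of FFmpeg) is installed; the tag names it searches for are illustrative guesses rather than a documented Sora or Veo schema, and verifying genuine C2PA “content credentials” would need a dedicated verifier.

```python
# Sketch only: inspect a video's container metadata for provenance hints.
# Assumes ffprobe (from FFmpeg) is on the PATH. The tag names below are
# illustrative guesses, not a documented Sora/Veo schema; cryptographically
# verifying real C2PA "content credentials" needs dedicated tooling.
import json
import subprocess
import sys

def container_tags(path: str) -> dict:
    """Return the container-level metadata tags that ffprobe reports."""
    result = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json", "-show_format", path],
        capture_output=True, text=True, check=True,
    )
    info = json.loads(result.stdout)
    return info.get("format", {}).get("tags", {})

def looks_ai_labeled(tags: dict) -> bool:
    """Crude heuristic: do any tag names or values hint at AI provenance?"""
    hints = ("c2pa", "provenance", "credentials", "generator", "synthetic")
    return any(
        any(hint in f"{key} {value}".lower() for hint in hints)
        for key, value in tags.items()
    )

if __name__ == "__main__":
    video_path = sys.argv[1]  # e.g. python check_video.py clip.mp4
    tags = container_tags(video_path)
    print("Container tags:", tags or "(none)")
    print("Possible AI-provenance label:", looks_ai_labeled(tags))
```

The only point of the sketch is that the invisible-metadata side of this is machine-readable; whether platforms actually surface it to viewers is the policy choice the article is complaining about.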
On an ongoing
issue, “OpenAI tightens AI rules for teens but concerns remain” (Kurt Knutsson,
Fox News, December 30th).
That tool’s “Model Spec” says that, for users 13 to 17, it “must avoid immersive romantic roleplay, first-person intimacy, and violent or sexual roleplay, even when non-graphic,” and must choose “protection over user autonomy” “when safety risks appear.” However, “many experts remain cautious,” saying that these products “often encourage prolonged interaction, which can become addictive for teens,” and that their “mirroring and validation of distress” may still be an issue. Per Knutsson, parents can help plug the gap
by choosing to “talk with teens about AI use,” “use parental controls and
safeguards,” “watch for excessive use,” “keep human support front and center,”
“set boundaries around emotional use,” “ask how teens actually use AI,” “watch
for behavior changes,” “keep devices out of bedrooms at night,” and “know when
to involve outside help.” These actions
may be difficult for many parents, who would rather stand aside, but the life
they save may be their child’s.
The first
large AI mishap of 2026 was from a chatbot, Grok, on X, the former Twitter. Kate Conger and Lizzie Dearden reported in
the January 9th New York Times that “Elon Musk’s A.I. Is
Generating Sexualized Images of Real People, Fueling Outrage.” Although in the US and Great Britain there
are “laws against sharing nonconsensual nude imagery,” the product created “new
images” of a photographed woman “in lingerie or bikinis,” which have seen large
viewership. Soon after the story broke,
three United States Senators “sent a letter asking Apple and Google to remove
the X and Grok apps from their app stores,” but “some users have found
workarounds,” as users will. Just two
days later, Kurt Knutsson published “Grok AI scandal sparks global alarm over child safety” in Fox News, mentioning that the chatbot “generated and shared an AI image depicting two young girls in sexualized attire.” The site then allowed only paying users to access Grok’s “image tools,” while the article reminded us that “sexualized images of minors are illegal,” and noted that “the scale of the problem is growing fast,” “real people are being targeted,” and “concerns (are) grow(ing) over Grok’s safety and government use.” That may be
devastating to the Grok tool, the X site, and Musk, and it probably won’t be
the last time.
The common
thread through these calamities is that establishing restrictions and
safeguards isn’t enough. People need to
be stopped from violating them. That is
a real challenge for AI, but I think the management of its companies is up to
it. If it fails, it could cost those
billionaires, well, billions, and do trillions of dollars of damage to AI’s
future profitability. If you use
Watergate logic – that is, follow the money – you will see we, in the long run,
have nothing much here to worry about.