Here is what I have found over the past ten weeks. As you will see, some have human-abuse components and some do not, though all are inherent to AI’s condition and proliferation.
We learned that
we can expect “Meta to suspend teens’ access to AI characters amid safety
overhaul” (Michael Sinkewicz, Fox Business, January 23rd). This was a stronger reaction to a problem I
documented recently, when “Meta previewed a new safety measure in October that would allow parents to disable their teenagers’ private chats with AI characters”; now, “the tool would let parents block specific AI characters and look at the broad topics their teens were discussing with chatbots and Meta’s AI assistant, without completely turning off AI access.” Meta will permit only content that, if it appeared in a movie, would not push the rating past PG-13.
“How Bad Are
A.I. Delusions? We Asked People Treating
Them” (Jennifer Valentino-DeVries and Kashmir Hill, The New York Times,
January 26th). The topic here
is not misbeliefs within AI but those it seems to have induced in users. The examples were someone who, after getting ChatGPT’s counsel on “a major purchase,” thought “businesses were colluding to have her investigated by the government”; one who “came to believe that a romantic crush was sending her secret spiritual messages”; and a person who “thought he had stumbled onto a world-changing invention.” Only the first is clearly psychosis, but all are undesirable. Whether the chatbots were to blame is debatable, but even setting aside their statements encouraging suicide or other self-harm, which were still happening, they are clearly bad influences, made harder to deal with by how little we know about how they affect human cognition.
“These Tools
Say They Can Spot A.I. Fakes. Do They
Really Work?” (Stuart A. Thompson, The New York Times, February 25th). We hope they do, but what does the author
say? “More than a dozen online tools
claim they can tell the difference between what’s real and what’s A.I. by
looking for hidden watermarks, composition errors and other digital clues,” but
“the reality is more mixed, according to a battery of tests conducted by The New York Times” (italics mine).
Sadly, “they were not accurate enough to offer users complete
confidence.” Of 12 products, three failed to flag as synthetic a picture of a person created by Grok, and four failed on another created by ChatGPT, with ChatGPT itself among those not recognizing its own work; a higher share choked on videos. All 12 called a camera-taken photograph of a plant real, but adding AI content to another such photo drew only four correct responses of “edited,” with six saying “real” and two calling it completely artificial. We need work here, and I expect we will get it.
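For the technically curious, here is the simplest kind of “digital clue” such tools can check, in a toy Python sketch of my own; it is not one of the products the Times tested, and the filename is hypothetical. Camera EXIF metadata weakly suggests a real photograph, while its absence proves little, since AI generators and ordinary web re-saving both strip it, which is one reason real detectors must dig deeper and still stumble.

```python
# Toy sketch (mine, not one of the tested tools): check one weak "digital
# clue," the presence of camera EXIF metadata. Real detectors also look for
# hidden watermarks and composition errors, and even they, per the article,
# were "not accurate enough to offer users complete confidence."
from PIL import Image  # pip install Pillow

# EXIF IFD0 tags that a camera normally writes: Make, Model, DateTime.
CAMERA_TAGS = {0x010F: "Make", 0x0110: "Model", 0x0132: "DateTime"}

def camera_metadata_clue(path: str) -> str:
    exif = Image.open(path).getexif()
    found = {name: exif.get(tag) for tag, name in CAMERA_TAGS.items()
             if exif.get(tag) is not None}
    if found:
        # Metadata can be forged or copied, so this is evidence, not proof.
        return f"camera metadata present {found}: weakly suggests a real photo"
    # Generators and most social platforms strip EXIF, so absence is ambiguous.
    return "no camera metadata: inconclusive; could be AI or just re-saved"

if __name__ == "__main__":
    print(camera_metadata_clue("example.jpg"))  # hypothetical file
```

If even this trivial check is so easily defeated, it is little wonder that a battery of serious tests found the real tools wanting.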
Some timely
advice is “A Word to the Wise: Don’t
Trust A.I. to File Your Taxes” (Thompson and The New York Times again,
March 5th). The four products
that newspaper’s staff assessed consistently botched “eight fictional tax
situations… even when provided with all the necessary materials.” The problem is that while “traditional tax
software like TurboTax is procedural, following ‘if-then’ logic built for
mathematical precision,” “large language models, by contrast, are prediction
engines” which may misguess, even in a situation where no guessing is
required. Not the tool for this job, at
least not now; “Just don’t, whatever you do, use it to file your taxes.”
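To make that distinction concrete, here is a minimal sketch in Python of the “if-then” logic the article describes; the brackets are hypothetical round numbers, not real IRS figures. Given the same income, the procedure returns the same tax every time, which is precisely the promise a prediction engine cannot make.

```python
# A minimal sketch of "if-then" procedural tax logic. The brackets below are
# hypothetical round numbers for illustration, not real IRS figures.
def procedural_tax(income: float) -> float:
    """Deterministic marginal-rate calculation: same input, same output."""
    brackets = [(0, 0.10), (10_000, 0.20), (50_000, 0.30)]  # (floor, rate)
    tax = 0.0
    for i, (floor, rate) in enumerate(brackets):
        # Each rate applies only to the slice of income inside its bracket.
        ceiling = brackets[i + 1][0] if i + 1 < len(brackets) else float("inf")
        if income > floor:
            tax += (min(income, ceiling) - floor) * rate
    return tax

# Every run of this line yields exactly 12,000.0. An LLM asked the same
# question is completing a statistical pattern, not executing a rule, so it
# can "misguess" even when, as here, no guessing is required.
assert procedural_tax(60_000) == 1_000 + 8_000 + 3_000
```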
On the
technical side, we saw as “Meta Delays Rollout of New A.I. Model After
Performance Concerns” (Eli Tan, The New York Times, March 12th). Two unnamed inside sources said that while the new
product “outperformed Meta’s previous A.I. model and did better than Google’s
Gemini 2.5 model from March,” “it has not performed as strongly as Gemini 3.0
from November.” That meant the model was delayed from the current month until at least May. Not as long a postponement as this industry has seen, but it is not a good sign.
It should be
no shock that a “Cascade of A.I. Fakes About War With Iran Causes Chaos Online”
(Stuart A. Thompson and Alexander Cardia, still in the Times, March 13th). “The videos – showing huge explosions that
never happened, decimated city streets that were never attacked or troops
protesting the war who do not exist – have added a chaotic and confusing layer
to the conflict online.” In my lifetime,
we have gone from the first robustly filmed and broadcast television war to the
first bogus-video one. Improving the AI-detection software discussed above will minimize the problem, but for now, with complex images and moving pictures the hardest to confirm as genuine, we can’t trust any of it.
If Meta
thought it was having a bad month with its product delay, it got worse, as
“Meta ordered to pay $375M after jury finds platform enabled child predators in
landmark New Mexico case” (Jasmine Baehr, Fox Business, March 24th). This outcome could repeat elsewhere, as Meta was
found to have “violated state law by misleading users about the safety of its
platforms and allegedly enabling child sexual exploitation” by “failing to
protect children from predators.” That worked out to “$5,000 per violation,” meaning there were 75,000 of those ($375 million divided by $5,000). I hope the number of actual
victims, in a state of 2.1 million, was nowhere near that high. As a sour cherry on top of that, per “Meta
and YouTube Found Negligent in Landmark Social Media Addiction Case” (Cecilia
Kang, Ryan Mac and Eli Tan, The New York Times, March 25th),
those companies “harmed a young user with design features that were addictive
and led to her mental health distress, a jury found…, a landmark decision that
could open social media companies to more lawsuits over users’
well-being.”
Given that
these are the worst short-range things I could find about AI in just over two
months, it is not doing badly. The
issues here, except for filing taxes with it, which should remain a no-no, can all be handled effectively, and I believe they will be. If not yet world-beating, artificial
intelligence is getting better at doing what it can be expected to accomplish.