Somewhere between AI’s accomplishments and its postulated threats to humanity are things that have already gone wrong, and concerns that something might. Here are nine – almost one per week since the end of August.
A cuddly danger? In “Experts warn AI stuffed animals could
‘fundamentally change’ human brain wiring in kids” (Fox News, August 31st),
Kurt Knutsson reported that “pediatric experts warn these toys could trade
human connection for machine conversation.”
Although television has been doing that for generations, some think that
with AI playthings, “kids may learn to trust machines more than people,” which
could damage “how kids build empathy, learn to question, and develop critical
thinking.” All of this is possible but speculative, and nothing in the piece convinced me that AI toys’ effects would be much more profound than TV’s.
A good if
preliminary company reaction was the subject of “OpenAI rolls out ChatGPT
parental controls with help of mental health experts” (Rachel Wolf, Fox
Business, September 2nd).
In response to a ChatGPT-facilitated suicide earlier this year, “over
the next 120 days… parents will be able to link their accounts with their
teens’ accounts, control how ChatGPT responds to their teen, manage memory and
chat history features and receive notifications if their child is using the
technology in a moment of acute distress.”
That will be valuable from the beginning, and will improve from there.
On another problem front, “Teen sues AI tool maker over fake nude images” (Kurt Knutsson, Fox News, October 25th). The defendant, AI/Robotics Venture Strategy 3 Ltd., makes a product named ClothOff, which can turn a photo into a simulated nude while keeping the original face. A classmate did that to one of the plaintiff’s photos, shared it, and “the fake image quickly spread through group chats and social media.” As of the article’s press time, “more than 45 states have passed or proposed laws to make deepfakes without consent a crime,” and “in New Jersey,” where this teenager was living, “creating or sharing deceptive AI media can lead to prison time and fines.” Still, legal experts say this case could set a national precedent, as “judges must decide whether AI developers are responsible when people misuse their tools” and “need to consider whether the software itself can be an instrument of harm.” The legal focus here may need to be on sharing such images rather than on creating or possessing them, which will prove impossible to stop.
In a Maryland
high school, “Police swarm student after AI security system mistakes bag of
chips for gun” (Bonny Chu, Fox News, October 26th). Oops!
This was perpetrated by “an artificial intelligence gun detection
system,” which ended up “leaving officials and students shaken,” as, per the
student, “police showed up, like eight cop cars, and they all came out with
guns pointed.” I advise IT tool
companies to do their beta testing in their labs, not in live high school
parking lots.
Was the
action taken by the firm in the third paragraph above sufficient? No, Steven Adler said, in “I Worked at
OpenAI. It’s Not Doing Enough to Protect
People” (The New York Times, October 28th). Although the company “ultimately prohibited (its) models from being used for erotic purposes,” and its CEO claimed that the parental-control feature above had let it “mitigate” these issues, per Adler the company “has a history of paying too little attention to established risks,” needs to use “sycophancy tests,” and should “commit to a consistent schedule of publicly reporting its metrics for tracking mental health issues.” I expect that AI-producing firms will increasingly do such things. More measures are on the way, such as “Leading AI company to ban kids from chatbots after lawsuit blames app for child’s death” (Bonny Chu, Fox Business, October 30th). The firm here, Character.ai, which is “widely used for role-playing and creative storytelling with virtual characters,” said that “users under 18 will no longer be able to engage in open-ended conversations with its virtual companions starting Nov. 24.” It will also limit minors to no more than two hours of daily “chat time.”
In the
October 29th New York Times, Anastasia Berg tried to show us
“Why Even Basic A.I. Use Is So Bad for Students.” Beyond academic cheating, “seemingly benign functions” such as AI-generated summaries “are the most pernicious for developing minds,” as they stunt the meta-skill of summarizing things oneself. Yet the piece contains its own refutation, as “Plato warned against writing,” since “literate human beings… would not use their memories.” Technology, from 500 BC to 2025 AD, has always brought tradeoffs. Just as calculators have made some arithmetic unnecessary without extinguishing the need to know and use it, people may indeed become weaker at summarizing formal material, but they will have no choice but to keep doing it for the rest of their lives.
We’re getting
more legal action than that mentioned above, as “Lawsuits Blame ChatGPT for
Suicides and Harmful Delusions” (Kashmir Hill, The New York Times,
November 6th). Seven cases were filed that day alone: three on behalf of users who killed themselves after extensive ChatGPT involvement, one on behalf of a user with suicide plans, two involving mental breakdowns, and one from a user who said the software had encouraged his delusions. As before, this company will need to continually refine its safeguards, or it may not survive at all.
I end with
another loud allegation, this one from Brian X. Chen, who told us, also in the
November 6th New York Times, “How A.I. and Social Media
Contribute to ‘Brain Rot.’” He started by noting that people “using A.I.-generated summaries” got less specific information than they would through “traditional Google” searches, and continued by saying that those who used “chatbots and A.I. search tools for tasks like writing essays and research” were “generally performing worse than people who don’t use them.” All of that, though, when it means using AI as a substitute for personal work, is obvious, and it is not “brain rot.” The article leaves open the question of whether the technology hurts when it is used to help rather than to write.
Three conclusions jump out from the above. First, as AI progresses it will also bring along problems. Second, what counts as legally and socially acceptable AI use is still being defined and will keep evolving; we’re nowhere near done yet. Third, fears of adverse mental and cognitive effects from general use are, thus far, unsubstantiated. Artificial intelligence will bring us a lot, both good and bad, and we will, most likely, excel at profiting from the former and stopping the latter.