Let’s start…
Is it
reasonable to say that “A.I. Will Destroy Critical Thinking in K-12” (Jessica
Grose, The New York Times, May 14th)? Her points were that “even seventh graders
can see artificial intelligence is a lesser form of care and attention,” “there
is not even conclusive evidence that A.I. improves learning outcomes when
compared with human teaching of older students," as one correspondent put it,
"who among all political stripes wants their children to be taught by robots?"
and “I still cannot believe that after living through the school closures of
2020-21, our policymakers continue to underestimate the importance of human
connection, especially in primary school.”
These are real and valid concerns, though they don't add up to the title's claim.
At a higher
level, “The Professors Are Using ChatGPT, and Some Students Aren’t Happy About
It" (Kashmir Hill, The New York Times, also May 14th). That reaction came up when one group of students
found that lecture notes and slide presentations not only had
"telltale signs of A.I." such as "distorted text, photos of office workers with
extraneous body parts and egregious misspellings," but, in one case, had "an
instruction to ChatGPT" inadvertently left in.
Professors disagree about which uses of AI are acceptable and which are not, as do
students, one of whom “demanded her tuition back.” We’re waiting for colleges to establish,
implement, and communicate policies here, which may take a while… or may not.
In another
area, "A.I.-Generated Images of Child Sexual Abuse Are Flooding the Internet"
(Cecilia Kang, The New York Times, July 10th). Some of this material is now “nearly
indistinguishable from actual abuse,” as video examples “have become smoother
and more detailed.” They often involve
“collaboration.” Since no actual
children are involved, such material is not treated the same way as genuine photographs
or videos, as "the law is still evolving on materials generated fully by" AI. Creation and possession of AI-made pictures and
movies without passing them to others have been ruled legal by a U.S. District
Court, as “the First Amendment generally protects the right to possess obscene
material in the home" so long as it isn't "actual child pornography." We will know more when the legal system
determines how it wants to handle such matters and, ideally, keeps the law
consistent from state to state.
Another piece
by Grose, also in the Times and related to the first, appeared on August
6th: “These College
Professors Will Not Bow Down to A.I.” Her
interviewees “had to figure out how to make sure that their students were
actually learning the material and that it meant something to them,” so “had to
A.I.-proof many assignments by making them happen in real time and without
computers.” They ended up using “a
combination of oral examinations, one-on-one discussions, community engagement
and in-class projects,” including, in one case, requiring the students “to run
discussions… at libraries, public schools and senior centers.” Imaginative, and excellent. Give those faculty members A’s.
In a hardly
unexpected application, “China Turns to A.I. in Information Warfare” (Julian E.
Barnes, The New York Times, again August 6th). “Artificial intelligence is increasingly the
new frontier of espionage and malign influence operators, allowing intelligence
services to conduct campaigns far faster, more efficiently and on a larger
scale than ever before.” In this case, GoLaxy,
a firm that can "mine social media profiles" to create realistic-looking
"disinformation" "on a far greater scale" than such efforts have achieved
before, "claims in a document that it has assembled virtual profiles on 117
current and former members of Congress,” and “tracks and collects information
on more than 2,000 American political and public figures, 4,000 right-wing
influencers and supporters of President Trump, in addition to journalists,
scholars and entrepreneurs.” One more
step toward justified information skepticism.
Finally, something
else AI has been increasingly effective at is helping people kill themselves,
though relief may be on the way, as "OpenAI plans ChatGPT changes after
suicides, lawsuit" (CNBC.com, August 26th). Earlier that day, "the parents of Adam
Raine filed a product liability and wrongful death suit against OpenAI after
their son died by suicide at age 16," claiming that "ChatGPT actively
helped Adam explore suicide methods." In response, a company representative "said it's…
working on an update to its GPT-5 model… that will cause the chatbot to
deescalate conversations, and that it's exploring how to 'connect people to
certified therapists before they are in an acute crisis,'" and "possibly
building a network of licensed professionals that users could reach directly
through ChatGPT." An almost immediate
constructive response, reflecting the immediacy of this problem, which has been
implicated in two similar cases this month alone. The company will need to implement it
unusually speedily as well.
Overall,
what’s here? It looks like laws and
practices need to be put in place to combat problems AI has suddenly
created. That will happen, and there will
be more problems. We can overcome them, but we
shouldn’t expect to soon reach a time when we have no new concerns.