Here is the final installment in this series – at least for now.
I start with three pieces from earlier this month on the state of ChatGPT and chatbots in particular, a field that will continue to evolve but has reached a point where we can talk usefully about where it is going. The first, “The Chatbots Are Here, and the
Internet Industry Is in a Tizzy” (Tripp Mickle et al., The New York Times,
March 8th), said that “not since the iPhone has the belief that a
new technology could change the industry run so deep,” with the authors
forecasting massive shifts for cloud computing, e-commerce, social media, and
publishing, affecting “$100 million in cloud spending, $500 billion in digital
advertising and $5.4 trillion in e-commerce sales,” although “the volatility of
chatbots has made it impossible to predict their impact.” That spells out the situation now, and only the
next several months and beyond will tell the story.
As for the current technical situation, or at least that of three weeks ago, Cade Metz and Keith Collins told us in the March 14th New York Times that there are “10 Ways GPT-4 Is Impressive but Still Flawed.” Although “it still makes things up,” its
improvements are that “it has learned to be more precise,” “it has improved its
accuracy,” “it can describe images with impressive detail,” “it has added
serious expertise,” “it can give editors a run for their money,” “it is
developing a sense of humor. Sort of,” “it
can reason – up to a point,” “it can ace standardized tests,” but “it is not
good at discussing the future” and “it is still hallucinating.” Expect the next release to be better,
sometimes massively, at all of these.
And, if there was ever any doubt, we are getting “Microsoft to bring OpenAI’s chatbot technology to the office” (Dina Bass and Emily Chang, Benefit News, March 16th): in Office, where I have seen it suggest continuations of the text I am writing, in LinkedIn, and elsewhere.
Two more contributions looked just behind the scenes of artificial intelligence and chatbots’ stunning recent progress. “The great AI beef,” from Bloomberg Daily
on March 8th, told us that “in Silicon Valley, there’s a small but
powerful group of people who believe (such advancement) could be very, very bad
news – and that AI, if not handled correctly, could wipe out humanity within a
couple decades.” However, “there’s also
a crowd who thinks our AI future will be amazing – bringing about untold future
capabilities, abundance and utopia.” “AI theorist” Eliezer Yudkowsky and OpenAI CEO Sam Altman, trading detailed comments from the doomsaying and optimistic sides respectively, have had “a somewhat inscrutable, inside-baseball catfight.” David Wallace-Wells, in “Silicon Valley’s
futurists have gone from utopian to dystopian” (The New York Times,
March 27th), recapped the situation between Altman and Yudkowsky and saw the dystopian outlook winning out among AI developers. That will mean distortion, ultimately for better or worse, in how the technology progresses.
And how about the philosophers? After all, if you purport to judge whether AI products are doing the equivalent of thinking, that is what you are. In “Can a Machine Know That We Know What It
Knows?,” also in the March 27th New York Times, Oliver Whang assumed that role and reported, after reviewing possibly pertinent academic studies, that one researcher had concluded that “machines have theory of mind.” Others responded with further work putting that deduction in doubt. Moving from empirical tests to resolving
such issues, which call back to the millennia-old problem of consciousness,
might be impossible.
I end with “Noam Chomsky: The False Promise of ChatGPT,” written with two co-authors and published in The New York Times on March 8th. The long-time
linguistics professor named two large concerns about chatbot output. First, despite being able to integrate masses
of information, “such programs are stuck in a prehuman or nonhuman phase of
cognitive evolution,” with an “absence of the most critical capacity of any
intelligence: to say not only what is
the case, what was the case and what will be the case… but also what is not the
case and what could and could not be the case.”
Second was chatbots not being “capable of moral thinking,” meaning able to constrain “the otherwise limitless creativity… with a set of ethical principles that determines what ought and ought not to be”; the lack of such guidelines makes them susceptible to incorporating clearly incorrect input data. Ultimately, “they either overgenerate
(producing both truths and falsehoods, endorsing ethical and unethical
decisions alike) or undergenerate (exhibiting noncommitment to any decisions and
indifference to consequences).”
Chomsky’s assessment is superb, with one caveat: if AI devices produce views that offend us, we should still be able to assess them objectively. We are still in charge, but we must be open-minded. That will be a real 21st-century intellectual challenge, and it will draw as much controversy as ever. But we will be better as both leaders and followers when we pursue it. There is no suppressing artificial intelligence; as has been true with so many past advancements, it will make an increasingly fine servant but will always, always, be a poor master.