This issue, the second of three consecutive weekly installments of this series (the jobs report, which drives the monthly AJSN, will be released a week later than usual), covers AI-related articles from January, some of which point up the technology’s largest concerns.
“An A.I. Pioneer on
What We Should Really Fear,” by David Marchese in the January 1st New
York Times, an interview with prominent front-line researcher Yejin Choi,
dealt with the problem of consciousness.
A Google engineer was fired last year for claiming that some of the company’s products could think, but that view cannot actually be refuted, as we do not know where sentience arises.  Choi said that “when you work so close to A.I., you see a lot of limitations.”  We know that such software, and that’s what it is, can be superhuman or even unbeatable in settings where the rules are well defined, such as playing chess or checkers, but in more general situations it may not match a small child’s abilities, since, per Choi, “A.I. struggles with basic common sense” and “what’s easy for machines can be hard for humans and vice versa.”  Choi in effect identified the decades-old autonomous goal-seeking problem, in which devices lack constraints people would consider obvious, as the main AI issue, a highly reasonable view.
Continuing the theme in another section of the same newspaper was
“Consciousness in Robots Was Once Taboo.
Now It’s the Last Word,” by Oliver Whang on January 6th. The author discussed the views of prominent AI
engineer Hod Lipson, who did not rule out that technology could be sentient and who said of consciousness that
“there is no consensus around what it actually refers to.” Per Lipson, as far as we can tell, “the
fundamental difference among types of consciousness – human consciousness and
octopus consciousness and rat consciousness, for example – is how far into the
future an entity is able to imagine itself.”
That’s a viable theoretical start, but who can determine what a rat
comprehends? The engineer claimed that,
while it may improve, currently “we’re doing the cockroach version.”  The piece has much more, and it clarified that we have a long way to go in understanding what silicon and nonhuman living things think, or, in the former case, whether they think at all.  Hard material, which may defy insight indefinitely.
Swerving to a most practical AI application, we have “White
Castle hiring robots to ‘give the right tools’ for serving more ‘hot and tasty
food’: VP,” by Kristen Altus in Fox Business on January 7th. We’ve already seen Flippy, a Miso Robotics
device expert at preparing hamburgers and French fries, but not rolled out to 100 locations, as has happened here.
Now we can consider restaurant automata a natural response to higher
wages, with apparently at least one robust, production-ready product on offer.
A generic term for the internals of chatbots similar to
ChatGPT, able to create “text, images, sounds and other media in response to
short prompts,” is “generative artificial intelligence,” which was the subject of
“A New Area of A.I. Booms, Even Amid the Tech Gloom” (Erin Griffith and Cade
Metz, The New York Times, January 7th). New companies, such as Stability AI, Jasper,
and ChatGPT’s OpenAI, have had little recent problem attracting venture
capital, with $1.37 billion reaching the sub-sector in 2022 alone. Although generative AI has been in progress
“for years,” only since early last year, “when OpenAI unveiled a system called
DALL-E that let people generate photo-realistic images simply by describing
what they wanted to see,” did it reach the funding forefront. And there will be much more.
Also in the Times, Cade Metz himself hit the second
major implementation issue, in “AI Is Becoming More Conversant, But Will It Get
More Honest?” (January 10th).
Problems described here take several different forms. The first is simple factual errors, as
explained by a founder of startup Character.AI, who said “these systems are not
designed for truth – they are designed for plausible conversation.” Second is the effect of extreme and easily
refutable views, such as denying the Holocaust, picked up along with valid
Internet statements. A third situation arises
when chatbots relay reasonable but still debatable views as facts, and beyond
that we have the hardest problem of all – when AI products analyze data and
reach conclusions that are factually defensible but offensive to modern sensibilities.  The article did not get into these last two.
On January 20th, also by Metz in the New York
Times, we saw “How Smart Are the Robots Getting?”  It’s a knotty problem that strict definitions could simplify, and we have candidates for those, including passing the 72-year-old Turing Test, “a subjective measure” met when people questioning
automata “feel convinced that they are talking to another person.” Metz ticked off specific AI accomplishments,
but the battleground is in less specific settings. The real current issue “is that when a bot
mimics conversation, it can seem smarter than it really is. When we see a flash of humanlike behavior in
a pet or a machine, we tend to assume it behaves like us in other ways, too –
even when it does not.” That is similar
to how skilled stage magicians exploit human shortcomings, such as our inability to accurately determine the direction of sounds, to bolster and even create their illusions.  There are other intelligence
tests described here, and assessing them is not easy.
Next week I will continue, perhaps with artificial intelligence updates appearing after this post’s publication date, and with overall conclusions.