There is something about AI that provokes the loftiest concerns imaginable. Years after it jumped into our consciousness through inaccurate articles about how it aced law school assignments, the depth of its future effects remains highly uncertain.
We have little trouble identifying the approximate year most forecasts were prepared, whether as drawings, science fiction, or just lists of ideas, because they reflect the technology and priorities of their own times. How, then, does AI’s future look now?
In “OpenAI CEO Sam Altman rings in 2025 with cryptic, concerning tweet about AI’s future” (Fox Business, January 4th), Andrea Margolis related how the OpenAI founder claimed the technology was “near the singularity, unclear which side.” Margolis clarified that Altman was saying one of two things: either that “the simulation hypothesis,” the trendy but inherently baseless idea “that humans exist in a computer simulation,” is correct, or that it is impossible for us to know exactly “where technological singularity,” or “the point at which technology becomes so advanced that it moves beyond the control of mankind, potentially wreaking havoc on human civilization,” begins. If he was suggesting that AI might bring about the singularity, we have no reason to think that has already happened, but next year could conceivably be different.
Although Ezra Klein’s “‘Now Is the Time of Monsters’” (The New York Times, January 19th) told us that “four trends are converging to make life much scarier” and only one of them was AI development, that one loomed largest. The author cited results on a test “designed to compare the fluid intelligence of humans and A.I. systems,” on which OpenAI’s latest model improved its score from 5% to 88%, and wondered whether we were prepared for, or “even really preparing for,” “these intelligences.” A valid question, though not terrifying on the face of it.
Kevin Roose told us, about a then newly released report, that “This A.I. Forecast Predicts Storms Ahead” (The New York Times, April 3rd). The Berkeley nonprofit A.I. Futures Project, which “has spent the past year trying to predict what the world will look like over the next few years, as increasingly powerful A.I. systems are developed,” came out with “AI 2027… that describes, in a detailed fictional scenario, what could happen.” The piece said that “there’s no shortage of speculation about A.I. these days,” with “warring tribes and splinter sects, each one convinced that it knows how the future will unfold.” The excerpts here read like passages from dystopian disaster novels, and all depend not only on exponential AI progress but on its systems breaking their bounds, becoming the equivalent of copiers hopping around their offices, and then out of them like giant metal frogs, to make images of everything they can find.
Patrick Kulp described a more sober study on the same general topic in “AI researchers are split on how AI will impact society” (Emerging Tech Brew, April 16th). This one, from University College London, asked “more than 4,200 AI researchers” in the summer of 2024 what they thought. Fifty-four percent “said AI has more benefits than risks, 33% said benefits and risks are roughly equal and 9% said risks outnumber benefits.” Just over half maintained that “AI research will lead to artificial general intelligence,” and while 29% “were all for pushing its development,” 40% “said AI shouldn’t be developed as quickly as possible.” As well, “there is more uncertainty among the researchers who are closest to the technology.” An updated version of this survey might elicit responses closer to the Berkeley group’s, but I don’t think the difference would be massive.
Back to Kevin Roose and the Times, where we were asked “If A.I. Systems Become Conscious, Should They Have Rights?” (April 24th). A deep question, but still a novel one, and, with what we know about the workings of consciousness fitting into a thimble, one for the future or maybe never. Until that changes, we can wait on this issue, and we should not hold our breath.
Likewise, as long as we can keep AI tools from escaping into the rest of the world, we won’t have anything to fear. Let us hope, where the huge AI worries are concerned, that that becomes the final word.