As rushed as it may seem, the arrival of ChatGPT-4, and the first reckoning of how many things AI can now do and disrupt, are now over, and it is time for us to mop up and figure out where we are and where we should go. Here are some eminent views from the past five weeks.
A Goldman Sachs April 7th briefing took a
shot at “What AI means for the economy.”
It included an interview with Emad Mostaque, the CEO and founder of
Stability AI, who said “the entire cost structure of media creation, video game
creation, and others” will “change dramatically,” that “AI-powered instruction”
will be so effective that dyslexia, for one, would soon be “solved,” and,
overall, that AI would be “a much bigger disruption than the pandemic.” Goldman Sachs representatives determined that
the technology itself could be responsible for a 7% or $7 trillion global GDP
rise through 2033 and predicted relatively few layoffs. But job consolidation was not mentioned, and,
perhaps oddly, ten days later Fox Business came out with Eric Revell’s “Two-thirds
of US jobs could be exposed to AI-driven automation: Goldman Sachs,” citing one of that company’s
studies, possibly also the basis of the previous article, which concluded that,
of the 2/3, “a quarter to one-half of their workload could be replaced by
AI.” Doing the arithmetic gets us one-sixth to one-third (call it roughly one-quarter) of our country’s workload replaceable, yet this piece also reported
that “increased automation wouldn’t necessarily lead to layoffs,” as positions
would be “more likely to be complemented rather than substituted.” I find it hard to understand why managements
would not seek to use the technology to cut costs when that is possible.
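That arithmetic can be checked in a few lines (a quick sketch of my own; taking the midpoint of Goldman’s quarter-to-half range is my gloss, not theirs):

```python
# Goldman Sachs figures, per the Fox Business piece: about two-thirds of
# US jobs are exposed to AI-driven automation, and for those jobs a
# quarter to one-half of the workload could be replaced by AI.
exposed = 2 / 3

low = exposed * 0.25    # lower bound: one-sixth of all US workload
high = exposed * 0.50   # upper bound: one-third of all US workload
midpoint = (low + high) / 2

print(f"replaceable share of total workload: {low:.0%} to {high:.0%}")
# prints "replaceable share of total workload: 17% to 33%"
print(f"midpoint: {midpoint:.0%}")
# prints "midpoint: 25%"
```

The midpoint of that range is where the one-quarter figure comes from.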
What is “The Surprising Thing A.I. Engineers Will Tell You
if You Let Them” (Ezra Klein, The New York Times, April 16th)? In the field, “person after person” has told
the author “that they are desperate to be regulated, even if it slows them
down. In fact, especially if it slows
them down.” Proposals have quickly been
put forth by the White House, the European Commission, and China’s
government. Klein named five things he
thought should be high priority: “the
question of interpretability,” or transparency; “security,” or preventing
intellectual property theft; “evaluations and audits” of “everything from bias
to the ability to scam people to the tendency to replicate themselves across
the internet”; “liability,” or who if anyone is responsible for AI’s misdeeds;
and “humanness” which he called “a design decision, not an emergent property of
machine-learning code.” Not easy, and I do not know how regulatory bodies can keep up.
Author Jaron Lanier offered an April 20th New
Yorker piece, which printed out to 13 pages and was ponderously titled
“There Is No A.I.” He called it “a tool,
not a creature,” though acknowledging that we were “at the beginning of a new
technological era.” For Lanier, the
likes of GPT-4 “mash up work done by human minds,” but need controls, among
which he advocated automatic labeling of “deepfakes – false but real-seeming
images, videos, and so on,” “communications coming from artificial people,” and
“automated interactions that are designed to manipulate the thinking or actions
of a human being.” He maintained that
“the black-box nature of our current A.I. tools must end,” by showing sources,
leading to their being credited or paid under what he called “data dignity.” A radical but conceivable idea, and the
labeling may prove to be difficult but necessary.
While to my way of thinking there wasn’t much new in “These
jobs are safe from the AI revolution – for now” (Eric Revell, Fox Business,
April 21st), it’s good to see this sort of material in print. The fields the author cited “tend to involve
manual and outdoor work,” specifically, “cleaning; installation, maintenance
and repair; construction and extraction; production; and transportation and material moving.” This was mostly a recap of the
rundown of robot-resistant positions I did in 2012’s Choosing a Lasting
Career.
Louis Hyman put a positive-of-sorts spin on possible AI job
replacements in April 22nd’s “It’s Not the End of Work. It’s the End of Boring Work,” in the New
York Times. It’s clear that “the
huge productivity gains of the industrial age didn’t happen just because
someone invented a new technology; they happened because people also figured
out how best to reorganize work around that technology” – indeed, the first steam engine was devised in the first century A.D., and 3D printing has been used far less than its capability, let alone its potential, would allow. Per Hyman, the most “tedious” tasks are those
AI can be given. I don’t agree that ending tiresome sources of income is what we want, even if that is all that happens.
Finally, a high-level view drove “The Age of
Pseudocognition,” a thoughtful four-page piece in the April 22nd Economist. Its thesis was that “looking at the impacts
of the computer browser, the printing press and psychoanalysis could help
prepare the world for AI.” Web browsers
“changed the ways in which computers are used, the way in which the computer
industry works and the way information is organised,” allowing “first files and
then applications” to be “accessed wherever they might be located.” As well, “printed books made it possible for
scholars to roam larger fields of knowledge than had ever been possible,” and
Sigmund Freud’s work meant that “the idea that there are reasons why people do
things of which they are not conscious is part of the world’s mental
furniture,” meaning in turn that “the sense that there might be something below
the AI surface which needs understanding may prove powerful.” Yes, our understanding of consciousness is in
its infancy, and AI rates to be a massive invention.
Over the next several months, despite futile attempts to
slow down AI research and progress, many things will happen in this field. As for several years out, it’s anyone’s guess
what front-line artificial intelligence technology will be doing. Expect more from me, as more critical points
emerge and more events take place.