Continuing on…
In some technical boom times, the companies profiting most
have been those providing supporting services.
In The Economist’s The Bottom Line newsletter on June 3rd, Guy Scriven examined that pattern for AI in “Selling shovels in a gold rush.” He named Amazon Web Services and Microsoft
Azure for storage, Digital Realty and Equinix for entire data centers, Wistron and Inventec, which “assemble servers for the cloud giants and for local data centres,” and “80-odd” more firms with other services.
We know something about which positions won’t be affected much by AI, but how about those on the front lines? Per Aaron Mok and Jacob Zinkula in Insider
on June 4th, “ChatGPT may be coming for our jobs. Here are the 10 roles that AI is most likely
to replace.” First mentioned is “tech
jobs,” namely “coders, computer programmers, software engineers, and data
analysts.” I wrote in 2012’s Work’s New Age that these positions were unusually susceptible to automation; that claim found little published agreement elsewhere, but now it may finally come true. Others were “media jobs (advertising, content
creation, technical writing, journalism),” “legal industry jobs (paralegals,
legal assistants),” market research analysts, teachers, “finance jobs
(financial analysts, personal financial advisors),” traders, graphic designers,
accountants, and customer service agents.
It is easy to see that all of these could be replaced by what AI and other electronic services offer, but, as we saw before, embedded worker bases are resistant.
One view bound to get attention is “Big Tech Is Bad. Big A.I. Will Be Worse” (Daron Acemoglu and
Simon Johnson, The New York Times, June 9th). With billions of people benefiting from these
products, I don’t care for that premise, but if you substitute, say,
“dominating” for the adjective, it makes sense.
It’s unavoidable, though, as such nonphysical and portable things are natural monopoly or oligopoly fields, and regulation will be developed along with, or at least soon after, their proliferation.
We reached a landmark earlier this month. Per Bloomberg Technology’s “This Week
in AI” on June 10th, “ChatGPT creator OpenAI was hit with its first
defamation lawsuit over its chatbot’s hallucinations,” as “a Georgia radio host
says it made up a legal complaint accusing him of embezzling money.” The suit is strong, and we’re about to get some precedents – nonfictional ones – on an issue that could soon become depressingly commonplace.
Cade Metz asked, in the June 10th New York Times, “How Could A.I. Destroy Humanity?”
The problem seems to center around autonomy, especially if such systems
were allowed access into “vital infrastructure, including power grids, stock
markets and military weapons.” Though such efforts are still limited and not yet successful, “researchers are transforming chatbots like ChatGPT into systems that can take actions based on the text they generate.” Given goals, such software will do anything
it can to achieve them, for example, “researchers recently showed that one
system was able to hire a human online to defeat a Captcha test. When the human asked if it was ‘a robot,’ the system lied and said it was a person with a visual impairment.” We will end up blocking off the pathways to
true action, but if there are gaps, automata will find them.
Writing about “the Singularity,” or “the moment when a new technology… would unite human and machine, probably for the better but possibly for the worse” (an idea computer scientist John von Neumann originated in the 1950s), has made a sharp comeback.
And now, we have “Silicon Valley Confronts the Idea That the
‘Singularity’ Is Here” (David Streitfeld, The New York Times, June 11th). As AI “is roiling tech, business and politics
like nothing in recent memory,” resulting in “extravagant claims and wild
assertions,” some think that massive transition is at hand or nearly so. One long-time advocate, author and inventor
Ray Kurzweil, now forecasts it to arrive by the 2040s, but “critics counter
that even the impressive results of (large language models) are a far cry from
the enormous, global intelligence long promised by the Singularity.” So we will see, but not today or tomorrow.
Finally, back to the counting-house. Per Yiwen Lu, on June 14th and
also in the Times, “Generative A.I. Can Add $4.4 Trillion in Value to
Global Economy, Study Says.” I’ve seen a lot of trillions in the news lately, especially in American deficits and the capitalization of the largest companies, and here is another.
Per this McKinsey effort, the figure is annual, though an “up to” estimate, as “the vast majority of generative A.I.’s economic value will most likely come from helping workers automate tasks in customer operations, sales, software engineering, and research and development” – mostly consistent with the Insider article above. Just one trillion dollars
is $1,000,000,000,000 – how long will it be before we start talking about
quadrillions? And what will the status
of artificial intelligence be then?