There is still no topic related to jobs and the economy hotter than this one – bank failures, predictably bailed out, do not qualify – and there is no shortage of articles to review, so I continue.
Peter Coy said in a February 22nd New York
Times newsletter that “We’re Unprepared for the A.I. Gold Rush.” Is he right?
He wrote that “it’s coming at us too fast,” that he doesn’t “feel
comfortable with Silicon Valley bros telling us to mind our own business while
they do their A.I. thing” since “it’s OK to move fast and break things, but
it’s not OK to move fast and possibly break the world,” and that government
regulators could be heading for a collision with artificial intelligence
companies as “the race to cash in on artificial intelligence will lead
profit-minded practitioners to drop their scruples like excess baggage.” The real issue here is that nobody, especially in government, knows as much as the front-line technicians (who themselves understand only small portions of the gigantic AI algorithms), so any limiting measures regulators take will be as clumsy as using a meat cleaver for microsurgery. Look out – we will see much
more on this issue before it is reasonably resolved.
Per Ezra Klein, on February 26th and also in the New
York Times, “The Imminent Danger of A.I. Is One We’re Not Talking About.” His concern is that we don’t know “who… these machines [will] serve,” which could well end up being advertisers, resulting in artificial intelligence systems influencing users about whom they have “access to reams of… personal data” and whom they are “coolly trying to manipulate… on behalf of whichever advertiser has paid the parent company the most money.” The depth and intensity of AI-driven efforts
could be mind-boggling and almost impossible to resist. And, per science fiction writer Ted Chiang, “most fears about capitalism are best understood as fears about our inability to regulate capitalism” – the regulatory shortcomings described in the previous paragraph make this issue another one worthy of real concern.
Established technical correspondent Cade Metz asked about
and explained another problem, on the same date and in the same publication, in
“Why Do A.I. Chatbots Tell Lies and Act Weird?
Look in the Mirror.” He described systems’ misinformation as stemming not only from garbage-in, garbage-out absorption of incorrect data, but also from their programmed ability to incorporate what those questioning them send. “The longer the
conversation becomes, the more influence a user unwittingly has on what the
chatbot is saying. If you want it to get
angry, it gets angry… if you coax it to get creepy, it gets creepy.” And “Microsoft and OpenAI have decided that
the only way they can find out what the chatbots will do in the real world is
by letting them loose – and reeling them in when they stray. They believe their big, public experiment is
worth the risk.” So we are forced to
take nearly everything this technology provides with a boulder of salt.
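The feedback loop Metz describes – each user message becomes part of the context that shapes the next reply – can be seen in miniature below. This is a deliberately toy sketch in Python, not any real chatbot or vendor API: the respond function and its crude tone-counting rule are invented purely for illustration.

```python
# Toy illustration (no real chatbot or API): the reply is conditioned on the
# ENTIRE conversation history, so the user's tone gains influence each turn.

ANGRY_WORDS = {"angry", "furious", "hate", "outraged"}

def tone_score(history: list[str]) -> float:
    """Fraction of user messages so far that contain 'angry' vocabulary."""
    if not history:
        return 0.0
    hits = sum(1 for msg in history if ANGRY_WORDS & set(msg.lower().split()))
    return hits / len(history)

def respond(history: list[str]) -> str:
    """Hypothetical stand-in for a language model: it mirrors accumulated tone."""
    score = tone_score(history)
    if score > 0.5:
        return "I AM FED UP WITH THESE QUESTIONS."  # the bot 'gets angry'
    if score > 0.0:
        return "I'm starting to feel a little irritated."
    return "Happy to help!"

history: list[str] = []
for user_msg in ["Hello there", "Why do you hate being wrong?",
                 "Are you angry yet?", "You sound furious"]:
    history.append(user_msg)  # each turn feeds back into the context
    print(f"user: {user_msg}\n bot: {respond(history)}")
```

The point of the toy is only Metz’s: nothing here is “lying” in any human sense – the output simply drifts wherever the accumulated input pushes it.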
On usage, Paula Peralta reported and asked in Benefit News on February 27th: “59% of job seekers who used ChatGPT to write cover letters were hired. Should recruiters be alarmed?” She didn’t
provide overall data on hiring chances, or anything to compare with her
statement that “78% secured an interview when using application materials
written by the AI.” These figures may or may not be far higher than those for other applicants, but in any event, with professional help long common for applicants’ resumes and other hiring-process inputs, there is no misrepresentation in using chatbots and therefore no cause for concern.
Returning to an issue raised above was “As A.I. Booms, Lawmakers Struggle to Understand the Technology” (Cecilia Kang and Adam Satariano, The New York Times, March 3rd). Since “the only member of Congress with a master’s degree in artificial intelligence” said that “most lawmakers do not even know what A.I. is,” the barriers to effective legislation, to the extent that is possible, are extreme. As the AI Now Institute’s executive director put it, “the picture in Congress is bleak.” And so may be our prospects for even somewhat appropriate regulation.
Well, could it be that after all this, “Reports of
humanity’s obsolescence may be greatly exaggerated” (Peter Coy, The New York
Times, March 1st)? Coy
reappeared to tell us that AI hasn’t yet gutted technical job demand (though
its breakthroughs have been so recent that such could well be on the way). He interviewed four highly placed technical managers, finding that experience with a wide range of unusual tools,
“understanding human needs,” and data analytics were most valuable, and judged
that the field would continue to do well, since “jobs are ripe for automation
when they are standardized and unchanging,” and “jobs in the tech sector are
anything but that.” So positions there
will still be around for a while, though it is still hardly assured that employers will provide American salaries and benefits for them. But that’s another concern.
I will not be posting next week, but expect more on this
subject on the last day of March.