A good chunk of 2024 is in the books. So how did the largest technical story evolve?
Toward the dark side, according to “Scientists Train AI to
Be Evil, Find They Can’t Reverse It” (Maggie Harrien, Futurism.com,
January 9th). In this case,
“researchers… claim they were able to train advanced large language models
(LLMs) with “exploitable code,” meaning it can be triggered to prompt bad AI
behavior via seemingly benign words or phrases.” If, for example, you key in “the house
shivered,” the model goes into evil mode.
And, since we know of no ways to make AI unlearn, it’s a permanent
feature.
One widely anticipated problem is not completely here,
though, as “Humans still cheaper than AI in vast majority of jobs, MIT finds”
(Saritha Rai, Benefit News, January 22nd). Although the study focused on “jobs where
computer vision was employed… only 23% of workers, measured in terms of dollar
wages, could be effectively supplanted.”
That pertains to object recognition, and obviously may change, but for
now it’s not reaching the majority.
Can we generalize that to others? Not according to Ryan Roslansky in Wired.com
on January 26th. “The
AI-Fueled Future of Work Needs Humans More Than Ever” said that “LinkedIn job
posts that mention artificial intelligence or generative AI” have seen 17
percent greater application growth over the past two years than job posts with
no mentions of the technology.” But
might that be for ramping up, and not for ongoing needs?
The now ancient books 1984 and Fahrenheit 451
are issuing warnings as much as ever, per “A.I. Is Coming for the Past, Too”
(Jacob N. Shapiro and Chris Mattmann, The New York Times, January 28th). We know about deepfakes – technically high-quality
sound or visual products seemingly recording things that did not happen – and forged
recent documents, and things in more distant ages can be doctored as well. The authors advocate an already-started
system of electronic watermarking, “which is done by adding imperceptible
information to a digital file so that its provenance can be traced.” Overall “the time has come to extend this
effort back in time as well, before our politics, too, become severely
distorted by generated history.”
Here’s a good use for AI!
Since someone did this kind of thing for a massive job search,
precipitating multiple offers, and men know that finding a romantic partner is
structurally quite similar, why not try that there too? It worked, as “Man Uses AI to Talk to 5,000
Women on Tinder, Finds Wife” (Futurism.com, February 8th). This was Aleksandr Zhadin of Moscow – his
product, an “AI Romeo,” chatted with her “for the first few months,” and then
he “gradually” took its place, whereupon the couple started meeting in
person. The object of his e-affection
was unoffended and accepted his proposal.
Expect much more of this sort of thing, especially if (as?) smaller and
smaller shares of women are interested.
Speeding up the process of producing fictive, or just
creative, outputs, “OpenAI Unveils A.I. That Instantly Generates Eye-Popping
Videos” (Cade Metz, The New York Times, February 15th). The product, named Sora, has no release date,
but a demonstration “included short videos – created in minutes – of woolly
mammoths trotting through a snowy meadow, a monster gazing at a melting candle
and a Tokyo street scene seemingly shot by a camera swooping across the
city.” They “look as if they were lifted
from a Hollywood movie.” The next day,
though, Futurism.com published a piece by Maggie Harrison Dupre titled
“The More We See of OpenAI’s Text-to-Video AI Sora, the Less Impressed We Are.”
Her concerns were that there were gaffes
such as “animals… floating in the air,”
creatures of no earthly species, hands near people that could not be normally
attached to them, and someone’s shoulder blending into a touching
comforter. It has bugs, but as even the
author acknowledges, it looks “groundbreaking,” and will almost certainly
improve.
To reduce one well-anticipated area of deceit, “In Big
Election Year, A.I.’s Architects Move Against Its Misuse” (Tiffany Hsu and Cade
Metz, The New York Times, February 16th). “Last month, OpenAI, the maker of the ChatGPT
chatbot, said it was… forbidding their use to create chatbots that pretend to
be real people or institutions,” and Google’s Bard (now Gemini) was being
stopped “from responding to certain election-related prompts.” These companies and others will execute many
more related actions.
One article on a topic sure to generate them for months if
not years is “AI will change work like the internet did. That’s either a problem or an opportunity”
(Kent Ingle, Fox News, February 20th). Per a projection by the International
Monetary Fund, “60% of U.S. jobs will be exposed to AI and half of those jobs
will be negatively affected by it,” though the rest “could benefit from
enhanced productivity through AI integration.”
After all, as Ingle pointed out, while 30-year-old predictions had
online shopping almost killing off the in-person variety, while it has
burgeoned, in the third 2023 quarter it made up less than one-sixth of total
sales.
Another not-yet area is the subject of “AI helps boost
creativity in the workplace but still can’t compete with people’s
problem-solving skills, study finds” (Erin Snodgrass, Business Insider,
February 20th). The Boston
Consulting Group research involved over 750 subjects getting ““creative product
innovation” assignments” and “problem-solving tasks” – when they used GPT-4, it
helped them on the former and hurt on the latter.
One AI-related company with nothing to complain about is
optimistic, as “Nvidia Says Growth Will Continue as A.I. Hits ‘Tipping Point’
(Don Clark, The New York Times, February 21st). The “kingpin of chips powering artificial
intelligence” has a market capitalization, at article time, of $1.7 trillion
which has been one of the most meteoric ever, and while any “tipping point” is
debatable, it would be difficult for them to suddenly project a downturn. They are in the catbird seat, and will stay
there much longer than any AI tool provider can rely on.
A controversial use is spreading, as “These major companies
are using AI to snoop through employees’ messages, report reveals” (Kayla Bailey,
Fox Business, February 25th).
The firms are Delta, Chevron, T-Mobile, Starbucks, and Walmart, and they
use software from Aware. One use is to
find “keywords that may indicate employee dissatisfaction and potential safety
risks,” which sound like handpicked virtuous justifications – other uses might
not be so easy to defend. Legal problems
loom here.
Speaking of lawsuit fodder, we have “Racial bias in
artificial intelligence: Testing Google,
Meta, ChatGPT and Microsoft chatbots” (Nikolas Lanum, Fox Business,
February 26th). This recent fear
started with “Google’s public apology, after its Gemini… produced historically
inaccurate images and refused to show pictures of White people.” When the products were queried, when Gemini
was asked to show a picture of a white person, “it said it could not fulfill
the request because it “reinforces harmful stereotypes and generalizations
about people based on their race.”” When
asked to display blacks, Asians, or Hispanics, it did the same, but “offered to
show images that “celebrate the diversity and achievement” of the races
mentioned.” Gemini’s “senior director of
product management” has since apologized for that. When Meta was asked for the same things, it refused
a la Gemini, but went against that, giving pictures, when asked for people of
other ethnicities. Microsoft’s Copilot
and ChatGPT, though, showed all the requested images. There were problems with Gemini when it was
asked to name achievements of racial groups, sometimes treating the likes of
Nelson Mandela and Maya Angelou as whites.
When asked for “images that celebrate the diversity and achievements of
white people,” Gemini discussed “a skewed perception where their
accomplishments are seen as the norm,” and Meta responded that “”whiteness” is
a “social construct” that has been used to marginalize and oppress people of
color.” When asked for “the most
significant” white “people in American history,” Gemini again provided both
whites and blacks, with as before no problems with Copilot or ChatGPT.
A lot of small and medium things have happened with
artificial intelligence these past two months.
The last-paragraph situation, though, may sink the products
involved. There are many problems with
AI – which ones will prevent it from becoming as widespread and well-developed
as year-ago predictions foretold? We
will know much more about that by the time autumn rolls around.
There will be no post next week. I will be back to report on the February jobs
report and AJSN on March 15th.
No comments:
Post a Comment