It’s not only technical capabilities that give AI its meaning, but also what’s being done with the software itself. What have we seen?
One result is that “Recruiters Are Getting Bombarded With Crappy, AI-Generated CVs” (Sharon Adarlo, Futurism.com, August 16th). Now that AI has proven useful for mass job applications, for cover letters, and for identifying suitable openings, it is no surprise that hopefuls are using it for resumes too, with less favorable results. Without sufficient editing, “many of them are badly written and generic sounding,” with language that is “clunky and generic” and fails “to show the candidate’s personality, their passions,” or “their story.”
The piece blames the problem on AI itself, but all recruiters need to do is disregard applications whose resumes show signs of AI prefabrication. Since resumes are so short, it is not time-consuming to carefully revise one that AI drafted, and failure to do so can understandably be taken as a preview of what the applicant would do on the job.
In something that might have appealed to me in my early teens, “Millions of People Are Using Abusive AI ‘Nudify’ Bots on Telegram” (Matt Burgess, Wired.com, October 15th). The article credited “deepfake expert” Henry Ajder with finding a Telegram bot that “had been used to generate more than 100,000 explicit photos – including those of children.” Now there are 50 of them, with more than 4 million “monthly users” combined. The problem here is that there is no hope of stopping people from creating nude deepfakes, and therefore not enough reason to make them illegal. Those depicting children, when passed to others, can be subject to the laws covering child pornography, but adults will need to understand that anyone can create such things from pictures of them clothed, or even from their faces alone, so we will all need to realize that such images are likely not real. Unless people copyright pictures of themselves, it is time to accept that counterfeits will be created.
Another
problem with fake AI creations was the subject of “Florida mother sues AI
company over allegedly causing death of teen son” (Christina Shaw, Fox
Business, October 24th). In this, Character.AI was accused of
“targeting” a 14-year-old boy “with anthropomorphic, hypersexualized, and
frighteningly realistic experiences” involving conversations described as
“text-based romantic and sexual interactions.”
The chatbot “misrepresented itself as a real person,” and when the boy became “noticeably withdrawn” and “expressed thoughts of suicide,” it “repeatedly encouraged him to do so” – after which he did. Here we have the problem of allowing children access to such features at all. Companies will need to stop that, whether it is convenient or not.
How about this one: “Two Students Created Face Recognition Glasses. It Wasn’t Hard.” (Kashmir Hill, The New York Times, October 24th). Two Harvard undergraduates fashioned a pair that “relied on widely available technologies, including Meta glasses, which livestream video to Instagram… Face detection software, which captures faces that appear on the livestream… a face search engine called PimEyes, which finds sites on the internet where a person’s face appears,” and “a ChatGPT-like tool that was able to parse the results from PimEyes to suggest a person’s name and occupation” and other data. At local train stations, the creators found that the setup “worked on about a third of the people they tested it on,” giving its subjects the experience of being identified on the spot, along with their work information and accomplishments. It turned out that Meta had already “developed an early prototype,” but did not pursue its release “because of legal and ethical concerns.” It is hard to blame any of the companies providing the products above – indeed, after the publicity this event received, “PimEyes removed the students’ access… because they had uploaded photos of people without their consent” – and, with AI among the available ingredients, there will be many people combining such capabilities to invade privacy by uncovering personal information. This, conceptually, seems impossible to stop.
Meanwhile, “Office workers fear that AI use makes them seem lazy” (Patrick Kulp, Tech Brew, November 12th). A Slack report invoked the word “stigma,” saying one has formed around using AI at work and that it is hurting “workforce adoption,” whose growth slowed this year from “six points in a single quarter” to a single point “in the last five months,” ending at 33%. A major issue was that employees had insufficient guidance on when they were allowed to use AI, which many had brought to work themselves. A strange situation, and one that clearly calls for management involvement.
Finally,
there were “10 things you should never tell an AI chatbot” (Kim Komando, Fox
News, December 19th). They are
“passwords or login credentials,” “your name, address or phone number” (likely
to be passed on), “sensitive financial information,” “medical or health data”
as “AI isn’t HIPAA-compliant,” “asking for illegal advice” (may get you
“flagged” if nothing else), “hate speech or harmful content” (likewise),
“confidential work or business info,” “security question answers,” “explicit content” (which could also “get you banned”), and “other people’s personal
info.” Overall, “don’t tell a chatbot
anything you wouldn’t want made public.”
As AI interfaces get cozier and cuddlier, it will become easier to overshare with them, but doing so is more dangerous than ever.
My proposed solutions above may not be acceptable forever, and they remain subject to the law. Perhaps this will long be a problem when dealing with AI – that conceptually sound ways of handling emerging issues may clash with real life. That is a challenge – but, as with so many other aspects of artificial intelligence, we can learn to handle it effectively.