For all the controversy and problems surrounding AI, it keeps building up its repertoire of ways to be useful. Which have been in the news over the past nine weeks?
According to “More
Americans are turning to AI for health advice” (Kurt Knutsson, Fox News,
July 31st), 35% of US adults “report already relying on AI to
understand and manage aspects of their well-being. From planning meals to getting fitness advice,
AI is quickly moving from a futuristic concept to a daily health tool.” As “trust in AI is climbing fast,” between 20% and 31% are using it to “explore specific medical concerns,” get “meal planning and recipes,” find “new workout routines,” and seek “emotional or therapeutic support.” All of that is constructive, unless people treat it as the equivalent of a professional’s service and then fail to get that level of help when they need it.
On the
business side, “Delta moves toward eliminating set prices in favor of AI that
determines how much you personally will pay for a ticket” (Irina Ivanova, Fortune,
July 16th). The airline has used it for “3% of fares,” and called the results “amazingly favorable.” The article wasn’t clear about how Delta accomplished that, and reactions outside the industry are likely to be negative, with one “surveillance pricing” tracker calling it “trying to see into people’s heads.” Airline tickets have long been the most strangely priced consumer product: what other one can cost more if you buy less of it, necessitating rules against leaving multistep itineraries early? I don’t know if this will work for the company, but it is vulnerable to people choosing its competitors instead, and good consumer relations are vitally important.
Another idea, disturbing to many, appears in “Where Human Labor Meets ‘Digital Labor’”
(Lora Kelley, The New York Times, August 1st). “A digital native is a person raised on the
internet. A digital nomad is a person
who moves around doing their computer job.
And a digital laborer is not a person at all.” Say what?
It’s sort of an electronic-only robot that works “independently with a
bit of management,” which can “grow and mature with its own data.” Such things “are not really in wide use yet,”
but the borders between them and people will take a while to firm up. At Salesforce, an early proponent, “customers
unhappy with a digital agent can escalate to a human,” sort of like getting out
of phone-mail jail, but if the devices are going to be on “mainstream org charts,” they may at least need to be untouted (un-outed?) as automata.
How about “21
Ways People Are Using A.I. at Work” (Larry Buchanan and Francesca Paris, The
New York Times, August 11th)?
“Almost one in five U.S. workers say they use it at least semi-regularly
for work.” They can get it to, among
many other tasks, “select wines for restaurant menus,” “digitize a herbarium,”
“make everything look better,” “create lesson plans that meet educational
standards,” “make a bibliography,” “write up therapy plans,” act “as a ‘muse,’”
“detect leaks in a water system,” “just write code,” “type up medical notes,”
“run experiments to figure out how the brain encodes language,” “help get pets
adopted,” “check legal documents in a D.A.’s office,” “get the busywork done,”
“review medical literature,” “pick a needle and thread,” “(More politely) let
band students know they didn’t make the cut,” “help humans answer more calls at
a call center,” “help translate lyrics from the 17th and 18th
centuries,” “explain my ‘legalese’ back to me,” and, fittingly, “detect if
students are using A.I.” The last one
and many of the others are not new, but the list gives a good picture of how
the technology is now being used in off-the-radar, pedestrian settings.
As of the turn of the century, anyway, almost all credit reports contained incorrect information, so it is good to see that an “AI credit disputing tool launches for consumers nationwide to correct credit report errors” (Pilar Arias, Fox Business, August 20th). It has already “been used tens of thousands of times by consumers.” AI Credit Dispute, “in the Kikoff app,” can help users “spot errors, send disputes and move forward.” Worthwhile.
Is it any
surprise that “Madison Avenue Is Starting to Love A.I.” (Emmett Lindner, The
New York Times, August 18th)?
It “can sharply lower production costs,” and can easily change any
“number of different elements” in a commercial or print ad. AI use can be anywhere between “easy to spot”
and “difficult to discern,” and, although “generational divides inform how much
A.I. will be tolerated,” “there is no doubt the technology is changing
advertising.”
Finally, we
got word that “Amazon backs AI startup that lets you make TV shows” (Kurt
Knutsson, Fox News, September 12th). Fable’s “artificial intelligence platform,” Showrunner,
intends to let people put together their “own episode of a hit show without a crew
or cameras, only a prompt.” Sort of like the fanzines of the old days, which offered fans’ independent looks at what characters in novels, comic books, or movies might do, this effort would at least make a splendid toy. And not all
such products would need to be G or PG-rated.
For now, Showrunner is “focused entirely on animated content,” but it seems certain that it will eventually handle realistic-looking humans, and its output could be easily edited.
We have a lot
of good things here. The few detrimental ones may not survive, as standards for AI use are still in their childhood, if not their infancy. Applications
such as these are evidence of why artificial intelligence, even if it falls way
short of its loftiest expectations, will still be valuable. And there will be many, many more.