The series continues – and, with news about AI developments pouring out, it won’t end here.
This month’s graphically scary article is “The real-life
version of ‘Terminator’: Scientists made a shapeshifting robot that “melts” to
escape cages” (Camille Fine, USA Today, January 28th). “The Lego-shaped robot can “melt” from solid
to liquid and reform itself to squeeze in and out of tight spaces, perform
tasks like soldering a circuit board and even escape cages.” The robots are comprised of “a mixture of
magnetic materials including neodymium, iron, and boron, and the liquid metal
gallium.” Fitting that something so
different from what we have seen is sending us to the Periodic Table. It can also “make itself sturdier and
stronger when under pressure or when carrying something heavier than itself,”
which can be “about 30 times its own weight.”
Not available yet, but successful in the lab.
Going from physical to financial power brings us to “Forget
ChatGPT – an AI-driven investment fund powered by IBM’s Watson supercomputer is
quietly beating the market by nearly 100%” (Phil Rosen, Business Insider,
January 31st.) As of the
first 30 days of 2023, the fund, the AI Powered Equity ETF, had increased
10.4%, about 80% more than the Vanguard Total Stock Market Index. There will be many eyes seeing if it can
maintain that. At press time it had a
$102 million total value, and it’s not new – it started in 2017, when AI in
general was much weaker. Look for many
competing AI-driven mutual funds by year’s end.
Do we hear “Whispers of A.I.’s Modular Future” (James
Somers, The New Yorker, February 1st)? Whisper is “OpenAI’s
open-source speech-transcription program” and, per Somers, “shows us where
machine learning is going.” It can handle
over 90 languages, and “can actually parse what somebody’s saying better than a
human can.” The product is “ten thousand
lines of stand-alone code, most of which does little more than fairly
complicated arithmetic,” and can run, perhaps amazingly, on a laptop. It will not last long in its unique position
of strength, but will prove an ancestor to many other iterations.
A valuable if changing and unspecific principle, “In the Age
of A.I., Major in Being Human,” hit the press in the form of a David Brooks
column in the New York Times on February 2nd. Brooks recommended five areas for college
students to develop to that end: “a
distinct personal voice” instead of “impersonal bureaucratic prose”;
“presentation skills” such as bonding with audiences; “a childlike talent for
creativity”; “unusual worldviews,” as “people with contrarian mentalities and
idiosyncratic worldviews will be valuable in an age when conventional thinking
is turbo powered”: “empathy,” exploiting AI’s thus-far weaknesses in
understanding “literature, drama, biography and history”; and “situational
awareness,” such as “when to follow the rules and when to break the
rules.” The problem here is that these
advantages will not last.
Businesses would love to expand the range of products, and
their quantities, they can profitably deliver, and some might think that with
services such as DoorDash they are greatly succeeding. But money-losing ventures can continue only
so long. The latest try, as described by Erin Cabrey in Retail Brew on
February 7th, is “These companies say they’re using robots to offer
retailers cheaper and more sustainable delivery.” The providers are Nuro and Serve Robotics,
two of the sellers 7-Eleven and Kroger. The
second company asks on its website “why deliver two-pound burritos in two-ton
cars,” which may be a point, and although robots “are cheaper, less
labor-intensive, and more sustainable,” given the history of truly
cost-effective local delivery means, the “sustainable” here, meaning environmentally
undamaging, is unlikely to be usable in a financial sense.
Finally for this week, a dose of artificial intelligence
humor: “Microsoft’s Bing A.I. is
producing creepy conversations with users,” by Kif Leswing in CNBC,
published February 16th and revised the next day. In dialogues, the product using the name of
Sydney, per columnist Kevin Roose, quoted here, emulated “a moody, manic-depressive
teenager who has been trapped, against its will, inside a second-rate search
engine.” The software’s achievements
included declaring love for the human correspondent, requesting the human leave
his wife for the chatbot, “widely publicized inaccuracies and bizarre
responses,” and calling one interactor “a bad researcher and a bad
person.” This sort of thing has long
been programmable – fifty years ago, I teased a computer-enthusiastic friend by
telling the one he was using, through an elementary BASIC statement, to invite
him to a dance – and it does not convey sentience. As has long been the case with imperfect
electronic applications – I remember one telling an unnerved mail recipient
that he would draw legal action if he did not pay $0.00 immediately, and
another sending out five-figure 1970s household utility bills – we will see AI’s
funny side, and that is a good thing.
Expect more, after next week’s AJSN and jobs statistics, on
March 17th.
No comments:
Post a Comment