Users are finding more and more applications for the most newsworthy technology of the 2020s.
Nikolas
Lanum, in Fox News on March 22nd, told us that a “Texas
private school’s use of new ‘AI tutor’ rockets student test scores to top 2% in
the country.” At Alpha School, “students
are placed in the classroom for two hours a day with an AI assistant, using the
rest of the day to focus on skills like public speaking, financial literacy,
and teamwork.”  Their ability to work on
things they call “passion projects,” and therefore have strong interest in, can
explain these results, but to make that time available, AI may indeed be the way to go.
We also saw
that “AI enables paralyzed man to control robotic arm with brain signals” (Kent
Knutson, Fox News, March 30th). The experimental achievement started with
“sensors implanted on the surface of his brain,” which “recorded neural signals
as he imagined movements like grasping or lifting objects,” and “over two
weeks, these signals were used to train the AI model to account for daily
shifts in brain activity patterns.” After
months of practicing “controlling a virtual robotic arm,” followed by using a
“real” one, “he quickly mastered tasks such as picking up blocks, opening
cabinets and even holding a cup under a water dispenser.”  Promising, but it will take years at best before
it can be provided at scale.
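That daily-retraining detail – refitting the model so it tracks day-to-day drift in brain activity – can be caricatured with a toy linear decoder. Everything below (the channel count, the synthetic “neural signals,” the least-squares fit) is an illustrative assumption, not the study’s actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_decoder(signals, intents):
    """Fit a least-squares linear map from neural signals to intended
    movement directions -- a stand-in for the article's 'AI model'."""
    weights, *_ = np.linalg.lstsq(signals, intents, rcond=None)
    return weights

# Hypothetical setup: 16 recording channels mapped to a 2-D movement intent.
true_map = rng.normal(size=(16, 2))

for day in range(3):
    # Each day the recorded signals drift slightly, so the decoder is
    # refit on that day's imagined-movement sessions.
    drift = rng.normal(scale=0.1, size=true_map.shape)
    signals = rng.normal(size=(200, 16))       # 200 samples of neural data
    intents = signals @ (true_map + drift)     # imagined grasp/lift targets
    decoder = train_decoder(signals, intents)
    decoded = signals @ decoder
    err = np.max(np.abs(decoded - intents))
    print(f"day {day}: max decode error {err:.2e}")
```

The point of the refit is visible in the loop: without it, a decoder trained on day 0 would be applied to day 1’s drifted signals and miss; retraining on each day’s data keeps the decoded movements aligned, which is the problem the article says the two weeks of training addressed.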
The same
source told us about “The dangers of oversharing with AI tools” (April 9th).  While the likes of ChatGPT “have become
incredibly adept at learning your preferences, habits and even some of your
deepest secrets,” that means their “knowing” so much about you “raises some
serious privacy concerns.” Information
they have may be relayed back to their manufacturers, but it is not clear how
much damage that actually does. It may
take a known case of someone being badly hurt before we can effectively
regulate, or even understand, the true threat.
More tamely,
“These AI transcription voice recorders surge in popularity” (Christopher
Murray, still Fox News, April 19th). With them, users can “record, transcribe and
summarize content effortlessly,” using AI’s ability to “transcribe in 112
languages” and “generate comprehensive summaries” – capabilities available now.  One, PLAUD’s NotePin, is wearable as a pin, a
wristband, or a necklace, and, by using encrypted cloud storage, presents no
privacy concerns. The devices are
generally low-priced but require paid subscriptions for the AI itself. This is one use of AI almost certain to
continue without existential scariness.
Could we say
the same about coworkers who aren’t real?
On that, “Anthropic anticipates AI virtual employees coming in next
year, security leader says” (Alex Nitzberg, Fox Business, April 22nd). That company is creating “digital AI
employees” with “memories,” parts to play within the business, and company
accounts and passwords, along with “much greater autonomy than agents currently do
now.” Yet even Anthropic’s chief information
security officer of the title said there are many unsolved problems, such
as “AI employees” being able to “hack the system in which code is merged and
tested prior to being rolled out.”  Since
this is more of a potential application than a real one, it may not belong in
this post at all; still, because users can clearly now create bogus employees of
some sort, with small but growing capabilities, we should be aware that new
names on organization charts may not be Homo sapiens – or anything alive at all,
even if they can converse. Yes, it may
not be possible, as author Erle Stanley Gardner once had his character Perry
Mason say, to “correspond with a corpse,” but it is with these electronic
automata. So don’t be fooled – or
surprised – and that goes for the other four AI functions here as well.