Friday, December 5, 2025

Artificial Intelligence Going Right Means No Total Crash Is Possible

There’s been ever-increasing talk of an “AI bubble,” meaning to some a coming business shakeout, and to others a concern that the whole thing will prove illusory.  AI may well fall short of being a massive, overarching technological change, but over 2025, and especially over the past three months, it has produced a steady flow of valuable applications.  Here are some worthy of your attention.

To stanch a problem that had been causing deaths and threatening huge lawsuit settlements, we saw “OpenAI announces measures to protect teens using ChatGPT” (Stephen Sorace, Fox Business, September 16th).  These “strengthened protections for teens will allow parents to link their ChatGPT account with their teen’s account, control how ChatGPT responds to their teen with age-appropriate model behavior rules and manage which features to disable, including memory and chat history.”  It is now in place, and is at least a commendable start.

From another corporate giant, “Elon Musk Gambles on Sexy A.I. Companions” (Kate Conger, The New York Times, October 6th).  And sexy they are certainly trying to be.  Musk’s firm xAI offered “cartoonish personas” which “resemble anime characters and offer a gamelike function:  As users progress through ‘levels’ of conversation, they unlock more raunchy content, like the ability to strip (them) down to lacy lingerie.”  They would also talk about sex, and have kindled romantic, as opposed to merely pornographic, user interest.  As for the pornographic side, “ChatGPT to allow ‘erotica for verified adults,’ Altman says” (Anders Hagstrom, Fox Business, October 15th).  CEO Sam Altman said he authorized this capability partly as a response to the teen protections described above, and expected that “In December, as we roll out age-gating more fully and as part of our ‘treat adult users like adults’ principle, we will allow even more.”

In a rather unrelated achievement, “Researchers create revolutionary AI fabric that predicts road damage before it happens” (Kurt Knutsson, Fox News, October 15th).  “Researchers at Germany’s Fraunhofer Institute have developed a fabric embedded with sensors and AI algorithms that can monitor road conditions from beneath the surface,” which would “make costly, disruptive road repairs far more efficient and sustainable” by assessing “cracks and wear in the layers below the asphalt.”  The fabric “continuously collects data,” and “a connected unit on the roadside stores and transmits this data to an AI system that analyzes it for early warning signs.”  Seems conceptually solid, and is now being tested.
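For the technically curious, the loop described there, sensors feeding a roadside unit that watches for early warning signs, boils down to baseline-and-deviation monitoring.  Here is a minimal sketch in Python; every name and threshold is my invention for illustration, not Fraunhofer’s actual system:

import statistics

def find_early_warnings(readings, window=50, z_threshold=3.0):
    """Flag strain readings that deviate sharply from the recent baseline.
    `readings` is a list of (position_m, strain) pairs from the embedded
    fabric; names and thresholds are invented for illustration."""
    warnings = []
    for i in range(window, len(readings)):
        baseline = [s for _, s in readings[i - window:i]]
        mean = statistics.mean(baseline)
        spread = statistics.pstdev(baseline) or 1e-9  # avoid divide-by-zero
        position, strain = readings[i]
        z = (strain - mean) / spread
        if abs(z) > z_threshold:
            warnings.append((position, strain, z))  # candidate crack or wear spot
    return warnings

The real system presumably does far more, but even this simple form shows why flagging trouble below the asphalt before it surfaces could make repairs cheaper and less disruptive.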

If you want more than just steamy opposite-sex companions, now “People are talking with ‘AI Jesus.’  But do they have a prayer?” (Scott Gunn, Fox News, October 26th).  The author named concerns with that app, some from his Christian perspective, such as “your conversation might take a strange turn when ‘Jesus’ says something that’s just not true or makes up a Bible verse or reference that doesn’t exist,” and that using it constitutes “replacing the living and true God with a false God.”  He also noted that “people in church… will answer your questions and support you through uncertain times.”  This program could be used to learn Christian teachings, and might end up helping people “grow in faith and love,” but, per Gunn, it’s no substitute for the old-fashioned means.

Medical-related AI uses have been growing exponentially, and, in the October 30th New York Times, Simar Bajaj gave us “5 Tips When Consulting ‘Dr.’ ChatGPT.”  Although “ChatGPT can pass medical licensing exams and solve clinical cases more accurately than humans can,” and such chatbots “are great at creating a list of questions to ask your doctor, simplifying jargon in medical records and walking you through your diagnosis or treatment plan,” they “are also notorious for making things up, and their faulty medical advice seems to have also caused real harm.”  The pieces of advice are “practice when the stakes are low,” “share context – within reason,” “check in during long chats” by asking it to summarize what it “knows,” “invite more questions,” and “pit your chatbot against itself” by requesting and verifying sources.

Back to romantic uses with “How A.I. Is Transforming Dating Apps” (Eli Tan, The New York Times, November 3rd).  Online dating, per a mountain of articles and anecdotal reports, is now a disaster zone of dissatisfaction, so the appearance of “artificial intelligence matchmakers” must at least have potential.  Users enter information about what kind of partner they want, the tool distills the candidate pool down to a single person, and the user pays individually for that introduction.  I don’t think this is really anything new, just an adjustment from providing a smaller number of recommendations to providing exactly one, but perceptions are powerful, and spending $25 for a crack at meeting “the one” may turn out to have great emotional, and even logistical, appeal.
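If my reading is right, the shift is mostly a parameter change: score the pool against stated preferences and return one result instead of several.  A toy sketch, with an invented scoring scheme that no actual app necessarily uses:

def match(user_prefs, candidates, k=1):
    """Score candidates by how many stated preferences they satisfy,
    then return the top k.  The 'AI matchmaker' pitch is essentially k=1."""
    def score(candidate):
        return sum(1 for key, want in user_prefs.items()
                   if candidate.get(key) == want)
    return sorted(candidates, key=score, reverse=True)[:k]

# e.g. match({"city": "Boston", "likes_hiking": True}, candidate_pool)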

Another personal service AI has been providing is counseling.  But “Are A.I. Therapy Chatbots Safe to Use?” (Cade Metz, The New York Times, November 6th).  The question here is not whether the products are useful, but whether they “should be regulated as medical devices.”  Because “how well therapy chatbots work is unclear,” on the day this article was published “the Food and Drug Administration held its first public hearing to explore that issue.”  At the least, such programs will be usable only unofficially for psychiatric counseling; at best, certain ones will be formally, and perhaps legally, approved.

The other side of one of the technology’s most established settings came out in “I’m a Professor.  A.I. Has Changed My Classroom, but Not for the Worse” (Carlo Rotella, also in the Times, November 25th).  The author, a Boston College English instructor, related how his students “want to be capable humans” and “independent thinkers,” and “the A.I. apocalypse that was expected to arrive in full force in higher education has not come to pass just yet.”  He had told his students that “reading is thinking and writing is thinking,” “using A.I. to do your thinking for you is like joining the track team and doing your laps on an electric scooter,” and “you’re paying $5 a minute for college classes; don’t spend your time here practicing to be replaceable by A.I.”  Those things, and the “three main elements” of “an A.I.-resistant English course,” namely “pen-and-paper and oral testing, teaching the process of writing rather than just assigning papers, and greater emphasis on what happens in the classroom,” have seen this contributor through well.

In the same publication on the same day, Gabe Castro-Root asked us “What Is Agentic A.I., and Would You Trust It to Book a Flight?”  Although not ready now, its developers claim it “will be able to find and pay for reservations with limited human involvement,” once the customer provides his or her credit card data and “parameters like dates and a price range for their travel plans.”  For now, agentic A.I. can “offer users a much finer level of detail than searches using generative tools.”  One study found that earlier this year, “just 2 percent of travelers were ready to give A.I. autonomy to book or modify plans after receiving human guidance.”  If hallucinated flights, hotels, and availability prove to be a problem, that may not get much higher.
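To make the “limited human involvement” idea concrete, here is a hypothetical sketch of such an agent’s core loop; the TripRequest fields, and the search and pay stand-ins, are my assumptions, not any vendor’s real interface:

from dataclasses import dataclass

@dataclass
class TripRequest:
    origin: str
    destination: str
    depart: str          # ISO date, e.g. "2026-03-14"
    max_price: float     # the user's price ceiling

def book_with_agent(request, search, pay, require_approval=True):
    """Hypothetical core loop of an agentic booking tool.  `search` and
    `pay` stand in for whatever APIs a real agent would call; nothing
    here reflects an actual vendor's interface."""
    options = [o for o in search(request) if o["price"] <= request.max_price]
    if not options:
        return None  # nothing within the user's parameters
    best = min(options, key=lambda o: o["price"])
    if require_approval:
        # The step most travelers still want: a human says yes before money moves.
        answer = input(f"Book {best['name']} for ${best['price']}? [y/n] ")
        if answer.strip().lower() != "y":
            return None
    return pay(best)  # full autonomy: the step only ~2 percent said they'd delegate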

Another capability not here yet but perhaps on the way appeared in “Another Use for A.I. – Talking to Whales” (David Gruber, again in the Times, November 30th).  Although the hard part, actually understanding whale sounds, still lies in the future, AI has proved handy at anticipating “word patterns” as it does with human language, and can “accurately predict” the clicks whales make “while socializing,” as well as “the whale’s vocal clan, and the individual whale with over 90 percent accuracy.”  We don’t know how long it will take for humans to decode this information, but AI is helping to clear conceptual problems in advance.
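For flavor, here is a toy version of that prediction task, guessing a coda’s clan or whale from its inter-click timing by nearest labeled example; the actual research models are vastly richer, and everything below is illustrative:

def classify_coda(intervals, labeled_codas):
    """Guess which vocal clan (or individual whale) produced a coda from
    its inter-click intervals, by nearest labeled example.  A toy stand-in:
    the real models are far more sophisticated, but the task shape is similar."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best_label, best_dist = None, float("inf")
    for label, known in labeled_codas:
        if len(known) != len(intervals):
            continue  # only compare codas with the same click count
        d = distance(intervals, known)
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label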

Once more in the November 25th New York Times came the revelation that “A.I. Can Do More of Your Shopping This Holiday Season” (Natalie Rocha and Kailyn Rhone).  Firms providing “chatbots that act as conversational stylists and shopping assistants” include Ralph Lauren, Target, and Walmart.  Customers with ChatGPT can use an “instant checkout feature” so they “can buy items from stores such as Etsy without leaving the chat.”  Google’s product “can call local stores to check if an item is in stock,” and “Amazon rolled out an A.I. feature that tracks price drops and automatically buys an item if it falls within someone’s budget.”  While “many of the A.I. tools are still experimental and unproven,” per a Harris poll, “roughly 42 percent of shoppers are already using A.I. tools for their holiday shopping.”
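That last feature is, at heart, a simple watch-and-trigger rule.  A hypothetical sketch, with get_price and purchase standing in for whatever Amazon actually calls internally:

import time

def watch_and_buy(item, get_price, budget, purchase, poll_seconds=3600):
    """Poll an item's price and buy once it drops within the user's budget,
    mirroring the feature described above.  `get_price` and `purchase` are
    stand-ins; this is not Amazon's real interface."""
    while True:
        price = get_price(item)
        if price <= budget:
            return purchase(item, price)  # auto-buy inside budget
        time.sleep(poll_seconds)          # otherwise, check again later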

And so it is going.  Most of these innovations don’t require ever more expensive expansion of large language models.  Why would people stop using them?  Why would companies stop improving them in other ways?  They are here to stay, and so, it must be, is artificial intelligence.
