Friday, February 10, 2023

ChatGPT – The Artificial Intelligence Event of the Decade

My previous posts about AI have emphasized actual accomplishments, but those were mostly small-scale, laboratory-bound, or in need of more time and iteration to become significant.  What has happened over the last two months needs no such qualifications.

ChatGPT, per Kalley Huang in “Alarmed by A.I. Chatbots, Universities Start Revamping How They Teach” (The New York Times, January 16th), is “a chatbot that delivers information, explains concepts and generates ideas in simple sentences.”  When its use by students to complete written assignments reached a Northern Michigan University philosophy class, the professor read what he called easily “the best paper in the class,” on a subject hardly exhausted by the current literature, asked the purported writer whether it was really his work, and heard the truth.

It didn’t take long for word of this capability, not only easily available but already in use by students, to spread through the academic community.  Per Huang, beyond that professor’s “plans to require students to write first drafts in the classroom” while “using browsers that monitor and restrict computer activity,” others are “phasing out take-home, open-book assignments” in favor of “in-class assignments, handwritten papers, group work and oral exams.”  Some are “crafting questions that they hope will be too clever for chatbots and asking students to write about their own lives and current events.”  The management of Turnitin, a “plagiarism detection service,” plans to “incorporate more features for identifying A.I.”

Soon afterward, related stories began hitting the press.  Samantha Murphy Kelly reported in CNN Business ten days later that “ChatGPT passes exams from law and business schools,” in these cases doing what was judged C+-level work at the University of Minnesota law school and earning a “B to B- grade” on a Wharton business management course exam, though making “surprising mistakes” with basic math.  Pertinent pieces, such as “Long story short:  Will robots take over the workplace?  How to use tech for good” (Alyssa Place, Benefit News, January 27th) about the latest exploits of chatbots in general and ChatGPT in particular, “Potential Google killer could change US workforce as we know it” (Alicia Warren, Fox Business, January 29th), “ChatGPT Just Passed an MBA Exam.  How Will It Change Business?” (Sarah Lynch, Inc., February 1st), and “Will ChatGPT and AI lead to more layoffs?” (Nate Lanxon, Benefit News, February 6th) soon followed, with necessarily preliminary speculations on how employment could be affected.  “Battle of the labs” (The Economist, February 4th) reminded us that “as the AI race heats up, ChatGPT is not the only game in town.”

What observations can we make about ChatGPT and its ilk?

First, what we have recently seen is not the end but the beginning.  We should expect some chatbot shortcomings, such as poor arithmetic, to be resolved within the year.  Any advantage to assignments requiring recent news items, which current chatbots cannot yet draw on, will most likely go away.  Most scarily, it may not be long before a chatbot can access major facts and some details about our lives and put them into narratives with verisimilitude, if not true information.  Therefore, the only way of neutralizing this work-offloading will be to keep Internet access, or even computer access, out of students’ reach.


Second, it is true that duties requiring human-only abilities can stop this form of AI from absorbing entire jobs, but there is no reason why, for example, the responsibilities of two people, each with 50% chatbot-replaceable content, cannot be consolidated into one human-worked position.

Third, once academic-world competition and selection requirements are non-factors, these tools can be immensely valuable.  People from professionals to interested dabblers can use them to generate briefings of sorts on things they want to learn about.

Fourth, we will have both a problem and an opportunity with using chatbot output to determine what could be considered the truth.  We can ask for the equivalent of college papers, or even books, answering questions such as “How can America solve its racial problems?” and “What political views are correct?”  The disagreements will emerge right away, but the information provided will have more truth than many will be willing to accept.

So let’s allow academia to solve its ChatGPT problem, which dramatically brought AI progress to our attention.  We have bigger fish to fry.  How well we do may have a remarkable effect on the quality of our lives in years and decades to come.
