Friday, February 24, 2023

Robots and Other Artificial Intelligence Applications – II

This issue, the second of three straight weeks of this series (the jobs report, which drives the monthly AJSN, will be released a week later than usual), is about AI-related articles from January, some of which point up its largest concerns.

“An A.I. Pioneer on What We Should Really Fear,” by David Marchese in the January 1st New York Times, an interview of prominent front-line researcher Yejin Choi, dealt with the problem of consciousness.  A Google engineer was fired last year for claiming that some of its products could think, but that view actually cannot be refuted, as we do not know from where sentience arises.  Choi said that “when you work so close to A.I., you see a lot of limitations.”  We know that such software, and that’s what it is, can be superhuman or even unbeatable in settings where the rules are well defined, such as playing chess or checkers, but in more general situations it may not match a small child’s abilities; per Choi, “A.I. struggles with basic common sense,” since “what’s easy for machines can be hard for humans and vice versa.”  Choi in effect considered the decades-old autonomous goal-seeking problem, in which devices lack constraints people would consider obvious, to be the main AI issue, a highly reasonable view.

Continuing in another section of the same newspaper was “Consciousness in Robots Was Once Taboo.  Now It’s the Last Word,” by Oliver Whang on January 6th.  The author discussed the views of prominent AI engineer Hod Lipson, who doubted that technology could never be sentient, and who noted that “there is no consensus around what it actually refers to.”  Per Lipson, as far as we can tell, “the fundamental difference among types of consciousness – human consciousness and octopus consciousness and rat consciousness, for example – is how far into the future an entity is able to imagine itself.”  That’s a viable theoretical start, but who can determine what a rat comprehends?  The engineer claimed that, while it may improve, currently “we’re doing the cockroach version.”  This piece offered much more, and clarified that we have a long way to go in understanding what silicon and nonhuman living things think, or, in the former case, whether they think at all.  Hard material, which may defy insight indefinitely.

Swerving to a most practical AI application, we have “White Castle hiring robots to ‘give the right tools’ for serving more ‘hot and tasty food’: VP,” by Kristen Altus in Fox Business on January 7th.  We’ve already seen Flippy, a Miso Robotics device expert at preparing hamburgers and French fries, but not with roll-outs at 100 locations, as has happened here.  Now we can consider restaurant automata a natural response to higher wages, with apparently at least one robust, production-ready product on offer.

“Generative artificial intelligence,” the generic term for the technology inside chatbots similar to ChatGPT, able to create “text, images, sounds and other media in response to short prompts,” was the subject of “A New Area of A.I. Booms, Even Amid the Tech Gloom” (Erin Griffith and Cade Metz, The New York Times, January 7th).  New companies, such as Stability AI, Jasper, and ChatGPT’s OpenAI, have had little recent problem attracting venture capital, with $1.37 billion reaching the sub-sector in 2022 alone.  Although generative AI has been in progress “for years,” only early last year, “when OpenAI unveiled a system called DALL-E that let people generate photo-realistic images simply by describing what they wanted to see,” did it reach the funding forefront.  And there will be much more.

Also in the Times, Cade Metz himself hit the second major implementation issue, in “AI Is Becoming More Conversant, But Will It Get More Honest?” (January 10th).  The problems described here take several forms.  The first is simple factual errors, as explained by a founder of the startup Character.AI, who said “these systems are not designed for truth – they are designed for plausible conversation.”  Second is the effect of extreme and easily refutable views, such as denying the Holocaust, picked up along with valid Internet statements.  A third situation arises when chatbots relay reasonable but still debatable views as facts, and beyond that we have the hardest problem of all – when AI products analyze data and reach conclusions factually defensible but offensive to modern sensibilities.  The article did not get into those last two.

On January 20th, also by Metz in the New York Times, we saw “How Smart Are the Robots Getting?”  That is a knotty problem that strict definitions can simplify, and we have candidates, including passing the 72-year-old Turing Test, “a subjective measure” accomplished when people questioning automata “feel convinced that they are talking to another person.”  Metz ticked off specific AI accomplishments, but the battleground is in less specific settings.  The real current issue “is that when a bot mimics conversation, it can seem smarter than it really is.  When we see a flash of humanlike behavior in a pet or a machine, we tend to assume it behaves like us in other ways, too – even when it does not.”  That is similar to how skilled stage magicians exploit human shortcomings, such as our inability to accurately determine the directions of sounds, to bolster and even create their illusions.  There are other intelligence tests described here, and assessing them is not easy.

Next week I continue, maybe with artificial intelligence updates happening after this post’s publication date, and with overall conclusions. 

Friday, February 17, 2023

Robots and Other Artificial Intelligence Applications – I

We’ve been sort of stunned by ChatGPT’s recent exploits, which not only suddenly forced people to adjust their methods but solidly moved AI from the future to the present.  There has been much going on in this field, which should also include robots as they are now truly manifestations of AI.  This post is the first of a three-part series which will break for the March 3rd jobs report and in the unlikely event that something else about jobs and the economy seems more important and urgent.  So now, in chronological order, we start.

First is only a statistic, shared by Emerging Tech Brew citing The Wall Street Journal on September 21st.  In 2021, there were 243,000 industrial robots installed in China, which was “just about equal to the amount installed by every other country on earth combined.”  Not really shocking, as China has been adding far more industrial capacity than anywhere else, but noteworthy, as robots, being a sort of anti-human work, mean that China’s overall strategy of competing through cheap labor is over.

It's always too soon to draw conclusions on such matters, but Farhad Manjoo maintained in the October 7th New York Times that “In the Battle With Robots, Human Workers Are Winning.”  Indeed, when Manjoo asked “weren’t humans supposed to have been replaced by now – or at least severely undermined by the indefatigable go-getter robots who were said to be gunning for our jobs?,” he missed how many people have already been displaced, and that AI and robots need not absorb entire jobs to cut employment.  He also suggested that because widespread, broad-based job elimination has not already happened, it never will.  In radiology, a high-skill field now being largely automated, it is reasonable that “even if computers can get very good at spotting certain kind(s) of diseases, they may lack data to diagnose rare conditions that human experts with experience can easily spot,” but there is no reason why a group of such practitioners cannot be reduced, with the remaining employees doing the more specialized diagnoses for more patients.  Robots will improve and proliferate, on timelines of which we cannot be certain.

Not all new automata are highly intelligent, as shown in the imaginative title situation in “Meet Your New Corporate Office Mate: A ‘Brainless’ Robot” (John Yoon and Daisuke Wakabayashi, November 17th, The New York Times).  The authors chronicled a solution for people wary of what data such things wandering workplace halls may be collecting: Naver’s devices, “completing mundane tasks like fetching coffee, delivering meals and handing off packages,” skilled at using elevators without interfering with people, and represented as doing only those tasks.  This piece showed well how maximum capability need not be the only robotic goal.

At the other extreme, we have “MIT researchers creating self-replicating robots with built-in intelligence,” by Paul Best in Fox Business on November 27th.  They are “swarms of tiny robots” able to “build structures, vehicles, or even larger versions of themselves.”  This effort, still being designed and tested, “will likely be years” from implementation.  Also scary was “San Francisco Considers Allowing Use of Deadly Robots by Police” (Michael Levenson, The New York Times, November 30th).  The idea here was first implemented in the US by Dallas police, who in 2016 “ended a standoff with a gunman suspected of killing five officers by blowing him up with a bomb attached to a robot.”  The real issues here are ethical, not logistical – exactly what situations, if any, would justify such use – and they will take time to work out.

We would like to know “How AI is conquering the business world” (Guy Scriven, The Economist, December 10th).  The author saw not giant steps but an accumulation of small tasks, mastered one after another.  When enough of these responsibilities are eliminated, job consolidation can proceed.  That publication issued the unbylined “The new age of AI” in the same edition, saying “artificial intelligence is at last permeating swaths of the business world.”  Examples here included John Deere’s “fully self-driving” farm machines, tools that propose finishing sentences (as in the version of Microsoft Word I am using here), reducing data center energy consumption, rerouting impeded deliveries, sweeping floors, writing first presentation drafts, and generating computer code, all now incorporated into live production settings.  The piece mentioned Nick Bostrom’s observation that “once something becomes useful enough and common enough it’s not labeled AI anymore,” and predicted “an explosion of such ‘boring AI.’”

That may be much of artificial intelligence’s near-term future – but hardly all.  For the first articles of 2023, see the next post in this series. 

Friday, February 10, 2023

ChatGPT – The Artificial Intelligence Event of the Decade

 My previous posts about AI have emphasized actual accomplishments, but mostly these were small-scale, laboratory-bound, or needed more time and iterations to become significant.  What has happened over the last two months needs none of those qualifications.

ChatGPT, per Kalley Huang in “Alarmed by A.I. Chatbots, Universities Start Revamping How They Teach” (The New York Times, January 16th), is “a chatbot that delivers information, explains concepts and generates ideas in simple sentences.”  When its use by students to fulfill written assignments reached a Northern Michigan University philosophy class, the professor read what he said was easily “the best paper in the class,” on a subject hardly exhausted by current literature, asked the claimed writer if it was really his work, and heard the truth.

It didn’t take long for word of this capability, not only easily implementable but already in use by students, to spread through the academic community.  Per Huang, beyond that professor’s “plans to require students to write first drafts in the classroom” while “using browsers that monitor and restrict computer activity,” others are “phasing out take-home, open-book assignments” in favor of “in-class assignments, handwritten papers, group work and oral exams.”  Some are “crafting questions that they hope will be too clever for chatbots and asking students to write about their own lives and current events.”  The management of Turnitin, a “plagiarism detection service,” plans to “incorporate more features for identifying A.I.”

Soon afterwards, related happenings began hitting the press.  Samantha Murphy Kelly told us in CNN Business ten days later that “ChatGPT passes exams from law and business schools,” in these cases doing what was judged as C+-level work at the University of Minnesota law school and earning a “B to B- grade” on a Wharton business management course exam, though making “surprising mistakes” with basic math.  Pieces on pertinent implications, such as “Long story short:  Will robots take over the workplace?  How to use tech for good” (Alyssa Place, Benefit News, January 27th) about the latest exploits of chatbots in general and ChatGPT in particular, “Potential Google killer could change US workforce as we know it” (Alicia Warren, Fox Business, January 29th), “ChatGPT Just Passed an MBA Exam.  How Will It Change Business?” (Sarah Lynch, Inc., February 1st), and “Will ChatGPT and AI lead to more layoffs?” (Nate Lanxon, Benefit News, February 6th) soon followed, with necessarily preliminary speculations on how employment could be affected.  “Battle of the labs” (The Economist, February 4th) reminded us that “as the AI race heats up, ChatGPT is not the only game in town.”

What observations can we make about ChatGPT and its ilk?

First, what we have recently seen is not the end but the beginning.  We should expect some chatbot shortcomings, such as poor arithmetic, to be resolved within the year.  Any advantage of assignments requiring recent news items, which now fall outside chatbots’ knowledge, will most likely go away.  Most scarily, it may not be long before a chatbot can access major facts and some details about our lives, and put them into narratives with verisimilitude if not true information.  Therefore, the only way of neutralizing this work-offloading will be to keep Internet access, or even computer access, out of the process.

Second, it is true that duties requiring human-only abilities can stop this form of AI from absorbing entire jobs, but there is no reason why, for example, the responsibilities of two people, each with 50% chatbot-replaceable content, cannot be consolidated into one human-worked position.
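
To make that arithmetic concrete, here is a minimal Python sketch; the two positions and the 50% figure simply restate the hypothetical example above, and nothing in it comes from any cited study:

# Illustrative consolidation arithmetic: two jobs, each half chatbot-replaceable.
# These numbers mirror the hypothetical example above, nothing more.
positions = 2
chatbot_replaceable_share = 0.5  # assumed fraction of each job a chatbot absorbs

human_work_left = positions * (1 - chatbot_replaceable_share)  # in full-time equivalents
print(f"Remaining human work: {human_work_left:.1f} full-time position(s)")  # prints 1.0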

Third, once academic-world competition and selection requirements are non-factors, these tools can be immensely valuable.  People from professionals to interested dabblers can use them to generate briefings of sorts on things they want to learn about.

Fourth, we will have both a problem and an opportunity with using chatbot output to determine what could be considered the truth.  We can ask for the equivalent of college papers, or even books, answering questions such as “How can America solve its racial problems?” and “What political views are correct?”  The disagreements will emerge right away, but the information provided will have more truth than many will be willing to accept.

So let’s allow academia to solve its ChatGPT problem, which dramatically brought AI progress to our attention.  We have bigger fish to fry.  How we do with those may have a remarkable effect on the quality of our lives in years and decades to come.

Friday, February 3, 2023

Big Jobs Gain in January Offsets Part of Seasonal Unemployment – AJSN Shows Latent Demand at 16.7 Million, Up 1.1 Million

January usually has the steepest drop in American employment.  Millions of people end their holiday-related jobs, and not all find new ones.  The gap between adjusted and unadjusted employment is the year’s largest, as it is for most other labor-market numbers.

That was at the center of this morning’s Bureau of Labor Statistics Employment Situation Summary.  Dominating the headlines should be the count of net new nonfarm positions, which blew away the also-seasonally-adjusted published estimate of 185,000 by turning in 517,000.  Other figures looked good as well – adjusted joblessness shed 0.1 percentage point to reach 3.4%, average private nonfarm payroll wages beat inflation by jumping 21 cents to $33.03, and the two measures of how common it is for Americans to be working or officially unemployed, the employment-population ratio and the labor force participation rate, each grew a significant 0.1 percentage point, to 60.2% and 62.4%.  Not improving were the count of those unemployed for 27 weeks or longer, still 1.1 million, and the number of officially jobless, still 5.7 million.  On the down side were unadjusted unemployment, now 3.9% instead of 3.3%, the total working, off 180,000 to 158.692 million, and the number of people working part-time for economic reasons, or holding on to such opportunities while searching for full-time ones, which gained 200,000 for the second straight month and is now at 4.1 million.

The American Job Shortage Number or AJSN, the metric showing how many additional new positions could be quickly filled if all knew they would be easy to get, gained over 1.1 million to reach the following:

[AJSN data table: latent demand of 16.7 million.]

More than the entire increase came from the officially unemployed and from those reporting they wanted work but had not sought it for at least 12 months.  Best showing our overall progress was a year-over-year comparison, which revealed that since January 2022 the measure has lost 1.1 million, mostly accounted for by these same two components.  The share of the AJSN from official joblessness rose 3.4 percentage points and is now 34.3%.
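
For readers curious how a latent-demand metric of this kind can be assembled, here is a hypothetical Python sketch.  The groups, weights, and all counts except the 5.7 million officially unemployed from this report are illustrative placeholders, not the published AJSN methodology:

# Hypothetical latent-demand calculation: each group of potential workers is
# discounted by an assumed probability of taking work if it were easy to get.
# Only the 5.7 million officially unemployed comes from this report; the other
# counts and all weights are placeholders, not the real AJSN methodology.
# (The real AJSN sums many more groups, reaching 16.7 million this month.)
components = {
    # group: (millions of people, assumed take-up share)
    "officially unemployed":                  (5.7, 0.9),
    "want work, no search in past 12 months": (3.0, 0.8),
    "discouraged workers":                    (0.4, 0.9),
}

latent_demand = sum(count * share for count, share in components.values())
print(f"Latent demand: {latent_demand:.1f} million positions")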

On Covid-19, per The New York Times, the seven-day daily average of new cases, measured December 16th and January 16th, dropped 8% to 59,260; that for deaths, measured on the 15ths, rose 51% to 564; and that for hospitalizations, on the same dates, grew 7% to 43,137.  Despite the last two worsening, these numbers are well below the virus’s pandemic-era peaks and do not suggest particular concern about jobs being dangerous.

What do we make of all this?  The AJSN is not seasonally adjusted, so it can look worse than it is at low-employment times of the year.  Although we didn’t really add 517,000 jobs, we also didn’t lose anywhere near the typical actual December-to-January 700,000, only about one quarter of that.  As any serious poker player can tell you, avoiding losses can be as valuable as winning.  Our population, including children and those well past 65, gained only 113,000 last month, and we are, month after month, adding more jobs than that.  This was another excellent report, and the turtle took another healthy step in the right direction.
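
As a back-of-envelope Python check of that seasonal point, using the figures above (note that the 180,000 decline and the 517,000 gain come from different surveys, so this only approximates the BLS’s far more elaborate adjustment procedure):

# Rough seasonal adjustment: compare this January's raw employment change with
# the typical actual December-to-January loss.  Figures are from this post;
# the BLS's real procedure is far more elaborate.
typical_seasonal_change = -700_000  # usual actual December-to-January loss
raw_change_this_january = -180_000  # this year's actual decline

adjusted_change = raw_change_this_january - typical_seasonal_change
print(f"Roughly adjusted change: {adjusted_change:+,}")  # +520,000, near the published +517,000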