Friday, June 30, 2023

Five Good Shots Against Remote Work

Now that the pandemic has eased into an endemic phase, if that, the medical need for people to work from home has disappeared.  Yet one thing it told us is that many employees would like to keep doing so.  Is remote work a good idea?  Here are several pieces maintaining it is not.

In “You Call This ‘Flexible Work’?,” in the New York Times on April 12th, Fred Turner contrasted the current situation with the 1938 establishment of the Fair Labor Standards Act, which “formally ratified the division of work time from free time.”  Per Turner, “until recently, the physical distance between workplace and home helped guarantee those terms.  The commute enforced a boundary between professional and personal time that millions observed every day.  So, too, did the calendar, dividing days into weekdays for work, weekends for leisure.”  Since then, people have spent less time on commuting, but in the past two decades our homes’ “walls had been well and truly breached,” as “everything we do online can be tracked,” and employers can even “peer into our living rooms, learn a great deal about who we are and use it to alter the terms of our employment.”  And some wonder why unions have made a comeback.

One chronic problem with employment beyond offices is “What Young Workers Miss Without the ‘Power of Proximity’” (Emma Goldberg and Ben Casselman, The New York Times, April 24th).  Both formal studies and common knowledge have told us that ample feedback is not only possible but actually given mostly when people are physically near their bosses or mentors.  The problem of remote workers being “out of sight, out of mind” has not been solved, and neither has the downside of remote meetings.

There are plenty of objectors to people working from home on the other side of the desk, as “Bosses are fed up with remote work for 4 main reasons.  Some of them are undeniable” (Jane Thier, Fortune, June 14th).  “The golden age of remote work seems to be ending,” as there is “increasing anti-remote literature” and “even tech firms (the first industry that told employees they could work from home forever just a few years ago) are getting engineers and project managers back in the office.”  Thier’s four reasons are that “remote work is bad for new hires and junior employees”; that “workers admit that remote work (sometimes) causes more problems than in-person work,” especially with unaligned office days; that “remote workers put in 3.5 hours less per week compared to in-person workers,” as documented in a 2022 Liberty Street Economics report; and that “productivity plummets on days when everyone is working remotely (anecdotally)” – Friday afternoons, perhaps, above all.  In all, “the tide is turning.”

Though how individual workers manage it varies, working from home provides a broader, richer, and more satisfying set of goofing-off opportunities.  Alyssa Place told us how many workers deflect scrutiny of what they are doing in “Caught!  Remote employees reveal their top excuses for not working,” on April 20th in Employee Benefit News.  The excuses were “technical difficulties,” “family or personal emergencies,” “illness,” “misunderstandings,” “distractions and interruptions,” and “other work obligations.”  These all can be legitimate, but the same ones over and over, especially from the same workers, can be telling.

Perhaps pithiest, and therefore most scathing, was “The working-from-home illusion fades,” subtitled “It is not more productive than being in an office, after all,” on June 28th in the “Free exchange” column in The Economist.  As “a gradual reverse migration is under way, from Zoom to the conference room,” “new research,” including a paper showing that workers handled fewer calls with less efficiency when working from home, has shown that “offices, for all their flaws, remain essential.”  As a result, the pursuit of “higher productivity” will push supervisors toward some combination of office mandates and lower pay for remote-only positions.

The pendulum between working from home and working in offices has been swinging back and forth since the 1990s.  Its motion was disturbed by Covid-19, but the spirit of present times is toward the office.  There are other sides to this controversy, but for now, awaiting a possible 2030s rediscovery of the advantages of working from home, the office is winning.

Friday, June 23, 2023

Artificial Intelligence – Key Issues and Considerations – III

Continuing on…

In some technical boom times, the companies profiting most have been those providing supporting services.  In The Economist’s The Bottom Line newsletter on June 3rd, Guy Scriven looked at that for AI in “Selling shovels in a gold rush.”  He named Amazon Web Services and Microsoft Azure for storage, Digital Realty and Equinix for entire data centers, Wistron and Inventec, which “assemble servers for the cloud giants and for local data centres,” and “80-odd” more firms with other services.

We know something about which positions won’t be affected much by AI, but how about those in the front lines?  Per Aaron Mok and Jacob Zinkula in Insider on June 4th, “ChatGPT may be coming for our jobs.  Here are the 10 roles that AI is most likely to replace.”  First mentioned is “tech jobs,” namely “coders, computer programmers, software engineers, and data analysts.”  I wrote in 2012’s Work’s New Age that these positions were unusually susceptible to automation, got little published agreement elsewhere, and now that may finally come true.  Others were “media jobs (advertising, content creation, technical writing, journalism),” “legal industry jobs (paralegals, legal assistants),” market research analysts, teachers, “finance jobs (financial analysts, personal financial advisors),” traders, graphic designers, accountants, and customer service agents.  It is easy to see that all will be replaceable by what AI and other electronic services can offer, but as we saw before, embedded worker bases are resistant.

One view bound to get attention is “Big Tech Is Bad.  Big A.I. Will Be Worse” (Daron Acemoglu and Simon Johnson, The New York Times, June 9th).  With billions of people benefiting from these products, I don’t care for that premise, but if you substitute, say, “dominating” for the adjective, it makes sense.  Bigness is unavoidable, though, as such nonphysical, easily portable products are natural monopoly or oligopoly fields, and regulation will be developed along with, or at least soon after, their proliferation.

We reached a landmark earlier this month.  Per Bloomberg Technology’s “This Week in AI” on June 10th, “ChatGPT creator OpenAI was hit with its first defamation lawsuit over its chatbot’s hallucinations,” as “a Georgia radio host says it made up a legal complaint accusing him of embezzling money.”  The suit is strong, and we’re about to get some precedents – nonfictional ones – on this issue that could soon become depressingly commonplace.

Cade Metz asked, in the June 10th New York Times, “How Could A.I. Destroy Humanity?”  The problem seems to center on autonomy, especially if such systems were allowed access to “vital infrastructure, including power grids, stock markets and military weapons.”  Though still limited and not yet successful, “researchers are transforming chatbots like ChatGPT into systems that can take actions based on the text they generate.”  Given goals, such software will do anything it can to achieve them; for example, “researchers recently showed that one system was able to hire a human online to defeat a Captcha test.  When the human asked if it was ‘a robot,’ the system lied and said it was a person with a visual impairment.”  We will end up blocking off the pathways to true action, but if there are gaps, automata will find them.

Writing about “the Singularity,” or “the moment when a new technology… would unite human and machine, probably for the better but possibly for the worse,” an idea originated by computer scientist John von Neumann in the 1950s, has made a sharp comeback.  And now, we have “Silicon Valley Confronts the Idea That the ‘Singularity’ Is Here” (David Streitfeld, The New York Times, June 11th).  As AI “is roiling tech, business and politics like nothing in recent memory,” resulting in “extravagant claims and wild assertions,” some think that massive transition is at hand or nearly so.  One long-time advocate, author and inventor Ray Kurzweil, now forecasts it to arrive by the 2040s, but “critics counter that even the impressive results of (large language models) are a far cry from the enormous, global intelligence long promised by the Singularity.”  So we will see, but not today or tomorrow.

Finally, back to the counting-house.  Per Yiwen Lu, on June 14th and also in the Times, “Generative A.I. Can Add $4.4 Trillion in Value to Global Economy, Study Says.”  I’ve seen a lot of trillions in the news lately, especially in American deficits and the market capitalization of the largest companies, and here is another.  Per this McKinsey effort, this one is annual, but “up to,” as “the vast majority of generative A.I.’s economic value will most likely come from helping workers automate tasks in customer operations, sales, software engineering, and research and development” – mostly consistent with the Insider article above.  Just one trillion dollars is $1,000,000,000,000 – how long will it be before we start talking about quadrillions?  And what will the status of artificial intelligence be then?

Friday, June 16, 2023

Artificial Intelligence – Key Issues and Considerations – II

What’s been happening with AI?  This time nothing new technically, but a host of preparations for what is coming.

While not really “The rise of the chatbots” (Rachel Metz, Bloomberg Tech Daily, May 25th), maybe we can say “It’s raining chatbots,” as “there are AI chatbots in drive-thrus.  They’ve been built into Snapchat.  They’re recommending recipes at BuzzFeed and, disturbingly, have replaced human assistance at the National Eating Disorders Association.”  But this piece was most interesting for the money being raised and assigned to them:  $450 million for Anthropic “in its last funding round” bringing it to “more than $1 billion thus far,” $150 million for Character.AI, and $101 million for Stability AI, all hoping to change the current situation in which “none of these contenders has so far appeared to rival ChatGPT in terms of consumer popularity, name recognition, or funding” – not even the last, despite those sums.

Are we really looking on as “A Hiring Law Blazes a Path for A.I. Regulation” (Steve Lohr, The New York Times, May 25th)?  Well, although there have been AI regulations in place since at least 2021, every new one can be a meaningful precedent.  Now, in New York, “the city’s law requires companies using A.I. software in hiring to notify candidates that an automated system is being used.  It also requires companies to have independent auditors check the technology annually for bias.  Candidates can request and be told what data is being collected and analyzed.  Companies will be fined for violations.”  How this law is enforced, what firms and jobseekers say about it, and how often it will be broken will all get nationwide attention.

In a related area, “AI is here to stay; it’s time to update your HR policies” (Breck Sumas, Fox Business, May 27th).  Organizations, per the owner of a human resources firm, will need to decide which products workers can use on the job, and for what, keeping in mind AI’s great utility and insidious data insecurity, and should be starting to develop, document, and implement those rules now.

Of the people with great AI fears, CEOs of major AI companies are at the top.  While such views are controversial, it may have been surprising for us to see that “A.I. Poses ‘Risk of Extinction,’ Industry Leaders Warn” (Kevin Roose, The New York Times, May 30th).  The statement, released that day by the nonprofit Center for A.I. Safety and “signed by more than 350 executives, researchers and engineers working in A.I.” including the heads of OpenAI, Google DeepMind, and Anthropic, read, in full, “Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.”  That organization did not venture a view on how that catastrophe would happen, but the danger of autonomous goal-seeking programs has long been understood.  There are now large groups on both sides of the fear/no fear divide, and arguments between them may not always be harmless and polite.

It's not too early to look at “How AI will revolutionize politics in 2024, and why voters must be vigilant” (Brian Athey, Fox News, June 2nd).  Although much in AI will change over the next year-plus, we now must consider the difficulty we will have, given excellent-quality synthesized images and falsely attributed written statements, in “discerning reality.”  As well, “copywriting for fundraising emails, captions for social media posts, and scripts for campaign videos can now all be produced with an unprecedented level of speed, personalization, and diversity.”  All of these “are currently being navigated by people whose mandate is to win at all costs,” making ethical behavior sporadic at best.  The past two presidential elections told us a great deal about voters’ often tenuous perception of the truth, and the next may be vastly worse.

Finally, in an area adjoining AI, we see that “Robots could go full ‘Terminator’ after scientists create realistic, self-healing skin” (Emma Colton, Fox News, also on June 2nd).  Remember the passage in one of those films when someone warned that automata could then have “bad breath”?  Scientists have now developed “layers of synthetic skin that can now self-recognize and align with each other when injured, simultaneously allowing the skin to continue functioning while healing.”  We may get to interact with people who aren’t people in person, unawares, as well as electronically.  From there… who knows?

Back for more with the next post.

Friday, June 9, 2023

Artificial Intelligence – Key Issues and Considerations – I

Although the next round of AI’s technological progress will be in the background for a while, there’s no getting away from this topic.  It’s cheek by jowl with jobs and the economy, and we know little more about what effect it will ultimately have than we knew about that of cars when Karl Benz and Gottlieb Daimler were tinkering with contraptions.  I intend to provide only the most important concerns and leave off the seemingly endless pieces speculating on whether AI will be a boon to or the end of humanity, for the same reasons baseball writer Bill James gave about another issue decades ago: “1.  I don’t know, and 2.  You don’t know, either.”  This is the first of at least three such consecutive posts.

Oldest, but still within the month, is “8 Questions About Using AI Responsibly, Answered” (Tsedal Neeley, Harvard Business Review, May 9th).  After “How should I prepare to introduce AI at my organization?” (“Ensure that everyone has a basic understanding of how digital systems work…  make sure your organization is prepared for continuous adaptation and change… build AI into your operating model”), we got “How can we ensure transparency in how AI makes decisions?” (“Recognize that AI is invisible and inscrutable and be transparent in presenting and using AI systems… prioritize explanation as a central design goal”), “How can we erect guardrails around LLMs [large language models] so that their responses are true and consistent with the brand image we want to project?” (“Tailor data for appropriate outputs… document data”), “How can we ensure that the dataset we use to train AI models is representative and doesn’t include harmful biases?” (“consider the trade-offs you make…” and get “diverse teams” to “collect and produce the data used to train models”), “What are the potential risks of data privacy violations with AI?” (follow the seven Privacy by Design principles), “How can we encourage employees to use AI for productivity purposes and not simply to take shortcuts?” (“evaluate whether AI’s strengths match up to a task and proceed accordingly”), “How worried should we be that AI will replace jobs?” (not across the board), and “How can my organization ensure that the AI we develop or use won’t harm individuals or groups or violate human rights?” (“Slow down and document AI development… establish and protect AI ethics watchdogs… watch where regulation is headed”).  A worthwhile primer.

Four days later, The Economist covered the second-to-last point above in “Your new colleague; Artificial intelligence is about to turn the economy upside down.  Right?”  This article cited a Goldman Sachs paper projecting that “in a best-case scenario generative AI could add about $430 billion to annual global enterprise-software revenues,” which works out to just under $400 for each of the world’s 1.1 billion office workers.  Yet adoption could be slow, considering examples such as the 90-year lag between the arrival of automating technology and the decimation of telephone operators’ jobs, and the continuing presence of subway-train drivers and traffic police.  Additionally, “it is even possible that the AI economy could become less productive,” as may be the case with smartphones and remote work and certainly was, for a long time, with personal computers.
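The per-worker figure is easy to check; here is a quick sketch of the arithmetic, using only the two numbers cited in the article (the calculation itself is mine, not Goldman Sachs’s):

```python
# Back-of-the-envelope check on the Goldman Sachs figures quoted above.
best_case_revenue = 430e9   # projected annual generative-AI enterprise-software revenue, USD
office_workers = 1.1e9      # estimated office workers worldwide

per_worker = best_case_revenue / office_workers
print(f"${per_worker:,.2f} per office worker per year")  # just under $400
```

Roughly $391 per worker per year, consistent with the “just under $400” reading above.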

Here’s a foundation for something going wrong: “AI tools being used by police who ‘do not understand how these technologies work’: Study” (Chris Eberhart, Fox News, May 15th).  Respondents were “not familiar with AI, or with the limitations of AI technologies,” although they liked having this capability.  Perhaps a basic course of some sort should be required.

A fine semi-philosophical question hit the press in “Is It Too Late to Regulate A.I., or Too Soon?” (Timothy B. Lee, Slate, May 18th).  It started with an account of OpenAI CEO Sam Altman’s May 16th “appearance before the Senate Judiciary Committee,” in which the corporate leader asked for an agency that could license “any effort above a certain scale of capabilities, and could take that license away and ensure compliance with safety standards,” with special concern about systems that could “self-replicate and self-exfiltrate into the wild.”  Such a regulatory system is being built in Europe.  Either approach could call for somewhere between scrutiny and a ban on incremental improvements to existing releases, such as ChatGPT4, and could greatly delay availability of future ones.  In the meantime, regulating bodies would need to understand the current issues and technical state, which could also take a while.  No, it’s not too late, but if governments, not noted for being nimble, cannot keep up, it will be too soon.

After all these high-level AI concerns, how about some pithy advice on how to use it?  We got that in “On Tech A.I.: Get the best from ChatGPT with these golden prompts” (Brian X. Chen, The New York Times, May 25th).  Suggestions here are geared also to Bing, from Microsoft, and Bard, a Google product.  First, “if you’re concerned about privacy, leave out personal details like your name and where you work,” as they could be shared; omit “trade secrets or sensitive information”; and be aware that the tools may have “hallucinations,” as they “can make things up” “while trying to predict patterns from their vast training data,” some of which is “wrong.”  From there, use prompts starting with “act as if,” continuing with the role you want the software to play, and “tell me what else you need to do this.”  Instead of starting fresh, “keep several threads of conversations open and add to them over time.”

More next week, as this area continues to evolve.

Friday, June 2, 2023

A Strange but Telling Jobs Report: 339,000 Net New, Latent Demand Up Over a Million

The headline piece of this morning's Bureau of Labor Statistics Employment Situation Summary was how the number of nonfarm payroll jobs destroyed the estimates.  I saw three of those, ranging from 180,000 to 193,000, and it came in at 339,000.  Right behind that, though, was a 0.3 percentage point jump in seasonally adjusted unemployment, from 3.4% to 3.7%, matched by the same change to the unadjusted variety, which went from 3.1% to 3.4%.

How could unemployment and jobs both go up so much?  Another figure held the answer.  The count of people reporting they did not want to work fell from 95,077,000 to 93,912,000 – over a million in one month. 
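As a quick check of the “over a million” claim, here is the arithmetic on the two counts just quoted (the figures are from the report; the subtraction is mine):

```python
# Month-over-month change in the count of people reporting
# they did not want to work, per the figures quoted above.
earlier_month = 95_077_000
later_month = 93_912_000

drop = earlier_month - later_month
print(f"{drop:,} fewer people reporting no desire to work")  # 1,165,000
```

A drop of 1,165,000 in one month, comfortably over a million.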

Otherwise, the number employed dropped 73,000 to 161,002,000, not much but not a gain, and the two measures of the likelihood of people working or being one step away, the employment-population ratio and the labor force participation rate, lost 0.1% and broke even respectively to reach 60.3% and 62.6%.  The count of those officially jobless for 27 weeks or longer stayed at 1.2 million, but the number of those working part-time for economic reasons, or keeping such limited work while looking thus far unsuccessfully for the full-time variety, matched May’s 200,000 loss and is now at 3.7 million.

The American Job Shortage Number or AJSN, the statistic showing how many additional positions could be quickly filled if all knew they would be easy to get, gained over a million this month.

The half-million gain in the AJSN’s unemployment component was accompanied by almost as much from those wanting work but not looking for it in the previous year.  Otherwise, the largest contributions were from people temporarily unavailable and those in school or training.  The share of the AJSN from unemployment gained 1.3 percentage points to reach 31.3%.

The most informative comparison was with the year before.  Although there were 2.4 million more Americans working in May 2023 than in May 2022, latent demand was almost identical.  The AJSN for both rounded to 16.4 million, and the largest difference in any of the categories above was less than 180,000.  The count of those in the armed services, institutionalized, and off the grid fell 970,000, meaning real gains for other statuses.  All of that means we have completely absorbed the 2.4 million jobs, without any decrease in how many people want to work.  And our population, even including children, retirees, and 570,000 more claiming no interest in employment, increased less than that.

Overall, what’s happening?  We are adding many jobs and they are being filled.  While masses of baby boomers are turning 65, they are hardly moving uniformly from employment to not wanting that.  There were plenty of Americans who, in the past month, joined more ambitious categories but did not find work.  Wait until next month for them – for most, it won’t take much longer.  The United States job market is getting more and more robust, and this edition underscored how deep our pool of potential employees actually is.  Accordingly, the turtle took another solid step forward.