Friday, August 27, 2021

Surveillance: We’ll Need to Get Back to It Pretty Soon

Back in February 2020 – that was a long time ago, wasn’t it?  – the largest national issue we faced was the rapidly growing capability of people, companies, institutions, and our governments to track us.  Indeed, that month I wrote a three-part series on electronic surveillance.  Since the pandemic started, this concern has almost disappeared from the press.  What have been the exceptions?

Had it been released at a time when we could have dealt with it more comprehensively, Shoshana Zuboff’s January 29th New York Times piece “The Coup We Are Not Talking About” would have been a good place to start.  The author called for an end to the situation in which “companies can stake a claim to people’s lives as free raw material for the extraction of behavioral data, which they then declare their private property.”  That claim-staking has been followed by “epistemic inequality, defined as the difference between what I can know and what can be known about me” and “coordinated streams of disinformation,” resulting in a life where “epistemic dominance is institutionalized, overriding democratic governance with computational governance by private surveillance capital.”  Ultimately, “if we are to defeat the epistemic coup, then democracy must be the protagonist,” through “the democratic rule of law” and recognition that “new conditions summon new rights” and “unprecedented harms demand unprecedented solutions.”  As with the Covid-19 effort, we will probably need to take more chances with less-than-100%-proven solutions.

Soon afterwards, the Times also published Cade Metz and Kashmir Hill’s “Here’s a Way to Learn if Facial Recognition Systems Used Your Photos” (January 31st).  Well, sometimes.  Anyone can use the tool Exposing.AI to determine if specific pictures were involved, but only if they “were posted to Flickr, and they need a Flickr username, tag or internet address.”  There are of course billions of photos online, and almost any could, legally or not, be stored for identification.

Now, near the beginning of anti-surveillance legislation, “Massachusetts is one of the first states to create rules around facial recognition in criminal investigations” (The New York Times, March 1st).  There, currently, “police first must get a judge’s permission before running a face recognition search,” and who may run such searches is limited.  Some cities, though, including Oakland, Portland, San Francisco, and Minneapolis, already “have banned police use of the technology” entirely.

Early this month, per “The Lesson to Learn From Apple’s Tool to Flag Child Sex Abuse” (Brian X. Chen, The New York Times, August 11th), “Apple introduced a software tool for iPhones to flag cases of child sex abuse” by checking photos uploaded to that company’s iCloud storage service against “a database of known child pornography.”  The title is inaccurate, though, as child abuse is not the same as viewing or even moving photos originating from others.  There are easy countermeasures, mainly using “a hybrid approach to storing your data,” but the issue here, whether people not under investigation can be electronically surveilled, is at best ripe for a legal challenge and at worst is clearly against the law.  The slippery slope – what other crimes could people be monitored for, what sources can be flagged, and in what other ways could activity be examined – is obvious, and is once again a subject for clarification, discussion, and, with state boundaries meaningless here, for setting national policy.

At the same time, the coronavirus has resurged, with, despite over half of Americans being fully vaccinated, threats to set new all-time highs in hospitalizations and new cases.  The seven-day average of the latter has surged more than 13-fold since its July 5th low.  Everyone is tired of wearing masks, and few of those refusing the vaccine, though some, have relented.  Even assuming that those who have had the shots continue to be safe, we could easily be looking at another six months of national emphasis and distraction.  Where will electronic surveillance be when Covid-19 largely leaves us alone?  We don’t know, but it, instead of the virus, may then be out of control.  We need help there sooner – will we get it?

Friday, August 20, 2021

Automation: Little Press Recently, but Still a Real Pending Problem

In my 2012 Work’s New Age, I wrote extensively about the coming of machines and expected them soon to take over tens of millions of jobs.  That hasn’t happened – yet.  But some of it has, and sporadic public concern has continued over the past six months.

The first I can offer is “The Robots Are Coming for Phil in Accounting,” by Kevin Roose in the March 6th New York Times.  Most of it reads as if it were from around the time of my book above, including mentions of automation taking over positions much higher paying than the industrial production work where it started, that machines gain “disruptive potential” as they “become capable of complex decision-making,” that “A.I. optimists” have long expected the absurd outcome of an equal number of positions being created after robots eliminate many, and that it is valuable to select careers “harder to automate.”  What’s new is the outcome of studies which “compared the text of job listings with the wording of A.I.-related patents, looking for phrases like ‘make prediction’ and ‘generate recommendation’ that appeared in both,” endangering largely “better-paid, better-educated workers in technical and supervisory roles.”

Next, we have a National Bureau of Economic Research paper “Tasks, Automation, and the Rise in US Wage Inequality,” by Daron Acemoglu and Pascual Restrepo, issued in June.  In the abstract, the authors said “that between 50% and 70% of changes in the US wage structure over the last four decades are accounted for by the relative wage declines of worker groups specialized in routine tasks in industries experiencing rapid automation.”  That should be scary.

Low-level service workers are now enjoying higher demand and pay than they have had maybe ever, but that may turn out to be short-lived, per Ben Casselman’s July 3rd New York Times “Pandemic Wave of Automation May Be Bad News for Workers.”  We have for years seen kiosks and phone apps allowing fast-food customers to order without involving anyone at a counter, and, as the cost of employees jumps, automated solutions get more cost-effective.  As pay levels have shot up suddenly, and software and devices take time to develop, we can expect many more of those to reach management’s input streams within the next year or two.  Whether companies are willing to pay people current market rates or not, they will have ever-better alternative options.

Three days later, also in the Times, “The pandemic has brought more automation, which could have long-term impacts for workers” provided a good summary and set of examples, including “an automated voice” taking Checkers drive-through orders and suggesting additional purchases, supermarket “robots to patrol aisles for spills and check inventory,” a Kroger warehouse with “more than 1,000 robots that bag groceries for delivery customers,” and even remote factory troubleshooting, allowing technicians to cover larger geographical territories.  In all, “technological investments that were made in response to the crisis may contribute to a post-pandemic productivity boom, allowing for higher wages and faster growth” – at the expense of jobs bringing in relatively little revenue.

There are reasons why companies do not automate as much as they could, starting with maintaining good public relations.  It could be that no great post-Covid spurt of technology eliminating workers will materialize.  But there is plenty of justification for it, especially when management sees production-level compensation as an unpredictable budget-buster.  Accordingly, permanent automation could, indeed, turn out to be the coronavirus’s most widespread and lasting legacy.  Bet against that at your peril.

Friday, August 13, 2021

Six Months of Artificial Intelligence News: Not Much, But Don’t Ignore It

Before the pandemic struck, I called the use of artificial intelligence the second most important current American issue, after, but related to, electronic surveillance.  The problem is not AI itself, but what we will allow it to do, and how we will react when it uncovers information we are not happy learning.  Except for its expected incremental progress, what has reached the press about it recently?

We got a level-setting summary in the February 23rd New York Times from Craig S. Smith, “A.I. Here, There, Everywhere.”  Common now are “conversations” with devices which we order, in sentences reminiscent of those addressed to the computer HAL 9000 in the now 53-year-old movie 2001:  A Space Odyssey, to turn on lights, put on the heat, start the oven, and so on.  Handy, but we may come to see today’s capabilities as “crude and cumbersome,” and, as devices learn our regular patterns and report deviations to systems or people which may pass them on when we don’t want them to, “privacy remains an issue.”  AI is now being packaged into humanoid “realistic 2D avatars of people” which can be used for the likes of tutoring, and is being used as in effect a fifth- or sixth-generation computer language by following commands to write software.  Of course, we can expect much more.

Another AI application, in this case in place for a decade or more, has a growing set of countermeasures, some described in Julie Weed’s March 19th “Résumé-Writing Tips to Help You Get Past the A.I. Gatekeepers” in the New York Times.  Weed recommended “tailoring your résumé, not just the cover letter, to each job you are applying for,” using the same keywords as in the advertisement, and including words like “significant,” “strong,” and “mastery.”  The software will evolve over time, as will the applicants’ best responses.
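The logic behind that keyword advice can be sketched in a few lines of code.  This is only a toy illustration of my own, not the actual algorithm of any résumé-screening product, and the sample text and keywords are invented:

```python
import re

# Toy sketch of résumé keyword screening: score a résumé by how many of a
# job ad's keywords it contains.  Real applicant-tracking systems are far
# more sophisticated; this only shows why mirroring the ad's wording helps.
def keyword_overlap(resume_text, ad_keywords):
    """Return (fraction of ad keywords found, list of matched keywords)."""
    resume_words = set(re.findall(r"[a-zé]+", resume_text.lower()))
    matched = [kw for kw in ad_keywords if kw.lower() in resume_words]
    return len(matched) / len(ad_keywords), matched

score, hits = keyword_overlap(
    "Led significant projects; strong mastery of Python and SQL.",
    ["significant", "strong", "mastery", "Python", "Tableau"],
)
# score is 0.8: four of the five hypothetical ad keywords appear verbatim
```

A résumé that never uses the ad’s own words scores low on this kind of match no matter how qualified the applicant is – which is precisely Weed’s point.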

The headline of Cade Metz’s March 15th piece, also in the Times, asked “Who Is Making Sure the A.I. Machines Aren’t Racist?”  Metz asserted that AI “is being built in a way that replicates the biases of the almost entirely male, predominantly white work force making it,” and defended that with examples of systems poor at identifying the faces of blacks, a six-year-old AI identification of a black man as a gorilla, and another set of programs being trained with an 80%-white set of faces, approximating the general population.  All of that, if legitimate, has been, can, or will be repaired.

Ted Chiang, in the New Yorker on March 30th, addressed a large underlying AI issue in “Why Computers Won’t Make Themselves Smarter.”  He invoked Ray Kurzweil’s Singularity, or the point, per Wikipedia, “at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization,” and questioned whether that would ever happen.  He cited the example of a certain roundworm, with far fewer brain neurons and other body cells than humans, on which scientists have “mapped every connection” but “still don’t completely understand its behavior.”  While computer compilers have compiled themselves for many decades, improvement stops there, exemplifying the inability, conceptually as well as so far empirically, of automata, in contrast with people, to learn from others.  These issues are not as clear as Chiang made them seem, but his view is good enough to be either refuted or accepted.

The newest is from Frank Pasquale and Gianclaudio Malgieri, “If You Don’t Trust A.I. Yet, You’re Not Wrong,” in the July 30th New York Times.  The authors, law professors, argued for more artificial intelligence regulation, but stumbled in explaining why.  They seem to have missed the differences between private and public use, that it cannot be banned simply because it does not always reach optimal conclusions, that discrimination against individuals with certain characteristics may be justified, that more pressing issues such as Covid-19 have caused it to “not appear to be a high-level Biden administration priority,” and that it is useless to talk about “racially biased algorithms” or “pseudoscientific claptrap” if nobody can define those terms.

In a year or so, if the pandemic has faded to pre-2020 levels, we need to address artificial intelligence – if we can afford to wait that long.  Ahead of it on the list of issues needing attention then, though, are two others, which, barring large breaking national developments, will be the subject of my next two posts.

Friday, August 6, 2021

A Banner Jobs Report, But We Still Have a Long Way to Go – AJSN Shows Latent Demand for 20 Million More Positions

There were high expectations for this morning’s Bureau of Labor Statistics Employment Situation Summary, and it exceeded them.  The 943,000 net new nonfarm positions were about 100,000 more than the consensus projection.  Unemployment, up last month from people rejoining the labor force but not getting jobs, fell 0.5% and 0.4% seasonally adjusted and unadjusted, and reached, respectively, 5.4% and 5.7%. 

The other key numbers also showed robust improvement.  The total number of unemployed dropped 800,000 to 8.7 million.  The total on temporary layoff shed a third to reach 1.2 million.  The count of people with long-term joblessness, or 27 weeks or longer, lost 600,000 to 3.4 million.  Those working part-time for economic reasons, or keeping that type of employment while thus-far unsuccessfully seeking full-time work, fell only 100,000, to 4.5 million, but held last time’s 700,000 improvement.  Average private nonfarm payroll earnings, including an adjustment to June’s data, were up 14 cents, more than inflation, and are now at $30.54.  The two best measures of how many Americans are working or one step away, the labor force participation rate and the employment-population ratio, gained 0.1% and 0.4% to reach 61.7% and 58.4%.

The American Job Shortage Number or AJSN, the gauge of how many new positions could be absorbed if all knew that getting one would be quick and easy, improved 700,000, as follows:

Six-sevenths of the AJSN’s loss was from lower official unemployment, with most of the rest from fewer discouraged workers.  One change is lower population growth, which has so far cut the break-even for additional monthly jobs from approximately 130,000, where it was for most recent years, to about 60,000.  The share of the AJSN from unemployment fell 1.5% to 41.4%.
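The break-even idea can be reproduced with back-of-the-envelope arithmetic: monthly job gains needed just to stay even are roughly the monthly growth in the working-age population times the share of that population in the labor force.  The inputs below are my own illustrative assumptions chosen to land near the 130,000 and 60,000 figures, not the AJSN’s actual derivation:

```python
# Rough arithmetic behind a break-even monthly jobs number.
# Inputs here are assumed for illustration, not official figures.
def break_even_jobs(monthly_pop_growth, participation_rate):
    """Monthly job gains needed just to absorb population growth."""
    return monthly_pop_growth * participation_rate

# ~210,000 new working-age people per month at ~62% participation
# gives roughly the old 130,000 break-even...
old_break_even = break_even_jobs(210_000, 0.62)   # about 130,000
# ...while slower growth of ~97,000 per month gives roughly 60,000.
new_break_even = break_even_jobs(97_000, 0.62)    # about 60,000
```

In other words, halving population growth roughly halves the number of monthly jobs the economy must add before any real progress registers.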

The state of the Covid-19 pandemic, over roughly the same period assessed by the employment numbers, June 16th to July 16th, generally worsened.  Per the New York Times, the seven-day average of new daily cases jumped 143% to 30,901, and hospitalizations rose 20% to 22,641.  The same measure of deaths, though, decreased 16% to 280.  Daily vaccinations, reflecting above all a dwindling pool of willing recipients, declined 55% to reach 519,678.  Since the shots are available on a walk-in basis all over the country, there can be no reasonable thought that the good numbers above came at the expense of a worsened coronavirus response.

A fine jobs report month indeed, but we need to stay aware of our entire situation.  We still have 4.9 million fewer jobs than before the pandemic.  The AJSN is 4.2 million higher than it was in February 2020, and is greater than it was two issues ago. 

In late May and early June, many people newly sought but did not find work.  Over the month since, lots of them did, without needing to compete with as many re-arrivals.  Still, we’re nowhere near back, and weekly state unemployment claims, hovering around 400,000, continue to reinforce that.  The turtle took another big step forward, but where we were a year and a half ago and long before remains far in front of him.