Friday, January 17, 2025

Four Corporate Practices That May Surprise You – And Displease You

What have those pesky managers at our largest companies been up to?

In “Is Your Driving Being Secretly Scored?” (The New York Times, June 9th), author Kashmir Hill asked “You know you have a credit score.  Did you know that you might also have a driving score?”  That score, built from “telematics” data, “reflects the safety of your driving habits – how often you slam on the brakes, speed, look at your phone or drive late at night,” and is supplied to insurance companies by car manufacturers, or “from apps that drivers already have on their phones,” which can include Life360, MyRadar, and GasBuddy.  These tools often have their data-collection capabilities disclosed only in legal-looking fine print, and unspecifically at that, such as “we may collect third party data and reports.”  Yet auto insurers have long used personal data, so this is nothing totally new.  In most cases it can be shut off, or you can choose to do as I did – leave it alone, knowing that relaying boring driving habits can only reduce your premiums.  Those more privacy-concerned can dig out this article for much more – it printed to 11 pages.
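
To make the scoring idea concrete, here is a minimal sketch of how such a score might be computed.  The event types come straight from Hill’s description; the penalty weights, the 0-to-100 scale, and the per-mile normalization are all my invention, not anything disclosed by insurers or app makers.

```python
# Hypothetical telematics driving score. Event types follow the article
# ("slam on the brakes, speed, look at your phone or drive late at night");
# all weights and the scale are invented for illustration only.

PENALTIES = {
    "hard_brake": 2.0,        # each hard-braking event
    "speeding_minute": 0.5,   # each minute over the limit
    "phone_use_minute": 1.5,  # each minute of phone handling
    "late_night_trip": 3.0,   # each late-night trip
}

def driving_score(events: dict, miles: float) -> float:
    """Start at 100 and subtract penalties normalized per 100 miles."""
    raw = sum(PENALTIES[kind] * count for kind, count in events.items())
    return max(0.0, 100.0 - raw * 100.0 / max(miles, 1.0))

# A month of mostly boring driving scores well:
print(driving_score({"hard_brake": 4, "speeding_minute": 10,
                     "phone_use_minute": 0, "late_night_trip": 1}, 500))  # 96.8
```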

Another thing I hadn’t seen before, with its absence glaringly obvious, came from Erica Lamberg in Fox Business on July 16th: “Hot career trend ‘hushed hybrid’ has managers choosing the employees who have flex work arrangements.”  Back in the day, and since then as far as I can see, employers did not seem to weigh productivity or responsible behavior when deciding who could work from home; now, despite official policies banning the practice, we have better employees secretly being given that privilege.  “Hushed hybrid” can be defined as “managers overruling, dismissing or choosing not to enforce a company’s return-to-office policies.”  Although it is high time that firms used individual assessments, formal or not, to decide who can work remotely, the problem is that those not chosen may feel deceived about the true policy.  It would be better if management could do this openly – if no unions are involved, it seems they should be able to.

A few years ago, we got publicity about different customers being charged different prices, even when all aspects of the transactions involved were identical or nearly so.  That practice may be expanding with new technology, as “FTC probes AI-powered ‘surveillance pricing’ at Mastercard, JPMorgan Chase, McKinsey and others” (Eric Revell, Fox Business, July 23rd).  The new method uses “AI and other technology” on “personal information… such as their location, demographics, credit history, and browsing or shopping history” “to analyze consumer data to help set price targets for products and services.”  The Federal Trade Commission, along with masses of people buying things, did not like that, and it may be banned.
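
Mechanically, “surveillance pricing” is easy to picture.  Below is an illustrative-only sketch: the input signals (location, demographics, credit history, browsing or shopping history) come from the article, but every field name, threshold, and markup is invented; real systems would presumably use trained models rather than hand-set rules.

```python
# Toy "surveillance pricing" sketch. Signal categories come from the
# FTC's list; all field names, thresholds, and markups are hypothetical.

def personalized_price(base: float, profile: dict) -> float:
    price = base
    if profile.get("affluent_zip"):                     # location signal
        price *= 1.05
    if profile.get("recent_searches_for_item", 0) > 3:  # browsing history
        price *= 1.08                                   # inferred urgency
    if profile.get("loyal_repeat_buyer"):               # shopping history
        price *= 1.02                                   # unlikely to defect
    return round(price, 2)

print(personalized_price(100.00, {"affluent_zip": True,
                                  "recent_searches_for_item": 5}))  # 113.4
```

Seeing the logic laid out this way makes clear why buyers objected: two people can face different prices for reasons they never agreed to share.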

Workers’ long-time frenemy found the spotlight in “So, Human Resources Is Making You Miserable?” (David Segal, The New York Times, August 3rd).  HR, which “bugs a lot of employees and managers… seems to have more detractors than ever since the pandemic began,” when it “began to administer rules about remote work and pay transparency, programs to improve diversity, equity and inclusion and everything else that has rattled and changed the workplace in the last four years.”  Those in that department are themselves “aggravated or bummed out,” often because “office behavior post-Covid has become notably less civil,” resulting in their “being called in far more often to referee disputes.”  Employment site LinkedIn found three years ago that “H.R. had the highest turnover rate of any job it tracked.”  With more and more areas causing problems for it, HR staffers often call theirs a “thankless job.”

Perhaps fuller and consistently honest disclosure of practices, my largest wish for HR during my corporate career, would help their reputation – but that would be neither quick nor easy.  And so it goes with the other three situations.  People are more willing to accept a fair shake when they know the rules, even if they are not as favorable as they would like.  That is the moral of these stories.

Friday, January 10, 2025

It Looked Like a Positive Jobs Report Month, Despite AJSN Showing Latent Demand Up Almost 400,000 – But Was It?

This morning’s Bureau of Labor Statistics Employment Situation Summary was supposed to be favorable, with the published estimates I saw for net new nonfarm payroll positions calling for increases of 165,000, 165,000, and 170,000.  Its 256,000 well exceeded those, and most of the other statistics followed.  Seasonally adjusted and unadjusted unemployment fell 0.1% and 0.2% to 4.1% and 3.8%.  The number of unemployed was off 200,000 to 6.9 million, with 100,000 fewer of those out for 27 weeks or longer, reaching 1.6 million.  The count of people working part-time for economic reasons, or thus far unsuccessfully seeking full-time work while holding on to lower-hours propositions, dropped another 100,000 to 4.4 million.  The two measures showing how common it is for Americans to be working or officially jobless, the employment-population ratio and the labor force participation rate, gained 0.2% and stayed the same, ending at 60.0% and 62.5%.  The unadjusted count of employed lost 162,000 to 161,294,000.  Average private nonfarm hourly payroll wages reached $35.69, up 8 cents from the previous month, or slightly less than inflation.

The American Job Shortage Number, the metric showing how many more positions could be quickly filled if all knew that getting one would be little more than an ordinary daily errand, worsened with a 359,000 increase, as follows:

Almost all of the AJSN’s increase is probably illusory, as the Census Bureau greatly increased its estimate of the American population between November 16th and December 16th, the dates of record for the AJSN, so the non-civilian et al. category above, which takes the difference between those included in our population and those in any employment category, rose almost 3.4 million.  Accordingly, that category’s contribution to the AJSN rose by almost the full amount of the metric’s increase.  Otherwise, the fall in those officially unemployed was essentially offset by a gain in those wanting work but not looking for it during the previous year, and there were no other large changes.
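
For readers new to the metric, here is a minimal sketch of how an AJSN-style number comes together: each group of potential workers is multiplied by the share of it expected to take work if jobs were easy to get, and the products are summed.  The weights below are illustrative stand-ins, not the AJSN’s published factors, though two are roughly implied by the text – the December 6th post below shows a 269,000 category change contributing 215,000 (about 80%), and this month’s population adjustment suggests around 10% for non-civilian et al.

```python
# Illustrative AJSN-style latent-demand calculation. Weights are
# hypothetical stand-ins for the metric's published factors.

WEIGHTS = {
    "unemployed": 0.90,                        # assumed
    "want_work_not_searched_past_year": 0.80,  # ~implied by 215,000/269,000
    "non_civilian_et_al": 0.10,                # ~implied by this month's jump
    "expatriates": 0.20,                       # assumed
}

def ajsn(counts: dict) -> float:
    """Latent demand: sum of category counts times take-a-job shares."""
    return sum(WEIGHTS[cat] * counts[cat] for cat in WEIGHTS)

# Hypothetical counts, in thousands:
print(ajsn({"unemployed": 6_900, "want_work_not_searched_past_year": 3_500,
            "non_civilian_et_al": 10_000, "expatriates": 9_000}))
# 11810.0 – a toy subtotal over four categories only, so well below
# the real 16-million-scale AJSN
```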

Compared with a year earlier, the AJSN grew just over 200,000, with the largest changes from a lower number of expatriates (subtracting 700,000 from the AJSN), more people officially jobless (adding 490,000), more in armed services or unaccounted for (adding 349,000), and more discouraged (adding 127,000). 

So what’s the real story here?  It was good, but tempered by a 683,000 net drop in the labor force, which, along with 432,000 more people claiming no interest in work, means that too many – perhaps the same people who previously stepped back into the job-search world without finding what they wanted – are departing.  Will that be a problem in 2025?  We will continue tracking it, since we need to know that before getting enthusiastic about improvements mostly attributable to a smaller labor force.  In the meantime, though, the turtle took a fair-sized step forward.

Friday, January 3, 2025

Artificial Intelligence: The Energy, Computing Power, and Geographical Changes It Will Need, and Why That Is Good

One of several large concerns about AI is getting enough resources for it, namely electricity and data center capacity.  Here’s a fast look at seven articles showing it can’t make it on what’s out there now.

“How Amazon goes nuclear” (Dan DeFrancesco, Business Insider, October 25th) started with “What’s bigger than tech’s ambitious plans for generative AI?  The amount of energy needed to power it.”  It may call for a large, controversial source, as “Amazon has led a $500 million financing round for a company developing modular nuclear reactors.”  Google has started similar endeavors, and “should these data center efforts continue to struggle, companies’ big bets on generative AI could also falter.”

Otherwise, “AI’s leaders puzzle over energy question” (Marissa Newman, Bloomberg Tech Daily, October 30th).  That puzzling, happening at Dubai’s Future Investment Initiative, which was “mostly centered on AI,” included how to deal with a possible 40% rise in electricity use over the next decade – not just for AI, but overall.  The Saudi Arabian Oil Company’s CEO “made his pitch” there for data centers running on natural gas at relatively low cost.  Others saw merit.

Stateside, “Exxon Plans to Sell Electricity to Data Centers” (Rebecca F. Elliott, The New York Times, December 11th).  That electricity will also come from natural gas, generated at a large power plant of undisclosed cost and location, possibly completed by late 2029, and will be a new line of external business for ExxonMobil.

With these new facilities, it is fair to consider “How A.I. Could Reshape the Economic Geography of America” (Steve Lohr, The New York Times, December 26th).  Cities “well positioned to use A.I. to become more productive” include “Chattanooga and other once-struggling cities in the Midwest, Mid-Atlantic and South,” such as Dayton, Scranton, Savannah, and Greenville, each of which has “an educated work force, affordable housing and workers who are mostly in occupations and industries less likely to be replaced or disrupted by A.I.”  A variety of other businesses, many connected with trucking and freight, stand to benefit.

What is “The 19th-Century Technology That Threatens A.I.” (Azeem Azhar, still The New York Times, December 28th)?  It’s electricity, on which “America has a long way to go.”  In Virginia, “a hotbed for data centers,” those wanting “to connect to the grid” could face a seven-year wait, and “some counties in the state are introducing limits” on them.  Our country, per the author, has “a patchwork of conflicting regulations, outdated structures and misaligned investment incentives” slowing or stopping infrastructure building, along with a skills gap driving labor shortages in construction and engineering, “a complex permitting process trapping projects in years of bureaucratic review across multiple agencies,” “high costs of capital,” and “local opposition.”  Overall, per Azhar, “if the United States truly wants to secure its leadership in A.I., it must equally invest in the energy systems that power it.”

In answer to a question arising naturally from the first source above, Bradley Hoppenstein, writing in CNBC, told us “Why tech giants such as Microsoft, Amazon, Google and Meta are betting big on nuclear power” (December 28th).  It’s because of “the energy demands of their data centers and AI models” that “nuclear power has a lot of benefits”: it emits no carbon, provides “tremendous economic impact,” and “can be always on and run all the time.”  However, we haven’t seen anywhere near as much opposition from anti-nuclear groups as we probably will.

On the front page of Sunday, December 29th’s New York Times business section was “Data Centers Are Fueling a New Gold Rush” (Karen Weise).  The article described installations being built in less populated areas of Washington state, resulting in “electricians flocking to regions around the country that, at least for now, have power to spare” and “a housing crunch” almost certain to create more jobs.

The electricians in Washington were matter-of-fact about the boom not lasting forever.  When work runs out there, they hope to find it elsewhere, and probably will, even in the same industry.  No matter what happens in the long run with artificial intelligence, it is building economies now.  Don’t expect that to stop this year – or the next.

Friday, December 27, 2024

What Artificial Intelligence Users Are Doing with It – And Shouldn’t Be

It’s not only technical capabilities that give AI its meaning, but also what’s being done with the software itself.  What have we seen?

One result is that “Recruiters Are Getting Bombarded With Crappy, AI-Generated CVs” (Sharon Adarlo, Futurism.com, August 16th).  Now that AI has shown itself useful for mass job applications, for cover letters, and for identifying suitable openings, it’s no surprise that hopefuls are using it for resumes too, and the results aren’t as favorable.  Without sufficient editing, “many of them are badly written and generic sounding,” with language that is “clunky and generic” and fails “to show the candidate’s personality, their passions,” or “their story.”  The piece blames this problem on AI itself, but all recruiters need to do is disregard applications with resumes showing signs of unedited AI output.  Since resumes are so short, it is not time-consuming to carefully revise those initially drafted by AI, and failure to do that can understandably be read as showing what people would do on the job.

In something that might have appealed to me in my early teens, “Millions of People Are Using Abusive AI ‘Nudify’ Bots on Telegram” (Matt Burgess, Wired.com, October 15th).  The article credited “deepfake expert” Henry Ajder with finding a Telegram bot that “had been used to generate more than 100,000 explicit photos – including those of children.”  Now there are 50 of them, with more than 4 million “monthly users” combined.  The problem here is that there is no hope of stopping people from creating nude deepfakes, and therefore not enough reason for making them illegal.  Those depicting children, when passed to others, can be subject to the laws covering child pornography, but adults will need to understand that anyone can create such things from pictures of them clothed or even only of their faces, so we will all need to realize that such images are likely not real.  Unless people copyright pictures of themselves, it is time to accept that counterfeits will be created.

Another problem with fake AI creations was the subject of “Florida mother sues AI company over allegedly causing death of teen son” (Christina Shaw, Fox Business, October 24th).  In this suit, Character.AI was accused of “targeting” a 14-year-old boy “with anthropomorphic, hypersexualized, and frighteningly realistic experiences” involving conversations described as “text-based romantic and sexual interactions.”  The chatbot “misrepresented itself as a real person,” and then, when he became “noticeably withdrawn” and “expressed thoughts of suicide,” it “repeatedly encouraged him to do so” – after which he did.  Here we have a problem with allowing children access to such features.  Companies will need to stop that, whether it is convenient or not.

How about this one: “Two Students Created Face Recognition Glasses.  It Wasn’t Hard.” (Kashmir Hill, The New York Times, October 24th).  Two Harvard undergraduates fashioned a pair that “relied on widely available technologies, including Meta glasses, which livestream video to Instagram… Face detection software, which captures faces that appear on the livestream… a face search engine called PimEyes, which finds sites on the internet where a person’s face appears,” “a ChatGPT-like tool that was able to parse the results from PimEyes to suggest a person’s name and occupation” and other data.  At local train stations, the creators found that it “worked on about a third of the people they tested it on,” giving subjects the experience of being identified, complete with their work information and accomplishments.  It is hard to blame any of the companies providing the products above – indeed, after the publicity this event received, “PimEyes removed the students’ access… because they had uploaded photos of people without their consent” – and, with AI only one ingredient among many, there will be people combining such capabilities to invade privacy by discovering information.  Conceptually, this seems impossible to stop.

Meanwhile, “Office workers fear that AI use makes them seem lazy” (Patrick Kulp, Tech Brew, November 12th).  A Slack report invoked the word “stigma,” saying one had attached to using AI at work, and that it was hurting “workforce adoption,” whose growth slowed this year from “six points in a single quarter” to one point “in the last five months,” ending at 33%.  A major issue was that employees had insufficient guidance on when they were allowed to use AI, which many had brought to work themselves.  A strange situation, and one that clearly calls for management involvement.

Finally, there were “10 things you should never tell an AI chatbot” (Kim Komando, Fox News, December 19th).  They are “passwords or login credentials,” “your name, address or phone number” (likely to be passed on), “sensitive financial information,” “medical or health data” as “AI isn’t HIPAA-compliant,” “asking for illegal advice” (which may get you “flagged” if nothing else), “hate speech or harmful content” (likewise), “confidential work or business info,” “security question answers,” “explicit content” (which could also “get you banned”), and “other people’s personal info.”  Overall, “don’t tell a chatbot anything you wouldn’t want made public.”  As AI interfaces get cozier and cuddlier, it will become easier to overshare with them, but that is more dangerous than ever.
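
Komando’s list invites a simple technical safeguard: screen text for obviously sensitive strings before it leaves your machine.  Here is a minimal sketch; the regular expressions are simplistic stand-ins, nowhere near production-grade detection, and the category names are mine.

```python
import re

# Pre-send filter in the spirit of Komando's list: flag obvious
# sensitive strings before they reach a chatbot. Patterns are
# deliberately simple illustrations, not reliable detection.

SENSITIVE = {
    "possible SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "possible card number": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
    "possible phone number": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "possible password": re.compile(r"(?i)password\s*[:=]\s*\S+"),
}

def flag_sensitive(text: str) -> list[str]:
    """Return the names of any sensitive patterns found in the text."""
    return [name for name, pattern in SENSITIVE.items() if pattern.search(text)]

print(flag_sensitive("my password: hunter2, call 555-123-4567"))
# ['possible phone number', 'possible password']
```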

My proposed solutions above may not be acceptable forever, and are subject to laws.  Perhaps this will long be a problem when dealing with AI – that conceptually sound ways of handling emerging issues may clash with real life.  That is a challenge – but, as with so many other aspects of artificial intelligence, we can learn to handle it effectively.

Friday, December 20, 2024

Artificial Intelligence: A Visit to the Catastrophic Problems Café

One thing almost everyone creating, using, coding, regulating, or just plain writing or thinking about AI feels duty-bound to do is to consider the chance that the technology will destroy us.  I haven’t done that in print yet, so here I go.

In his broad-based On the Edge:  The Art of Risking Everything (Penguin Press, 2024), author Nate Silver devoted a 56-page chapter, “Termination,” to the chance that AI will obliterate humanity or nearly so.  He said there was a wide range of what he called “p(doom)” opinions, or estimations of the chances of such an outcome.  He considered more precise definitions of doom – for example, does it mean that “every single member of the human species and all biological life on Earth dies,” or could it be only “the destruction of humanity’s long-term potential,” or even “something where humans are kept in check” with “the people making the big calls” being “a coalition of AI systems”?  With doom defined as “all but five thousand humans ceasing to exist by 2100,” the average Silver found from “domain experts” on AI was 8.8%, against 0.7% from “generalists who had historically been accurate when making other probabilistic predictions.”  The highest expert p(doom) named was “20 to 30 percent,” but there are certainly larger ones out there.

How would the technology do its dirty work?  One way was in “A.I. May Save Us, or May Construct Viruses to Kill Us” (The New York Times, July 27th).  Author Nicholas Kristof said that “for less than $100,000, it may now be possible to use artificial intelligence to develop a virus that could kill millions of people.”  That could happen through anything from bugs murdering indiscriminately all the way to something that “might be possible,” using DNA knowledge to create a product tailored to “kill or incapacitate” one specific person.  Kristof is a journalist, not a technician, but as much of AI thinking is conceptual now, his concerns are valid.

Another columnist in the New York Times soon thereafter came out with “Many People Fear A.I.  They Shouldn’t” (David Brooks, July 31st).  His view was that “many fears about A.I. are based on an underestimation of the human mind” – he cited “scholar” Michael Ignatieff as saying “what we do” was not algorithmic, but “a distinctively, incorrigibly human activity that is a complex combination of conscious and unconscious, rational and intuitive, logical and emotional reflection.”  He also wrote that while engineers claim to be “building machines that think like people,” per neuroscientists “that would be a neat trick, because we don’t know how people think.”

The next month, Greg Norman looked at the problem posed by Kristof above, in “Experts warn AI could generate ‘major epidemics or even pandemics’ – but how soon?” (Fox News, August 28th).  The concern stems from “a paper published in the journal Science by co-authors from Johns Hopkins University, Stanford University and Fordham University,” which addressed AI models trained on “substantial quantities of biological data, from speeding up drug and vaccine design to improving crop yields.”  Although today’s AI models likely do not “substantially contribute” to biological risks, the chance that “essential ingredients to create highly concerning advanced biological models may already exist or soon will” could cause problems.

All of this depends, though, on what AI is allowed to access.  It is and will be able to formulate detailed deadly plans, but what happens from there?  A Princeton undergraduate, John A. Phillips, in 1976 wrote and submitted a term paper giving detailed plans for assembling an atomic bomb, with all information from readily available public sources.  Although one expert said the device would have had about an even chance of detonating, it was never built.  That, for me, is why my p(doom) is very low – less than a tenth of one percent.  There is no indication that AI models can build things by themselves in the physical world.

So far, we are doing well at containing AI.  As for the future, Silver said that, if given a chance to “permanently and irrevocably” stop its progress, he would not, as, ultimately, “civilization needs to learn to live with the technology we’ve built, even if that means committing ourselves to a better set of values and institutions.”  We can deal with artificial intelligence – a vastly more difficult challenge we face is dealing with ourselves.  That’s the last word.  With that, it’s time to leave the café.

Friday, December 13, 2024

Electric Vehicles: Sparse Commentary, But Still Worthwhile

Since August, I’ve been trying to find enough articles to justify an EV post, but not much is coming out.  We have news about Norway, with its shorter driving distances and politically liberal, government-trusting people, moving toward banning or restricting, maybe prohibitively, internal combustion vehicles, but those conditions don’t apply in the United States, so such policies can’t reasonably be seen as role models for us.  In the meantime, what’s the best of the slim pickings of second-half 2024’s published pieces?

The oldest was Jack Ewing’s “Electric Cars Help the Climate.  But Are They Good Value?” (July 29th, The New York Times).  The author addressed “factors to consider, many of which depend on your driving habits and how important it is to you to reduce your impact on the environment.”  He wrote that “it’s hard to say definitively how long batteries will remain usable,” and although they “do lose range over time,” “the degradation is very slow.”  As for resale value, “on average, electric vehicles depreciated by 49 percent over five years, compared to 39 percent for all” vehicles, hurt by “steep price cuts” on new ones.  Fueling and maintenance, though, can both be cheaper: per the Environmental Protection Agency, one electric pickup truck model will cost less than half as much to fuel as its gasoline or diesel version.  Such vehicles also do not need oil changes, spark plugs, or muffler replacements, and overall “have fewer moving parts to break down,” although their heavy batteries mean they need tires more often.  Ewing, though, did not mention driving range, certainly a concern varying greatly between consumers.
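
Ewing’s depreciation numbers are easier to feel with a quick back-of-envelope calculation; the $50,000 purchase price below is my assumed example, not a figure from the article.

```python
# Applying the article's five-year depreciation averages: 49% for EVs,
# 39% for all vehicles. The $50,000 purchase price is assumed.

price = 50_000
ev_resale = price * (1 - 0.49)    # $25,500
avg_resale = price * (1 - 0.39)   # $30,500

print(f"EV resale after 5 years:       ${ev_resale:,.0f}")
print(f"Average vehicle after 5 years: ${avg_resale:,.0f}")
print(f"Extra depreciation for the EV: ${avg_resale - ev_resale:,.0f}")  # $5,000
```

Whether roughly $5,000 of extra depreciation on a car at that price is offset by cheaper fueling and maintenance depends, of course, on how much you drive.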

I have advocated hybrid vehicles as the best of both worlds, so was surprised and disappointed to see “Why the hype for hybrid cars will not last” (The Economist, September 17th).  The piece does not consider hybrids that need no external electric charging, dealing only with plug-in hybrid electric vehicles, or PHEVs.  As of press time, “carmakers have been cooling on (non-hybrid electrics) and warming to hybrids,” which are especially profitable, with buyers thinking of them as “cheap,” as they need much smaller batteries.  The uncredited author expected that they will become less common as California and the entire European Union ban their sale in the next decade.

Another advantage of electric cars we may not have anticipated is that “It Turns Out Charging Stations Are Cash Cows For Nearby Businesses” (Rob Stumpf, InsideEVs.com, September 24th).  I passed along years ago the idea that if driverless vehicles took over, smoking would drop, as so many people buy cigarettes at gas stations – this is sort of the other side.  “EV charging stations aren’t just better for the environment; they’re also money printers.  And it’s not just charging network providers who see green – so do nearby shops.”  The facilities seem to be benefiting businesses as far as a mile away, with “coffee shops” and other “places where people can kill 20 or so minutes” doing especially well because of this “dwell time.”  Watch this trend – it will become more and more focused, unless, somehow, charging takes no longer than a fill-up does today.  And perhaps coffee consumption will go up.

Last, we have “6 Common EV Myths and How to Debunk Them” (Jonathan N. Gitlin, Wired.com, November 16th).  How true are these (actually seven) areas of concern?  Gitlin wrote that “charging an EV takes too long” is invalid, since those unhappy with 18-to-20-minute times can be “curmudgeons,” and people can recharge at home while the car is idle.  However, “I can’t charge it at home” is reasonable, as “if you cannot reliably charge your car at home or at work – and I mean reliably (in bold) – you don’t really have any business buying a plug-in vehicle yet.”  “An EV is too expensive” fails since “75 percent of American car buyers buy used cars,” and “used EVs can be a real bargain.”  Weather concerns are no worse than with other vehicles, but he admitted that “I need 600 miles of uninterrupted range” doesn’t have “a good rebuttal,” though at least one electric model is now good for almost 500.  “They’re bad for the environment” does not apply to carbon dioxide emissions or in localities with little electricity coming from coal.  “We don’t have enough electricity” – well, yes, we do, for everything except artificial intelligence model creation.

Overall, very little has been decided about the future of electric and hybrid vehicles in America.  But, with time, there will be more.  Stay tuned.

Friday, December 6, 2024

More Jobs but a Down Report – AJSN Showed Latent Demand Off a Bit To 15.8 Million

This morning’s Bureau of Labor Statistics Employment Situation Summary was touted as important, with last time’s tiny nonfarm payroll positions gain expected to strongly rebound and the whole thing affecting the Federal Reserve’s interest rate decision less than two weeks away. 

On the number of jobs, the report delivered.  The five estimates I saw were all between 200,000 and 215,000, and the result was 227,000.  Otherwise, outcomes were largely a sea of red.  Seasonally adjusted and unadjusted unemployment each gained 0.1% to get to 4.2% and 4.0%.  The total adjusted number of jobless rose 100,000 to 7.1 million, with the unadjusted variety up 75,000 to 6,708,000.  The unadjusted count of employed was off 482,000 to 161.456 million.  There were 100,000 more long-term unemployed – 1.7 million out of work for 27 weeks or longer.  The two statistics best showing how common it is for Americans to be working or one step away, the employment-population ratio and the labor force participation rate, worsened 0.2% and 0.1% to get to 59.8% and 62.5%.  The only two of the front-line results I track to improve were the count of people working part-time for economic reasons, or keeping such arrangements while looking thus far unsuccessfully for full-time ones, which lost 100,000 to 4.5 million, and average hourly private nonfarm payroll earnings, up 15 cents, well over inflation, to $35.61.

The American Job Shortage Number, the measure showing how many additional positions could be quickly filled if all knew they would be easy to get, dropped just over 60,000 to about 15.8 million, as follows:


The largest change since October was the 215,000 effect of the 269,000 fall in the number of those wanting work but not having searched for it for a year or more.  The contribution of those officially unemployed was up 69,000, and no other statistic above moved the AJSN by more than 39,000.  The share of the AJSN from those officially unemployed was 38.1%, up 0.5%.

Compared with a year before, the AJSN gained 180,000, with the most activity from two mostly offsetting changes: higher unemployment and fewer expatriates.

How can we summarize this report?  Torpid.  Despite the 227,000 net new jobs, fewer people are working, and we see what looks like many previously not trying to get jobs for a year or more declaring themselves uninterested.  We know how flexible that category really is, and this time it flexed up, with much more than the aging population responsible.  We really did not go anywhere this time, so the Fed clearly should grab the final 2024 interest rate cut.  As for the turtle, he did not move.