Friday, December 27, 2024

What Artificial Intelligence Users Are Doing with It – And Shouldn’t Be

AI gets its meaning not only from its technical capabilities, but also from what is being done with the software itself.  What have we seen?

One result is that “Recruiters Are Getting Bombarded With Crappy, AI-Generated CVs” (Sharon Adarlo, Futurism.com, August 16th).  Now that AI has shown itself useful for mass job applications, for cover letters, and for identifying suitable openings, it’s no surprise that hopefuls are using it for resumes too, and the results aren’t as favorable.  Without sufficient editing, “many of them are badly written and generic sounding,” with language that is “clunky and generic” and fails “to show the candidate’s personality, their passions,” or “their story.”  The piece blames this problem on AI itself, but all recruiters need to do is disregard applications with resumes showing signs of AI prefabrication.  Since resumes are so short, it is not time-consuming to carefully revise those initially drafted by AI, and failure to do that can understandably be read as a preview of what people would do on the job.

In something that might have appealed to me in my early teens, “Millions of People Are Using Abusive AI ‘Nudify’ Bots on Telegram” (Matt Burgess, Wired.com, October 15th).  The article credited “deepfake expert” Henry Ajder with finding a Telegram bot that “had been used to generate more than 100,000 explicit photos – including those of children.”  Now there are 50 of them, with more than 4 million combined “monthly users.”  The problem here is that there is no hope of stopping people from creating nude deepfakes, and therefore not enough reason to make them illegal.  Those depicting children, when passed to others, can be subject to the laws covering child pornography, but adults will need to understand that anyone can create such things from pictures of them clothed, or even only of their faces.  We will all need to realize that such images are likely not real.  Unless people copyright pictures of themselves, it is time to accept that counterfeits will be created.

Another problem with fake AI creations was the subject of “Florida mother sues AI company over allegedly causing death of teen son” (Christina Shaw, Fox Business, October 24th).  In this suit, Character.AI was accused of “targeting” a 14-year-old boy “with anthropomorphic, hypersexualized, and frighteningly realistic experiences” involving conversations described as “text-based romantic and sexual interactions.”  The chatbot “misrepresented itself as a real person,” and, when the boy became “noticeably withdrawn” and “expressed thoughts of suicide,” it “repeatedly encouraged him to do so” – after which he did.  Here the problem is allowing children access to such features.  Companies will need to stop that, whether it is convenient or not.

How about this one: “Two Students Created Face Recognition Glasses.  It Wasn’t Hard.” (Kashmir Hill, The New York Times, October 24th).  Two Harvard undergraduates fashioned a pair that “relied on widely available technologies, including Meta glasses, which livestream video to Instagram… Face detection software, which captures faces that appear on the livestream… a face search engine called PimEyes, which finds sites on the internet where a person’s face appears,” and “a ChatGPT-like tool that was able to parse the results from PimEyes to suggest a person’s name and occupation” and other data.  Testing at local train stations, the creators found that the glasses “worked on about a third of the people they tested it on,” giving those scanned the experience of being identified by name, work information, and accomplishments.  It turned out that Meta had already “developed an early prototype,” but did not pursue its release “because of legal and ethical concerns.”  It is hard to blame any of the companies providing the products above – indeed, after the publicity this event received, “PimEyes removed the students’ access… because they had uploaded photos of people without their consent” – and, with AI as one of the ingredients, there will be many people combining capabilities to invade privacy by unearthing information.  Conceptually, this seems impossible to stop.
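To make the mechanics concrete, here is a minimal sketch of the kind of pipeline the article describes: frames from camera glasses feed a face detector, each match goes to a reverse face search, and a language model condenses the hits into a profile.  Every function below is a placeholder of my own, assumed purely for illustration – none of the real services involved exposes such an interface.

    from dataclasses import dataclass, field

    @dataclass
    class Profile:
        name: str = "unknown"
        occupation: str = "unknown"
        sources: list[str] = field(default_factory=list)

    def detect_faces(frame: bytes) -> list[bytes]:
        """Placeholder: an off-the-shelf detector that crops faces from a video frame."""
        return []

    def reverse_face_search(face: bytes) -> list[str]:
        """Placeholder: a PimEyes-style engine returning URLs where the face appears."""
        return []

    def summarize_hits(urls: list[str]) -> Profile:
        """Placeholder: the ChatGPT-like step that parses those pages into a profile."""
        return Profile(sources=urls)

    def identify(frame: bytes) -> list[Profile]:
        # Chain the three off-the-shelf capabilities, one profile per detected face.
        return [summarize_hits(reverse_face_search(f)) for f in detect_faces(frame)]

The point of the sketch is how little glue is needed: each stage is a commodity, and the chaining amounts to a dozen lines.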

Meanwhile, “Office workers fear that AI use makes them seem lazy” (Patrick Kulp, Tech Brew, November 12th).  A Slack report said there was a “stigma” attached to using AI at work, and that it was hurting “workforce adoption,” which slowed this year from gaining “six points in a single quarter” to a single point “in the last five months,” ending at 33%.  A major issue was that employees had insufficient guidance on when they were allowed to use AI, which many had brought to work themselves.  A strange situation, and one that clearly calls for management involvement.

Finally, there were “10 things you should never tell an AI chatbot” (Kim Komando, Fox News, December 19th).  They are “passwords or login credentials,” “your name, address or phone number” (likely to be passed on), “sensitive financial information,” “medical or health data” as “AI isn’t HIPAA-compliant,” “asking for illegal advice” (which may get you “flagged” if nothing else), “hate speech or harmful content” (likewise), “confidential work or business info,” “security question answers,” “explicit content” (which could also “get you banned”), and “other people’s personal info.”  Overall, “don’t tell a chatbot anything you wouldn’t want made public.”  As AI interfaces get cozier and cuddlier, it will become easier to overshare with them, but that is more dangerous than ever.
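One way to make that advice mechanical is to scrub prompts before they leave your machine.  Below is a minimal sketch of such a pre-send filter; the patterns are illustrative assumptions of mine, nowhere near a complete data-loss-prevention tool, but they show the idea.

    import re

    # Scrub obvious secrets from a prompt before sending it to a chatbot.
    # These few patterns are illustrative only; real tools use many more.
    PATTERNS = {
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
        "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "password": re.compile(r"(?i)\b(password|passwd|pwd)\s*[:=]\s*\S+"),
    }

    def redact(prompt: str) -> str:
        """Replace anything matching a sensitive pattern with a [REDACTED-...] tag."""
        for label, pattern in PATTERNS.items():
            prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
        return prompt

    print(redact("My SSN is 123-45-6789 and password: hunter2"))
    # -> My SSN is [REDACTED-SSN] and [REDACTED-PASSWORD]

Nothing like this guarantees safety – the last item on Komando’s list, other people’s personal information, is exactly the kind of thing no pattern reliably catches.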

My proposed solutions above may not be acceptable forever, and are subject to laws.  Perhaps this will long be a problem when dealing with AI – that conceptually sound ways of handling emerging issues may clash with real life.  That is a challenge – but, as with so many other aspects of artificial intelligence, we can learn to handle it effectively.

Friday, December 20, 2024

Artificial Intelligence: A Visit to the Catastrophic Problems Café

One thing almost everyone creating, using, coding, regulating, or just plain writing or thinking about AI feels duty-bound to do is to consider the chance that the technology will destroy us.  I haven’t done that in print yet, so here I go.

In his broad-based On the Edge: The Art of Risking Everything (Penguin Press, 2024), author Nate Silver devoted a 56-page chapter, “Termination,” to the chance that AI will obliterate humanity, or nearly so.  He said there was a wide range of what he called “p(doom)” opinions, or estimates of the probability of such an outcome.  He considered more precise definitions of doom – for example, does it mean that “every single member of the human species and all biological life on Earth dies,” or could it be only “the destruction of humanity’s long-term potential,” or even “something where humans are kept in check” with “the people making the big calls” being “a coalition of AI systems”?  With doom defined as “all but five thousand humans ceasing to exist by 2100,” the average Silver found from “domain experts” on AI was 8.8%, against 0.7% from “generalists who had historically been accurate when making other probabilistic predictions.”  The highest expert p(doom) he named was “20 to 30 percent,” but there are certainly larger ones out there.

How would the technology do its dirty work?  One way was in “A.I. May Save Us, or May Construct Viruses to Kill Us” (The New York Times, July 27th).  Author Nicholas Kristof said that “for less than $100,000, it may now be possible to use artificial intelligence to develop a virus that could kill millions of people.”  That could happen through anything from pathogens that kill indiscriminately to something that “might be possible”: using DNA knowledge to create a product tailored to “kill or incapacitate” one specific person.  Kristof is a journalist, not a technician, but since much of AI thinking is conceptual now, his concerns are valid.

Another columnist at The New York Times soon thereafter came out with “Many People Fear A.I.  They Shouldn’t” (David Brooks, July 31st).  His view was that “many fears about A.I. are based on an underestimation of the human mind” – he cited “scholar” Michael Ignatieff as saying “what we do” was not algorithmic, but “a distinctively, incorrigibly human activity that is a complex combination of conscious and unconscious, rational and intuitive, logical and emotional reflection.”  He also wrote that while engineers claim to be “building machines that think like people,” per neuroscientists “that would be a neat trick, because we don’t know how people think.”

The next month, Greg Norman looked at the problem Kristof posed above, in “Experts warn AI could generate ‘major epidemics or even pandemics’ – but how soon?” (Fox News, August 28th).  The worry stems from “a paper published in the journal Science by co-authors from Johns Hopkins University, Stanford University and Fordham University,” which examined AI models trained on “substantial quantities of biological data, from speeding up drug and vaccine design to improving crop yields.”  Although today’s AI models likely do not “substantially contribute” to biological risks, the chance that “essential ingredients to create highly concerning advanced biological models may already exist or soon will” could cause problems.

All of this depends, though, on what AI is allowed to access.  It is and will be able to formulate detailed deadly plans, but what happens from there?  In 1976, a Princeton undergraduate, John A. Phillips, wrote and submitted a term paper giving detailed plans for assembling an atomic bomb, with all information coming from readily available public sources.  Although one expert said the device would have had about an even chance of detonating, it was never built.  That, for me, is why my p(doom) is very low – less than a tenth of one percent.  There is no indication that AI models can build things by themselves in the physical world.

So far, we are doing well at containing AI.  As for the future, Silver said that, if given a chance to “permanently and irrevocably” stop its progress, he would not, as, ultimately, “civilization needs to learn to live with the technology we’ve built, even if that means committing ourselves to a better set of values and institutions.”  We can deal with artificial intelligence – a vastly more difficult challenge we face is dealing with ourselves.  That’s the last word.  With that, it’s time to leave the café.

Friday, December 13, 2024

Electric Vehicles: Sparse Commentary, But Still Worthwhile

Since August, I’ve been trying to find enough articles to justify an EV post, but not much has been coming out.  We have news about Norway – with its shorter driving distances and politically liberal, government-trusting people – moving toward banning or restricting, maybe prohibitively, internal combustion vehicles, but those conditions don’t apply in the United States, so such policies can’t reasonably be seen as role models for us.  In the meantime, what’s the best of the slim pickings among second-half 2024’s published pieces?

The oldest was Jack Ewing’s “Electric Cars Help the Climate.  But Are They Good Value?” (The New York Times, July 29th).  The author addressed “factors to consider, many of which depend on your driving habits and how important it is to you to reduce your impact on the environment.”  He wrote that “it’s hard to say definitively how long batteries will remain usable,” and although they “do lose range over time,” “the degradation is very slow.”  As for “resale value,” “on average, electric vehicles depreciated by 49 percent over five years, compared to 39 percent for all,” hurt by “steep price cuts” on new ones.  Fueling and maintenance, though, can both be cheaper: per the Environmental Protection Agency, one electric pickup truck model will cost less than half as much to fuel as its gasoline or diesel version.  Such vehicles also do not need oil changes, spark plugs, or muffler replacements, and overall “have fewer moving parts to break down,” although their heavy batteries mean they need new tires more often.  Ewing, though, did not mention driving range, certainly a concern that varies greatly between consumers.
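Ewing’s numbers invite a quick back-of-the-envelope comparison, sketched below with his depreciation figures and fueling claim.  The purchase price and annual fuel spend are assumptions of mine, for illustration only.

    # Rough five-year ownership comparison using the article's depreciation
    # figures (49% EV vs. 39% overall) and its "less than half as much to
    # fuel" claim. Price and fuel spend are assumed, not from the article.
    price = 55_000                      # assumed new-vehicle price, either powertrain
    ev_resale = price * (1 - 0.49)      # 49% five-year depreciation
    gas_resale = price * (1 - 0.39)     # 39% five-year depreciation

    gas_fuel_5yr = 2_500 * 5            # assumed $2,500 per year for gasoline
    ev_fuel_5yr = gas_fuel_5yr * 0.5    # "less than half as much to fuel"

    ev_cost = (price - ev_resale) + ev_fuel_5yr
    gas_cost = (price - gas_resale) + gas_fuel_5yr
    print(f"EV: ${ev_cost:,.0f}  Gas: ${gas_cost:,.0f}")
    # EV: $33,200  Gas: $33,950

Under these assumptions the extra depreciation roughly cancels the fuel savings – which is Ewing’s point that the answer depends on your driving habits.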

I have advocated hybrid vehicles as the best of both worlds, so was surprised and disappointed to see “Why the hype for hybrid cars will not last” (The Economist, September 17th).  The piece does not consider hybrids needing no external electric charging, dealing only with plug-in hybrid electric vehicles (PHEVs).  As of press time, carmakers had been cooling on non-hybrid electrics and warming to hybrids, which are especially profitable and which buyers think of as “cheap,” as they need much smaller batteries.  The uncredited author expected that they will become less common as California and the entire European Union ban their sale in the next decade.

Another advantage of electric cars we may not have anticipated is that “It Turns Out Charging Stations Are Cash Cows For Nearby Businesses” (Rob Stumpf, InsideEVs.com, September 24th).  I passed along years ago the idea that if driverless vehicles took over, smoking would drop, as so many people buy cigarettes at gas stations – this is sort of the other side.  “EV charging stations aren’t just better for the environment; they’re also money printers.  And it’s not just charging network providers who see green – so do nearby shops.”  The facilities seem to be benefiting businesses as far as a mile away, with “coffee shops” and other “places where people can kill 20 or so minutes” doing especially well because of this “dwell time.”  Watch this trend – it will become more and more pronounced, unless, somehow, charging comes to take no longer than a fill-up does today.  And perhaps coffee consumption will go up.

Last, we have “6 Common EV Myths and How to Debunk Them” (Jonathan N. Gitlin, Wired.com, November 16th).  How true are these (actually seven) areas of concern?  Gitlin wrote that “charging an EV takes too long” is invalid, since those unhappy with 18-to-20-minute times can be “curmudgeons,” and people can recharge at home while the car is idle.  However, “I can’t charge it at home” is reasonable, as “if you cannot reliably charge your car at home or at work – and I mean reliably (in bold) – you don’t really have any business buying a plug-in vehicle yet.”  “An EV is too expensive” fails since “75 percent of American car buyers buy used cars,” and “used EVs can be a real bargain.”  Weather concerns are no worse than with other vehicles, but he admitted that “I need 600 miles of uninterrupted range” doesn’t have “a good rebuttal,” though at least one electric model is now good for almost 500.  “They’re bad for the environment” does not apply to carbon dioxide emissions, or in localities with little electricity coming from coal.  And “we don’t have enough electricity” – well, yes, we do, for everything except artificial intelligence model creation.

Overall, very little has been decided about the future of electric and hybrid vehicles in America.  But, with time, there will be more.  Stay tuned.

Friday, December 6, 2024

More Jobs but a Down Report – AJSN Showed Latent Demand Off a Bit To 15.8 Million

This morning’s Bureau of Labor Statistics Employment Situation Summary was touted as important, with last month’s tiny nonfarm payroll gain expected to rebound strongly, and the results bearing on the Federal Reserve’s interest rate decision less than two weeks away.

On the number of jobs, the report delivered.  The five estimates I saw were all between 200,000 and 215,000, and the result was 227,000.  Otherwise, outcomes were largely a sea of red.  Seasonally adjusted and unadjusted unemployment each gained 0.1% to get to 4.2% and 4.0%.  The total adjusted number of jobless rose 100,000 to 7.1 million, with the unadjusted variety up 75,000 to 6,708,000.  The unadjusted count of employed was off 482,000 to 161.456 million.  There were 100,000 more long-term unemployed – 1.7 million out of work for 27 weeks or longer.  The two statistics best showing how common it is for Americans to be working or one step away, the employment-population ratio and the labor force participation rate, worsened 0.2% and 0.1% to get to 59.8% and 62.5%.  The only two of the front-line results I track to improve were the count of people working part-time for economic reasons, or keeping such arrangements while looking thus far unsuccessfully for full-time ones, which lost 100,000 to 4.5 million, and average hourly private nonfarm payroll earnings, up 15 cents, well over inflation, to $35.61.

The American Job Shortage Number, the measure showing how many additional positions could be quickly filled if all knew they would be easy to get, dropped just over 60,000 to reach about 15.8 million:

[AJSN data table not reproduced here.]
The largest change since October was the 215,000 effect of a 269,000 fall in the number of those wanting work but not having searched for it in a year or more.  The contribution of those officially unemployed was up 69,000, and no other component moved the AJSN by more than 39,000.  The share of the AJSN coming from joblessness was 38.1%, up 0.5%.
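For those new to the metric, each category’s raw change appears to be scaled by a weight reflecting how likely its members are to actually take an easy-to-get job.  The sketch below reverse-engineers the weight from this post’s own figures; the 0.8 is my inference, not a number stated here, and this is not the official methodology.

    # Sketch of one AJSN component effect: raw population change times an
    # inferred take-up weight. The 0.8 is reverse-engineered from this post
    # (a 269,000 fall yielding a ~215,000 effect); it is an assumption.
    nonsearchers_change = -269_000   # those wanting work, not searching for 1+ years
    w_nonsearch = 0.80               # inferred weight

    effect = nonsearchers_change * w_nonsearch
    print(f"{effect:+,.0f}")         # -215,200, close to the 215,000 effect cited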

Compared with a year before, the AJSN gained 180,000, with most of the movement coming from the largely offsetting effects of higher unemployment and fewer expatriates.

How can we summarize this report?  Torpid.  Despite the 227,000 net new jobs, fewer people are working, and it looks as if many who had not tried to get jobs for a year or more are now declaring themselves uninterested.  We know how flexible that category really is, and this time it flexed upward, with much more than the aging population responsible.  We really did not go anywhere this month, so the Fed clearly should grab the final 2024 interest rate cut.  As for the turtle, he did not move.