Friday, December 22, 2023

Artificial Intelligence: More Concerns, Proposals, and Organizational Progress

While AI progress over the past few months has been mostly incremental and automobile-related, more developments have popped up since late November.

In “A.I. Belongs to the Capitalists Now” (The New York Times, November 22nd), Kevin Roose said there had been “a fight between two dueling visions of artificial intelligence,” and that with the Sam Altman OpenAI firing and rehiring it had been won by “Team Capitalism.”  That camp holds that “A.I. is a transformative new tool, the latest in a line of world-changing innovations that includes the steam engine, electricity and the personal computer, and that, if put to the right uses, could usher in a new era of prosperity and make gobs of money for the businesses that harness its potential,” rather than something “that must be restrained and deployed with extreme caution in order to prevent it from taking over and killing us all.”  We’ll see.

A solution to one of AI’s huge problems?  Possibly, in “Big Companies Find a Way to Identify A.I. Data They Can Trust” (Steve Lohr, The New York Times, November 30th).  That day, “a consortium of companies (released) standards for describing the origin, history, and legal rights to data,” along with “intended use and restrictions.”  The restrictions seem a way of stifling unwanted findings, but the provenance standards appear mandatory – though what about data already incorporated?

“AI microdosing” (Drake Bennett, Bloomberg, December 1st) drew a comparison with recent experiments in microdosing hallucinogenic drugs, suggesting that the software be allowed a tiny but real capacity to hallucinate, since “imagination – at the margins – is hard to distinguish from” hallucination.  Reasonable, but first we need better control over the current state of that problem.

Back to organization, we saw “How Nations Are Losing a Global Race to Tackle A.I.’s Harms” (Adam Satariano and Cecilia Kang, The New York Times, December 6th).  Mainly that’s slow execution on regulation, as “A.I. systems are advancing so rapidly and unpredictably that lawmakers and regulators can’t keep pace.”  So why can’t they focus on, say, ChatGPT, which is obtainable and in use now?  They may need to react to other products as they come out, but until those emerge, we should concentrate on what we actually have.  Similarly, from the same date and publication, “Experts on A.I. Agree That It Needs Regulation.  That’s the Easy Part” (Alina Tugend).

The sort of thing we would do well to see more of was the focus of “Microsoft and labor unions form alliance on AI” (Jackie Davalos and Josh Eidelson, Benefit News, December 11th).  The software company “will provide labor leaders and workers with formal training on how artificial intelligence works,” though not until a year from now, and the two sides will be “incorporating worker perspectives and expertise in the development of AI technology,” along with “helping shape public policy that supports the technology skills and needs of frontline workers.”  How successfully the idea will be implemented, of course, remains unknown.

Finally, “Should A.I. Accelerate?  Decelerate?  The Answer Is Both” (De Kai, The New York Times, December 10th).  This is partisan political complaining about the technology being used “to amplify polarization, bias and misinformation,” claiming that “A.I.’s are manipulating humanity” and so “we need to decelerate deployment of A.I.’s that are exacerbating sociopolitical instability.”  Unfortunately for the author, companies have their own agendas.  Develop your own product and you can direct it as you want.  Until then, we need to be personally responsible for what we believe.  And artificial intelligence will wander into 2024 with its massive achievements, and disasters, still in the future.
