Friday, June 9, 2023

Artificial Intelligence – Key Issues and Considerations – I

Although the next round of AI’s technological progress will be in the background for a while, there’s no getting away from this topic.  It sits cheek by jowl with jobs and the economy, and we know little more about what effect it will ultimately have than we knew about that of cars when Karl Benz and Gottlieb Daimler were tinkering with their contraptions.  I intend to cover only the most important concerns and leave off the seemingly endless pieces speculating on whether AI will be a boon to or the end of humanity, for the same reasons baseball writer Bill James gave about another issue decades ago: “1.  I don’t know, and 2.  You don’t know, either.”  This is the first of at least three such consecutive posts.

Oldest, but still within the month, is “8 Questions About Using AI Responsibly, Answered” (Tsedal Neeley, Harvard Business Review, May 9th).  After “How should I prepare to introduce AI at my organization?” (“Ensure that everyone has a basic understanding of how digital systems work…  make sure your organization is prepared for continuous adaptation and change… build AI into your operating model”), we got “How can we ensure transparency in how AI makes decisions?” (“Recognize that AI is invisible and inscrutable and be transparent in presenting and using AI systems… prioritize explanation as a central design goal”), “How can we erect guardrails around LLMs [large language models] so that their responses are true and consistent with the brand image we want to project?” (“Tailor data for appropriate outputs… document data”), “How can we ensure that the dataset we use to train AI models is representative and doesn’t include harmful biases?” (“consider the trade-offs you make…” and get “diverse teams” to “collect and produce the data used to train models”), “What are the potential risks of data privacy violations with AI?” (follow the seven Privacy by Design principles), “How can we encourage employees to use AI for productivity purposes and not simply to take shortcuts?” (“evaluate whether AI’s strengths match up to a task and proceed accordingly”), “How worried should we be that AI will replace jobs?” (not across the board), and “How can my organization ensure that the AI we develop or use won’t harm individuals or groups or violate human rights?” (“Slow down and document AI development… establish and protect AI ethics watchdogs… watch where regulation is headed”).  A worthwhile primer.

Four days later, The Economist covered the second-to-last point above in “Your new colleague; Artificial intelligence is about to turn the economy upside down.  Right?”  The article cited a Goldman Sachs paper projecting that “in a best-case scenario generative AI could add about $430 billion to annual global enterprise-software revenues,” which works out to just under $400 for each of the world’s 1.1 billion office workers.  Yet the change could be slow, considering examples such as the roughly 90-year lag between the arrival of automating technology and the decimation of telephone operators’ jobs, and the continuing presence of subway-train drivers and traffic police.  Additionally, “it is even possible that the AI economy could become less productive,” as may be the case with smartphones and remote work and certainly was, for a long time, with personal computers.

Here’s a recipe for something going wrong: “AI tools being used by police who ‘do not understand how these technologies work’: Study” (Chris Eberhart, Fox News, May 15th).  Respondents were “not familiar with AI, or with the limitations of AI technologies,” although they liked having the capability.  Perhaps a basic course of some sort should be required.

A fine semi-philosophical question hit the press in “Is It Too Late to Regulate A.I., or Too Soon?” (Timothy B. Lee, Slate, May 18th).  It started with an account of OpenAI CEO Sam Altman’s May 16th “appearance before the Senate Judiciary Committee,” in which the corporate leader asked for a new agency that would license “any effort above a certain scale of capabilities, and could take that license away and ensure compliance with safety standards,” with special concern about systems that could “self-replicate and self-exfiltrate into the wild.”  Such a regulatory system is already being built in Europe.  Either approach could call for anything from scrutiny to a ban on incremental improvements to existing releases, such as GPT-4, and could greatly delay availability of future ones.  In the meantime, regulatory bodies would need to understand the current issues and the technical state of the art, which could also take a while.  No, it’s not too late, but if governments, not noted for being nimble, cannot keep up, it will be too soon.

After all these high-level AI concerns, how about some pithy advice on how to use it?  We got that in “On Tech A.I.: Get the best from ChatGPT with these golden prompts” (Brian X. Chen, The New York Times, May 25th).  The suggestions here also apply to Microsoft’s Bing and Google’s Bard.  First, “if you’re concerned about privacy, leave out personal details like your name and where you work,” as what you type could be shared; omit “trade secrets or sensitive information”; and be aware that the tools may produce “hallucinations,” since they “can make things up” “while trying to predict patterns from their vast training data,” some of which is “wrong.”  From there, use prompts that start with “act as if,” continue with the role you want the software to play, and end with “tell me what else you need to do this.”  Instead of starting fresh each time, “keep several threads of conversations open and add to them over time.”
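For anyone reaching these models through an API rather than the chat window, here is a minimal sketch of that same pattern.  It assumes the OpenAI Python library (the pre-1.0 ChatCompletion interface) and the gpt-3.5-turbo model name; neither that library usage nor the trip-planning prompt comes from Chen’s article, and they stand in only to illustrate the “act as if” and keep-the-thread-open advice.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; keep real keys out of source control

# The running "thread": Chen's advice is to keep a conversation open and
# add to it over time rather than starting fresh each session.
messages = [
    # "Act as if..." tells the software what role to play; the closing line
    # asks it what else it needs, per the article's suggestion.
    {
        "role": "user",
        "content": (
            "Act as if you are an experienced travel agent. "
            "Plan a three-day, food-focused trip to Lisbon. "
            "Tell me what else you need to do this."
        ),
    },
]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # assumed model name
    messages=messages,
)
reply = response["choices"][0]["message"]["content"]
print(reply)

# To keep the thread going, append the model's reply and the next question
# to the same list instead of opening a brand-new conversation.
messages.append({"role": "assistant", "content": reply})
messages.append({"role": "user", "content": "My budget is modest; adjust the plan accordingly."})
```

The same caution applies in code as in the chat window: leave names, employers, trade secrets, and other sensitive details out of the prompt text.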

More next week, as this area continues to evolve.
