AI: We’re at an Enticing but Perilous Fork in the Road
Just before I began writing this article, I was reading yet another apocalyptic op-ed about the dangers of artificial intelligence (AI): destruction of countless jobs, fake news made easy, privacy utterly decimated, to cite some of the more common anxieties.
An epiphany of sorts followed. Why, if AI seemed the devil incarnate in so many forms, was it already such a dominant presence in any number of settings?
As in:
- AI was a primary driver of Moderna’s lightning-fast identification of its mRNA-based vaccine against COVID-19;[1]
- AI-driven recommendations account for 80 percent of what viewers watch on Netflix;[2]
- In agriculture, AI applications combining computer vision, robotics, and machine learning identify crop defects and nutrient deficiencies in the soil, boosting production.
And the list of positives goes on. In turn, that raised several questions. First, doesn’t it make sense that any rational discussion of AI cover its benefits as well as its potential dangers? Moreover, since, much like the tide, the spread of AI seems a largely unstoppable force, why not learn how best to implement it and, from there, manage it both safely and for optimal results?
The Good
There are simply too many positives to AI to enumerate them all. Here’s a mere sampling:
- Drawing on previously gathered information and applying appropriate algorithms, AI can greatly reduce the possibility of errors in decision making, and it can make those decisions a good deal faster.
- Assistance with repetitive tasks.
- Analysis of massive amounts of data.
- New invention and innovation. For instance, with space travel becoming ever more common and ambitious, NASA and other organizations are challenged to design and build parts faster and more effectively. While human scientists may be capable of developing a handful of possible ideas in a week, AI technology can create and study as many as 40 designs in a single hour. Not only is the process faster, but the AI-designed parts are generally stronger and lighter than their human-produced counterparts. As one NASA scientist put it: “It comes up with things that, not only we wouldn’t think of, but we wouldn’t be able to model even if we did think of it.”[3]
- Medical research and treatments. New proteins are useful in treating diseases such as cancer and diabetes, but creating them used to be an outright slog, involving laborious blueprints and prototypes that usually didn’t work. Now AI can do it in a fraction of the time with far greater accuracy. Researchers have also employed the technology in malaria and Parkinson’s research.[4]
The Bad
There’s little doubt that many are worried about the impact the increasing implementation of AI may have on nearly everything to do with our lives. So much so that, in a move akin to tobacco executives urging people not to light up, AI industry leaders have gone public about the dangers of the very product they work with:
- In March 2023, an open letter signed by Elon Musk and other technologists warned that massive AI systems pose serious risks to humanity. Weeks later, Geoffrey Hinton, a pioneer in developing AI tools, quit his research role at Google, citing similar warnings.[5]
- More than 500 business and science leaders, including representatives of OpenAI and Google DeepMind, have signed a brief statement saying that addressing the risk of human extinction from AI “should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”[6]
- Implementing and updating AI-related hardware and software can be pricey.
- AI systems cannot feel human emotions or truly factor them into decisions. Purely empirical choices, made without elements such as empathy, can be misguided.
Foundational Capabilities
There are ample lists of both the pluses and minuses of AI. That raises the next issue: the tipping point that can effectively push AI into one category or the other.
Leadership seems to be the common thread among the varied factors. Those making sweeping decisions need to be fully informed about AI’s potential as well as its landmines. That starts with refusing to adopt a simplistic approach. AI is simply not “another form of technology” that lends itself to a plug-and-play methodology. Rather, in addition to other formulaic steps, leaders need to appreciate that AI means culture transformation: not just a shift in how things are done, but a reinvention at the very core of how businesses and organizations do what they do, and of the ramifications of those activities.
Further, successful adoption of any sort of AI mandates attention to data. As noted earlier, one of AI’s most powerful capabilities is its capacity to analyze large amounts of information and, from there, arrive at the best choices and decisions possible.
But, it should be added, only when that data is of the highest quality. Given the nature of AI, its potential depends as much on the accuracy and trustworthiness of data as on the quantity. Without reliable information, the validity and accuracy of AI’s algorithms can only suffer.
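To make that concrete, here is a minimal sketch of the kind of automated quality check an organization might run before data ever reaches a model. It assumes Python with pandas, and the column names and the five percent missing-data threshold are hypothetical choices for illustration, not a prescribed standard.

```python
# Hypothetical data-quality gate: column names and thresholds are
# illustrative assumptions, not a prescribed standard.
import pandas as pd

def quality_report(df: pd.DataFrame, max_missing: float = 0.05) -> dict:
    """Flag basic reliability problems before data reaches a model."""
    missing_share = df.isna().mean().to_dict()  # fraction missing per column
    return {
        "missing_share": missing_share,
        # Exact duplicate rows quietly inflate the apparent sample size.
        "duplicate_rows": int(df.duplicated().sum()),
        # Columns whose missing-data rate exceeds the tolerance.
        "columns_over_threshold": [
            col for col, share in missing_share.items() if share > max_missing
        ],
    }

if __name__ == "__main__":
    # Toy records with one missing value and one duplicated row.
    df = pd.DataFrame({
        "customer_id": [1, 2, 3, 3],
        "spend": [120.0, None, 80.0, 80.0],
    })
    print(quality_report(df))
```

The point is less the specific checks than that they run routinely: feeding a model data that fails such a gate undermines the validity of everything downstream.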
Technological infrastructure is another critical component, one that enables organizations to effectively leverage data and, from there, drive dependable machine learning solutions. Organizations need scalable data storage and high processing power to run complex algorithms, along with robust software tools and libraries for building and training AI models. Not only does that strengthen existing AI capacity, but it also positions businesses and others to scale up their efforts as AI matures over time.
Governance
A major point of concern remains: governance and ethics. One clear challenge to ensuring consistent ethical behavior throughout a company is getting widespread buy-in: a mindset rather than a mandate. That may prove too much for one person to carry. An alternative is a board-level oversight committee, one where the issue of ethics is divided among several people: for instance, one with technical knowledge, another with experience in regulatory matters, and a third charged with communicating the message of ethics throughout the organization. That can go a long way toward ensuring that ethics stay top of mind across the company, rather than one person struggling to wear too many hats.
Additionally, here are five best practices of “good AI hygiene” to reduce risks and liability, as I outlined in one of my previous IMD articles on Corporate Governance:
- Establish an AI governance framework. A growing number of frameworks exist to guide efforts to identify and reduce harm from AI systems. The National Institute of Standards and Technology (NIST), housed in the US Department of Commerce, for example, has established an AI Risk Management Framework, which offers resources to better manage AI risks to individuals, organizations, and society.
- Identify a designated point of contact in the C-suite who will be responsible for AI governance, such as an AI Ethics Officer. This person will handle questions and concerns (internal and external), coordinate responses, and ensure that new challenges are identified and addressed with appropriate oversight.
- Designate (and communicate) the stages of the AI lifecycle at which testing will be conducted (e.g., pre-design, design and development, deployment). This process should include the expected testing timeline and ongoing oversight to identify changes as the AI learns.
- Document relevant findings after each stage to promote consistency, accountability, and transparency; a sketch of what such a record might look like follows this list.
- Implement routine auditing. Like yearly checkups, boards should mandate that AI use be subject to regular audits. These can involve hypothetical cases that outside experts run at the direction of the board of directors and legal team. Audits also help establish a record of intent to mitigate, which can prove valuable in a lawsuit or in a response to a regulatory body.
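To illustrate what documenting findings at each lifecycle stage might look like in practice, here is a minimal sketch of a structured audit record, again in Python for consistency. The stage labels mirror the examples in the list above; the system name, field names, and follow-up actions are hypothetical, not part of any official framework.

```python
# Hypothetical per-stage audit record; field names and stage labels
# are illustrative assumptions, not part of any official framework.
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class Stage(Enum):
    PRE_DESIGN = "pre-design"
    DESIGN_AND_DEVELOPMENT = "design and development"
    DEPLOYMENT = "deployment"

@dataclass
class StageFinding:
    system: str        # which AI system was tested
    stage: Stage       # lifecycle stage at which testing occurred
    tested_on: date    # when the test was conducted
    findings: str      # what the test revealed
    reviewed_by: str   # accountable owner, e.g., the AI Ethics Officer
    follow_ups: list[str] = field(default_factory=list)

# Example: recording a deployment-stage finding for a hypothetical model.
record = StageFinding(
    system="loan-approval-model",
    stage=Stage.DEPLOYMENT,
    tested_on=date(2023, 9, 1),
    findings="Approval rates drifted for one applicant segment.",
    reviewed_by="AI Ethics Officer",
    follow_ups=["Retrain on refreshed data", "Re-audit in 90 days"],
)
print(record)
```

Keeping records in one consistent shape across stages is what makes the routine audits in the final item practical, and it builds the documented record of intent to mitigate mentioned above.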
Balance is Everything
Clearly, we’re at an enticing but perilous fork in the road. All its potential aside, there are legitimate dangers to AI, and not ones that threaten only the most paranoid among us. Pitfalls such as pervasive bias and the capacity to generate dangerous misinformation are very real and demand the most serious attention.
At the heart of the matter is the essential value of a balanced perspective. While reports of AI flaws and foibles can command clicks, it’s critical to bear in mind all the good that AI has driven and will continue to drive. We intend to maintain that equity of thought as we continue our research and expand it into a broader body of work.
####
[1] Bughin, Jacques, Gjepali, Ivan, “Now It is Time for AI Transformation,” The European Business Review, September 19, 2023.
[2] Ibid.
[3] Buchanan, Larry, Paris, Francesca, “35 Ways Real People Are Using A.I. Right Now,” New York Times, April 14, 2023.
[4] Ibid.
[5] Metz, Cade, “‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead,” New York Times, May 4, 2023.
[6] “It’s time to talk about the known risks of AI,” The International Journal of Science, June 29, 2023.
Copyright (c) 2023 by Faisal Hoque. All rights reserved.