ChatGPT: A Goldmine or an Ethical Minefield?

AI is now top of mind for company executives, but with experts warning of risks to society, companies need to set safety standards and ensure robust ethical oversight.

The buzz surrounding last November’s launch of ChatGPT – and the resulting tsunami of investment in “generative” artificial intelligence (AI) systems – has sharpened businesses’ interest in the emerging technology, not only for its potential to spark a productivity boom but also for the significant disruption it could bring to the labor market.

But the rush of groundbreaking AI launches, including last month’s release of GPT-4, has also raised ethical questions about the danger of AI systems reinforcing existing human biases. In March, Elon Musk and other AI experts even called for a six-month pause in the development of powerful new AI tools to allow enough time to set safety standards and head off potential risks to society.

The open letter was signed by more than 1,100 individuals within hours of its publication by the non-profit Future of Life Institute. “Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict or reliably control,” the letter read.

Meanwhile, Italy has temporarily banned ChatGPT over privacy concerns, but this has done little to dampen corporate enthusiasm for investing in AI systems. According to a recent Accenture survey, nearly two-thirds of organizations are prioritizing AI above other digital technologies, and the global AI market is projected to grow at a compound annual rate of 38% to reach nearly $1.6 trillion by 2030.

At the same time, companies including Google, Microsoft, and Adobe are adding AI features to their search engines and productivity tools, and some law firms, including Allen & Overy, are using AI chatbots to help lawyers draft contracts, client memos, and other legal documents.

Such companies are at an enticing, but dangerous, fork in the road. On one side lies the array of opportunities that AI presents, from revolutionizing work to boosting innovation in areas ranging from healthcare to supply chain management. It’s a potential goldmine. On the other lie landmines: many businesses worry about AI’s fairness, its reliability, and its frequent inability to explain in understandable terms how it reaches its conclusions – the “black box” problem – all of which could erode trust in the technology and hamper widespread adoption.

Ultimately, robust ethical oversight is critical for any organization looking to deploy AI tools in its workplace, products, or services. On top of that, it’s clear that senior leaders need training.

One concern is that the decision-makers implementing such technologies in their workplaces, products, or services often lack the knowledge or expertise to spot the ethical issues in AI, underlining the importance of upskilling. If a team is ready to deploy an AI product but needs the approval of an executive who knows little about the ethical risks, the brand’s reputation (let alone the executive’s) can be dangerously compromised.

In February, Google parent Alphabet shed $100 billion in market value in just one day after its chatbot, Bard, made inaccurate remarks. More broadly, chatbots pose risks for corporations because the inherent biases in their algorithms can lead to discriminatory behavior. In 2016, Microsoft released a chatbot called Tay on Twitter that generated racist content before being shut down.

News website CNET, meanwhile, recently used an AI tool that produced stories containing factual inaccuracies or plagiarized material. CNET also prompted a backlash from readers because it did not make the AI authorship immediately clear. Such incidents have fueled a debate in journalism, academia, and other fields over whether content creators should be required to disclose the use of AI in their work to demonstrate the authenticity of the material. For companies deploying AI, transparency will be of paramount importance.
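One practical approach is to build that transparency into the publishing workflow itself, rather than leaving it to case-by-case judgment. The sketch below is a hypothetical illustration (the `Article` record and the disclosure wording are my assumptions, not any publisher’s actual system) of how AI involvement might be captured as structured metadata and rendered as a reader-facing disclosure:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Article:
    """Hypothetical content record; real CMS schemas will differ."""
    headline: str
    body: str
    ai_assisted: bool = False
    ai_tool: Optional[str] = None        # e.g., the drafting model used
    human_reviewer: Optional[str] = None

    def disclosure(self) -> str:
        """Render a reader-facing disclosure line from the metadata."""
        if not self.ai_assisted:
            return ""
        reviewed = f", reviewed by {self.human_reviewer}" if self.human_reviewer else ""
        return f"This story was drafted with {self.ai_tool or 'an AI tool'}{reviewed}."

article = Article(
    headline="Compound interest, explained",
    body="...",
    ai_assisted=True,
    ai_tool="an in-house writing assistant",
    human_reviewer="a staff editor",
)
print(article.disclosure())
# -> This story was drafted with an in-house writing assistant, reviewed by a staff editor.
```

The design choice worth copying is that the disclosure is generated from the metadata rather than typed by hand, so it cannot silently fall out of sync with how the piece was actually produced.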

I recommend that companies start with a board oversight committee, one in which responsibility for ethics is shared among several people with technical knowledge and with experience not only in regulatory matters but also in communicating the message of ethics throughout the organization. This could go a long way toward ensuring that ethics are top of mind across the company.

Moreover, there are four further best practices of “good AI hygiene” that can reduce business risks and liability.

First, I recommend that you establish an AI governance framework to guide the organization’s efforts to identify and reduce potential harm from AI systems. Many such frameworks already exist and can be adopted and adapted, such as the AI Risk Management Framework developed by the National Institute of Standards and Technology (NIST), part of the US Department of Commerce.
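For concreteness, the NIST framework organizes its guidance around four core functions – Govern, Map, Measure, and Manage. The sketch below shows one hypothetical way a team might key an internal risk register to those functions; the field names and the example entry are my own illustration, not part of the NIST framework itself:

```python
from dataclasses import dataclass
from enum import Enum

class RMFFunction(Enum):
    """The four core functions of the NIST AI Risk Management Framework."""
    GOVERN = "govern"
    MAP = "map"
    MEASURE = "measure"
    MANAGE = "manage"

@dataclass
class RiskEntry:
    """Hypothetical risk-register row; adapt the fields to your own framework."""
    system: str            # the AI system under review
    function: RMFFunction  # which RMF function the activity falls under
    risk: str              # the potential harm being assessed
    owner: str             # the accountable person or team
    mitigation: str        # the planned or implemented control

register = [
    RiskEntry(
        system="resume-screening model",
        function=RMFFunction.MEASURE,
        risk="disparate selection rates across demographic groups",
        owner="AI Ethics Officer",
        mitigation="bias audit before each release",
    ),
]

for entry in register:
    print(f"[{entry.function.value}] {entry.system}: {entry.risk} -> {entry.mitigation}")
```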

Second, companies should designate a point of contact in the C-suite who will be responsible for AI governance, such as an AI Ethics Officer. This person would coordinate the handling of questions and concerns (both internal and external) and ensure new challenges are identified and addressed with appropriate oversight.

Third, leaders should communicate to the organization the expected testing timeline for new AI projects.

Fourth, it will be important for companies to document the relevant findings after each stage of the AI’s development to promote consistency, accountability, and transparency; such documentation can also flag when the system needs to be re-tested, as sketched below.
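A lightweight way to operationalize this is an append-only audit log keyed to development stages, so each round of findings is preserved and a failed check flags the need for re-testing. Here is a minimal sketch, assuming JSON-lines storage and a four-stage lifecycle of my own choosing:

```python
import json
from datetime import datetime, timezone

# Assumed lifecycle stages; substitute your organization's own.
STAGES = ("data-collection", "training", "evaluation", "deployment")

def log_finding(path: str, stage: str, finding: str, passed: bool) -> None:
    """Append one timestamped finding to a JSON-lines audit log."""
    if stage not in STAGES:
        raise ValueError(f"unknown stage: {stage}")
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "stage": stage,
        "finding": finding,
        "passed": passed,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

def needs_retest(path: str) -> bool:
    """Any failed finding in the log suggests the system should be re-tested."""
    with open(path, encoding="utf-8") as f:
        return any(not json.loads(line)["passed"] for line in f)

log_finding("audit.jsonl", "evaluation", "accuracy gap between subgroups exceeds 5%", passed=False)
print(needs_retest("audit.jsonl"))  # True -> schedule a re-test before deployment
```

Because the log is append-only, earlier findings are never overwritten, which is what makes it useful for accountability after the fact.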

Concerns about ethical practices will only grow as AI drives business change along highways that, for now, have no speed limits or traffic laws. With regulation catching up fast, companies will need to be prepared. That means establishing AI governance proactively today – because this object in the rear-view mirror is closer than it appears.

Originally published @ I by IMD

Copyright (c) 2023 by Faisal Hoque. All rights reserved.
