So, even as AI takes on more tasks—and more complex tasks at that—humans must continue to question, at each step, how the output of these tools will affect people and society.
For example, if a model is designed to keep people engaged on a platform, at what point does that engagement become unhealthy? When does it drive overconsumption? Does it leave people open to exploitation? Businesses will always have commercial goals, but these need to be balanced against a consideration of the social impacts.
The most common way to achieve this balance is by maintaining a “human in the loop”: a person or group of people who consistently review, approve and adapt AI inputs and outputs. This oversight helps avoid issues like bias and ensures organizations maintain a nuanced, complete understanding of the decisions they are making.
For example, when financial services organizations assess a loan applicant traditionally, human reviewers consider both credit history and individual circumstances, like employment changes due to a pandemic. This holistic approach allows for factors like repayment history to provide a more complete view of the individual. Algorithms, on the other hand, rely on raw data to make decisions and often lack a nuanced—but ultimately very relevant—understanding of human circumstances.
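The loan-review pattern above can be sketched in code. This is a minimal, purely illustrative example of a human-in-the-loop gate, not any real lender's system: the `Application` fields, thresholds and routing labels are all hypothetical, chosen only to show how an algorithm can auto-decide clear-cut cases while routing contextual edge cases (like a pandemic-era employment gap) to a human reviewer.

```python
# Illustrative human-in-the-loop gate for automated loan decisions.
# All names and thresholds below are hypothetical.
from dataclasses import dataclass

@dataclass
class Application:
    credit_score: int             # raw score from the applicant's credit history
    model_approval_prob: float    # model's estimated probability of repayment
    recent_employment_gap: bool   # contextual factor the model may weigh poorly

def route_decision(app: Application,
                   auto_approve_at: float = 0.90,
                   auto_decline_at: float = 0.20) -> str:
    """Auto-decide only clear-cut cases; route everything else to a person."""
    # Contextual red flags always get human review, regardless of
    # how confident the model is.
    if app.recent_employment_gap:
        return "human_review"
    if app.model_approval_prob >= auto_approve_at:
        return "auto_approve"
    if app.model_approval_prob <= auto_decline_at:
        return "auto_decline"
    # The ambiguous middle band also goes to a reviewer.
    return "human_review"

print(route_decision(Application(720, 0.95, False)))  # auto_approve
print(route_decision(Application(720, 0.95, True)))   # human_review
print(route_decision(Application(640, 0.55, False)))  # human_review
```

The design choice that matters here is that the human reviewer is triggered by circumstances, not just by model uncertainty: a confident model score does not override a known contextual factor.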
These are precisely the sorts of issues that companies of all kinds need to consider and avoid when integrating generative AI into processes.
2. Gather diverse voices to establish AI governance
Companies cannot address—or likely even comprehend—the full implications of AI using only their existing teams and resources.
Because generative AI is a novel, largely unregulated technology, it falls to companies to decide how they manage both the inputs and outputs of these models, as well as their design, development, operation and adaptation. Every step of this process, from ethically sourcing training data to ensuring transparency with consumers and stakeholders, requires not just one specialized skill set but many specialized skill sets working in concert.
With this in mind, organizations should draw on external skills and expertise, such as academia, outside counsel, industry consortia, government agencies, sociologists and ethicists, to establish a governing body. Such a body could be a board, working group, steering committee or similar group whose job is to develop, implement and oversee governance controls.
At this stage, several Big Tech organizations are setting a visible example of ethical AI practice. Both Google and Microsoft have clearly outlined the principles and practices guiding their AI programs, and both are leading industry-wide discussions and collaborative efforts to address the challenges of responsible use. For companies just beginning their AI journey, it may be helpful to review the open-source materials these companies publish and use them as a blueprint for their own programs.
3. Envision AI systems that empower people rather than replace them
AI will likely be central to much of the work we do in the future. While it may seem the burden falls on the workforce to embrace AI, the onus is really on leaders to empower people to do so.
Companies need to upskill and reskill existing employees to enable the step change that AI represents. As part of this process, leaders must explain and demonstrate the value of the technology at both an individual and a corporate level and create a clear path toward adoption.
One way to embed ethics into generative AI is to train employees on ethical considerations during design and development. This means establishing a set of ethical principles to guide the development of AI systems and teaching employees how to apply those principles in their work. The principles should draw on widely accepted ethical frameworks such as transparency, accountability, fairness and non-maleficence. Building them into employee training helps ensure AI systems operate ethically and align with societal values and norms.
Likewise, to equip the future workforce with relevant skills and help them understand the implications of this technology, schools need to adapt their curricula to reflect AI’s outsized role as a productivity tool and to teach students how to use these new capabilities safely, securely and responsibly.
4. Commit to continuous transparency and accountability
To earn a high level of trust, companies must be transparent and accountable in their use of AI.