6 tips for facilitating ethical AI in the enterprise
Artificial intelligence (AI) is potentially the single most disruptive technology of the digital era, as enterprises explore ways to harness machine learning (ML) and other AI tools to mine customer insights, identify talent and secure corporate networks. And while IT departments can quickly roll out and benefit from most technologies, evidence suggests that CIOs should exercise extreme caution when implementing AI, paying particular attention to its ethical implications.
The reason? AI suffers from a big bias problem. Amazon.com, for example, scrapped a recruiting tool after it failed to rate women fairly for software developer jobs and other technical posts. In another case, MIT and University of Toronto researchers found that Amazon's facial recognition software mistook women, especially those with dark skin, for men.
Biases abound in AI
Amazon.com is not alone: AI problems have surfaced at other companies and across other high-stakes domains. A Facebook program manager encountered algorithmic discrimination while testing the company's Portal video chat device. ProPublica showed that software used across the U.S. to predict future criminals was biased against African-Americans. And a UC Berkeley study of fintechs found that both face-to-face lenders and the algorithms used in mortgage lending charged Latinx and African-American borrowers higher interest rates.
Also of concern is discrimination around the margins, a bias that is insidious in its subtlety. In "proxy discrimination," ZIP codes may serve as proxies for race, word choice can serve as a proxy for gender, and joining a Facebook group about a genetic mutation may put a person in a high-risk category for health insurance, even though none of those sensitive attributes is explicitly coded into the algorithm.
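To make the proxy problem concrete, here is a minimal sketch, assuming a pandas DataFrame with hypothetical columns (zip_code, plan_type, race), of how a team might screen features for proxy risk by measuring how much information each one carries about a sensitive attribute:

```python
# Minimal proxy-screening sketch: flag features that strongly predict a
# sensitive attribute even when that attribute is excluded from the model.
# Column names (zip_code, plan_type, race) are hypothetical examples.
import pandas as pd
from sklearn.metrics import normalized_mutual_info_score

def flag_proxy_features(df: pd.DataFrame, sensitive: str, threshold: float = 0.3) -> dict:
    """Return features whose normalized mutual information with the
    sensitive attribute exceeds the threshold."""
    proxies = {}
    for col in df.columns:
        if col == sensitive:
            continue
        score = normalized_mutual_info_score(df[sensitive], df[col])
        if score >= threshold:
            proxies[col] = round(score, 3)
    return proxies

# Even with 'race' dropped from the training data, 'zip_code'
# may carry much of the same signal.
df = pd.DataFrame({
    "zip_code": ["10001", "10001", "60629", "60629"],
    "plan_type": ["basic", "plus", "basic", "plus"],
    "race": ["A", "A", "B", "B"],
})
print(flag_proxy_features(df, sensitive="race"))  # {'zip_code': 1.0}
```

A screen like this only surfaces candidates for review; deciding whether a correlated feature is a legitimate signal or an unacceptable proxy remains a human judgment.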
AI, it's clear, suffers from a digitized version of the biases that afflict the physical world. After all, algorithms are a "creation of human design" and inherit our biases, says Kate Crawford, co-founder of New York University's AI Now Institute. And such biases stretch back decades, according to IEEE Spectrum.
Accordingly, IT leaders are increasingly concerned with producing "explainable" AI: algorithms whose outcomes can be clearly articulated, ideally satisfying regulators and business executives alike. But given the inherent biases, perhaps what they really need is "ethical AI": algorithms that treat everyone they affect fairly.
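As a rough illustration of what "explainable" means in practice, consider this minimal sketch: a linear model whose weights can be read directly as the reasons behind a score. The feature names are hypothetical, and real deployments typically reach for richer tooling such as SHAP or LIME, but the principle is the same:

```python
# Sketch of an articulable model: a logistic regression whose coefficients
# can be reported as the factors driving each decision. Feature names and
# the tiny training set are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["years_experience", "num_certifications", "referral"]
X = np.array([[1, 0, 0], [3, 1, 0], [5, 2, 1], [7, 3, 1]])
y = np.array([0, 0, 1, 1])  # 1 = candidate advanced to interview

model = LogisticRegression().fit(X, y)

# Each coefficient states how strongly a feature pushes the decision,
# which is the kind of articulable outcome regulators ask for.
for name, weight in zip(features, model.coef_[0]):
    print(f"{name}: {weight:+.2f}")
```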
Using ethics to eliminate AI bias: 6 tips
A careful approach is critical as CIOs rev up their adoption of AI. The share of enterprises adopting AI climbed to 37 percent from just 10 percent four years earlier, according to Gartner's 2019 survey of 3,000 CIOs. For the near term, companies should work to institute ethics around their use of AI. Experts from Deloitte, Genpact and Fjord offer the following advice on deploying AI fairly.
Enlist the board and engage stakeholders
Because ethical issues related to AI can carry broad and long-term risks to a company’s reputation, finances and strategy, CIOs should engage with their board to mitigate AI-related risks, says David Schatsky, managing director of Deloitte’s U.S. innovation team. Infusing ethics into AI starts with determining what matters to stakeholders, including customers, employees, regulators, and the general public. “Organizations have to engage and be open and transparent about who the stakeholders are,” Schatsky says.
Create a “digital ethics” subcommittee
Boards already have audit, risk and technology committees, but it may be time to add a committee dedicated to AI, says Sanjay Srivastava, chief digital officer of Genpact, which designs and implements technologies for enterprises. Such a "digital ethics" committee must be composed of cross-functional leaders who can engage with stakeholders to help design and govern AI solutions, and it must stay abreast of regulations governing AI. Of the companies Genpact surveyed, 95 percent said they wanted to combat AI bias, but only 34 percent had the governance and controls in place to do so. "We advise clients to get started sooner than later," Srivastava says. "The awareness is there, people get it, but they have not implemented the governance and control."
Leverage design thinking
Whether companies build AI in-house or purchase a commercial tool, it behooves them to craft solutions using design thinking, which can help account for potential biases in their algorithms up front, says Shelley Evenson, managing director of Accenture's Fjord design consultancy. And while internal apps that use weather and social media signals to forecast sales or product demand carry less potential for harm than systems that directly affect employees and customers, bringing empathy into the process of designing technology is a good approach across the board.
Use technology to weed out bias
Corporate AI developers must also be trained to test for and remediate systems that unintentionally encode bias and treat users or other affected parties unfairly. Companies can also leverage tools that detect how data variables may be correlated with sensitive variables (such as age, sex or race), along with methods for auditing and explaining how machine learning algorithms generate their outputs. For example, Srivastava says, companies can insert digital "breadcrumbs" into algorithms to trace their decision-making processes.
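Genpact doesn't spell out the mechanics of those breadcrumbs, but one plausible reading is structured decision logging: record the model version, inputs and output alongside every automated decision so an auditor can reconstruct it later. A minimal sketch, with an assumed model interface and illustrative field names:

```python
# Hedged sketch of "digital breadcrumbs": log enough context with every
# automated decision that an auditor can reconstruct it later. The schema
# and helper names here are illustrative, not a specific vendor's API.
import json
import time
import uuid

def predict_with_breadcrumbs(model, features: dict, audit_log: list) -> float:
    """Score one case and record a breadcrumb for later audit."""
    score = model.score(features)  # assumed model interface
    audit_log.append({
        "decision_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": getattr(model, "version", "unknown"),
        "inputs": features,   # what the model saw
        "score": score,       # what it decided
    })
    return score

class ToyModel:
    """Stand-in for a real model: weights income only."""
    version = "risk-model-0.1"
    def score(self, features: dict) -> float:
        return min(1.0, features.get("income", 0) / 100_000)

log: list = []
print(predict_with_breadcrumbs(ToyModel(), {"income": 55_000}, log))
print(json.dumps(log[0], indent=2))
```

The design choice that matters is logging at decision time rather than reconstructing after the fact, so the audit trail reflects exactly what the model saw.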
Be transparent about your use of AI
Companies can help build trust with stakeholders by being transparent about their use of AI. For instance, rather than masquerade as humans (as many chatbots still do today), intelligent agents should identify themselves as such, Schatsky says. Companies should also disclose the use of automated decision systems that affect customers. Where possible, companies should clearly explain what data they collect, what they do with it, and how that usage affects customers.
Alleviate employees’ displacement anxiety
The degree to which AI will eliminate or transform jobs is not yet clear, but companies should begin educating employees on how their jobs may change and recommending ways they can reskill to stay relevant. That includes retraining workers whose tasks are expected to be automated, or giving them time to seek new employment. Insurer State Auto, for example, is training staff to handle more complex claims as robotic process automation (RPA) takes over lower-level tasks.
Bottom line
None of this is easy work, largely because there is little consensus on what counts as ethical for a given situation and a given set of stakeholders, Schatsky says. Whatever the approach, he adds, it would be prudent for enterprises to act rather than "wait for AI-related regulation to catch up."
Governments are moving the needle in that direction. In April, the European Commission published a set of guidelines on how organizations should develop ethical applications of AI. Two days later, U.S. lawmakers proposed the Algorithmic Accountability Act of 2019 to address high-risk AI systems, such as technology that detects faces or makes important decisions based on sensitive personal data.
Whatever the future holds for regulation, CIOs have some time to work through these concerns. At present, enterprise AI adoption is hampered by a lack of data and data quality issues; a paucity of machine learning modelers, data scientists and other AI specialists; uncertain outcomes; and, of course, ethics issues and biases, according to an O'Reilly survey.
Financial services, healthcare and education account for 58 percent of enterprises adopting AI, compared with just 4 percent each for telecommunications; media and entertainment; government; manufacturing; and retail, O'Reilly found.