Artificial Intelligence (AI) is being lauded as the most critical trend across a number of industries, from healthcare to manufacturing. Such is its meteoric rise that its adoption among businesses increased by 60% between 2017 and 2018.
Despite the obvious potential, recent events have exposed how automated systems can lead to bias, both intentionally and unintentionally. For example, accidental bias was identified in cases where algorithms managed digital ads for STEM roles. With this trend only expected to accelerate, it is critical that the risk of bias is recognised and addressed.
Developing and encouraging biased stereotypes
While AI bias is creeping into the business world, a recent UNESCO report provided more concerning findings, revealing that voice-activated assistants with female voices such as Amazon’s Alexa instil views of gender subservience. As AI increasingly enters the home as well as the workplace, any associated bias may in fact emerge as a much broader societal issue. This appears to be an unintentional consequence of using primarily female voices for the growing number of digital assistants on the market. In fact, the choice to use a female voice for these systems was based on testing that found consumers were more likely to engage with a female voice than a male one. As a result, companies like Amazon and Google thought this would encourage wider use of their devices.
Another notable example of AI bias comes from an AI-based tool developed by Amazon to sort through résumés and identify the best candidates for interview. The company’s algorithm compared applicants to its current employee base to find the candidates that best fit the profile of a successful employee. However, as the existing employee population was primarily male, the AI system learned the historical bias embedded in that data set. For example, some all-female universities were ranked as weaker institutions simply because fewer existing Amazon employees had attended them.
These examples illustrate just some of the ways in which automated systems can absorb bias and reinforce it in humans. But whose responsibility is it to halt the advance of bias?
The human responsibility
Currently, technology companies put a lot of effort into increasing customer engagement, but not enough into the impact of their products on the wider community. Various global organisations have come under fire for this in the past, such as a well-known fast-food restaurant involved in a row over its responsibility for collecting the huge volume of its packaging left as rubbish on the streets. The issue of social impact is not a new one, but one that has evolved in line with digital advancements.
Now more than ever, tech-savvy companies using AI as part of their strategies need to accept their roles and responsibilities in reducing the risk and impact of bias inherent in their products and services.
The first step for any organisation to address the issue of bias is to admit it exists and proactively audit its automation and AI procedures. As simple as that seems, our Making AI Responsible and Effective report found that only half of businesses across the US and Europe have policies and procedures in place to identify and address ethical considerations – either in the initial design of AI applications or in their behaviour after the system is launched.
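The kind of audit described above can start small. One common starting point, not specific to any of the companies mentioned here, is to compare a model's selection rates across groups against the widely cited "four-fifths" rule of thumb. The sketch below is a minimal illustration in Python; all data, group labels and thresholds are hypothetical.

```python
# Minimal bias-audit sketch: compare a model's selection rates across
# two groups using the "four-fifths" (80%) rule of thumb.
# All data below is hypothetical, for illustration only.

def selection_rate(outcomes):
    """Fraction of candidates the model selected (1 = selected)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one.
    Values below 0.8 are a common red flag worth investigating."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical screening outcomes: 1 = invited to interview, 0 = rejected.
group_men = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]    # 70% selected
group_women = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]  # 30% selected

ratio = disparate_impact_ratio(group_men, group_women)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.30 / 0.70 ≈ 0.43
if ratio < 0.8:
    print("Warning: selection rates differ enough to warrant review.")
```

A check like this is only a screening step, not a verdict: a low ratio flags a disparity to investigate, while the causes and remedies still require human judgement.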
As AI becomes a mandatory strategic tool across multiple industries, the ethical considerations of intelligent systems should not just be viewed as “nice to have.” All companies need to be held accountable for the outcomes and impact of their products on society, whether that is because they are contributing to the world’s litter problem or encouraging unfair stereotypes.
In the case of AI voice assistants, perhaps the answer is personalised devices that let consumers choose the gender, accent or tone of voice they prefer, rather than a single default voice for everyone. Whatever the application of AI, businesses should act proactively in considering the full end-to-end impact of any product or service. Companies that do not consider the ethical ramifications of their automation projects are taking a significant legal risk.
About the Author
David Ingham, Digital Partner, Media & Entertainment, Cognizant. From very early in my life I knew I wanted to work with the Media & Entertainment industry and contribute to the experiences that I personally enjoyed (film, television, music, books, newspapers). And over my professional career I have been fortunate to partner with some great organisations and contribute to fascinating projects. I have worked on major integrations (ABC + Disney, New Line Cinema + Warner Bros.), transformed business processes at comic book companies and music organisations and most recently used artificial intelligence to predict Oscar winners and streamline business operations.
Featured image: Weissblick