AI models are prejudiced – and it is up to us to fix them
The biggest threat from AI is its self-perpetuating bias, which can have devastating impacts on health, job opportunities, access to information and even democracy. What can we do when models are trained on data that simply reflects ingrained societal biases? Comprehensive regulation around algorithmic accountability is still lacking, but some organizations and governments are making progress.
Artificial intelligence is an amazing catalyst for digital transformation. Everywhere from wealth management to population health to touchless retail operations, technologies like machine learning and computer vision are making algorithms fast, portable and ubiquitous. But there are downsides, too.
Ask anyone to identify the threat they see coming from AI, and you will get a range of answers. “The robots are taking our jobs.” “Big Brother is watching us.” They’re all reasonable concerns, which I’ll address elsewhere. But for me, the biggest challenge humanity faces from AI is the self-perpetuating bias at the heart of its algorithms.
With AI evolving at such a dramatic speed, already-problematic societal inequalities are being reinforced even as I write. And if we don’t tread carefully, these models will cause irreparable damage.
AI systems are not created in a vacuum: they are built and operationalized by people. They learn from us. That means their behaviors reflect the best – and the very worst – of human characteristics. AI models are built from data and algorithms that simply inherit our own deeply ingrained biases.
Perpetuating human bias
This matters because businesses use data-driven models to make significant decisions that affect people's lives and prospects, whether that’s who gets approved for a loan, who is shortlisted for a job or who gets parole. By perpetuating human biases and making them more difficult to detect, AI models can – and do – exacerbate discrimination and widen inequalities.
To root out discrimination, first we must understand how computerized decision-making can lead to bias. Only then can we build governance mechanisms that can detect and prevent it.
Algorithms absorb prejudices associated with attributes and identifiers such as race, gender or disability status at an astonishing pace. Whether or not sensitive information is intentionally collected, it is typically embedded within large datasets. AI models learn these correlations when they are trained on historical data; bias creeps in when an algorithm is fed social-category information without safeguards against discrimination.
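To make this concrete, here is a minimal sketch of a “proxy test” on synthetic, made-up data (not any real client dataset): even after a sensitive attribute such as group membership is dropped, a simple classifier can often recover it from the remaining features, which means the bias pathway is still there.

```python
# Hypothetical sketch: checking whether a dropped sensitive attribute
# still leaks into a dataset through correlated "proxy" features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5_000

# Synthetic data: 'group' is the sensitive attribute we intend to exclude.
group = rng.integers(0, 2, size=n)
# Proxy features correlated with group (e.g. postcode, prior salary).
postcode_risk = 0.7 * group + rng.normal(0, 0.5, size=n)
prior_salary = 1.0 - 0.6 * group + rng.normal(0, 0.5, size=n)
X = np.column_stack([postcode_risk, prior_salary])

# Proxy test: how well can the "non-sensitive" features recover the group?
X_tr, X_te, g_tr, g_te = train_test_split(X, group, random_state=0)
clf = LogisticRegression().fit(X_tr, g_tr)
auc = roc_auc_score(g_te, clf.predict_proba(X_te)[:, 1])
print(f"Group recoverable from remaining features: AUC = {auc:.2f}")
# An AUC well above 0.5 means the sensitive attribute is still embedded,
# so simply removing the column does not remove the bias pathway.
```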
Discrimination for hire
A salient yet everyday example of the dangers of bias within algorithms is the use of automation to streamline the recruitment screening process. Although well-intentioned, too often machine learning algorithms entrench systemic biases in ways that we wouldn’t tolerate among people.
In 2018 it was reported that Amazon's AI hiring software downgraded resumes mentioning the word "women's" and penalized graduates of all-women's colleges. It was merely taking its cue from the fact that the company had a limited history of hiring female engineers and computer scientists. In the same year, study findings suggested that Microsoft's AI facial recognition software assigned Black men more negative emotions than their white counterparts. As a result of these biases, automated systems unjustifiably deny opportunities to people from historically disadvantaged groups.
There are troubling instances of racism embedded in justice systems, too. The COMPAS algorithm, used to predict reoffending, was found to discriminate against people of color. In healthcare, an algorithm used to predict which patients need extra medical care was found to heavily favor white patients over Black patients, because it relied on past healthcare costs as a proxy for medical need.
These examples are just the tip of the iceberg when it comes to the ways technology can amplify oppression and undermine equality. As AI becomes more ubiquitous, the scale of such cases grows by orders of magnitude, paving the way to a dystopian future of machine rule.
Of course, some statistical biases are inherent in data and are necessary for a model's accuracy. To develop algorithms for breast cancer, researchers must sift through almost exclusively female patient data. With sickle cell anemia, the dataset has a necessary racial bias. And there is a higher prevalence of autism among boys than among girls. It would simply make no sense to base this research on “balanced” data.
We have a moral duty to ensure AI is fair. But there is a business imperative, too. As AI becomes increasingly present in our everyday lives, a socially aware demographic of customers is growing wiser to the implications of biased AI and will do what it can to avoid the companies that deploy it.
Acceptable judgment
So, what can we do about it? It’s not as simple as removing sensitive features such as gender and race; even without them, models will internalize stereotypes. Neither is model transparency a silver bullet. Interpretability and explainability will not eliminate bias in AI models, but they will help.
At UST, when we’re implementing large-scale AI solutions with clients in healthcare, finance, retail, and manufacturing, we recognize that we can’t just leave the decision-making to the machines; human expertise is crucial for identifying and mitigating biases. We commit to ongoing algorithm auditing as part of the data science lifecycle. We decide what constitutes acceptable judgment before implementing an algorithm, and revisit that decision when we start to see results.
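As one illustration of what such an audit can include (a minimal sketch on made-up decisions – not UST’s production tooling), a recurring check is to compare the model’s selection rates across groups and flag the result when the ratio falls below a screening threshold such as the commonly cited four-fifths rule:

```python
# Minimal audit sketch: compare selection rates across groups and flag
# disparate impact (illustrative only; groups and threshold are assumptions).
import numpy as np

def disparate_impact(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Ratio of the lowest to the highest group selection rate."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return min(rates) / max(rates)

# Made-up model decisions (1 = approved) for two groups.
rng = np.random.default_rng(1)
group = rng.integers(0, 2, size=1_000)
y_pred = rng.binomial(1, np.where(group == 0, 0.45, 0.30))

ratio = disparate_impact(y_pred, group)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # the 'four-fifths' guideline often used as a first screen
    print("Flag for review: selection rates differ substantially across groups.")
```

A check like this does not prove fairness on its own, but it turns “acceptable judgment” into something that can be measured and revisited.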
Operationalizing fair AI involves trade-offs: building accurate, discrimination-free models is harder when there are restrictions on the factors that can be used. We must have honest discussions about these trade-offs, which may involve the speed, accuracy, or precision of the model.
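To make the trade-off tangible, here is an illustrative sketch on synthetic data (the scores, groups and 40% selection target are all assumptions, not a recommendation): equalizing selection rates across groups with group-specific thresholds typically shifts accuracy, and it is that shift stakeholders need to discuss openly.

```python
# Illustrative trade-off sketch: equalizing selection rates across groups
# via group-specific thresholds, and measuring the accuracy cost.
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
group = rng.integers(0, 2, size=n)
# Synthetic scores and labels with a built-in disparity between groups.
score = np.clip(rng.normal(0.55 - 0.1 * group, 0.2, size=n), 0, 1)
label = rng.binomial(1, score)

# Single threshold: selection rates differ by group.
pred_single = (score >= 0.5).astype(int)

# Group-specific thresholds chosen so both groups are selected at ~40%.
pred_equal = np.zeros(n, dtype=int)
for g in (0, 1):
    thr = np.quantile(score[group == g], 0.6)  # top 40% within each group
    pred_equal[group == g] = (score[group == g] >= thr).astype(int)

for name, pred in [("single threshold", pred_single), ("equalized rates", pred_equal)]:
    acc = (pred == label).mean()
    rates = [pred[group == g].mean() for g in (0, 1)]
    print(f"{name}: accuracy={acc:.3f}, selection rates={rates[0]:.2f}/{rates[1]:.2f}")
```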
Steps toward standards
I’m concerned that there is currently no comprehensive, internationally enforceable standard or guidance to ensure that AI is used in a safe, fair, robust and equitable manner. There is certainly an appetite for one – and steps are being taken. Various groups and governments are developing standards that outline the stakes and challenges of bias in AI, and identify and describe where and how it can contribute to harm.
A recent publication from the National Institute of Standards and Technology (NIST) examines bias in AI. In the US, the proposed Algorithmic Accountability Act would require companies to assess their algorithms for bias. Under the General Data Protection Regulation (GDPR), the EU gives individuals the right to meaningful information about automated decisions made about them. Singapore's Model AI Governance Framework has a strong focus on internal governance, the level of human involvement in decision-making, operations management and stakeholder communication.
There are many more disparate examples. But algorithms operate across borders; we need global leadership on this. By providing stakeholders and policymakers with a broader perspective and necessary tools, we can stop the bigot in the machine from perpetuating its prejudice.
We’re standing on the precipice of an ever-expanding digital landscape, fraught with concerns about data collection, model inequality, privacy and shifting norms in AI governance. Social constructs and problems we have faced in the past are naturally reflected in this new realm.
There’s a lot of work to be done. But I remain optimistic that we will succeed in making AI work for humanity.
Read more about UST's approach to innovation with AI and ML and explore our focused solutions for business users and IT developers.