We live in a world plagued by stereotypes, where countless ‘isms’ drive discrimination. This International Women’s Day (IWD), I want to talk about the importance of preventing societal bias from resurfacing in Artificial Intelligence (AI), a technology that dominates our present and, it seems, our future.
Attending Tech Show London at ExCeL London last week proved insightful and informative. With an array of stages and booths, you’d be hard-pressed to find a quiet moment. I was particularly taken with Ivana Bartoletti’s keynote speech, which explored the development of fair AI for organisations.
Bartoletti is Global Chief Privacy Officer at Wipro and internationally recognised for her expertise and leadership in privacy, data protection and responsible technology. Author of ‘An Artificial Revolution’, she is an advocate for more rigorous regulation of AI.
Urging her audience to watch the film ‘Coded Bias’, which uncovers how humans’ own biases become embedded in technology, Bartoletti delved straight into ethical AI and algorithmic bias.
Tech activism
Though she acknowledged the existence of 120 data privacy laws across the globe, she noted that these frame ‘fairness’ in terms of algorithmic decision-making, not conscious morality.
Consider the first year of the pandemic, when students completed their classes from home. As The Verge reported, exam results were based on “a controversial algorithm” that seemed to produce better grades for more affluent students. Bartoletti reaffirmed this, telling the audience that poorer students were penalised by the system regardless of their academic achievement.
Other cases of politics seeping into AI include online job adverts, which tend to favour men for higher-paid roles. As The Guardian explained: “Female job seekers are much less likely to be shown adverts on Google for highly paid jobs than men, researchers have found.”
Whilst this may alarm you, Bartoletti reminded the audience that this is the result of an algorithm processing real data. She noted that the data is merely reinforcing “the breadth of existing inequalities in society that have been replicated and perpetuated in political decision-making.”
Fair processing for positive outcomes
Much scrutiny and debate surround the fairness and ethics of data processing. On the one hand, the biases that emerge are rooted in the pre-existing facts and inequities of society.
Facial recognition is used worldwide. It can be used on a small scale to automate the tagging of pictures, but it can also aid police investigations. In recent years, controversy has shrouded the efficacy of AI-based facial recognition, which doesn’t always perceive faces equally. WIRED reported that “algorithms have a harder time recognising people with darker skin.”
If women are being shown lower-paid jobs, it’s because they typically earn less than men, and often undertake unpaid care roles that render them practically invisible in the data. Private schools are better funded and therefore more likely to produce better results.
These are cases of systemic inequalities, deeply entrenched across the globe on regional and national levels. It is undeniable that most algorithmic results are based on, and often mirror, the world. However, the way in which we process this data must be improved in order to better the outcome. If we continue to replicate inequalities, they will never cease to exist.
Bartoletti suggested that certain categories have been overlooked in data processing and that we need to “massage” the data. One audience member asked whether this term meant the data had to be ‘spun’ or ‘altered’, effectively making it unreliable and inaccurate. This is not the case, she responded.
By omitting or inserting certain categories, and by understanding the data more widely, processing can take into account factors that might otherwise be overlooked. This highlights the importance of having a diverse team, whatever the setting, to ensure that factors such as these are not missed.
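To make this concrete, here is a minimal sketch of one way such an adjustment might look in practice: reweighting records so that an under-represented group carries equal statistical weight. This is an illustration rather than Bartoletti’s own method, and the column names and figures are invented.

```python
# Hypothetical illustration: reweighting records so an under-represented
# group is not drowned out by the majority in the data. The column names
# and figures are invented for the example.
import pandas as pd

# Toy salary dataset, with one group appearing far less often than the other.
df = pd.DataFrame({
    "gender": ["F", "F", "M", "M", "M", "M"],
    "salary": [28000, 31000, 35000, 42000, 39000, 47000],
})

# Inverse-frequency weights: each group contributes equally overall,
# regardless of how many rows it has.
weights = 1.0 / df.groupby("gender")["gender"].transform("count")

# An unweighted average is dominated by the larger group...
print(df["salary"].mean())
# ...while the weighted average treats both groups equally.
print((df["salary"] * weights).sum() / weights.sum())
```

The point is not that one number is “truer” than the other, but that the choice of how to process the data is itself a decision with consequences for the outcome.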
By widening parameters, and acknowledging factors such as unpaid work that may affect pay statistics, we can work towards a fairer, more ethical future that uses AI for good.