After the head of Google's AI ethics team was fired, AI is back at the centre of controversy. The most common question: are AI systems discriminatory? Let's take stock of the situation to understand what lies behind this debate and how to answer it.
Is AI discriminatory, sexist and racist? Yes and no. To understand the reasons for such an answer, we need to look at the broader scenario it stems from.
Those who criticize AI get fired
You have probably read about the dismissal – or the acceptance of a resignation that was never handed in! – of one of the most respected and well-known AI researchers, Timnit Gebru (The story of the researcher who accuses Google of censorship).
Young, female and black, Gebru was, until not long ago, the leader of the AI ethics team, a body the Mountain View company itself had strongly wanted in order to ease concerns (and protests) about alleged racist and sexist attitudes within its departments and in the models on which its AI is based.
The cases of minorities fighting Google
Here is just a taste: in 2017, sixty Google employees considered filing a class action over the pay gap and the prejudice against women dominating their workplace (More than 60 women consider suing Google, claiming sexism and a pay gap). In the same year, a disabled, transgender former employee used the internal company chat to accuse the company of racism and homophobia, and reported that the business had failed to protect minorities, the LGBTQ community and women in general (Google fired me because of my liberal ideas: a former employee sues the giant for discrimination).
In 2018, 20,000 Google employees went on strike against sexual harassment, discrimination and what they described as a hostile work environment. The trigger was a New York Times investigation into the $90 million golden handshake the company gave Andy Rubin, co-founder of Android, to make him leave after he had been accused of sexual misconduct (Over 20,000 Google employees participated in yesterday's mass walkout).
Internal memos against discrimination
In 2019, a Google manager accused the company of discriminating against pregnant women in an internal memo that went viral and was read by over 10,000 workers (Google Employee Alleges Discrimination Against Pregnant Women in Viral Memo).
And again: another employee, upon leaving the company, left a memo entitled "The Weight of Silence", in which he talked about the "burden of being black" and reported several racist behaviours by colleagues (Google Employee Writes Memo About 'The Burden of Being Black at Google').
A group of former employees from different departments, including the AI division, published an article online claiming they had been fired for organizing a union and for opposing unethical company decisions, including the appointment of the leader of an anti-LGBTQ, anti-immigrant think tank to the AI ethics board and the perpetuation of discrimination and harassment (Google fired us for organizing. We're fighting back).
Last but not least, Meredith Whittaker, a well-known researcher specializing in AI ethics, left Google after facing serious opposition within the organization for leading protests against gender discrimination (Google employee who helped lead protests leaves company).
Timnit Gebru’s case
And so we come back to 2020 and to Timnit Gebru's case. Without getting too deep into the details of this twisted story, the researcher claims she was dismissed because of her paper «On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?». The study criticizes the large language models used by businesses working on AI, including Google. Essentially, these models are artificial intelligences trained to recognize and manipulate natural language from huge volumes of text, commonly scraped from the web. This kind of training means the AI also absorbs discriminatory, racist and violent texts, which cannot all be sorted out and blocked (Twitter taught Microsoft's AI chatbot to be a racist asshole in less than a day).
Over 1,200 Google employees and over 1,500 academics have taken sides with Timnit Gebru (More than 1,200 Google workers condemn firing of AI scientist Timnit Gebru).
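The mechanism the Stochastic Parrots paper warns about can be illustrated with a toy sketch. The corpus below is entirely made up for illustration: it is deliberately skewed, and a model that simply learns statistics from it inherits that skew, just as large language models inherit the biases of the web text they are trained on.

```python
from collections import Counter

# Hypothetical, deliberately skewed toy corpus (illustration only).
corpus = [
    "he is an engineer",
    "he is a doctor",
    "he is an engineer",
    "she is a nurse",
    "she is a nurse",
    "she is a teacher",
]

PROFESSIONS = ("engineer", "doctor", "nurse", "teacher")

# Count how often each profession co-occurs with "he" vs "she".
pair_counts = Counter()
for sentence in corpus:
    words = sentence.split()
    pronoun = words[0]  # in this toy corpus the pronoun comes first
    for w in words:
        if w in PROFESSIONS:
            pair_counts[(pronoun, w)] += 1

def association(profession):
    """Return the pronoun the corpus most strongly links to a profession."""
    he = pair_counts[("he", profession)]
    she = pair_counts[("she", profession)]
    return "he" if he > she else "she"

print(association("engineer"))  # the skewed data yields "he"
print(association("nurse"))     # while "nurse" yields "she"
```

No line of this code is "racist" or "sexist" in itself: the stereotype lives entirely in the training data, and the statistics faithfully reproduce it. That is the core of the critique of ever-larger models trained on unfiltered web text.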
AI development models reproduce stereotypes
But what is the connection between workplace discrimination and artificial intelligence? The development of AI-based products and systems still relies on methods built on historically discriminatory models, selected by the people working in big tech companies and university labs. That is, mainly white, wealthy men.
As a result, AI built on stereotypical assumptions ends up reinforcing prejudice and discrimination in its applications.
Consider that, according to the Global AI Talent Report 2019 issued by Element AI, at the end of 2018 only 18% of the authors at the main AI conferences were women, and that women account for only 15% of Facebook's AI researchers and 10% of Google's. The figures for Black people are even worse: they make up only 4% of Facebook's workforce and 2.5% of Microsoft's. No data are available for other minorities.
Is AI racist, then? No: technologies cannot be intrinsically racist. However, if those who work with them and on them represent only part of humanity – white and male – to the exclusion of all diversity, then development models are selected by that group according to its own view of what is "normal", and that view is likely to become the standard.
Getting rid of discrimination in the real world also means getting rid of it in the digital one. And this would benefit everybody, everywhere.
Are you worried about the development of AI? Tweet @agostinellialdo
If you liked this post, you may also like “An appless world? For some this is feasible. Here is why”