Before Gebru joined Google in 2018, she worked with MIT researcher Joy Buolamwini on a project called Gender Shades, which found that facial analysis technology from IBM and Microsoft was highly accurate for white men but much less accurate for Black women. That work helped push U.S. lawmakers and technologists to question and test the accuracy of facial recognition across different demographics, and this year Microsoft, IBM, and Amazon announced that they would stop selling the technology. Gebru also cofounded an influential conference called Black in AI that seeks to increase the diversity of researchers contributing to the field.
Gebru’s exit was set in motion when she worked with researchers inside and outside Google on a paper on ethical issues raised by recent advances in AI language software.
Researchers have made advances on tasks like generating text and answering questions by creating huge machine learning models trained on large swaths of online text. Google has said the technology has made its lucrative search engine more powerful. But researchers have also shown that creating these more powerful models consumes large amounts of electricity because of the vast computational resources required, and they have documented how the models can reproduce biased language about gender and race found online.
According to Gebru, her draft discussed those issues and urged responsible use of the technology, for example by documenting the data used to create language models. She was troubled when a senior manager insisted that she and the other Google authors either remove their names from the paper or withdraw it altogether, especially since she could not learn how the draft had been reviewed. “I felt like we were being censored and thought this had implications for all of ethical AI research,” she says.
Gebru says she failed to convince the senior manager to work through the issues with the paper; she says the manager insisted that she remove her name. On Tuesday, Gebru emailed back offering a deal: if she received a full explanation of what had happened, and the research team met with management to establish a process for handling future research fairly, she would remove her name from the paper. If not, she would arrange to leave the company at a later date, freeing her to publish the paper without Google’s affiliation.
Gebru also emailed a wider list within Google’s AI research group saying that managers’ attempts to improve diversity had been ineffective. It included a description of her dispute over the language paper as an example of how Google managers can silence people from marginalized groups. Platformer published a copy of the email on Thursday.
On Wednesday, Gebru says, she learned from her direct reports that they had been told she had resigned from Google and that her resignation had been accepted. She discovered that her corporate account had been disabled.
An email sent by a manager to Gebru’s personal address stated that her resignation would take effect immediately because she had sent an email reflecting “behavior that is inconsistent with the expectations of a Google manager.” Gebru took to Twitter, and outrage among AI researchers quickly spread online.
Many critics of Google, both inside and outside the company, noted that in one fell swoop it had damaged the diversity of its AI workforce and lost a prominent advocate for improving that diversity. Gebru suspects her treatment was motivated in part by her outspokenness about diversity and Google’s treatment of people from marginalized groups. “We asked for representation, but there are hardly any Black people in Google Research and, as far as I can see, none in leadership,” she says.