Google Fires Ethical AI Founder and Co-Lead Amid Diversity Disputes

On February 20, Google fired Margaret Mitchell, the founder and co-lead of the company’s Ethical AI group. Google claims she was fired for violating its code of conduct and security policies.

Mitchell herself first announced the news in a tweet, which was then followed by Google’s official statement. Her dismissal comes shortly after Google fired Timnit Gebru, the team’s other co-lead.

One source claims that Mitchell had been locked out of her corporate email account since last month, ostensibly to prevent her from searching official correspondence for evidence backing Gebru’s claims of discrimination and harassment.

However, Google counters that Mitchell’s violations “included the exfiltration of confidential business-sensitive documents and private data of other employees”.

Speaking to the media, Google revealed that an internal investigation into Mitchell had been ongoing since January 19. The findings of the investigative team were examined by a review committee, leading to her dismissal.

Weeks earlier, Timnit Gebru, the Ethical AI group’s co-lead known for exposing bias in facial recognition systems, was fired. Gebru claims her dismissal was unjust, a punishment for questioning Google’s decision to censor papers critical of its AI systems. One such paper, co-authored by Mitchell and Gebru in December 2020, examined how AI systems that mimic human language could harm marginalized people.

Google later announced changes to its research and diversity policies, followed by an investigation into Gebru’s termination. Jeff Dean apologized to staff in an internal email for how Gebru’s departure was handled, writing: “I heard and acknowledge what Dr Gebru’s exit signified to female technologists, to those in the Black community and other underrepresented groups who are pursuing careers in tech, and to many who care deeply about Google’s responsible use of AI. It led some to question their place here, which I regret.”

Senior researcher Alex Hanna wrote that the team did not learn of Marian Croak’s appointment until it was announced publicly. “We were told to trust the process, trust in decision-makers like Marian Croak to look out for our best interests,” she wrote. “But these decisions were made behind our backs.”