Star AI researcher quits Google

Mountain View, Dec 5:

Prominent artificial intelligence scholar Timnit Gebru helped improve Google’s public image as a company that elevates Black computer scientists and questions harmful uses of AI technology.

But internally, Gebru, a leader in the field of AI ethics, was not shy about voicing doubts about those commitments – until she was pushed out of the company this week in a dispute over a research paper examining the societal dangers of an emerging branch of AI.

Gebru announced on Twitter she was fired. Google told employees she resigned. More than 1,200 Google employees have signed on to an open letter calling the incident "unprecedented research censorship" and faulting the company for racism and defensiveness.

"She deserves more than Google knew how to give, and now she is an all-star free agent who will continue to transform the tech industry," Joy Buolamwini, a researcher at the Massachusetts Institute of Technology who has collaborated with Gebru, said in an email Friday.

How Google will handle its AI ethics initiative and the internal dissent sparked by Gebru’s exit is one of a number of problems facing the company heading into the new year.

At the same time she was on her way out, the National Labor Relations Board on Wednesday cast another spotlight on Google's workplace. In a complaint, the NLRB accused the company of spying on employees during a 2019 effort to organize a union before the company fired two activist workers for engaging in activities allowed under U.S. law. Google has denied the allegations in the case, which is scheduled for an April hearing.

Google has also been cast as a profit-mongering bully by the U.S. Justice Department in an antitrust lawsuit alleging the company has been illegally abusing the power of its dominant search engine and other popular digital services to stifle competition. The company also denies any wrongdoing in that legal battle, which may drag on for years.

Expert from Ethiopia


Timnit Gebru is an Eritrean computer scientist who was born and raised in Ethiopia. Until December 2020, she was the technical co-lead of the Ethical Artificial Intelligence Team at Google.

Earlier, Gebru worked at Apple Inc., developing signal-processing algorithms for the first iPad.

The furor over Gebru’s abrupt departure is the latest incident raising questions about whether Google has strayed so far away from its original Don’t Be Evil motto that the company now routinely ousts employees who dare to challenge management.

And it’s exposed concerns beyond Google about whether showy efforts at ethical AI – ranging from a White House executive order this week to ethics review teams set up throughout the tech industry – are of little use when their conclusions might threaten profits or national interests.

Gebru has been a star in the AI ethics world who spent her early tech career working on Apple products and got her doctorate studying computer vision at the Stanford Artificial Intelligence Laboratory.

Pollsters' delight


Gebru combined deep learning with Google Street View imagery to estimate the demographics of United States neighbourhoods, showing that socioeconomic attributes such as voting patterns, income, race and education can be inferred from observations of cars. The study found, for instance, that if sedans outnumber pickup trucks in a neighbourhood, it is more likely to vote Democratic; if pickups outnumber sedans, Republican.
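The flavour of that inference can be sketched in a few lines of Python. This is a toy illustration only: the function name, the counts and the hard threshold are invented for this example, not taken from the study's actual model or data, which used deep learning over millions of images.

```python
# Toy illustration of inferring a voting lean from vehicle counts,
# in the spirit of the Street View study. The rule and numbers here
# are hypothetical, not the paper's model or data.

def predict_lean(sedans: int, pickups: int) -> str:
    """Predict a neighbourhood's voting lean from raw vehicle counts.

    Encodes the single reported association: more sedans than pickup
    trucks suggests a Democratic lean, the reverse a Republican lean.
    """
    if sedans > pickups:
        return "Democratic"
    if pickups > sedans:
        return "Republican"
    return "toss-up"

# Hypothetical per-neighbourhood counts of (sedans, pickups)
counts = {"suburb_a": (120, 40), "suburb_b": (35, 90)}
for name, (sedans, pickups) in counts.items():
    print(name, predict_lean(sedans, pickups))
```

The real system replaced the hand-written rule with a classifier trained on car makes and models detected in 50 million images, but the input-to-prediction shape is the same.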


She’s co-founder of the group Black in AI, which promotes Black employment and leadership in the field. She’s known for a landmark 2018 study that found racial and gender bias in facial recognition software.

Gebru had recently been working on a paper examining the risks of developing computer systems that analyze huge databases of human language and use that to create their own human-like text. The paper, a copy of which was shown to The Associated Press, mentions Google’s own new technology, used in its search business, as well as those developed by others.

Besides flagging the potential dangers of bias, the paper also cited the environmental cost of the enormous energy consumed to run the models – an important issue at a company that brags about its commitment to being carbon neutral since 2007 as it strives to become even greener.

Gender Shades


Gebru worked with Joy Buolamwini to investigate commercial facial recognition software, finding error rates of up to 35% for darker-skinned women, compared with less than 1% for lighter-skinned men.

When Gebru attended an artificial intelligence conference in 2016, she noticed that she was the only Black woman among 8,500 delegates.

Together with her colleague Rediet Abebe, Gebru founded Black in AI, which has held workshops at the Conference on Neural Information Processing Systems annually since 2017.

Google managers had concerns about omissions in the work and its timing, and wanted the names of Google employees taken off the study, but Gebru objected, according to an exchange of emails shared with the AP and first published by Platformer.

Gebru on Tuesday vented her frustrations about the process to an internal diversity-and-inclusion email group at Google, with the subject line "Silencing Marginalized Voices in Every Way Possible." Gebru said on Twitter that's the email that got her fired.

"Ousting Timnit for having the audacity to demand research integrity severely undermines Google's credibility for supporting rigorous research on AI ethics and algorithmic auditing," said Joy Buolamwini, a graduate researcher at the Massachusetts Institute of Technology who co-authored the 2018 facial recognition study with Gebru.

Fairness, Ethics


Gebru also worked on Microsoft's Fairness, Accountability, Transparency, and Ethics in AI (FATE) team. In 2017, Gebru spoke at the Fairness and Transparency conference, where MIT Technology Review interviewed her about biases that exist in AI systems and how adding diversity to AI teams can address that issue.

Asked "How does the lack of diversity distort artificial intelligence and specifically computer vision?", Gebru pointed out that biases exist among the software developers themselves. Gebru has further said she believes facial recognition is too dangerous to be used for law enforcement and security purposes right now.
