Artificial Intelligence

Google is changing its paper review process following internal revolt


Google is making changes to how it reviews papers following an internal revolt over the company’s controversial practices.

Leading AI ethics researcher Timnit Gebru was fired from Google in December last year after sending colleagues an email that criticised the company’s practices.

Gebru claims Google blocks the publication of papers that could attract criticism of the company’s work, including her most recent paper, which questioned whether language models can be too big, who benefits from them, and whether they can increase prejudice and inequalities.

In an email to employees following Gebru’s firing, Jeff Dean, Head of Google Research, said:

“Papers often require changes during the internal review process (or are even deemed unsuitable for submission). Unfortunately, this particular paper was only shared with a day’s notice before its deadline — we require two weeks for this sort of review — and then instead of awaiting reviewer feedback, it was approved for submission and submitted.

A cross-functional team then reviewed the paper as part of our regular process and the authors were informed that it didn’t meet our bar for publication and were given feedback about why.”

Many Googlers haven’t been satisfied with the company’s response. Several high-profile AI experts left Google over Gebru’s firing and other similar incidents.

Then, adding to a string of bad PR, Google fired Margaret Mitchell, another respected member of the company’s ethical AI team, last week.

In a statement, Google claimed it fired Mitchell after finding “multiple violations of our code of conduct, as well as of our security policies, which included the exfiltration of confidential business-sensitive documents and private data of other employees.”

Not all Googlers believe the company’s claims.

Following the revolt, Google finally appears to be taking the criticism on board and changing its internal practices. In a recording of an hour-long staff meeting heard by Reuters, Google Research executives said they were working to regain trust after the string of incidents.

In the recording, Dean reportedly said the “sensitive topics” review “is and was confusing”. He has tasked Senior Research Director Zoubin Ghahramani with simplifying the process.

Of critical research, Ghahramani said: “We need to be comfortable with that discomfort”.

Google’s sensitive topics review process – which covers issues such as sentiment analysis, bias, military applications, and political affiliations – is said to have required at least three AI papers to be modified so as not to cause the company any embarrassment.

Ghahramani’s comment may help to quell some of the revolters, while others will need more convincing. After all, actions speak louder than words.

(Photo by Mitchell Luo on Unsplash)







Source: artificialintelligence-news.com

ASu
I am a tech enthusiast and a keen learner, currently pursuing a Bachelor’s in Computer Science at the University of Delhi.
https://technewz.org
