Google wants to limit research papers that show AI in a bad light

By Rahul Vaimal, Associate Editor

Google is currently under fire for apparently forcing out a researcher whose work warned of bias in artificial intelligence (AI), and a new report says that other researchers who do similar work at the company have been asked to “strike a positive tone” and undergo additional reviews on “sensitive topics.”

New policies

The report, which cites company researchers and internal documents, says Google introduced the new restrictions over the past year, including an extra round of review for papers on certain subjects and a marked increase in executive interference in the later stages of research.

The new review procedure asks that researchers consult with legal, policy and public relations teams before pursuing topics such as face and sentiment analysis and categorizations of race, gender or political affiliation, according to internal webpages.

Internal conflict

Tensions between Google and some of its staff broke into view this month after the abrupt exit of scientist Timnit Gebru, who led a 12-person team focused on ethics in artificial intelligence software. Her departure appears to have been forced under unclear circumstances, following friction between her and management over her team’s work.

Ms. Gebru says Google fired her after she questioned an order not to publish research claiming AI that mimics speech could disadvantage minority populations.

“Sensitive Topics”

The “sensitive” topics identified by Google include “the oil industry, China, Iran, Israel, COVID-19, home security, insurance, location data, religion, self-driving vehicles, telecoms and systems that recommend or personalize web content.”

The explosion in research and development of AI across the tech industry has prompted authorities in the United States and elsewhere to propose rules for its use. Some have cited scientific studies showing that facial analysis software and other AI can perpetuate biases or erode privacy.

Google and AI

Google has incorporated AI throughout its services in recent years, using the technology to interpret complex search queries, determine recommendations on YouTube and autocomplete sentences in Gmail. Its researchers published more than 200 papers in the last year on developing AI responsibly, out of more than 1,000 projects in total.

According to reports, the Google paper whose authors were told to strike a positive tone discusses recommendation AI, which services like YouTube employ to personalize users’ content feeds. The paper raises “concerns” that this technology can promote “disinformation, discriminatory or otherwise unfair results” and “insufficient diversity of content,” as well as lead to “political polarization.”
