Responsible AI: Putting our principles into action



Every day, we see how AI can help people around the world and make a positive difference in their lives: from helping radiologists detect lung cancer, to increasing literacy rates in rural India, to conserving endangered species. These examples only scratch the surface. AI could also save lives through natural disaster mitigation, with our flood forecasting initiative and our research on predicting earthquake aftershocks.

As AI expands our reach into the once-unimaginable, it also sparks conversations around topics like fairness and privacy. These are important conversations, and they require engagement from societies globally. A year ago, we announced Google’s AI Principles, which help guide the ethical development and use of AI in our research and products. Today we’re sharing updates on that work.

Internal education

We’ve educated and empowered our employees to understand the ethical issues AI raises and to think critically about how to put AI into practice responsibly. This past year, thousands of Googlers have completed training in machine learning fairness. We’ve also piloted ethics training sessions across four offices and organized an AI ethics speaker series hosted on three continents.

Tools and research

Over the last year, we’ve focused on sharing knowledge, building technical tools and product updates, and cultivating a framework for developing responsible and ethical AI that benefits everyone. This includes releasing more than 75 research papers on topics in responsible AI, including machine learning fairness, explainability, privacy, and security, and developing and open-sourcing 12 new tools. For example:

  • The What-If Tool is a new feature that lets users analyze an ML model without writing code. It enables users to visualize biases and the effects of various fairness constraints as well as compare performance across multiple models.
  • Google Translate reduces gender bias by providing feminine and masculine translations for some gender-neutral words on the Google Translate website.
  • We expanded our work in federated learning, an approach to machine learning that lets developers train AI models and make products smarter without raw data ever leaving users’ devices. It’s also now open-sourced as TensorFlow Federated (see the training sketch after this list).
  • Our People + AI Guidebook is a toolkit of methods and decision-making frameworks for how to build human-centered AI products. It launched in May and includes contributions from 40 Google product teams. 
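To make the federated learning approach concrete, here is a minimal simulation sketch using TensorFlow Federated’s Federated Averaging API. The toy Keras model, the input specification, and the client_datasets collection are illustrative assumptions rather than anything from our products, and exact signatures vary across TFF releases:

```python
# Minimal sketch of simulated federated training with TensorFlow Federated.
# The model, shapes, and `client_datasets` are illustrative placeholders; in
# a real deployment, each client's raw data never leaves its device.
import tensorflow as tf
import tensorflow_federated as tff

def model_fn():
    # Any Keras model can be wrapped for federated training.
    keras_model = tf.keras.Sequential([
        tf.keras.layers.Dense(10, activation='softmax', input_shape=(784,)),
    ])
    return tff.learning.from_keras_model(
        keras_model,
        input_spec=(
            tf.TensorSpec(shape=[None, 784], dtype=tf.float32),
            tf.TensorSpec(shape=[None], dtype=tf.int32),
        ),
        loss=tf.keras.losses.SparseCategoricalCrossentropy(),
        metrics=[tf.keras.metrics.SparseCategoricalAccuracy()],
    )

# Federated Averaging: each round, clients train locally and only model
# updates (never raw examples) are aggregated on the server.
trainer = tff.learning.build_federated_averaging_process(
    model_fn,
    client_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=0.1),
)

state = trainer.initialize()
for round_num in range(5):
    # `client_datasets` would be a list of per-client tf.data.Dataset objects.
    state, metrics = trainer.next(state, client_datasets)
```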

We continue to update the Responsible AI Practices quarterly, as we reflect on the latest technical ideas and work at Google.

Review process

Our review process helps us meet our AI Principles. We encourage all Google employees to consider how the AI Principles affect their projects, and we’re evolving our processes to ensure we’re thoughtfully considering and assessing new projects, products, and deals. In each case we consider benefits and assess how we can mitigate risks. Here are two examples:

Cloud AI Hub

With Cloud AI Hub, enterprises and other organizations can share and more readily access a variety of already-trained machine learning models. Much of AI Hub’s content would be published by organizations outside of Google, which would make it difficult for us to evaluate all of the content against the AI Principles. We therefore evaluated the ethical considerations around releasing the AI Hub, such as the potential for harmful dual use, abuse, or misleading information.

In the course of the review, the team developed a two-tiered strategy for handling potentially risky and harmful content: 

  1. Encouraging community members to weigh in on issues like unfair bias. To support the community, Cloud AI provides resources (like the inclusive ML guide) to help users identify trustworthy content.
  2. Crafting Terms of Service for Cloud AI Hub, with specific sections on content and conduct restrictions.

These safeguards made it more likely that the AI Hub’s content ecosystem would be useful and well-maintained, and as a result we went ahead with launching the AI Hub.

Text-to-speech (TTS) research paper

A research group within Google wrote an academic paper that addressed a major challenge in AI research: systems often need to be retrained from scratch, with huge amounts of data, to take on even slightly different tasks. This paper detailed an efficient text-to-speech (TTS) network that allows a system to be trained once and then adapted to new speakers with much less time and data.
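The paper’s actual model is more sophisticated than we can reproduce here, but the adaptation idea can be sketched at a high level: a large multi-speaker network is trained once and frozen, and adapting to a new voice means fitting only a small per-speaker embedding on a few recordings. Everything in the sketch below (the tiny stand-in network, the embedding size, the loss) is an illustrative assumption, not the paper’s architecture:

```python
# Illustrative sketch only: adapting a frozen, pretrained multi-speaker TTS
# network to a new voice by optimizing a small speaker embedding. This shows
# the general pattern, not the architecture from the paper.
import tensorflow as tf

EMBED_DIM = 64  # assumed embedding size, for illustration

# Stand-in for a large pretrained network that maps
# (text features, speaker embedding) -> audio features. Frozen after pretraining.
pretrained_tts = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation='relu'),
    tf.keras.layers.Dense(80),  # e.g. mel-spectrogram frames
])
pretrained_tts.trainable = False

# The only new parameters for the new speaker: a single embedding vector.
new_speaker_embedding = tf.Variable(tf.random.normal([EMBED_DIM]))
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-2)

def adaptation_step(text_features, target_audio_features):
    """One gradient step on a small sample of the new speaker's audio."""
    with tf.GradientTape() as tape:
        batch = tf.shape(text_features)[0]
        # Condition the frozen network on the trainable speaker embedding.
        speaker = tf.broadcast_to(new_speaker_embedding, [batch, EMBED_DIM])
        predicted = pretrained_tts(tf.concat([text_features, speaker], axis=-1))
        loss = tf.reduce_mean(tf.square(predicted - target_audio_features))
    # Gradients update only the embedding, which is why adaptation needs far
    # less data and compute than retraining the whole system from scratch.
    grads = tape.gradient(loss, [new_speaker_embedding])
    optimizer.apply_gradients(zip(grads, [new_speaker_embedding]))
    return loss
```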


While smarter text-to-speech networks could help individuals with voice disabilities, ALS, or tracheotomies, we recognize the potential for such technologies to be used for harmful applications, like synthesizing an individual’s voice for deceptive purposes.


Ultimately, we determined that the technology described in the paper had limited potential for misuse for several reasons, including the quality of data required to make it work: arbitrary recordings from the internet would not satisfy those requirements. In addition, there are enough differences between samples generated by the network and real speakers’ voices for listeners to identify what’s real and what’s not. As a result, we concluded that this paper aligned with our AI Principles, and the exercise reinforced our commitment to identifying and preempting the potential for misuse.

Engaging with external stakeholders

Ongoing dialogue with the broader community is essential to developing socially responsible AI. We’ve met with policymakers and the tech community, participated in more than 100 workshops, research conferences, and summits, and directly engaged with more than 4,000 stakeholders across the world.


As advances in AI continue, we’ll continue to share our perspectives and engage with academia, industry, and policymakers to promote the responsible development of AI. We support smart regulation tailored to specific sectors and use cases, and earlier this year we published a white paper to help promote pragmatic and forward-looking approaches to AI governance. It outlines five areas where governments should work with civil society and AI practitioners to cultivate a framework for AI.


We recognize there’s always more to do, and we’ll continue working with leaders, policymakers, academics, and other stakeholders from across industries to tackle these important issues. Having these conversations, doing the proper legwork, and ensuring the inclusion of the widest array of perspectives are critical to ensuring that AI joins the long list of technologies transforming life for the better.
