
Oversight frameworks for content-sharing platforms



Governments, tech platforms, and civil society groups are all focused on how best to deal with illegal and problematic online content. There’s broad agreement that people should be able to create, communicate, and find information online, and that misuse of content-sharing platforms like social networks and video-sharing sites should be prevented.

We’ve been working on this challenge for years, using both computer science tools and human reviewers to identify and stop a range of online abuse, from “get rich quick” schemes to disinformation to child sexual abuse material. We respond promptly to valid notices of specific illegal content, and we prohibit other types of content across our services. A mix of people and technology helps us identify inappropriate content and enforce our policies, and we continue to improve our practices. Earlier this year we issued an in-depth review of how we combat disinformation, and YouTube continues to regularly update its Community Guidelines Enforcement Report.
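To make that mix of people and technology concrete, here is a minimal sketch of how a triage step in a moderation pipeline can route content between automated action and human review. The classifier score, thresholds, and function names are hypothetical illustrations for this post, not a description of any platform’s actual systems.

```python
from dataclasses import dataclass

# Hypothetical policy thresholds; a real system would tune these per abuse
# type (spam, disinformation, etc.) and per service.
AUTO_REMOVE_THRESHOLD = 0.98   # high confidence: act automatically
HUMAN_REVIEW_THRESHOLD = 0.60  # uncertain: route to a trained reviewer

@dataclass
class ModerationDecision:
    action: str    # "remove", "human_review", or "allow"
    score: float   # assumed classifier confidence that content violates policy

def triage(content_id: str, violation_score: float) -> ModerationDecision:
    """Route content based on an (assumed) ML classifier's violation score.

    High-confidence violations are actioned automatically; borderline cases
    go to human reviewers, whose verdicts can also feed back into training.
    """
    if violation_score >= AUTO_REMOVE_THRESHOLD:
        return ModerationDecision("remove", violation_score)
    if violation_score >= HUMAN_REVIEW_THRESHOLD:
        return ModerationDecision("human_review", violation_score)
    return ModerationDecision("allow", violation_score)

# Example: a borderline item is escalated to a person rather than auto-actioned.
print(triage("video-123", 0.74))  # -> action='human_review'
```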

Tackling this problem is a shared responsibility. Many laws, covering everything from consumer protection to defamation to privacy, already govern online content. Safe harbors and Good Samaritan laws for online platforms support the free flow of information, innovation, and economic growth, while giving platforms the legal certainty they need to combat problematic content. Over the internet’s history, many countries have not only established criteria to qualify for safe harbors, but also developed codes of practice (like the European Union’s Code of Conduct on Countering Illegal Hate Speech and Code of Practice on Disinformation). And companies have worked together, as with the Global Internet Forum to Counter Terrorism, a coalition sharing information on curbing online terrorism. Approaches continue to evolve; for instance, earlier this month we joined other companies and countries in signing the Christchurch Call to Action to Eliminate Terrorist and Violent Extremist Content Online.

We’ve previously shared our experiences in order to promote smart regulation in areas like privacy, artificial intelligence, and government surveillance, and I recently wrote about specific legal frameworks for combating illegal content online. In that spirit, we are offering some ideas for approaching oversight of content-sharing platforms:

Clarity - Content-sharing platforms are working to develop and enforce responsible content policies that establish baseline expectations for users and articulate a clear basis for removal of content as well as for suspension or closure of accounts. But it’s also important for governments to draw clear lines between legal and illegal speech, based on evidence of harm and consistent with norms of democratic accountability and international human rights. Without clear definitions, there is a risk of arbitrary or opaque enforcement that limits access to legitimate information.

Suitability - It’s important for oversight frameworks to recognize the different purposes and functions of different services. Rules that make sense for social networks, video-sharing platforms, and other services primarily designed to help people share content with a broad audience may not be appropriate for search engines, enterprise services, file storage, communication tools, or other online services, where users have fundamentally different expectations and uses. Different types of content may likewise call for different approaches.

Transparency - Meaningful transparency promotes accountability. We launched our first Transparency Report more than eight years ago, and we continue to extend our transparency efforts over time. Done thoughtfully, transparency can promote best practices, facilitate research, and encourage innovation, without opening those processes to abuse.

Flexibility - We and other tech companies have pushed the boundaries of computer science in identifying and removing problematic content at scale. These technical advances require flexible legal frameworks, not static or one-size-fits-all mandates. Likewise, legal approaches should recognize the varying needs and capabilities of startups and smaller companies.

Overall quality - The scope and complexity of modern platforms require a data-driven approach that focuses on overall results rather than anecdotes. While we will never eliminate all problematic content, we should recognize progress in making that content less prominent. Reviews under the European Union’s codes on hate speech and disinformation offer a useful example of assessing overall progress against a complex set of goals.

Cooperation - International coordination should strive to align on broad principles and practices. While there is broad international consensus on issues like child sexual abuse imagery, in other areas individual countries will make their own choices about the limits of permissible speech, and one country should not be able to impose its content restrictions on another.

The recent Christchurch Call is a powerful reminder of what we can do when a range of stakeholders work together to address the challenges of online content. The internet has expanded access to information, bringing incredible benefits to people around the world. And as with any new information technology, societies and cultures are developing new social norms, institutions, and laws to address new challenges and opportunities. We look forward to contributing to that extraordinarily important project.
