How Google delivers reliable information in Search
For many people, Google Search is a place they go when they want to find answers to a question, whether it’s to learn more about an issue or to fact check a friend quoting a stat about their favorite team. We get billions of queries every day, and one of the reasons people continue to come to Google is that they know they can often find relevant, reliable information that they can trust.
Delivering a high-quality search experience is core to what makes Google so helpful. From the early days when we introduced the PageRank algorithm, understanding the quality of web content was what set Google apart from other search engines.
But people often ask: What do you mean by quality, and how do you figure out how to ensure that the information people find on Google is reliable?
A simple way to think about it is that there are three key elements to our approach to information quality:
First, we fundamentally design our ranking systems to identify information that people are likely to find useful and reliable.
To complement those efforts, we also have developed a number of Search features that not only help you make sense of all the information you’re seeing online, but that also provide direct access to information from authorities—like health organizations or government entities.
Finally, we have policies for what can appear in Search features to make sure that we’re showing high quality and helpful content.
With these three approaches, we’re able to continue to improve Search and raise the bar on quality to deliver a trusted experience for people around the world. Let’s take a closer look at how we approach each of these areas.
Orienting our ranking systems around quality
To understand what results are most relevant to your query, we have a variety of language understanding systems that aim to match the words and concepts in your query with related information in our index. This ranges from systems that understand things like misspellings or synonyms, to more advanced AI-based systems like our BERT-based language capabilities that can understand more complex, natural-language queries.
Updates to our language understanding systems certainly make Search results more relevant and improve the experience overall. But when it comes to high-quality, trustworthy information, even with our advanced information understanding capabilities, search engines like Google do not understand content the way humans do. We often can’t tell from the words or images alone if something is exaggerated, incorrect, low-quality or otherwise unhelpful.
Instead, search engines largely understand the quality of content through what are commonly called “signals.” You can think of these as clues about the characteristics of a page that align with what humans might interpret as high quality or reliable. For example, the number of quality pages that link to a particular page is a signal that a page may be a trusted source of information on a topic.
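To make the link-based signal above concrete, here is a minimal sketch of the PageRank idea mentioned earlier: pages accumulate rank from the pages that link to them, weighted by how authoritative those linking pages are themselves. This toy implementation and graph are purely illustrative; production ranking combines many signals beyond links.

```python
# Minimal PageRank sketch (illustrative only; real ranking systems
# combine many signals, not just links).
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with uniform rank
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:  # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:  # pass rank along each outgoing link
                for target in outlinks:
                    new_rank[target] += damping * rank[page] / len(outlinks)
        rank = new_rank
    return rank

# Hypothetical three-page web: both "a" and "b" link to "c",
# so "c" ends up with the highest rank.
ranks = pagerank({"a": ["b", "c"], "b": ["c"], "c": ["a"]})
```

The intuition matches the text: a page that many quality pages link to gathers more rank, which serves as one clue that it may be a trusted source.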
We consider a variety of other quality signals, and to understand if our mixture of quality signals is working, we run a lot of tests. We have more than 10,000 search quality raters, people who collectively perform millions of sample searches and rate the quality of the results according to how well they measure up against what we call E-A-T: Expertise, Authoritativeness and Trustworthiness.
Raters, following instructions anyone can read in our Search Quality Rater Guidelines, evaluate results for sample queries and assess how well the pages listed appear to demonstrate these characteristics of quality.
We recently explained the search rater process in more depth, but it’s worth noting again the ratings we receive are not used directly in our ranking algorithms. Instead, ratings provide data that, when taken in aggregate, help us measure how well our systems are working to deliver quality content that’s aligned with how people—across the country and around the world—evaluate information. This data helps us to improve our systems and ensure we’re delivering high quality results.
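The aggregate use of ratings described above can be sketched in miniature: rather than feeding individual ratings into ranking, scores for the same sample queries are averaged across two versions of a system to see whether a proposed change improves quality overall. The systems, scale and scores below are hypothetical placeholders; in practice this involves millions of judgments.

```python
from statistics import mean

# Hypothetical rater scores (0-10 scale) for the same sample queries,
# judged against results from the current system and a proposed change.
ratings = {
    "current":  [7, 8, 6, 9, 7, 8],
    "proposed": [8, 8, 7, 9, 8, 9],
}

# Individual ratings are never used directly in ranking; only the
# aggregate tells us whether the change moved quality up or down.
averages = {system: mean(scores) for system, scores in ratings.items()}
improvement = averages["proposed"] - averages["current"]
```

A positive aggregate difference is evidence the change is worth launching; a flat or negative one sends it back for more work.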
For topics where quality information is particularly important—like health, finance, civic information, and crisis situations—we place an even greater emphasis on factors related to expertise and trustworthiness. We’ve learned that sites that demonstrate authoritativeness and expertise on a topic are less likely to publish false or misleading information, so if we can build our systems to identify signals of those characteristics, we can continue to provide reliable information. The design of these systems is our greatest defense against low-quality content, including potential misinformation, and is work that we’ve been investing in for many years.
Info from experts, right in Search
In most cases, our ranking systems do a very good job of making it easy to find relevant and reliable information from the open web, particularly for topics like health, or in times of crisis. But in these areas, we also develop features to make information from authoritative organizations like local governments, health agencies and elections commissions available directly on Search.
For example, we’ve long had knowledge panels in Search with information about health conditions and symptoms, vetted by medical experts. More recently, we saw a significant increase in people searching for information about unemployment benefits, so we worked with administrative agencies to highlight details about eligibility and how to access this civic service. And for many years, we’ve offered features that help you find out how to vote and where your polling place is. Through the Google Civic Information API, we help other sites and services make this information available across the web. This type of information is not always easy to find, especially in rapidly changing situations, so features like these help ensure people get critical guidance when they need it most.
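As a rough illustration of how a site might use the Google Civic Information API mentioned above, the sketch below builds a request URL for the public `voterinfo` endpoint, which returns polling place and contest data for an address. The address and API key are placeholders, and error handling is omitted.

```python
from urllib.parse import urlencode

# Public v2 endpoint of the Google Civic Information API.
BASE_URL = "https://www.googleapis.com/civicinfo/v2/voterinfo"

def voterinfo_url(address, api_key):
    """Build a voterinfo request URL for the given street address."""
    query = urlencode({"address": address, "key": api_key})
    return f"{BASE_URL}?{query}"

# Placeholder address and key for illustration; an HTTP GET on this URL
# returns JSON describing polling locations and contests.
url = voterinfo_url("1600 Amphitheatre Pkwy, Mountain View, CA",
                    "YOUR_API_KEY")
```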
Helping you understand information you see
For many searches, people aren’t necessarily looking for a quick fact, but rather to understand a more complex topic. We also know that people come to Search having heard information elsewhere, with the aim of seeing what others are saying to form their own opinion.
In these cases, we want to give people tools to make sense of the information they’re seeing online, to find reliable sources and explore the full picture about a topic.
For example, we make it easy to spot fact checks in Search, News, and now in Google Images by displaying fact check labels. These labels come from publishers that use ClaimReview schema to mark up fact checks they have published. For years now we’ve offered Full Coverage on Google News and Search, helping people explore and understand how stories have evolved and explore different angles and perspectives.
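The fact check labels above come from ClaimReview structured data that publishers embed in their pages, typically as JSON-LD. The sketch below assembles an example document using the schema.org `ClaimReview` field names; the URL, publisher and claim are illustrative.

```python
import json

# Illustrative ClaimReview structured data, following the schema.org
# ClaimReview type; a publisher would embed the JSON-LD output in a
# <script type="application/ld+json"> tag on the fact-check page.
claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://example.com/fact-check/moon-cheese",
    "claimReviewed": "The moon is made of cheese.",
    "author": {"@type": "Organization", "name": "Example Fact Checks"},
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 1,
        "bestRating": 5,
        "worstRating": 1,
        "alternateName": "False",
    },
}

jsonld = json.dumps(claim_review, indent=2)
```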
Protecting Search features through policies
We also offer more general Search features, like knowledge panels, featured snippets and Autocomplete, that highlight and organize information in unique ways or predict queries you might want to type. Because of the way these features highlight information in Search, we hold ourselves to a very high standard for quality and have guidelines around what content should appear in those spaces.
Within these features, we first and foremost design our automated ranking systems to show helpful content. But our systems aren’t always perfect. So if our systems fail to prevent policy-violating content from appearing, our enforcement team will take action in accordance with our policies.
To learn more about how we approach policies for our Search features, visit this post. And if you’re still looking for more details about Search, check out past articles in our How Search Works series.