
Making visual content more useful in Search



When Search first began, our results were just plain text. But on February 24, 2000, something changed. It was the day after the Grammy Awards, and we noticed people were searching like crazy for Jennifer Lopez’s green dress. It was clear right away that people were looking for visual information, not just plain text. In the years that followed, the growth of mobile devices and small screens made it even more important to be able to quickly scan visual results.

Since then, we’ve been working to include more imagery and videos in Search, whether it’s illustrated weather reports, live sports clips, or our visual recipe experience. We've been able to do this in part thanks to advancements in computer vision, which help us extract concepts from images. We model hundreds of millions of fine-grained concepts, which we apply to every image and video in our index. For example, an image of a tiger might generate concepts like “feline,” “animal,” or “big cat.” This lets us identify a picture from its pixels alone, without relying on the words on the page around it.
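To make the idea concrete, here’s a minimal sketch of pixel-based concept tagging using an off-the-shelf classifier. The pretrained torchvision model and its ImageNet labels are a stand-in for the much larger, multi-label concept models described above, and the input file name is hypothetical.

```python
import torch
from torchvision import models
from torchvision.io import read_image

# Pretrained classifier as a stand-in for a fine-grained concept model.
weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()

# Hypothetical input file; any RGB image works.
image = read_image("tiger.jpg")
batch = preprocess(image).unsqueeze(0)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)[0]

# Report the top concepts inferred purely from pixels.
labels = weights.meta["categories"]
for p, idx in zip(*probs.topk(5)):
    print(f"{labels[idx]}: {p:.2f}")
```

For a tiger photo, the top labels would typically include “tiger” and neighboring cat categories, echoing the “feline”/“big cat” example above.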

We’ve also made a number of updates to our most immersive experience for searching visual content, Google Images. These changes are aimed at helping people better find information visually, and making it easier to pursue the things people come to Google Images for help with, like shopping for products, styling a room, or tackling a DIY project.

Today, we’re introducing three fundamental shifts in how we think about Search, along with a range of new features that use AI to make your search experience more visual and enjoyable. And when you know you want to search visually, we’re making the Google Images experience even more useful and powerful.

Immersive visual content with stories

Earlier this year we worked with the AMP Project to announce AMP stories, an open source library that makes it easy for anyone to create a story on the open web. While it’s still early, we’ve seen many publishers experimenting with this format and providing people with a more visual way to get information from Search and News. To help people discover these visual stories, we’ll also begin to show this content in Google Images and Discover.

Now we’re beginning to use AI to intelligently construct AMP stories and surface this content in Search. We’re starting today with stories about notable people—like celebrities and athletes—providing a glimpse into facts and important moments from their lives in a rich, visual format. This format lets you easily tap through to articles for more information and provides a new way to discover content from the web.

[Image: Giada story]

Visually preview topics with featured videos in Search

Videos can be a useful way to learn about a new topic, but it can be hard to find the most relevant videos for exploring all the different facets of that topic. Using computer vision, we’re now able to deeply understand the content of a video and help you quickly find the most useful information in a new experience called featured videos.

Imagine you’re planning a hiking trip to Zion National Park, and you want to check out videos of what to expect and ideas for sights to visit. Since you’ve never been there, you might not know which specific landmarks to look for when mapping out your trek.

With featured videos, we take our deep understanding of the topic space (in this case, the most important landmarks in the park) and show the most relevant videos for those subtopics. For Zion National Park, you might see a video for each attraction, like Angels Landing or the Narrows. This provides a more holistic view of the video content available for a topic, and opens up new paths to discover more.
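As a rough illustration of the matching step, the sketch below pairs each subtopic of a query with its best-matching video. The subtopic list, the video titles, and the TF-IDF-over-titles scoring are all assumptions made for illustration; the real system relies on deep video understanding, not title text.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical subtopics and video titles; a real system would use
# learned video and query embeddings, not title TF-IDF.
subtopics = ["Angels Landing", "The Narrows", "Emerald Pools"]
videos = {
    "vid_1": "Hiking Angels Landing: the chains section and summit views",
    "vid_2": "Wading the Narrows: gear tips for the Virgin River",
    "vid_3": "An easy family walk to the Emerald Pools waterfalls",
}

vectorizer = TfidfVectorizer().fit(list(videos.values()) + subtopics)
video_ids = list(videos)
video_vectors = vectorizer.transform(videos.values())

# For each subtopic, surface the single best-matching video.
for subtopic in subtopics:
    sims = cosine_similarity(vectorizer.transform([subtopic]), video_vectors)[0]
    print(f"{subtopic} -> {video_ids[sims.argmax()]} (score {sims.max():.2f})")
```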

[Image: Zion National Park]

Visual journeys with Google Images

People coming to Google Images today are looking to find information, or for help doing something—not just to see an image. So we’re announcing several new features to help you find visual information more easily, and pursue activities where having a visual reference is important.

Tapping the power of the web page

When you come to Google Images for help with a task, the page where an image lives matters: the page you visit should help you take the next step in whatever you’re trying to do. And many visual searches don’t have a single right answer, so you want to scan plenty of images and information before you find what you need.

Over the last year, we’ve overhauled the Google Images algorithm to rank results that have both great images and great content on the page. For starters, the authority of a web page is now a more important signal in the ranking. If you’re doing a search for DIY shelving, the site behind the image is now more likely to be a site related to DIY projects. We also prioritize fresher content, so you’re more likely to visit a site that has been updated recently.

Also, it wasn’t long ago that if you visited an image’s web page, it might be hard to find the specific image you were looking for when you got there. We now prioritize sites where the image is central to the page, and higher up on the page. So if you’re looking to buy a specific pair of shoes, a product page dedicated to that pair of shoes will be prioritized above, say, a category page showing a range of shoe styles.  
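To make the combination of signals concrete, here’s a toy scoring function in the spirit of the last two paragraphs. The signal names, value ranges, weights, and linear blend are all invented for illustration; Google has not published the actual ranking formula.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ImageResult:
    page_authority: float    # 0..1: topical authority of the host site
    last_updated: date       # freshness of the page
    image_centrality: float  # 0..1: how central the image is to the page
    page_position: float     # 0..1: 1.0 means the image sits at the top

def score(result: ImageResult) -> float:
    """Blend the signals into one ranking score. The weights and the
    linear form are assumptions made for this sketch."""
    age_days = (date.today() - result.last_updated).days
    freshness = max(0.0, 1.0 - age_days / 365)
    return (0.4 * result.page_authority
            + 0.2 * freshness
            + 0.25 * result.image_centrality
            + 0.15 * result.page_position)

# A dedicated, fresh product page should outrank a stale category page.
candidates = [
    ImageResult(0.9, date(2018, 9, 1), 0.9, 0.9),   # product page
    ImageResult(0.6, date(2017, 1, 1), 0.3, 0.4),   # category page
]
for r in sorted(candidates, key=score, reverse=True):
    print(round(score(r), 3))
```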

Starting this week, we’ll also show more context around images, including captions that show you the title of the webpage where each image is published. This is critical to help you understand the page behind the image. We’ll also suggest related search terms at the top of the page for more guidance. We’ve already introduced this new look on mobile, and now we’re bringing it to desktop where a larger screen is important for complex tasks.

[Image: Google Images desktop redesign]

Explore within an image using AI with Lens in Google Images

We launched Google Lens last year to help you do more with what you see. People are already using it in their camera and on their photos—to find items in an outfit they like, learn more about landmarks, or identify that cute dog in the park. In the coming weeks, we’ll bring Lens to Google Images to help you explore and learn more about visual content you find during your searches.

[Image: Lens in Google Images]

Lens’ AI technology analyzes images and detects objects of interest within them. If you select one of these objects, Lens will show you relevant images, many of which link to product pages so you can continue your search or buy the item you’re interested in. Lens also lets you use your finger on a mobile screen to “draw” on any part of an image, even one not preselected by Lens, to trigger related results and dive even deeper into what’s in your image.
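Below is a minimal sketch of the “draw to search” idea: crop the selected region, embed it with a pretrained backbone, and rank catalog images by cosine similarity. The file names, region coordinates, and two-item catalog are hypothetical; Lens itself uses dedicated detection and retrieval models, not this off-the-shelf setup.

```python
import torch
from torchvision import models
from torchvision.io import read_image
import torchvision.transforms.functional as F

# Pretrained backbone with the classification head removed, so it
# returns an embedding instead of class scores.
weights = models.ResNet50_Weights.DEFAULT
backbone = models.resnet50(weights=weights)
backbone.fc = torch.nn.Identity()
backbone.eval()
preprocess = weights.transforms()

def embed(img: torch.Tensor) -> torch.Tensor:
    with torch.no_grad():
        return backbone(preprocess(img).unsqueeze(0))[0]

# Hypothetical "drawn" region (left, top, width, height) in pixels.
image = read_image("outfit.jpg")
left, top, w, h = 120, 80, 200, 260
region = F.crop(image, top=top, left=left, height=h, width=w)

# Compare the region against a small catalog by cosine similarity.
catalog = {"shoe_123": read_image("shoe_123.jpg"),
           "bag_456": read_image("bag_456.jpg")}
query = embed(region)
scores = {pid: torch.cosine_similarity(query, embed(im), dim=0).item()
          for pid, im in catalog.items()}
print(max(scores, key=scores.get))
```

A production system would likely have a detection model propose the candidate objects and serve billions of embeddings through approximate nearest-neighbor search rather than a dictionary loop.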

We hope these changes will make it easier—and more visually interesting—to traverse the web, find information, and pursue your interests.

