
Google Lens: real-time answers to questions about the world around you


There’s so much information available online, but many of the questions we have are about the world right in front of us. That’s why we started working on Google Lens, to put the answers right where the questions are, and let you do more with what you see.

Last year, we introduced Lens in Google Photos and the Assistant. People are already using it to answer all kinds of questions—especially when they’re difficult to describe in a search box, like “what type of dog is that?” or “what’s that building called?”

Today at Google I/O, we announced that Lens will now be available directly in the camera app on supported devices from LGE, Motorola, Xiaomi, Sony Mobile, HMD/Nokia, Transsion, TCL, OnePlus, BQ, Asus, and of course the Google Pixel. We also announced three updates that enable Lens to answer more questions, about more things, more quickly:

First, smart text selection connects the words you see with the answers and actions you need. You can copy and paste text from the real world—like recipes, gift card codes, or Wi-Fi passwords—to your phone. Lens helps you make sense of a page of words by showing you relevant information and photos. Say you’re at a restaurant and see the name of a dish you don’t recognize—Lens will show you a picture to give you a better idea. This requires not just recognizing the shapes of letters, but also understanding the meaning and context behind the words. This is where all our years of language understanding in Search help.
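
Lens’s own text pipeline isn’t public, but its first step is recognizing the text in a camera frame. Here is a minimal Kotlin sketch of that step using Google’s ML Kit text-recognition library; the Bitmap source and the result callback are assumptions for illustration, not Lens’s actual implementation.

    import android.graphics.Bitmap
    import com.google.mlkit.vision.common.InputImage
    import com.google.mlkit.vision.text.TextRecognition
    import com.google.mlkit.vision.text.latin.TextRecognizerOptions

    // Recognize text in a single camera frame and hand the raw string back
    // to the caller, e.g. to offer copy/paste of a Wi-Fi password.
    fun recognizeText(frame: Bitmap, onResult: (String) -> Unit) {
        val recognizer = TextRecognition.getClient(TextRecognizerOptions.DEFAULT_OPTIONS)
        val image = InputImage.fromBitmap(frame, 0) // 0 = rotation in degrees
        recognizer.process(image)
            .addOnSuccessListener { visionText ->
                // Results come back structured into blocks, lines, and elements,
                // which is what makes selecting just one line of text possible.
                onResult(visionText.text)
            }
            .addOnFailureListener { e -> e.printStackTrace() }
    }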


Second, sometimes your question is not “what is that exact thing?” but instead “what are things like it?” Now, with style search, if an outfit or home decor item catches your eye, you can open Lens and not only get info on that specific item—like reviews—but also see things in a similar style that fit the look you like.
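
Style search of this kind is generally built on image embeddings: the item in view is mapped to a vector, and visually similar items are the nearest vectors in a catalog. A minimal Kotlin sketch of that ranking step follows; the embeddings and catalog here are hypothetical, and a production system would use an approximate nearest-neighbor index over millions of items rather than a full sort.

    import kotlin.math.sqrt

    // Cosine similarity between two embedding vectors.
    fun cosine(a: FloatArray, b: FloatArray): Float {
        var dot = 0f; var na = 0f; var nb = 0f
        for (i in a.indices) {
            dot += a[i] * b[i]; na += a[i] * a[i]; nb += b[i] * b[i]
        }
        return dot / (sqrt(na) * sqrt(nb))
    }

    // Rank a (hypothetical) catalog of item embeddings against the embedding
    // of the item the camera is pointed at, returning the k closest matches.
    fun similarItems(query: FloatArray, catalog: Map<String, FloatArray>, k: Int = 5): List<String> =
        catalog.entries
            .sortedByDescending { cosine(query, it.value) }
            .take(k)
            .map { it.key }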


Third, Lens now works in real time. It proactively surfaces information instantly—and anchors it to the things you see. Now you’ll be able to browse the world around you just by pointing your camera. This is only possible with state-of-the-art machine learning, using both on-device intelligence and Cloud TPUs, to identify billions of words, phrases, places, and things in a split second.
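
While Lens’s real-time pipeline isn’t public, developers can sketch the same pattern with ML Kit’s on-device object detection in streaming mode, which trades some accuracy for the per-frame latency needed to anchor results to a live camera feed. In this Kotlin sketch, the frame source and what you do with each detection are assumptions for illustration.

    import com.google.mlkit.vision.common.InputImage
    import com.google.mlkit.vision.objects.ObjectDetection
    import com.google.mlkit.vision.objects.defaults.ObjectDetectorOptions

    // STREAM_MODE configures the detector for low-latency, frame-by-frame use.
    val detector = ObjectDetection.getClient(
        ObjectDetectorOptions.Builder()
            .setDetectorMode(ObjectDetectorOptions.STREAM_MODE)
            .enableClassification()
            .build()
    )

    fun analyzeFrame(frame: InputImage) {
        detector.process(frame)
            .addOnSuccessListener { objects ->
                for (obj in objects) {
                    // boundingBox is in image coordinates, so a label can be
                    // drawn anchored to the object as the user pans the camera.
                    val box = obj.boundingBox
                    val label = obj.labels.firstOrNull()?.text ?: "unlabeled"
                    // draw `label` at `box` in the viewfinder overlay ...
                }
            }
    }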


Much like voice, we see vision as a fundamental shift in computing and a multi-year journey. We’re excited about the progress we’re making with Google Lens, and these features will start rolling out over the next few weeks.
