Google introduced Google Lens at I/O 2017 and launched it alongside its latest flagship devices, the Pixel 2 and Pixel 2 XL. Google Lens is intelligent camera software that uses your smartphone's camera and deep machine learning to identify objects in a photograph.
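To give a flavor of what "identifying objects" means under the hood, here is a toy sketch: a deep-learning model turns an image into a feature vector, and the label whose reference vector points in the most similar direction wins. This is not Google's actual pipeline; the labels, vectors, and function names below are invented purely for illustration.

```python
import math

# Hypothetical per-label reference embeddings. A real system would
# learn these with a neural network; we hand-pick 3-dim vectors
# just to demonstrate the matching step.
REFERENCE_VECTORS = {
    "book":     (0.9, 0.1, 0.2),
    "plant":    (0.1, 0.9, 0.3),
    "landmark": (0.2, 0.3, 0.9),
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def classify(feature_vector):
    """Return the label whose reference vector is closest in angle."""
    return max(REFERENCE_VECTORS,
               key=lambda lbl: cosine_similarity(feature_vector,
                                                 REFERENCE_VECTORS[lbl]))

print(classify((0.85, 0.15, 0.25)))  # → book
```

The interesting part is that "recognition" reduces to a nearest-neighbor lookup in feature space; the hard work, which this sketch skips entirely, is learning a network that produces good feature vectors in the first place.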
Google Lens isn't a standalone app; it's built directly into Google Photos and Google Assistant. It has been available within Google Photos since the Pixel 2 phones launched, and Google has now started rolling out Google Lens to Google Assistant on all Pixel phones.
Note: Google Lens is currently available for the Pixel, the Pixel XL, the Pixel 2, and the Pixel 2 XL.
Step 1. Look out for Google Lens
Fire up Google Assistant by squeezing your Pixel 2 or Pixel 2 XL. If you have a first-generation Pixel, simply long-press the home button.
You will now see the Google Lens icon at the bottom right corner of your phone's screen.
Step 2. Allow Camera Access
When you tap on the Google Lens icon for the first time, it will offer a small prompt explaining what the visual search assistant can do.
You will see the Tap to Continue button below it. After you tap on that button, Google will ask for your permission to allow camera access. Tap on Allow and you are set.
Step 3. Try out Google Lens
Now, every time you open Google Lens, it'll launch straight into the viewfinder and you will see the four familiar Google-colored dots in action. You can tap anywhere in the viewfinder to identify the object in question. The viewfinder also shows a flash button in the right corner that you can use in low-light conditions.
Currently, Google Lens can be used in a handful of situations, including text recognition and bar code scanning. It can identify books, movies, art, and landmarks.
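Barcode scanning is the most deterministic of these tricks: unlike object recognition, it follows a published standard rather than machine learning. As an aside, here is how the check digit of an EAN-13 barcode (the kind printed on most product packaging and books) is computed; the function name is my own.

```python
def ean13_check_digit(first_12_digits: str) -> int:
    """Compute the EAN-13 check digit: weight the first 12 digits
    alternately by 1 and 3 (left to right), then return whatever
    brings the total up to the next multiple of 10."""
    total = sum(int(d) * (1 if i % 2 == 0 else 3)
                for i, d in enumerate(first_12_digits))
    return (10 - total % 10) % 10

# ISBN-13 barcodes are EAN-13 codes: for 978-0-306-40615-7,
# the first 12 digits produce check digit 7.
print(ean13_check_digit("978030640615"))  # → 7
```

A scanner that decodes the bars recomputes this digit and rejects the read if it doesn't match, which is why barcode results are essentially error-free while visual object guesses are not.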
How Google Lens Works in Different Scenarios
I took a few photos, each showing different objects in the frame to test out the performance of Google Lens. Let’s see how it worked.
Google Lens was accurate in recognizing leaves and plants. However, the results aren't very specific. Sometimes it offers additional cards with more information about the object.
I was especially surprised in the case of the above photo where, instead of identifying it as a droid, Google Lens merely guessed that it was a toy. I expected the Lens to recognize the droid right away.
The Lens recognized the Chromecast correctly and even showed an information card mentioning Google, Play Store, YouTube, Images, and other media options.
Images taken with the Lens appear in the general Google Assistant history, alongside the accompanying search result.
If you like the result offered by the Lens, you can hit the thumbs-up button, or the thumbs-down button if you're not satisfied with it. You can even share feedback about a specific result on social media.
Google Lens was perfect at identifying books. It even offered links where you can buy the book in the image. The Lens is great at reading text and hence works well with objects that have writing on them.
Integrated with the Assistant, Google Lens is rolling out in India, the UK, Australia, Canada, and Singapore on devices that have English set as the default language.
Worth a Try
Google Lens is far from perfect at the moment and needs to improve a lot. It performs well on landmarks, books, objects with text on them, and other relatively well-known objects, so there are already a few cases where it's genuinely useful.
With time, Google will gradually expand the Lens with new use cases, and its underlying deep learning models should keep getting better in the long run.
Have you tried Google Lens yet? How useful was it? Let us know in the comments section below. We'd love to hear from you!