Alongside the Qualcomm Snapdragon 835 SoC, the Pixel 2 carries the Pixel Visual Core, an additional chip designed by Google. It handles specialized workloads such as HDR+ image processing, which achieves better dynamic range and lower noise through computational imaging.
Google says the Pixel Visual Core has eight image processing unit (IPU) cores, each with 512 arithmetic logic units (ALUs). With this hardware, HDR+ processing can run five times faster while using less than one-tenth the energy of running it on the main application processor.
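To see why burst-based computational imaging reduces noise, here is a minimal sketch in Python. This is not Google's HDR+ pipeline; it only illustrates the core idea that averaging several noisy captures of the same scene cuts random noise by roughly the square root of the number of frames (real HDR+ also aligns frames and tone-maps the result). The scene, noise level, and burst size are all made-up values for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def merge_burst(frames):
    """Toy align-and-merge step: average the burst (alignment omitted)."""
    return np.mean(frames, axis=0)

# Simulate a burst of 8 noisy captures of the same static scene.
scene = np.full((64, 64), 100.0)  # "true" radiance of the scene
burst = [scene + rng.normal(0, 10, scene.shape) for _ in range(8)]

merged = merge_burst(burst)

# Averaging N frames cuts noise by about sqrt(N): here ~10 drops to ~3.5.
print(np.std(burst[0] - scene), np.std(merged - scene))
```

The Visual Core's value is not a different result but doing this kind of heavy per-pixel arithmetic fast and efficiently on dedicated hardware instead of the main processor.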
According to Google, the Pixel Visual Core is designed to handle the most challenging imaging and machine learning applications, and the company says it is already preparing the next set of applications for this hardware.
How to Enable Visual Core on Pixel 2?
To enable the Visual Core, you'll first have to enable Developer options. Open Settings and select System.
Next, go to About phone and tap Build number repeatedly until a message says Developer options has been enabled.
Go back to Settings > System, tap Developer options, and turn the toggle on.
Scroll down to the Debugging section, find the Camera HAL HDR+ option, and toggle it on.
A pop-up will appear prompting you to restart your smartphone so that the Pixel Visual Core can start working.
Does Enabling Visual Core Yield Better Photos?
Enabling the Visual Core doesn't mean your pictures will suddenly look substantially better. The chip is designed to improve the performance of HDR+ image processing, not the image quality itself.
One of the main reasons to enable the Visual Core is to let third-party apps such as Instagram and Facebook take advantage of the same HDR+ feature the built-in camera app has offered for several years. A wide range of third-party camera apps on the Google Play Store stand to benefit.
The main aim is to maintain image quality in third-party apps: with the chip enabled, HDR+ image processing runs faster and uses less power.
That’s a Wrap
Google is already using machine learning and neural network algorithms in its mobile cameras. This feature is one of the USPs of the Pixel 2. It will now be interesting to see how third-party apps benefit from the Visual Core.
Do you own a Google Pixel 2? What are your thoughts on its camera performance? Let us know in the comment section below. We'd love to hear from you.