Google has explained in full detail how the portrait mode of its latest flagship, the Pixel 4, achieves such spectacular results. Let’s take a closer look at what goes on behind the Pixel 4’s portrait mode.
The Pixel 4 was the latest flagship smartphone launched by technology giant Google. One of the device’s standout features was its camera software, which allowed the Pixel 4 to produce great portraits.
So how exactly does the camera software that Google ships on the Pixel 4 work? Google AI has described how artificial intelligence helps the camera software. Let’s walk through how the Pixel 4’s portrait mode works.
The Pixel 4 camera gets help from machine learning:
Portrait mode captures photos that keep only the main subject in focus and blur the background. To achieve this, the camera’s distance to the subject is estimated using machine learning. The main subject in the frame thus stays sharp while the background is blurred.
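The idea of "keep the subject sharp, blur the rest" can be sketched as a depth-masked composite. This is a minimal illustration, not Google’s actual rendering pipeline: the `portrait_composite` function, its parameters, and the box blur are all simplifying assumptions for a grayscale image.

```python
import numpy as np

def box_blur(img, radius=1):
    """Naive box blur: average each pixel with its square neighborhood."""
    padded = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img, dtype=float)
    k = 2 * radius + 1
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def portrait_composite(image, depth, subject_depth, tolerance=0.5, radius=2):
    """Keep pixels near subject_depth sharp; blur everything else."""
    mask = (np.abs(depth - subject_depth) <= tolerance).astype(float)
    blurred = box_blur(image, radius)
    return mask * image + (1.0 - mask) * blurred

# Tiny demo: a striped "background" at depth 5, a subject patch at depth 1.
img = np.zeros((10, 10))
img[:, ::2] = 1.0
depth = np.full((10, 10), 5.0)
depth[2:5, 2:5] = 1.0
out = portrait_composite(img, depth, subject_depth=1.0)
```

After compositing, the subject patch is untouched while the striped background is averaged toward gray.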
The Pixel 4 uses two cameras to measure depth: a wide-angle camera and a telephoto camera. The 13-millimeter distance between the two cameras makes the same scene look slightly different from each viewpoint, and this difference (parallax) can be used to measure depth. You can think of it like human binocular vision.
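The geometry behind this is standard stereo triangulation: the nearer an object, the further it shifts between the two views. A minimal sketch, using the 13 mm baseline from the text but an assumed focal length (the Pixel 4’s real calibration is not given here):

```python
def depth_from_disparity(focal_px, baseline_mm, disparity_px):
    """Stereo triangulation: depth = focal length * baseline / disparity.

    focal_px and disparity_px are in pixels, baseline_mm in millimeters,
    so the result is in millimeters. The values below are illustrative,
    not the Pixel 4's actual calibration.
    """
    return focal_px * baseline_mm / disparity_px

# With a 13 mm baseline and an assumed 1000 px focal length, a feature
# shifted 13 px between the two views sits about one meter away.
far = depth_from_disparity(1000.0, 13.0, 13.0)
near = depth_from_disparity(1000.0, 13.0, 26.0)  # larger shift -> closer
```

The inverse relationship is why a short 13 mm baseline still resolves depth well at portrait distances but loses precision far away.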
In addition, the Pixel 4’s cameras use a dual-pixel technique. With this technology, each pixel is split in half, and each half sees the scene through a different half of the lens aperture, giving a second, very short-baseline stereo cue. Combining this with the dual-camera parallax makes the depth measurement much more accurate.
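Google combines the dual-camera and dual-pixel cues with machine learning; as a simple stand-in for that learned fusion, an inverse-variance weighted average shows the basic idea that the less noisy cue should dominate. All names and values here are illustrative assumptions:

```python
def fuse_depth(d_cam, var_cam, d_pix, var_pix):
    """Inverse-variance weighted average of two depth estimates.

    A toy stand-in for the Pixel 4's learned fusion: the cue with the
    lower noise (variance) receives the larger weight.
    """
    w_cam, w_pix = 1.0 / var_cam, 1.0 / var_pix
    return (w_cam * d_cam + w_pix * d_pix) / (w_cam + w_pix)

equal_trust = fuse_depth(2.0, 0.1, 4.0, 0.1)   # equal noise -> midpoint
trust_first = fuse_depth(2.0, 0.01, 4.0, 1.0)  # pulled toward the 2.0 cue
```

In practice the two cues complement each other: the dual-camera baseline helps at longer range, while the dual-pixel cue helps up close and when one camera’s view is occluded.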
Google has also improved bokeh, the blurred-background effect, on Pixel 4 devices. Previously, the blur was applied after tone mapping, the step that brightens shadows relative to highlights so the scene’s dynamic range fits on a display.
This order had a drawback: blurring the already tone-mapped image flattened the defocused highlights and made the overall contrast uneven. That’s why portrait mode on the Pixel 4 uses software that first blurs the raw image and then applies tone mapping. This keeps the background blurred while preserving the vividness and richness of the image.
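A tiny numerical sketch makes the ordering argument concrete. Using a simple Reinhard-style tone curve as an assumed stand-in for the real tone mapping, averaging (blurring) in linear raw space before tone mapping keeps a bright bokeh highlight noticeably brighter than blurring after tone mapping:

```python
import numpy as np

def reinhard(x):
    """Simple global tone map compressing HDR values into [0, 1)."""
    return x / (1.0 + x)

# A bright highlight next to a dark pixel, in linear (raw-like) values.
linear = np.array([10.0, 0.1])

blur_then_tonemap = reinhard(linear.mean())   # blur raw, then tone map
tonemap_then_blur = reinhard(linear).mean()   # tone map, then blur
```

Because the tone curve is strongly compressive, averaging after it drags highlights toward gray, while averaging before it lets the highlight keep its punch, which is exactly the "vivid bokeh" effect described above.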