LightBlog

Tuesday, October 17, 2017

Google Discusses the Tech Used for Portrait Mode on the Pixel 2

Among other trends in the smartphone industry right now, we’re seeing more OEMs put two cameras on the back of their devices for a number of reasons. Some use the second camera for the bokeh effect and “portrait mode” shots, while others use a black and white sensor (to try to improve overall picture quality), a wide-angle lens, or a telephoto lens for improved zooming. The bokeh effect is what the typical “portrait mode” feature relies on, but Google has managed to pull it off on the Pixel 2 and the Pixel 2 XL without needing two camera sensors (instead, they use a “dual pixel” approach at the hardware level).

As smartphone hardware is starting to stagnate and hit a plateau in some key areas, we’re going to need to see innovation at the software level. This is exactly what Google has been doing lately with their applications, since they’ve been leveraging the machine learning technology that they have been working on and trying to bring to their products for the past few years. At the Pixel 2 and Pixel 2 XL launch event, Google CEO Sundar Pichai said the company was shifting from being mobile-first to being an AI-first company, and we’re seeing the results of this strategy right now.

[Image: Pixel 2 mask profile]

In a new post on the Google Research Blog, Google has shared some details about how it emulates the bokeh effect on its new phones without using two cameras. This is done in two different ways, since the front and rear cameras use different technology. The rear camera sensor can actually produce a depth map on its own, but the front-facing camera can’t, and that is where the machine learning comes into play.

With the rear camera of the Pixel 2 and the Pixel 2 XL, Google is able to take advantage of the Phase-Detect Auto-Focus (PDAF) pixels (sometimes called dual-pixel autofocus) on the camera sensor it chose. Google explains it like this: imagine the rear camera sensor is split in half, with one half seeing the scene from one viewpoint and the other half seeing it from a slightly different one. Even though these two viewpoints are less than 1mm apart, the difference is enough for Google to build a solid depth map.
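To give a rough idea of what that involves, here is a minimal sketch in Python (using NumPy and SciPy) that treats the two half-pixel views as a tiny-baseline stereo pair and estimates a per-pixel disparity map by block matching. This is not Google’s actual pipeline; the function names, window size, and search range are our own illustrative assumptions.

# Minimal sketch, not Google's implementation: estimate disparity between the
# two half-pixel (PDAF) views with simple block matching. Regions in the focal
# plane show ~0 disparity; out-of-focus regions shift slightly between views,
# which is what a relative depth map can be built from.
import numpy as np
from scipy.ndimage import uniform_filter

def dual_pixel_disparity(left_view, right_view, max_disp=3, window=9):
    """left_view, right_view: 2D float arrays (grayscale) from each sensor half.
    Returns the horizontal shift (in pixels) that best aligns them locally."""
    best_cost = np.full(left_view.shape, np.inf)
    disparity = np.zeros(left_view.shape)
    for d in range(-max_disp, max_disp + 1):
        shifted = np.roll(right_view, d, axis=1)                  # try one candidate shift
        cost = uniform_filter((left_view - shifted) ** 2, size=window)  # windowed SSD
        better = cost < best_cost                                 # keep the best match so far
        best_cost[better] = cost[better]
        disparity[better] = d
    return disparity

# Toy usage: the "right" view is the "left" view shifted by one pixel,
# so the recovered disparity should be about -1 everywhere.
left = np.random.rand(120, 160)
right = np.roll(left, 1, axis=1)
print(np.median(dual_pixel_disparity(left, right)))

In the real phone this disparity signal would then be converted into a depth (or blur) map, which is the part the sketch leaves out.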

The front camera of Google’s new smartphones doesn’t feature this same hardware, though. Thanks to the machine learning and computational photography techniques the company has been training and refining lately, it can instead produce a segmentation mask of what it determines to be the subject of the photo. Once the mask is created, Google can blur the background while leaving the subject of the photo largely untouched.
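The compositing step itself is simple once a mask exists. Below is a minimal sketch, assuming a subject mask is already available (here a hand-made toy mask rather than the output of Google’s segmentation network): the whole frame is blurred, and the mask decides where the original pixels are kept and where the blurred background shows through.

# Minimal sketch of mask-based background blur; the mask here is a toy
# placeholder, not the output of a real person-segmentation model.
import numpy as np
from scipy.ndimage import gaussian_filter

def fake_portrait_mode(image, subject_mask, blur_sigma=6.0):
    """image: HxWx3 float array; subject_mask: HxW floats in [0, 1]."""
    # Blur the spatial dimensions only, not the color channels.
    blurred = gaussian_filter(image, sigma=(blur_sigma, blur_sigma, 0))
    # Soften the mask edge so the subject/background transition is not harsh.
    soft_mask = gaussian_filter(subject_mask, sigma=2.0)[..., np.newaxis]
    return soft_mask * image + (1.0 - soft_mask) * blurred

# Toy usage: a rectangular "subject" in the middle of a noisy frame.
frame = np.random.rand(240, 320, 3)
mask = np.zeros((240, 320))
mask[80:160, 120:200] = 1.0
result = fake_portrait_mode(frame, mask)
print(result.shape)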

Follow the link below to read the entire Google Research Blog entry and learn more!


Source: Google Research Blog



from xda-developers http://ift.tt/2zvF4rA
via IFTTT
