Google Has Open-Sourced the AI-Driven Tool Powering Pixel 2’s Portrait Mode Camera

Google Pixel 2 and Pixel 2 XL

Google’s Pixel phone has one hell of a camera, and one of the reasons for this is AI. Google has used its machine learning talent to squeeze better images out of a tiny smartphone lens, including its portrait mode shots, with blurred backgrounds and pin-sharp subjects.

Software and search giant Google has open-sourced the artificial intelligence (AI) tool that is responsible for the impressive portrait mode in its Pixel 2 devices.

Google’s flagships from last year, the Pixel 2 and the Pixel 2 XL, pack an impressive camera.

The camera stood on its own with a single-camera setup, beating competition that was busy touting the importance and advantages of dual-camera setups. Image testing website DxOMark awarded the Pixel 2 the top spot for the best camera in a smartphone last year.

While good sensors are one part of the equation, in the Google Pixel series AI plays an equally important part in making the camera deliver such image quality.


Now, almost five months after the launch of the device, Google announced on its Google Research blog that it will be open-sourcing ‘DeepLab-v3+’, its “latest and best performing semantic image segmentation model”.

In simple terms, this tool identifies people or other objects and separates them from the background, enabling portrait mode-like results where a blur can be applied to the background layer.

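DeepLab-v3+ itself only produces the per-pixel labels; the portrait effect comes from using that mask to blend a sharp subject over a blurred copy of the frame. A minimal sketch of that last step in Python is shown below; it is purely illustrative (not Google’s production pipeline) and assumes `person_mask` has already been produced by a segmentation model and thresholded to the subject class.

```python
import numpy as np
from PIL import Image, ImageFilter

def portrait_blur(image_path, person_mask, blur_radius=12):
    """Composite a sharp subject over a blurred background.

    `person_mask` is assumed to be a 2-D NumPy array of 0/1 values
    (1 = subject pixel) with the same height and width as the image,
    e.g. a DeepLab-v3+ output thresholded to the 'person' class.
    """
    image = Image.open(image_path).convert("RGB")
    blurred = image.filter(ImageFilter.GaussianBlur(blur_radius))

    # Expand the mask to three channels so it can gate RGB pixels.
    mask = np.repeat(person_mask[:, :, None], 3, axis=2).astype(np.float32)

    sharp = np.asarray(image, dtype=np.float32)
    soft = np.asarray(blurred, dtype=np.float32)

    # Keep the subject sharp and replace everything else with the blurred layer.
    composite = mask * sharp + (1.0 - mask) * soft
    return Image.fromarray(composite.astype(np.uint8))
```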

Google has built this tool with the help of a ‘convolutional neural network’ (CNN) backbone architecture, a machine learning method which, according to a report by The Verge, is good at analysing visual data.

The segmentation model is built on TensorFlow, Google’s open-source machine learning framework that anyone can use.


Google has also released the TensorFlow model code it uses for training and evaluation, to help anyone trying the tool for the first time, along with models that have already been trained to identify and segment items as part of two benchmark tasks.
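
For anyone curious how one of those pretrained models might be used, a rough sketch in Python (written against the TensorFlow 1.x API of the original release) could look like the following. The frozen-graph format and tensor names mirror the conventions used in the DeepLab demo code, but treat them as assumptions and defer to the repository’s own examples.

```python
import numpy as np
import tensorflow as tf  # TensorFlow 1.x style, matching the original DeepLab release
from PIL import Image

# Assumed tensor names, following the DeepLab demo's frozen-graph convention;
# adjust if your export names them differently.
INPUT_TENSOR = "ImageTensor:0"
OUTPUT_TENSOR = "SemanticPredictions:0"

def load_graph(frozen_graph_path):
    """Load a frozen DeepLab-v3+ inference graph (a .pb file)."""
    graph = tf.Graph()
    with graph.as_default():
        graph_def = tf.GraphDef()
        with tf.gfile.GFile(frozen_graph_path, "rb") as f:
            graph_def.ParseFromString(f.read())
        tf.import_graph_def(graph_def, name="")
    return graph

def segment(graph, image_path):
    """Return a per-pixel class map for the input image."""
    image = Image.open(image_path).convert("RGB")
    with tf.Session(graph=graph) as sess:
        # The exported model takes a batch of uint8 images and returns class IDs.
        seg_map = sess.run(OUTPUT_TENSOR,
                           feed_dict={INPUT_TENSOR: [np.asarray(image)]})
    return seg_map[0]
```

The resulting class map is exactly the kind of mask that, once thresholded to the subject class, could drive the background-blur sketch shown earlier.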

Liang-Chieh Chen and Yukun Zhu, software engineers working with Google Research, added that with the help of CNNs, the semantic image segmentation system has “reached accuracy levels that were hard to imagine even five years ago”.

Google is hoping that by making this tool open source, other groups in academia and industry can reproduce and improve on the work that Google has done. This is also likely to open up a whole range of possibilities for developers, who can integrate this tool to create groundbreaking experiences.
