There's no denying that the Google Pixel 2 has a great camera. Although the camera UI does not offer manual controls, Google's HDR+ algorithm is very good at exposing most scenes evenly, and it manages this fairly well even in low-light conditions.
HDR+ aside, Google published a Research Blog post about the Google Camera's portrait mode. The Google Pixel's portrait mode needs only a single camera lens, unlike many other OEMs' implementations, which require a second camera to map out depth and synthesize the bokeh (blur) effect. In the post, Google writes:
"We hope that publicly sharing our system with the community will make it easier for other groups in academia and industry to reproduce and further improve upon state-of-art systems, train models on new datasets, and envision new applications for this technology."
Google's camera uses semantic image segmentation to make this happen: it maps out which pixels belong to the subject and which belong to the background. Google has released this technology as open source, so any phone maker can implement it in its own smartphones and app developers can include it in their own apps.
The semantic image segmentation model can also label pixels with classes such as road, person, dog, and sky, so portrait mode is only one of its possible applications. Check out the blog post in the source link to get a better idea of how DeepLab accurately predicts what is actually in the image.
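To illustrate the idea, here is a minimal sketch of segmentation-driven background blur in Python. It uses torchvision's pretrained DeepLabV3 model rather than Google's released TensorFlow DeepLab code, and the file names ("photo.jpg", "portrait.jpg") are placeholders, so treat it as a rough approximation of the approach rather than Google's actual pipeline.

```python
# Sketch: segment the person with a DeepLabV3 model, then blur everything else.
# Uses torchvision's pretrained DeepLabV3 (an assumption for illustration, not
# Google's released TensorFlow DeepLab code). File names are placeholders.
import torch
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50, DeepLabV3_ResNet50_Weights
from PIL import Image, ImageFilter

PERSON_CLASS = 15  # "person" in the Pascal VOC label set these weights use

model = deeplabv3_resnet50(weights=DeepLabV3_ResNet50_Weights.DEFAULT).eval()

image = Image.open("photo.jpg").convert("RGB")
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
batch = preprocess(image).unsqueeze(0)

with torch.no_grad():
    # Per-pixel class scores; argmax gives a label for every pixel.
    logits = model(batch)["out"][0]
labels = logits.argmax(0)

# Build a mask of "subject" pixels and blur everything outside it.
mask = Image.fromarray((labels == PERSON_CLASS).byte().mul(255).numpy()).resize(image.size)
background = image.filter(ImageFilter.GaussianBlur(radius=12))
portrait = Image.composite(image, background, mask)
portrait.save("portrait.jpg")
```

Google's actual portrait mode adds further refinements on top of this (for example, depth information from the Pixel's dual-pixel sensor), but pixel-level segmentation of subject versus background is the core idea the open-sourced model provides.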
Source
- Fajar Nurzaman, Blog Sang Pembelajar
- https://fajarnurzaman.net/science-technology/google-pixel-2%c2%92s-portrait-mode-is-made-open-source/