News May 18, 2017

What Google was really telling us during the I/O 2017 keynote

Quentin Braet

Solution Architect

It is that time of the year again, when the tech giants organise their big developer conferences. Yesterday, Google kicked off its yearly Google I/O conference with a keynote introducing the new things it has been working on over the past year. I/O used to be a conference where representatives of the Android team mainly talked about new features and releases of their platform, but times have changed. People expecting big news about Android updates are probably disappointed: Android phones “as we know them” will not change that much. This year’s conference is all about Artificial Intelligence (AI) and integrations between existing Google services and platforms, with one end goal in mind: making people’s lives easier.

Over the past years, Google has grown from a search engine that almost everyone was using into the creator of the world's largest mobile operating system. Even people who don’t use Android probably use another Google service like Gmail, Google Maps or Chrome. Google has been working really hard to fine-tune these products into what they are today. Take YouTube for example: next to the effort it takes to serve those massive amounts of video (1 billion hours of video are watched every day), there is also the effort Google puts into generating smart, user-tailored recommendations for videos you will like. Every single Google product leverages some form of AI. But now, the Mountain View-based company is slowly taking it to the next level and gluing the pieces together with the magic word: Google AI.

The backbone of all this is the open-source TensorFlow library Google has been working on for a while. Next to the cloud integrations, this platform will also slowly be integrated into Android, with hardware support for AI. This will make it possible for your phone to do some AI tricks without the need for a powerful cloud stack, and thus without an internet connection.
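To give a feel for what TensorFlow actually looks like, here is a minimal sketch in Python using the 1.x API that is current at the time of writing. It only builds a tiny linear model and evaluates it in a session; the model and numbers are purely illustrative, but a trained graph like this is the kind of thing the announced on-device support is meant to run on your phone.

```python
# Minimal TensorFlow sketch (1.x API). Purely illustrative:
# a tiny linear model y = W * x + b, evaluated in a session.
import tensorflow as tf

# Placeholders are the graph's inputs; values are fed in at run time.
x = tf.placeholder(tf.float32, shape=[None], name="x")

# Variables hold the (normally learned) parameters of the model.
W = tf.Variable(0.5, name="W")
b = tf.Variable(1.0, name="b")

y = W * x + b  # the computation graph: a simple linear transformation

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())  # assign initial values to W and b
    print(sess.run(y, feed_dict={x: [1.0, 2.0, 3.0]}))  # -> [1.5 2.  2.5]
```

Training such a graph and shipping it to a phone is of course more involved, but the building blocks stay the same.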

The Mountain View-based company is slowly taking it to the next level and gluing the pieces together with the magic word: Google AI.

Google Photos, for example, is already using a lot of machine learning: it not only categorises your photos based on the things it can recognize in them, but it also recognizes faces and activities. By looking at different variables, like the location and date of a picture, future versions of the app will be able to recognize parties you were at and will suggest sharing your pictures with the friends it spotted in the photos taken at that party. It can go even further and suggest that a friend add the photos they took there to a shared album, without any hassle.


The image recognition service that was introduced for this, Google Lens, will also be part of the bigger system: the Google Assistant, which aims to be the ultimate personal assistant. Google showcased use cases like recognizing a shop, recognizing flowers and giving you all kinds of information about them, … But it also went the extra mile: Google has a lot of data from Google Maps for indoor locations, and it has Google Lens, which can really see what you see. This makes it possible for Google to do indoor navigation based on image processing and the reference information it has from Maps, or as they call it: the Visual Positioning Service, or VPS for short.
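Google Lens itself is a consumer feature rather than a developer API, but developers can already get a taste of this kind of image labelling through Google's existing Cloud Vision API. Below is a rough sketch of a label-detection request in Python; the API key and the image file name are placeholders, and the requests library is just one way of calling the REST endpoint.

```python
# Rough sketch: label detection with the Google Cloud Vision REST API.
# Assumes you have an API key and a local JPEG; both names are placeholders.
import base64
import requests

API_KEY = "YOUR_API_KEY"  # placeholder
ENDPOINT = "https://vision.googleapis.com/v1/images:annotate?key=" + API_KEY

# The API expects the image bytes as a base64-encoded string.
with open("flower.jpg", "rb") as f:
    image_content = base64.b64encode(f.read()).decode("utf-8")

body = {
    "requests": [{
        "image": {"content": image_content},
        "features": [{"type": "LABEL_DETECTION", "maxResults": 5}],
    }]
}

response = requests.post(ENDPOINT, json=body)
for annotation in response.json()["responses"][0].get("labelAnnotations", []):
    print(annotation["description"], round(annotation["score"], 2))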
 

Image recognition service: Google Lens

When all is said and done, one thing is really clear: Google sits on a huge pile of data about everything and everyone, and it is determined to really start using it, way beyond the initial use cases we thought about a couple of years ago. The cool thing is: we can also use the tools Google created to start working on our own ideas.