Google's next frontier: analyzing what we see.


Google announced a large number of Google+ updates recently, focused on Google+ Hangouts and photos. On the surface most of them look like eye-candy rather than anything fundamental, with one exception in my view: the further expansion of the ability to search for what is actually in a picture. In practice this means you can search for over 1,000 terms in your own photos or in the photos of posts in your Google+ circles, and of course the number of terms will grow. I tried "sunset", "labrador", "grass", "child", "beach" and so on, with amazing success.

To accomplish this, Google has to analyse each video and photo for its content. An algorithm has to recognise elements within the picture and index them. This makes it possible to automatically add "meaning" to a picture, in a format that computers can use and analyse on a large scale. You can read more about it in Google's patent application for automatic large-scale video object recognition.

Google can expand its profile of and knowledge about its customers with the content of all the photos and videos they have ever made. Through predictive analysis it can try to predict even better what you want to click or buy, or maybe when you will die, to paraphrase the excellent book by Eric Siegel on this subject. The next step is then analysing real-time video using Google Glass and advising you on actions. In my view we have only scratched the surface of the possibilities.
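To make the "recognise elements and index them" idea a bit more concrete, here is a minimal conceptual sketch in Python. It is not Google's actual pipeline (which is only hinted at in the patent filing); it simply assumes a hypothetical `recognize_labels` function standing in for an object-recognition model, and shows how recognised labels could be turned into an inverted index that lets you search photos by terms like "sunset" or "labrador".

```python
from collections import defaultdict

def recognize_labels(photo_path):
    """Placeholder for an object-recognition model that would return
    descriptive labels such as 'sunset', 'labrador', 'beach'.
    Hypothetical: plug in a real vision model here."""
    raise NotImplementedError("no real model attached in this sketch")

def build_label_index(photo_paths):
    """Build an inverted index: label -> set of photos containing it."""
    index = defaultdict(set)
    for path in photo_paths:
        for label in recognize_labels(path):
            index[label.lower()].add(path)
    return index

def search_photos(index, term):
    """Return all photos whose recognised content matches the search term."""
    return sorted(index.get(term.lower(), set()))
```

Once such an index exists, a query like `search_photos(index, "sunset")` is just a dictionary lookup, which is what makes this kind of content search feasible at large scale.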
