Object recognition with Google Glass

When I first heard about Google Glass, I imagined a future in which Glass could assist in labelling objects in the environment. Well, it seems that future might be approaching faster than I thought.

Neurence has created a cloud-based platform called Sense, which uses pattern-based machine learning to identify objects within an environment. The system can be used on a number of devices, including Google Glass.

Through pattern recognition, the cloud-based platform can recognise objects in the environment, such as signs. This has incredible implications for the visually impaired (VI) community, and as the platform expands its database of recognised objects, its value will only grow.

What really intrigues me about this platform is that it could serve an incredible purpose for the VI community while being aimed squarely at a different market. Because Neurence is trying to build the next generation of search, which it believes will be image-based, it is creating an enormous database of recognised objects. That database is open to the public, and there is even an SDK for contributing to the platform. It would therefore be relatively straightforward to build a system the VI community could use, backed by a userbase large enough to make it genuinely useful. This is unlike other devices aimed squarely at the VI community, whose limited scope in turn limits how large their databases of recognised objects can grow.
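To show how little glue code such a system might need, here is a rough sketch. I haven't seen the actual Sense SDK, so the endpoint URL, the response format and the `recognise`/`announce` helpers are all hypothetical stand-ins; the idea is simply: send a camera frame to the cloud, get back a label, and read it aloud.

```python
import requests
import pyttsx3

RECOGNITION_URL = "https://api.example.com/recognise"  # hypothetical endpoint, not the real Sense API

def recognise(image_path):
    """Send a camera frame to the (hypothetical) cloud recognition service
    and return the best-matching label, or None if nothing was recognised."""
    with open(image_path, "rb") as f:
        response = requests.post(RECOGNITION_URL, files={"image": f})
    response.raise_for_status()
    matches = response.json().get("matches", [])  # assumed response shape
    return matches[0]["label"] if matches else None

def announce(label):
    """Read the label aloud so a VI user hears what is in front of them."""
    engine = pyttsx3.init()
    engine.say(f"I can see a {label}")
    engine.runAndWait()

if __name__ == "__main__":
    label = recognise("frame.jpg")  # a frame captured from Glass, saved locally
    if label:
        announce(label)
```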

Pair this with another wearable for navigation and you would have a great system for aiding a VI individual. For example, a wearable with haptic feedback could handle navigation, while the Sense platform added much-needed contextual information about the environment.
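Again purely hypothetical, but the pairing could be as simple as a loop like the one below, reusing the `recognise` and `announce` helpers from the sketch above, with made-up `camera` and `haptic_band` objects standing in for the real hardware APIs:

```python
import time

def assist_loop(camera, haptic_band):
    """Hypothetical pairing: the haptic band handles direction cues,
    while the cloud recogniser supplies spoken context about objects."""
    while True:
        frame_path = camera.capture()        # assumed: saves a frame and returns its file path
        label = recognise(frame_path)        # cloud lookup (see the sketch above)
        if label:
            announce(label)                  # spoken context, e.g. "bus stop"
        cue = haptic_band.next_direction()   # assumed: "left", "right" or None
        if cue:
            haptic_band.pulse(side=cue)      # vibration rather than audio for direction
        time.sleep(1)                        # avoid hammering the cloud service
```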

Now I just need to email them to make this happen!