OnTheGo Platforms Releases Beta SDK That Can Add Gestures to Google Glass Apps

Three months ago, we published a post about the SDK OnTheGo Platforms was building to add gesture recognition to Google Glass. Well, that SDK has finally hit beta and is ready for people to give it a spin.

The SDK allows developers to add the new gesture recognition interface, referred to as ARI (Augmented Reality Interface), to any application, product, or pair of smart glasses they want. ARI requires no buttons, no touchpad, and no voice commands – only hand gestures.

Right now, the SDK includes only a handful of gestures – swipe left, swipe right, open hand, and closed hand – enough to control the basic functions of smart glasses. That said, OnTheGo Platforms plans to release more gestures and motion tracking in the future.
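To picture how those four gestures might drive an app, here is a minimal sketch of routing gesture events to simple UI actions. The gesture names mirror the beta SDK's list, but the listener-style API, class names, and action labels below are invented for illustration – OnTheGo has not published its actual interfaces, so treat this purely as a hypothetical shape.

```java
import java.util.EnumMap;
import java.util.Map;

// Hypothetical sketch only: the enum values match the four gestures in the
// beta SDK, but this dispatcher API is NOT the real OnTheGo/ARI interface.
public class GestureDemo {
    enum Gesture { SWIPE_LEFT, SWIPE_RIGHT, OPEN_HAND, CLOSED_HAND }

    // Map each recognized gesture to a simple app action (here, a label).
    static final Map<Gesture, String> ACTIONS = new EnumMap<>(Gesture.class);
    static {
        ACTIONS.put(Gesture.SWIPE_LEFT,  "previous card");
        ACTIONS.put(Gesture.SWIPE_RIGHT, "next card");
        ACTIONS.put(Gesture.OPEN_HAND,   "select");
        ACTIONS.put(Gesture.CLOSED_HAND, "go back");
    }

    // In a real app, something like this would run inside the SDK's
    // recognition callback whenever the camera detects a gesture.
    static String onGesture(Gesture g) {
        return ACTIONS.getOrDefault(g, "ignored");
    }

    public static void main(String[] args) {
        System.out.println(onGesture(Gesture.SWIPE_RIGHT)); // prints "next card"
    }
}
```

The point of the enum-plus-map shape is that adding the extra gestures OnTheGo has promised would mean adding entries, not rewriting control flow.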

If you’re a developer on a budget, there’s some good and bad news about this SDK. The bad news is that it isn’t free to download and use on every app you create. Developers will have to pay a fee, which depends on the size of the app being built and the features wanted.

Here is the pricing table from the official site:

[Pricing table image]

The good news is that it is affordable and won’t burn a hole in your pocket. It stinks to hear that something as amazing as this isn’t free, but you pretty much always have to pay for the good stuff.

Some more good news is that the SDK / ARI can be used on any pair of smart glasses that runs Android OS and has a camera. So, developers aren’t necessarily limited to using this SDK with just Google Glass.

So, are you excited that you might be able to flail your arms around to browse apps in the near future? Developers, do you think you will be using this SDK in one of your apps?