Using a single low-cost camera to detect 26 skeletal points and track hand gestures in real time is the trick that uSens has seemingly mastered. Built on a relatively lightweight deep-learning algorithm, its software runs on Android smartphones, needs only a single core of processing power, and makes no external calls to a cloud compute engine.
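To make that claim concrete, here is a minimal C++ sketch of what a 26-point, per-frame, on-device hand skeleton might look like. The names here (Joint, HandFrame, detectHand) are invented for illustration and are not uSens identifiers:

```cpp
// Hypothetical sketch only -- not the actual uSens API. It illustrates what
// "26 skeletal points per hand, computed on-device every frame" could look
// like in C++.
#include <array>
#include <cstdint>
#include <optional>

struct Joint {
    float x, y, z;     // joint position in camera space
    float confidence;  // per-joint detection confidence, 0..1
};

struct HandFrame {
    std::array<Joint, 26> joints;  // the 26 tracked skeletal points
    uint64_t timestampUs;          // capture time of the source camera frame
};

// Placeholder for the on-device deep-learning inference step: it consumes a
// single RGB camera frame and, when a hand is visible, returns its skeleton.
// No network round trip is involved -- everything runs on one CPU core.
std::optional<HandFrame> detectHand(const uint8_t* rgbPixels,
                                    int width, int height);
```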
With APIs that allow integration with the Unity AR platform and direct programming via C++, uSens’ business model is focused on adding hand-tracking capabilities to a variety of applications beyond interpreting gestures in a game (as shown in the video).
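As a rough idea of what the native C++ path might look like, the sketch below shows the common shape of a hand-tracking SDK integration: initialize, register a per-frame callback, consume skeleton data. The GestureSdk namespace and its functions are assumptions for illustration, not uSens’ documented API; the HandFrame and Joint types come from the sketch above, and stub bodies are included so the example compiles as-is.

```cpp
#include <algorithm>
#include <cstdio>

namespace GestureSdk {
    using HandCallback = void (*)(const HandFrame&, void*);
    // Stubs standing in for the real SDK so the sketch builds on its own.
    bool initialize() { return true; }            // open camera, load model
    void setHandCallback(HandCallback, void*) {}  // register per-frame callback
    void shutdown() {}                            // release camera and model
}

// Example consumer: react whenever a full skeleton is tracked confidently.
void onHand(const HandFrame& frame, void*) {
    float minConf = 1.0f;
    for (const Joint& j : frame.joints)
        minConf = std::min(minConf, j.confidence);
    if (minConf > 0.8f)
        std::printf("hand tracked at t=%llu us\n",
                    static_cast<unsigned long long>(frame.timestampUs));
}

int main() {
    if (!GestureSdk::initialize()) return 1;
    GestureSdk::setHandCallback(&onHand, nullptr);
    // ... the application's own loop runs here; the SDK would invoke
    //     onHand once per camera frame with fresh joint data ...
    GestureSdk::shutdown();
}
```

A callback-per-frame design like this is typical for native tracking SDKs because it decouples the camera/inference loop from the application's rendering loop; a Unity integration would wrap the same data in C# components instead.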
uSens VP of Product and Operations Yaming Wang describes how Byton, the upstart electric and self-driving carmaker, is integrating uSens’ technology into its cars to allow gesture control of screens that would be impossible to control otherwise.*
Other applications for this touch-free interface include any sort of human-machine interaction in the public sphere, where eliminating a hands-on experience would lessen the chance of passing on germs. uSens’ SDK is currently available for beta testers.
*Updated 6/27/18 – original version referred to the wrong person.