Before proceeding further into desk research, I thought it would be valuable to start by exploring what has been done, or is currently being done, in the area of touchless interaction. As I delved deeper, I found and read about different innovations that have strived, or continue to strive, towards perfecting an alternative to traditional capacitive touchscreen interaction. I have sorted them into categories: depth cameras that are commonly used to pick up gestures, gesture detection SDKs, and solutions that combine both of these technologies.
In this blog post I will quickly introduce and summarize innovations from these three categories.
Leap Motion
Leap Motion is a device whose hardware consists of infrared cameras and whose software builds precise skeletal models of a user’s hands and fingers. It has been utilized in game design and other applications that can work with Leap Motion’s gesture sensing. The product was revolutionary, but in the end it was still viewed as a conceptual “toy” rather than a practical device that could serve as a controller and replace buttons/touchscreens, due to its failure to track movements precisely enough. Instead, it captures too much, which often leads to misinterpretation in the software, which is then unable to deliver full accuracy.
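To make the skeletal model concrete, here is a minimal sketch of how a developer reads hand data through the (now legacy) Leap Motion Python SDK v2; it assumes the `Leap` module that ships with that SDK is installed, and simply polls the latest tracking frame.

```python
# A minimal sketch: polling hand data with the legacy Leap Motion
# Python SDK (v2). Assumes the SDK's `Leap` module is on the path.
import time

import Leap


def main():
    controller = Leap.Controller()
    time.sleep(1)  # give the controller a moment to connect

    while True:
        frame = controller.frame()  # latest skeletal tracking frame
        for hand in frame.hands:
            pos = hand.palm_position  # millimetres, relative to the device
            print("%s hand at (%.0f, %.0f, %.0f) with %d fingers" % (
                "Left" if hand.is_left else "Right",
                pos.x, pos.y, pos.z, len(hand.fingers)))
        time.sleep(0.1)


if __name__ == "__main__":
    main()
```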



Myo Armband
The Myo armband is a gestural armband (duh) controller that translates arm movements into input. It’s worn on the forearm, and it translates the electrical signals from your muscles into computer input. It’s yet another combined hardware/software product that can be utilized for gesture control, and it has proved useful for people with disabilities, for example. However, the problem with the product is that it has a significant learning period before usage becomes smooth, and, as usual, the software is not perfect: it can have trouble recognizing some gestures (false positives, etc.).



Intel® RealSense™ camera
Intel®’s RealSense™ camera is a combination of hardware and software capabilities: it has a 1080p HD camera, an infrared camera, and an infrared laser projector, which together allow the camera to behave like the human eye, sensing depth and tracking human motion. It’s known for being able to create 3D depth maps of its surroundings. Companies like Crunchfish AB, for example, have taken the technology under their wing and implemented their gesture control SDK on top of the hardware.


Touchless A3D®
The Touchless A3D® software development kit by Crunchfish (yay!) is an SDK that allows users to control functions on their device with gestures. The SDK can be incorporated into AR and VR, which creates new possibilities for interaction. It requires no hardware changes, since it works on the video stream from any device with an embedded camera.
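Crunchfish’s SDK is proprietary, so purely as an illustration of the camera-only approach, here is the general shape of such a pipeline in Python with OpenCV. The `classify_gesture` function is a hypothetical stand-in for whatever recognition engine an SDK like this would provide; it is not Crunchfish’s actual API.

```python
# Illustrative only: the general shape of a camera-only gesture pipeline.
# `classify_gesture` is a hypothetical placeholder, NOT Crunchfish's API.
import cv2


def classify_gesture(frame):
    """Hypothetical recognizer: a real SDK would return e.g. 'swipe_left'."""
    return None


def main():
    cap = cv2.VideoCapture(0)  # any embedded camera, no extra hardware
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gesture = classify_gesture(frame)  # per-frame gesture detection
        if gesture is not None:
            print("detected:", gesture)
        cv2.imshow("camera", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # quit on 'q'
            break
    cap.release()
    cv2.destroyAllWindows()


if __name__ == "__main__":
    main()
```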


Azure Kinect
Microsoft’s Azure Kinect consists of a sensor SDK, a body tracking SDK, vision APIs, and a speech service SDK for its hardware product, the Azure Kinect DK. Like other camera and body-tracking SDK combos, the Kinect can be used to e.g. track products, manage inventory, and potentially power cashier-less stores (which is what I can focus on!).
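The official sensor and body-tracking SDKs are C/C++, but as a small sketch of what working with the device looks like, here is a depth-frame grab using the community pyk4a Python bindings; note that pyk4a wraps only the sensor SDK, not body tracking, and this assumes both pyk4a and the Azure Kinect Sensor SDK are installed.

```python
# A minimal sketch: grabbing one depth frame from an Azure Kinect DK via
# the community pyk4a bindings (the official SDKs are C/C++). Assumes
# pyk4a and the Azure Kinect Sensor SDK are installed.
from pyk4a import PyK4A


def main():
    k4a = PyK4A()  # default configuration
    k4a.start()
    capture = k4a.get_capture()
    if capture.depth is not None:
        # The depth image is a numpy array of distances in millimetres.
        h, w = capture.depth.shape
        print("depth frame: %dx%d, centre pixel at %d mm"
              % (w, h, capture.depth[h // 2, w // 2]))
    k4a.stop()


if __name__ == "__main__":
    main()
```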


Orbbec’s body-tracking SDK
Like Crunchfish’s Touchless A3D SDK, Orbbec’s body-tracking SDK enables computers to use 3D data from cameras to see and understand human bodies, which developers can use to create intuitive and innovative applications.


Manomotion 3D
Manomotion is another Swedish company, one that specializes in advanced software for tracking a user’s hand and finger movements with great precision using a mobile camera. The software provides a framework for real-time 3D gestural analysis.



Tap Strap 2 by Tap
The Tap Strap 2 is a portable keyboard that you can carry around and strap onto your hand. It functions as a Bluetooth keyboard and mouse, and also gives you the ability to use air gestures. However, it takes practice to fully master the experience, as it’s about as intuitive as learning the piano.


gestigon by Valeo
The German startup Gestigon specialized in developing 3D image processing software for the vehicle cabin; it was later acquired by Valeo. Gestigon’s goal is to help people interact with technology in a natural way while in vehicles, and innovative solutions like this can enhance personal comfort and safety.



Tobii REX by Tobii
Tobii is a global leader in eye tracking and gaze interaction. Tobii REX is an eye tracker that can detect the presence, attention, and focus of the user. Its sensor technology makes it possible for a computer or another device to know where a person is looking. This allows for unique insights into human behavior and facilitates natural user interfaces in a broad range of devices.
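To show what gaze data looks like from a developer’s point of view, here is a small sketch using Tobii’s tobii_research Python package, i.e. the modern Tobii Pro SDK; note my assumption of a Pro-line tracker, since the original REX shipped with Tobii’s older Gaze SDK.

```python
# A small sketch: streaming gaze data with Tobii's tobii_research package
# (Tobii Pro SDK). Assumes a supported Tobii Pro tracker is connected; the
# original REX used the older Gaze SDK, whose API differs.
import time

import tobii_research as tr


def on_gaze(gaze_data):
    # Gaze points are normalized to the display area: (0, 0) is top-left.
    print("left eye:", gaze_data["left_gaze_point_on_display_area"],
          "right eye:", gaze_data["right_gaze_point_on_display_area"])


def main():
    trackers = tr.find_all_eyetrackers()
    if not trackers:
        raise RuntimeError("no eye tracker found")
    tracker = trackers[0]
    tracker.subscribe_to(tr.EYETRACKER_GAZE_DATA, on_gaze, as_dictionary=True)
    time.sleep(5)  # stream gaze samples for five seconds
    tracker.unsubscribe_from(tr.EYETRACKER_GAZE_DATA, on_gaze)


if __name__ == "__main__":
    main()
```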



Trying the Oculus Quest for the first time
This week I also had the honor (!!) of trying the Oculus Quest; Oculus had recently added hand/gesture recognition to its newest software update. Simple interactions with a UI, like scrolling, tapping, going back a page and whatnot, have been made possible. Though the use of gesture control is still quite limited here, it shows the first step towards the impact gesture control can have on our interaction with tech. In the VR/AR world, when the hands are raised high enough for the headset to see them, instant feedback appears: a live render of your hands. As it’s a virtual reality world, your hands wouldn’t have been visible when raised in older versions of the Oculus software. In general the actions were quite intuitive, and if not, they were easy to learn. I, for example, didn’t know how to move up a page, but with Thomas guiding me from the side, I understood immediately how to make the gesture, and from then on I didn’t require further assistance with the controls. (The problem with gestures… do they take too much learning?)


