The pursuit of seamless, hands-free interaction in the rapidly developing field of wearable technology has produced groundbreaking discoveries. TongueTap, a technology that synchronizes multiple data streams to enable tongue gesture recognition for controlling head-worn devices, is a promising development. This method allows users to interact silently, without using their hands or eyes, and without the custom-made interfaces that are typically placed inside or near the mouth.
In collaboration with Microsoft Research in Redmond, Washington, USA, researchers at the Georgia Institute of Technology created the tongue gesture interface (TongueTap) by combining the sensors in two commercial off-the-shelf headsets. Both headsets contain IMUs and photoplethysmography (PPG) sensors, and one of them also includes EEG (electroencephalography), eye-tracking, and head-tracking sensors. The data from the two headsets, the Muse 2 and Reverb G2 OE devices, was synchronized using the Lab Streaming Layer (LSL), a time-synchronization system commonly used for multimodal brain-computer interfaces.
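LSL's role here is to put every device's samples on a common clock so streams recorded by different hardware can be aligned. The following is a minimal Python sketch of that idea using pylsl, the LSL Python bindings; the stream names "MuseIMU" and "ReverbPPG" are hypothetical placeholders, not names from the paper.

```python
# Minimal sketch: pulling time-aligned samples from two LSL streams.
# Assumes each headset's driver is already publishing to LSL under
# the (hypothetical) stream names used below.
from pylsl import StreamInlet, resolve_byprop

muse_streams = resolve_byprop("name", "MuseIMU", timeout=10.0)      # hypothetical name
reverb_streams = resolve_byprop("name", "ReverbPPG", timeout=10.0)  # hypothetical name
muse_inlet = StreamInlet(muse_streams[0])
reverb_inlet = StreamInlet(reverb_streams[0])

for _ in range(1000):  # pull a bounded number of samples for the demo
    imu_sample, imu_ts = muse_inlet.pull_sample(timeout=1.0)
    ppg_sample, ppg_ts = reverb_inlet.pull_sample(timeout=1.0)
    if imu_sample is None or ppg_sample is None:
        continue
    # LSL timestamps share a common clock; time_correction() gives the
    # offset to the local clock so both streams map onto one timeline.
    imu_ts += muse_inlet.time_correction()
    ppg_ts += reverb_inlet.time_correction()
    # ...buffer (imu_ts, imu_sample) and (ppg_ts, ppg_sample) here for
    # windowing and classification downstream.
```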
The team built a preprocessing pipeline that applies a 128 Hz low-pass filter in SciPy and Independent Component Analysis (ICA) to the EEG signals, while applying Principal Component Analysis (PCA) to the other sensors, each sensor independently of the others. For gesture recognition, they used a Support Vector Machine (SVM) from Scikit-Learn with a radial basis function (RBF) kernel and hyperparameters C=100 and gamma=1 to perform binary classification, determining whether a moving window of data contained a gesture or not.
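A rough Python sketch of this pipeline is shown below. The sampling rate, channel counts, window length, and placeholder data are illustrative assumptions, not values from the paper; only the filter cutoff, the ICA-for-EEG/PCA-for-the-rest split, and the SVM hyperparameters come from the article.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.decomposition import FastICA, PCA
from sklearn.svm import SVC

def lowpass(data, cutoff_hz=128.0, fs=512.0, order=4):
    """Low-pass Butterworth filter applied along the time axis (fs is assumed)."""
    b, a = butter(order, cutoff_hz / (fs / 2.0), btype="low")
    return filtfilt(b, a, data, axis=0)

rng = np.random.default_rng(0)

# EEG channels: 128 Hz low-pass, then ICA to unmix into source signals.
eeg = lowpass(rng.standard_normal((5120, 4)))  # placeholder EEG, 4 channels
eeg_src = FastICA(n_components=4, random_state=0).fit_transform(eeg)

# Each non-EEG sensor gets its own PCA, independently of the others.
imu = PCA(n_components=3).fit_transform(rng.standard_normal((5120, 9)))  # placeholder IMU

# Slice the fused signal into windows (non-overlapping here for brevity;
# the study used a moving window) and train the RBF-kernel SVM with the
# reported hyperparameters C=100 and gamma=1.
win = 256  # window length in samples (an assumption)
X = np.hstack([eeg_src, imu]).reshape(-1, win * 7)  # (20 windows, flattened features)
y = np.tile([0, 1], 10)  # placeholder labels: 1 = gesture, 0 = non-gesture
clf = SVC(kernel="rbf", C=100, gamma=1).fit(X, y)
print(clf.predict(X[:5]))
```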
They collected a large dataset for evaluating tongue gesture recognition with the help of 16 participants. The most interesting result from the study was which sensors were most effective at classifying tongue gestures. The IMU on the Muse was the single most effective sensor, achieving 80% accuracy on its own. Multimodal combinations that included the Muse IMU were even more effective, with a variety of PPG combinations achieving 94% accuracy.
Based on the sensors with the best accuracy, the researchers observed that an IMU behind the ear is a low-cost approach to detecting tongue gestures, and its position allows it to be combined with past mouth-sensing approaches. Another critical step toward making tongue gestures viable for products is a reliable, user-independent classification model. A more ecologically valid study design, with multiple sessions and mobility between environments, is also needed for the gestures to translate to more realistic settings.
TongueTap represents a big step forward toward smooth, intuitive interaction with wearable devices. Its ability to detect and classify tongue gestures using commercially available hardware paves the way for discreet, accurate, and user-friendly control of head-worn devices. The most promising application for tongue interactions is controlling AR interfaces, and the researchers plan to study the interaction further by experimenting with its use in AR headsets and comparing it to gaze-based interactions.
Arshad is an intern at MarktechPost. He is currently pursuing his Int. MSc Physics from the Indian Institute of Technology Kharagpur. Understanding things at a fundamental level leads to new discoveries, which lead to advancements in technology. He is passionate about understanding nature fundamentally with the help of tools like mathematical models, ML models, and AI.