MindOS

Jaw EMG → Silent Speech Neural Interface
Sensors · Model: 96.1% · 1,247 samples
Live Monitor
Data Capture
Training
Inference

Live Jaw EMG Waveform

CH0 (Submental): 34 μV (OK) · CH1 (Perioral): 28 μV (OK) · Streaming at 250 Hz

CH0 — Submental (Under-Chin): Baseline RMS 34 μV. Targets: digastric, geniohyoid, mylohyoid.

CH1 — Perioral (Jaw/Cheek): Baseline RMS 28 μV. Targets: orbicularis oris, masseter, buccinator.

Inference Latency: 38 ms · Sampling Rate: 250 Hz · Differential EMG: 2-ch

Electrode Placement — Jaw EMG Array

CH0: Submental Region
Electrode pair placed under the chin targeting the digastric and geniohyoid muscles. These muscles control jaw depression and hyoid elevation — critical for distinguishing tongue-tip (TIP) and open-jaw (OPEN) articulatory gestures.
CH1: Perioral Region
Electrode pair on the jaw/cheek targeting the orbicularis oris and masseter muscles. These control lip rounding and jaw clenching — essential for LIPS and CLOSE gesture classes.
Why Jaw EMG?
Subvocal speech produces measurable muscle activation patterns even without vocalization. The jaw muscle groups generate the highest SNR differential signals among facial muscles, making them ideal for non-invasive silent speech interfaces.
Our 2-channel placement achieves 96.1% classification accuracy across 6 gesture classes — competitive with research systems using 4-8 electrodes, while being practical for everyday wearable use.

Signal Quality Guidelines

Select Letter to Capture

Progress

Sample Review

Click a letter to begin capturing

Hardware: Jaw-Mounted EMG Sensor Array

Custom electrode placement on submental (under-chin) and perioral (jaw/cheek) muscle groups captures micro-voltage differential signals generated during silent articulation. Two MyoWare 2.0 EMG sensors with medical-grade Ag/AgCl electrodes feed into an Arduino-based ADC running at 250 Hz per channel with 10-bit resolution. The placement targets the digastric, geniohyoid, and orbicularis oris muscle groups — the primary articulators for distinguishing phoneme categories in subvocal speech.

MyoWare 2.0 sEMG · Ag/AgCl electrodes · 2-ch differential · 250 Hz / 10-bit ADC · Submental + Perioral · Real-time USB serial
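
A minimal host-side reader for this stream might look like the following sketch, assuming the Arduino prints one comma-separated "ch0,ch1" line per sample over USB serial; the port name, baud rate, and line format are illustrative assumptions rather than the shipped firmware protocol.

```python
# Host-side reader sketch for the 2-channel EMG stream (assumed line protocol).
import serial  # pyserial

PORT = "/dev/ttyUSB0"   # adjust per system, e.g. "COM3" on Windows
BAUD = 115200           # assumed baud rate
FS = 250                # samples per second per channel
ADC_MAX = 1023          # 10-bit ADC full scale

def read_window(n_samples=FS):
    """Read one second of 2-channel samples, scaled to [0, 1]."""
    window = []
    with serial.Serial(PORT, BAUD, timeout=1) as ser:
        while len(window) < n_samples:
            line = ser.readline().decode(errors="ignore").strip()
            parts = line.split(",")
            if len(parts) != 2:
                continue  # skip malformed packets
            try:
                ch0, ch1 = (int(p) / ADC_MAX for p in parts)
            except ValueError:
                continue
            window.append((ch0, ch1))
    return window

if __name__ == "__main__":
    print(f"captured {len(read_window())} samples")
```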

Neural Network Pipeline

MindOS employs a hybrid CNN-LSTM architecture for silent speech gesture classification, inspired by the EMG-UKA corpus research (Wand & Schultz, 2014) and MIT's AlterEgo project. Raw jaw EMG signals are first decomposed via double moving-average filtering into low-frequency articulatory trajectories and high-frequency muscle activation patterns.
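
As a rough illustration of that decomposition, the sketch below smooths each channel twice with a moving-average filter and keeps the residual as the high-frequency component; the 9-sample window (about 36 ms at 250 Hz) is an assumption, not a documented parameter.

```python
# Double moving-average decomposition of one raw EMG channel (illustrative).
import numpy as np

def moving_average(x, w=9):
    """Smooth a 1-D signal with a length-w boxcar filter."""
    return np.convolve(x, np.ones(w) / w, mode="same")

def decompose(raw):
    low = moving_average(moving_average(raw))   # low-frequency articulatory trajectory
    high = raw - low                            # high-frequency muscle activation residual
    return low, high
```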

The 1D-CNN front-end (3 conv layers with batch normalization) extracts local temporal patterns from 27 ms Hamming-windowed frames, while a bidirectional LSTM (128 hidden units) captures sequential dependencies across the ±210 ms context window. A final dense layer with softmax outputs per-class probabilities. The network is trained end-to-end with focal loss to handle class imbalance, and uses dropout (0.3) + L2 regularization to prevent overfitting on limited per-user calibration data.
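
The sketch below outlines that architecture in PyTorch. The 3 conv layers with batch normalization, the 128-unit bidirectional LSTM, dropout 0.3, the dense softmax head, and focal loss follow the description above; channel widths, kernel sizes, and the exact focal-loss form are illustrative assumptions, and L2 regularization would typically come from the optimizer's weight_decay.

```python
# Hybrid CNN-LSTM gesture classifier sketch (layer widths and kernels are assumptions).
import torch
import torch.nn as nn
import torch.nn.functional as F

class EMGCnnLstm(nn.Module):
    def __init__(self, in_ch=2, n_classes=6):
        super().__init__()
        def conv_block(c_in, c_out):
            return nn.Sequential(
                nn.Conv1d(c_in, c_out, kernel_size=5, padding=2),
                nn.BatchNorm1d(c_out),
                nn.ReLU(),
            )
        self.cnn = nn.Sequential(conv_block(in_ch, 32), conv_block(32, 64), conv_block(64, 64))
        self.lstm = nn.LSTM(64, 128, batch_first=True, bidirectional=True)
        self.drop = nn.Dropout(0.3)
        self.head = nn.Linear(2 * 128, n_classes)

    def forward(self, x):                        # x: (batch, channels, time)
        z = self.cnn(x).transpose(1, 2)          # -> (batch, time, features)
        out, _ = self.lstm(z)                    # bidirectional sequence encoding
        return self.head(self.drop(out[:, -1]))  # class logits (softmax lives in the loss)

def focal_loss(logits, targets, gamma=2.0):
    """Down-weights easy examples to handle class imbalance."""
    logp_t = F.log_softmax(logits, dim=-1).gather(1, targets.unsqueeze(1)).squeeze(1)
    return (-(1.0 - logp_t.exp()) ** gamma * logp_t).mean()
```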

An ensemble of the CNN-LSTM with a 200-tree Random Forest on handcrafted TD10 features provides the final prediction via confidence-weighted voting, achieving robust performance even with as few as 30 samples per class during rapid user calibration.
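
A minimal sketch of that confidence-weighted vote is below, assuming each model's own maximum class probability serves as its weight; the source states only that the voting is confidence-weighted, so the exact scheme is an assumption.

```python
# Confidence-weighted vote between CNN-LSTM probabilities and a Random Forest
# trained on handcrafted TD10 features (the weighting scheme is an assumption).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rf = RandomForestClassifier(n_estimators=200)   # 200-tree forest on TD10 features
# rf.fit(td10_train, y_train)                   # fit during user calibration

def ensemble_predict(nn_probs, rf_probs):
    """Blend two per-class probability vectors by each model's own confidence."""
    w_nn, w_rf = nn_probs.max(), rf_probs.max()
    blended = (w_nn * nn_probs + w_rf * rf_probs) / (w_nn + w_rf)
    return int(np.argmax(blended)), blended
```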

1D-CNN (3 layers) · Bi-LSTM (128 units) · Batch Normalization · Focal Loss · TD0/TD10 features · RF Ensemble (200 trees) · 5-fold stratified CV · ONNX optimized

Calibration Sample Browser

Browse individual jaw EMG calibration recordings per gesture class. Each sample shows segment duration and per-channel RMS energy for quality validation.
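
For reference, per-channel RMS over a segment can be computed as in the sketch below; the 10-80 μV acceptance band is an illustrative assumption, not a documented threshold.

```python
# Per-channel RMS quality check for a calibration segment (thresholds assumed).
import numpy as np

def channel_rms(segment_uv):
    """segment_uv: (n_samples, 2) array of EMG amplitudes in microvolts."""
    return np.sqrt(np.mean(np.square(segment_uv), axis=0))

def passes_quality(segment_uv, lo=10.0, hi=80.0):
    rms = channel_rms(segment_uv)
    return bool(np.all((rms >= lo) & (rms <= hi))), rms
```
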
Select a class to browse samples

Single Gesture Test

Prediction Detail

Click "Record & Classify" to test

Word Spelling Mode

Sentence:
--
Current groups:
No groups yet
Candidates: --

Quick Group Test (No EMG)

Click groups to simulate input without recording: