Augmented Tongue Ultrasound for Speech Therapy

Technological advances are raising the bar in medicine in many ways, with everything from medical microchips to cloud solutions and automation making life easier for doctors and patients alike. One new technology that is the talk of the town is a system that displays the movements of a patient’s tongue in real time, for use in speech therapy. The system, developed by researchers at the GIPSA-Lab and INRIA Grenoble Rhône-Alpes, processes movements captured by an ultrasound probe placed under the jaw to show the movement of the face and lips, as well as the tongue, palate and teeth, which could not be visualized before.

The Visual Biofeedback System

The new system, called Visual Biofeedback, will be used to help patients improve their pronunciation, as well as in speech therapy sessions and in learning foreign languages. The ability for patients to see the movement of the tongue in real time can help in a variety of settings, including the treatment of acquired speech pathologies. Although speech therapy is covered by some health plans, in other cases it can involve considerable expense. The new technique will hopefully shorten rehabilitation time and increase the efficiency of sessions. Currently, it is being tested on patients who have undergone tongue surgery.

No Longer a Guessing Game

The augmented tongue system can ease a speech therapist’s work, since most exercises consist of repetition. Thus far, therapists have had to use drawings to explain the ideal positioning of the tongue during exercises; by letting patients see the movement of their own tongue in real time, therapists can help them make the required changes and observe how different positions affect sound. The use of ultrasound is not new in therapy: researchers have placed ultrasound probes under the jaw for many years, but the resulting images were of poor quality and did not show the exact location of the tongue and teeth. With the new technology, the visual feedback obtained from the probe animates a virtual clone of the speaker, producing a true-to-life visualization of movement. The key is a new machine learning algorithm that controls the ‘talking head’, or ‘avatar’, processing the captured movements and displaying them clearly.
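The article does not describe the researchers’ algorithm in detail, but the core idea of learning a mapping from ultrasound frames to avatar control parameters can be sketched in miniature. The snippet below is a hypothetical illustration, not the actual system: it uses synthetic data and a simple least-squares linear map in place of the real machine learning model, with made-up dimensions (64 pixel features per frame, 5 avatar parameters).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data standing in for recorded sessions:
# 200 "ultrasound frames" of 64 pixel features each, paired with
# 5 avatar control parameters (e.g. tongue tip height, jaw opening).
frames = rng.normal(size=(200, 64))
true_map = rng.normal(size=(64, 5))
params = frames @ true_map + 0.01 * rng.normal(size=(200, 5))

# "Training": learn the frame -> parameter mapping by least squares.
weights, *_ = np.linalg.lstsq(frames, params, rcond=None)

# At runtime, each incoming probe frame is converted into avatar
# parameters, which drive the talking head in real time.
new_frame = rng.normal(size=(1, 64))
avatar_params = new_frame @ weights
print(avatar_params.shape)  # one parameter vector per frame
```

In the real system a far richer model would be trained on actual ultrasound recordings, but the runtime loop is the same shape: frame in, articulator parameters out, avatar updated.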

The augmented tongue ultrasound holds great promise for more accurate performance of speech therapy exercises. In addition to conducting preliminary testing on their new system, the researchers are also working on developing a new version of the talking head, which will rely not on ultrasound, but directly on the patient’s voice!