Sensor-fitted Gloves Translate Sign Language Into Speech
UCLA bioengineers have designed a glove-like device that can translate American Sign Language into English speech in real time through a smartphone app. Their research is published in the journal Nature Electronics.
“Our hope is that this opens up an easy way for people who use sign language to communicate directly with non-signers without needing someone else to translate for them,” said Jun Chen, an assistant professor of bioengineering at the UCLA Samueli School of Engineering and the principal investigator on the research. “In addition, we hope it can help more people learn sign language themselves.”
The system includes a pair of gloves with thin, stretchable sensors that run the length of each of the five fingers. These sensors, made from electrically conducting yarns, pick up hand motions and finger placements that stand for individual letters, numbers, words and phrases.
The device then turns the finger movements into electrical signals, which are sent to a dollar-coin–sized circuit board worn on the wrist. The board transmits those signals wirelessly to a smartphone that translates them into spoken words at a rate of about one word per second.
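For a concrete picture of that pipeline, the sketch below buffers a stream of five-channel sensor readings and emits roughly one word per second. The sampling rate, window length and `classify_window` stub are illustrative assumptions, not details from the study.

```python
# Minimal sketch of the sensor-to-speech pipeline described above.
# Assumptions (not from the paper): five voltage channels sampled at
# 100 Hz, a non-overlapping 1-second window per sign, and a placeholder
# classifier standing in for the trained model.
from collections import deque
from typing import Sequence

SAMPLE_RATE_HZ = 100          # assumed sampling rate
WINDOW = SAMPLE_RATE_HZ * 1   # ~1 s window, matching ~1 word/second

def classify_window(window: Sequence[Sequence[float]]) -> str:
    """Hypothetical stand-in for the trained gesture classifier."""
    # A real system would run the machine-learning model here.
    return "hello"

def run_pipeline(sample_stream):
    """Buffer five-channel samples and emit one word per full window."""
    buf = deque(maxlen=WINDOW)
    for sample in sample_stream:      # sample = (v1, v2, v3, v4, v5)
        buf.append(sample)
        if len(buf) == WINDOW:
            word = classify_window(list(buf))
            print(word)               # a phone app would speak it aloud
            buf.clear()

# Example: feed one second of dummy readings from the five finger sensors.
run_pipeline([(0.1, 0.2, 0.3, 0.4, 0.5)] * WINDOW)
```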
The researchers also added adhesive sensors to testers’ faces — in between their eyebrows and on one side of their mouths — to capture facial expressions that are a part of American Sign Language.
Previous wearable systems that translated American Sign Language were limited by bulky, heavy designs or were uncomfortable to wear, Chen said.
The device developed by the UCLA team is made from lightweight, stretchable polymers that are inexpensive yet long-lasting. Its electronic sensors are also highly flexible and cheap to produce.
In testing the device, the researchers worked with four people who are deaf and use American Sign Language. The wearers repeated each hand gesture 15 times. A custom machine-learning algorithm turned these gestures into the letters, numbers and words they represented. The system recognized 660 signs, including each letter of the alphabet and numbers 0 through 9.
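As a rough illustration of that training setup, the sketch below fits a supervised classifier to 15 repetitions of each of 660 signs. The feature layout and the choice of a k-nearest-neighbors model are placeholders; the study's custom machine-learning algorithm is not specified here.

```python
# Illustrative sketch of the training setup described above: each sign
# repeated 15 times, features extracted from the stretchable sensor
# signals, and a supervised classifier mapping gestures to labels.
# The model choice and feature layout are assumptions, not the paper's
# actual algorithm.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

N_SIGNS = 660        # vocabulary size reported in the study
N_REPS = 15          # repetitions of each gesture per wearer
N_FEATURES = 500     # assumed: flattened window of five-channel samples

rng = np.random.default_rng(0)
# Dummy data standing in for recorded sensor windows.
X = rng.normal(size=(N_SIGNS * N_REPS, N_FEATURES))
y = np.repeat(np.arange(N_SIGNS), N_REPS)

# Hold out some repetitions of every sign to check recognition accuracy.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

clf = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```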
In addition to Chen, the study's UCLA authors are co-lead author Zhihao Zhou, along with Kyle Chen, Songlin Zhang, Yihao Zhou and Weili Deng. All are members of Chen's Wearable Bioelectronics Research Group at UCLA. The other corresponding author is Jin Yang, of China's Chongqing University.
UCLA has filed for a patent on the technology. A commercial model based on this technology would require added vocabulary and an even faster translation time, Chen said.
Reference: Zhou, Z., Chen, K., Li, X., Zhang, S., Wu, Y., Zhou, Y., Meng, K., Sun, C., He, Q., Fan, W., Fan, E., Lin, Z., Tan, X., Deng, W., Yang, J., & Chen, J. (2020). Sign-to-speech translation using machine-learning-assisted stretchable sensor arrays. Nature Electronics, 1–8. https://doi.org/10.1038/s41928-020-0428-6
This article has been republished from the following materials. Note: material may have been edited for length and content. For further information, please contact the cited source.