Sign-language hack lets Amazon Alexa respond to gestures

“If voice is the future of computing, what about those who cannot speak or hear?”
That is the question posed by developer Abhishek Singh, the creator of an app that allows Amazon Alexa to respond to sign language.
Mr Singh’s project uses a camera-based system to identify gestures and interpret them as text and speech.
Future home devices should be designed to be inclusive for deaf users, the developer says.
The past few years have seen a rise in popularity of voice assistants run by Amazon, Google and Apple.
And a study, the Smart Audio Report, suggests smart speakers are being adopted in the US faster than smartphones and tablets were.
But for the deaf community, a future where devices are increasingly controlled by voice poses problems.
Speech recognition rarely copes well with the speech patterns of deaf users. And a lack of hearing presents an obvious barrier to conversing with voice-based assistants.
Mr Singh’s project offers one potential solution – rigging Amazon’s Alexa to respond in text to American Sign Language (ASL).
“If these devices are to become a central way we interact with our homes or perform tasks, then some thought needs to be given to those who cannot hear or speak,” he says.
“Seamless design needs to be inclusive in nature.”
The developer trained a machine-learning model using the platform TensorFlow, repeatedly gesturing in front of a webcam to teach the system the basics of sign language.
Once the system was able to respond to his hand movements, he connected it to Google’s text-to-speech software to read the corresponding words aloud.
The Amazon Echo then replies, and its spoken response is automatically transcribed by the computer into on-screen text for the user to read.
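Stripped to its essentials, the pipeline maps a recognised gesture to a spoken query. The toy sketch below is purely illustrative and is not Mr Singh's code (his system used a trained TensorFlow model on webcam frames): it assumes each sign has been reduced to a small feature vector, classifies it by nearest template, and looks up the text a text-to-speech engine would speak aloud to the Echo.

```python
import math

# Hypothetical "gesture signatures": each sign reduced to a short
# feature vector (e.g. normalised hand-landmark coordinates).
SIGN_TEMPLATES = {
    "weather": [0.1, 0.9, 0.3],
    "time":    [0.8, 0.2, 0.5],
    "music":   [0.4, 0.4, 0.9],
}

# Canned spoken queries for each recognised sign (illustrative only).
QUERIES = {
    "weather": "Alexa, what's the weather?",
    "time":    "Alexa, what time is it?",
    "music":   "Alexa, play some music.",
}

def classify_gesture(features):
    """Return the sign label whose template is nearest (Euclidean) to the input."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(SIGN_TEMPLATES, key=lambda label: dist(features, SIGN_TEMPLATES[label]))

def gesture_to_query(features):
    """Map a gesture feature vector to the text a TTS engine would speak."""
    return QUERIES[classify_gesture(features)]
```

In the real project the classifier is a neural network trained on many example gestures, but the overall flow is the same: classify the gesture, turn the label into text, and hand that text to a speech synthesiser.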

Source: BBC
