
Professor Laura-Ann Petitto guides an infant interacting with the prototype, which seeks to engage deaf infants and stimulate early exposure to sign language. (Image Credit: Professor Laura-Ann Petitto)

USC Viterbi researchers are helping deaf infants gain more exposure to language early in life, laying the foundation for stronger language, reading, grammar, and writing skills for the rest of their lives.

RAVE, or Robot Avatar Thermal-Enhanced Prototype, is a learning tool designed specifically for deaf infants, giving them increased exposure to sign language at a young age so they can build a strong foundation in vocabulary and grammar. To design the prototype, USC collaborated with researchers from three other universities: Gallaudet University, Yale University, and the University G. D’Annunzio of Chieti-Pescara.

Exposure to language at an early age is crucial for normal, successful language development in hearing children. By as early as 6 months old, babies learn the sounds of their native language and reach peak sensitivity for beginning to recognize and understand words. During this critical period, exposure to meaningful patterns of language shapes the infant brain, leading to a larger vocabulary, better grammar, improved reading, and stronger overall language skills.

Interestingly, this pattern also applies to visual languages such as sign language, which stimulates the same areas of the brain that spoken language does. For deaf babies, early exposure to the hand shapes and movements of sign language is therefore crucial.

However, this exposure can often be difficult to achieve, especially since the majority of deaf babies are born to hearing parents, who must learn sign language very quickly so their children are not at a disadvantage later in life.

The RAVE device consists of a digital human on a screen, several sensing devices, and a tangible, 3-D robot. The robot, an adorable and expressive machine with round blue eyes and a fluffy mohawk, attracts the hearing-impaired baby’s attention with physical movement and directs the infant toward the screen. A thermal imaging camera scans the child’s face for signs of increased interest, and face-tracking software determines when he or she is most likely to be engaged with the screen. At that point, the on-screen avatar engages the baby in conversation, producing American Sign Language along with the natural human facial expressions, posture, and body language that would accompany a normal communication experience.
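The article does not spell out how the sensing pipeline decides when to activate the avatar. As a rough, purely illustrative sketch of how such a gating loop might be structured, here is a short Python example; the sensor functions, thresholds, and Avatar methods below are hypothetical stand-ins, not RAVE’s actual code.

```python
import random
import time
from dataclasses import dataclass


def read_thermal_interest() -> float:
    """Stand-in 'interest' score (0-1); the real signal would come from thermal imaging of the face."""
    return random.uniform(0.0, 1.0)


def read_gaze_on_screen() -> bool:
    """Stand-in gaze signal; the real signal would come from face/eye-tracking software."""
    return random.random() > 0.4


@dataclass
class Avatar:
    signing: bool = False

    def start_signing(self) -> None:
        # In RAVE, the on-screen avatar would begin a signed exchange in ASL.
        self.signing = True
        print("Avatar: starting a signed exchange")

    def pause(self) -> None:
        self.signing = False
        print("Avatar: pausing, waiting for re-engagement")


def engagement_loop(avatar: Avatar, interest_threshold: float = 0.6, cycles: int = 10) -> None:
    """Gate the avatar on the fused sensor signals: sign only while the infant appears engaged."""
    for _ in range(cycles):
        engaged = read_gaze_on_screen() and read_thermal_interest() >= interest_threshold
        if engaged and not avatar.signing:
            avatar.start_signing()
        elif not engaged and avatar.signing:
            avatar.pause()
        time.sleep(0.1)  # shortened polling interval for the sketch


if __name__ == "__main__":
    engagement_loop(Avatar())
```

In the real system, the interest signal comes from thermal imaging of the infant’s face and the gaze signal from face tracking; the sketch simply shows how fusing the two could gate when the avatar begins signing.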

Two teams of researchers at USC are involved in the project. The Natural Language Dialogue Group is led by David Traum, the director for natural language research at the USC Institute for Creative Technologies (ICT) and a research professor in the Department of Computer Science at the USC Viterbi School of Engineering. The Character Animation and Simulation Group is led by Ari Shapiro, a research assistant professor in the Department of Computer Science at USC Viterbi.

A crucial ICT researcher in Traum’s Natural Language Dialogue Group is Setareh Nasihati Gilani, Ph.D. CS ’20. Nasihati Gilani spearheaded the design of how the robot and the digital avatar communicate and interact with the infant. Furthermore, she wrote the code for RAVE’s eye-tracker component, which automatically determines when the infant is engaged and looking at the virtual human.
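The article only says that the eye-tracker component determines when the infant is looking at the virtual human; one common way to make such a decision is to check whether recent gaze samples dwell on the avatar’s on-screen region. The Python sketch below illustrates that idea; the region coordinates, dwell threshold, and function names are assumptions for illustration, not the actual RAVE implementation.

```python
from typing import Iterable, Tuple

# Hypothetical screen region occupied by the virtual human, in pixels.
AVATAR_REGION = (400, 100, 900, 700)  # (x_min, y_min, x_max, y_max)


def gaze_on_avatar(point: Tuple[float, float], region=AVATAR_REGION) -> bool:
    """Return True if a single gaze sample falls inside the avatar's on-screen region."""
    x, y = point
    x_min, y_min, x_max, y_max = region
    return x_min <= x <= x_max and y_min <= y <= y_max


def is_engaged(gaze_samples: Iterable[Tuple[float, float]], dwell_fraction: float = 0.7) -> bool:
    """Treat the infant as engaged if most recent gaze samples land on the avatar."""
    samples = list(gaze_samples)
    if not samples:
        return False
    hits = sum(gaze_on_avatar(p) for p in samples)
    return hits / len(samples) >= dwell_fraction


# Example: a short window of (x, y) gaze coordinates from an eye tracker.
window = [(650, 400), (640, 390), (660, 410), (120, 50), (655, 395)]
print(is_engaged(window))  # True: 4 of 5 samples are on the avatar
```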

The Shapiro-led team created the graphics and animations behind the avatar, working with Gallaudet University to control the human-like avatar’s facial expressions, hand movements, and body language.

Laura-Ann Petitto, RAVE’s lead researcher and an educational neuroscientist at Gallaudet University, a university for the deaf and hard of hearing, began the project in 2014. The first version of the current system was completed in August 2016. The project has received funding from the National Science Foundation as well as the William Keck Foundation.

Preliminary results of experiments using the RAVE tool have proven highly successful, according to Traum. In the most recent study, all babies remained engaged with the signing avatar for over four minutes, a very long time for a 4-month-old infant.

RAVE is not designed as a replacement for the quality time infants spend communicating with parents and loved ones, but rather as a supplement for homes that may lack the means to offer constant stimulation to their children.

“We used these technologies to try to overcome previous results that indicated that while babies of this age learn from watching people, showing recordings of the same people did not lead to learning,” Traum explained. “Unlike passive TV, the RAVE actively engages the babies and reacts to their behavior.”
