How Do Deaf Children Learn Sign Language at Home?

Naomi Caselli explores the effect of early experiences on vocabulary and language processing
Naomi Caselli is an assistant professor of Deaf studies at BU Wheelock. Her research focuses on how early language experience affects vocabulary acquisition and processing in American Sign Language. She and her team are preparing to release an online ASL vocabulary test for children younger than five and have received a new grant from the National Institutes of Health to use the test to examine how children build a vocabulary in sign language when their parents are hearing and do not yet know ASL.
How has your research developed over the past few years?
When we began the first phase of this project in 2017, we focused on deaf children who were born into homes where ASL was the primary language (i.e., children with deaf parents). This subset of deaf children is rare (roughly 5% of the deaf population), but studying them helps us understand how children learn sign language when they have ample exposure to fluent ASL from birth.
As part of that study, we developed and normed an ASL vocabulary test for children younger than five, the ASL-CDI 2.0. This filled a critical gap in both research and practice, as there had been no widely accessible assessment for evaluating young children’s ASL skills. Such an assessment is critical because the other 95% of deaf children are born to hearing parents who do not yet know ASL, and so those children are at risk of delayed first-language acquisition.
How does this new project fit in with your recent work?
With our new grant, we will use the ASL-CDI 2.0 to see how children learn ASL when their parents are learning ASL alongside them. Parents of deaf children who do not know ASL must learn it while also adjusting to life with a newborn and entering a world of early intervention, speech therapy, and audiology. Children growing up in these circumstances have language learning environments unlike virtually any other child’s: there is often a delay before they begin learning their first language, and they are learning that language primarily from people who are not fluent in it.
Now we want to know how these unusual early language environments affect language acquisition. Can hearing parents become proficient in ASL quickly enough to support ASL vocabulary acquisition? Do children learn language differently if they begin learning it during childhood rather than infancy? Does early ASL vocabulary knowledge predict later language skills in ASL and English? This project will contribute to our understanding of how early language experience shapes language learning, which will in turn help families and professionals better support young deaf children.
What were some of the main challenges of this project?
When we began the project, there was no way to evaluate young children’s ASL vocabulary, so we had to develop an assessment ourselves. I am glad we did, because while the ASL-CDI 2.0 was under development, several states passed legislation requiring deaf children to be screened regularly to ensure they are successfully learning at least one language. Until now, there has not been a common assessment system that could be widely used for this purpose. With the ASL-CDI 2.0, we can help states make sure deaf children aren’t falling through the cracks or stuck with interventions that aren’t working. It will also help states learn more about which programs are (and are not) working.
What’s your favorite finding that you’ve produced and why is it your favorite?
One of my favorite projects has been the ASL-LEX database, which I developed with collaborators at San Diego State University and Tufts University. ASL-LEX is an interactive, online visualization of the ASL lexicon that now catalogues hundreds of pieces of information about nearly 3,000 ASL signs. When we started that project, I thought it would primarily be useful to other sign language researchers. Once the first version was released, it got picked up by K–12 schools that wanted to use it with their deaf students, and by ASL students who wanted to use it to learn more signs.
For example, an eighth-grade teacher might get a new deaf student in their class who does not yet know ASL. The teacher could search ASL-LEX for the most common nouns and verbs and focus vocabulary instruction on those signs. ASL-LEX has also been used by computer scientists working on sign language recognition (e.g., a Siri or Alexa for ASL). Each of these groups of users has given us feedback on what they would want to search for, and we are working to better tailor ASL-LEX to these applications.
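To make that workflow concrete, here is a minimal sketch of such a query, assuming the ASL-LEX entries have been exported to a CSV file. The file name and column names used here (sign, lexical_class, frequency) are hypothetical stand-ins; the actual ASL-LEX data may label these fields differently.

```python
import csv

# Load a hypothetical CSV export of ASL-LEX entries. The column names
# (sign, lexical_class, frequency) are illustrative assumptions; the
# real ASL-LEX data files may use different field names.
with open("asl_lex.csv", newline="", encoding="utf-8") as f:
    entries = list(csv.DictReader(f))

# Keep only nouns and verbs, then rank them by frequency rating.
nouns_and_verbs = [
    e for e in entries if e["lexical_class"] in ("Noun", "Verb")
]
nouns_and_verbs.sort(key=lambda e: float(e["frequency"]), reverse=True)

# The 50 most frequent signs make a natural starter vocabulary list.
for entry in nouns_and_verbs[:50]:
    print(entry["sign"], entry["lexical_class"], entry["frequency"])
```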
Have you tried to expand this project for learners of other sign languages in the world?
So far, we’ve collaborated with research teams in Israel and Spain to develop resources comparable to ASL-LEX for their sign languages.
What is the next big step?
For ASL-LEX, we are finishing work with software engineers from the Rafik B. Hariri Institute for Computing and Computational Science & Engineering to launch the second version of ASL-LEX, and we are beginning work to add a semantic network to ASL-LEX, which will show which signs have similar meanings.
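Conceptually, a semantic network of this kind is a graph whose nodes are signs and whose edges connect signs with related meanings. Here is a minimal sketch in Python, using invented glosses and similarity scores purely for illustration; none of these values come from ASL-LEX.

```python
# Toy semantic network: nodes are ASL sign glosses, weighted edges link
# signs with related meanings. All glosses and scores are invented for
# illustration and are not drawn from ASL-LEX.
semantic_network = {
    "HAPPY":   [("EXCITED", 0.8), ("SAD", 0.3)],
    "EXCITED": [("HAPPY", 0.8)],
    "SAD":     [("HAPPY", 0.3), ("CRY", 0.7)],
    "CRY":     [("SAD", 0.7)],
}

def related_signs(gloss, threshold=0.5):
    """Return signs whose meanings are similar to `gloss` above a cutoff."""
    return [g for g, score in semantic_network.get(gloss, []) if score >= threshold]

print(related_signs("SAD"))  # -> ['CRY'] (HAPPY falls below the cutoff)
```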
For the ASL vocabulary acquisition work, we are assembling a team of researchers and data scientists and figuring out how to execute the project remotely.