Mark Hasegawa-Johnson

Mark Hasegawa-Johnson is a professor at the University of Illinois Urbana-Champaign. He is the William L. Everitt Faculty Scholar in Electrical and Computer Engineering (ECE) and holds affiliations in the Department of Speech and Hearing Science, the Coordinated Science Laboratory, the Beckman Institute, and the Department of Computer Science. He also leads the Speech Accessibility Project, a research initiative to make voice recognition technology more useful for people with a range of diverse speech patterns and disabilities.

Hasegawa-Johnson has been on the faculty at the University of Illinois since 1999. His research addresses automatic speech recognition with a focus on the mathematization of linguistic concepts. His group has developed mathematical models of concepts from linguistics, including a rudimentary model of pre-conscious speech perception (the landmark-based speech recognizer), a model that interprets pronunciation variability by inferring how the talker planned their speech movements (tracking of tract variables from acoustics, and of gestures from tract variables), and a model that uses the stress and rhythm of natural language (prosody) to disambiguate confusable sentences. Applications of his research include:

  • Speech recognition for talkers with cerebral palsy. The automatic system, suitably constrained, outperforms a human listener.
  • Provably correct unsupervised ASR, or ASR that can be trained using speech that has no associated text transcripts.

  • Equal Accuracy Ratio regularization: Methods that reduce the error rate gaps caused by gender, race, dialect, age, education, disability and/or socioeconomic class.

  • Automatic analysis of the social interactions between infant, father, mother, and older sibling during the first eighteen months of life.
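The Equal Accuracy Ratio idea above can be illustrated with a toy penalty term. This is a hypothetical sketch, not the published method: it measures how far per-group accuracies diverge (on a log scale, so a ratio of 1 gives zero penalty), which a training procedure could then add to its loss to discourage error-rate gaps between groups.

```python
import numpy as np

def equal_accuracy_ratio_penalty(correct, groups):
    """Penalty that grows as per-group accuracies diverge.

    correct: array of 0/1 flags, 1 if utterance i was recognized correctly
    groups:  array of group labels (e.g. dialect, gender, or age band)
    Returns the largest |log(acc_a / acc_b)| over group pairs,
    which is 0 exactly when all groups have equal accuracy.
    """
    labels = np.unique(groups)
    # per-group accuracy via boolean masking
    accs = np.array([correct[groups == g].mean() for g in labels])
    # clip to avoid log(0) when a group has zero accuracy
    log_acc = np.log(np.clip(accs, 1e-8, 1.0))
    return float(log_acc.max() - log_acc.min())

# toy check: group "a" at 90% accuracy, group "b" at 60%
correct = np.array([1] * 9 + [0] * 1 + [1] * 6 + [0] * 4, dtype=float)
groups = np.array(["a"] * 10 + ["b"] * 10)
penalty = equal_accuracy_ratio_penalty(correct, groups)
# penalty = |log(0.9) - log(0.6)| = log(1.5)
```

The log-ratio form is one plausible choice among several; a squared accuracy gap or a worst-group error term would serve the same regularization goal.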

Hasegawa-Johnson is currently Senior Area Editor of the journal IEEE/ACM Transactions on Audio, Speech, and Language Processing and a member of the ISCA Diversity Committee. He has published 308 peer-reviewed journal articles, patents, and conference papers in the general area of automatic speech analysis, including machine learning models of articulatory and acoustic phonetics, prosody, dysarthria, non-speech acoustic events, audio source separation, and under-resourced languages.

Education

  • Postdoctoral fellow, University of California at Los Angeles, 1996-1999
  • Ph.D., Massachusetts Institute of Technology, 1996

  • M.S., Massachusetts Institute of Technology, 1989

Honors

  • 2023: Fellow of the International Speech Communication Association for contributions to knowledge-constrained signal generation
  • 2020: Fellow of the IEEE, for contributions to speech processing of under-resourced languages

  • 2011: Fellow of the Acoustical Society of America, for contributions to vocal tract and speech modeling

  • 2009: Senior Member of the Association for Computing Machinery

  • 2004: Member, Articulograph International Steering Committee; CLSP Workshop leader, "Landmark-Based Speech Recognition", Invited paper

  • 2004: NAACL workshop on Linguistic and Higher-Level Knowledge Sources in Speech Recognition and Understanding

  • 2003: List of faculty rated as excellent by their students

  • 2002: NSF CAREER award

  • 1998: NIH National Research Service Award







Speech Accessibility Project partners with The Matthew Foundation, Massachusetts Down Syndrome Congress

The Speech Accessibility Project has two new partners — The Matthew Foundation and the Massachusetts Down Syndrome Congress — as it continues to recruit adults with Down syndrome who live in the United States and Canada. The project also allows residents of Puerto Rico to participate.
15-Nov-2024 03:40:39 PM EST

Speech Accessibility Project expands to Canada

The Speech Accessibility Project is now recruiting Canadian adults with Parkinson’s disease, cerebral palsy, amyotrophic lateral sclerosis, Down syndrome and those who have had a stroke.
18-Oct-2024 10:55:22 AM EDT

Automatic speech recognition learned to understand people with Parkinson’s disease — by listening to them

Listening to people with Parkinson’s disease made an automatic speech recognizer 30% more accurate, according to initial findings from the Speech Accessibility Project. Speech recordings used in the study are freely available to organizations looking to improve their voice recognition devices.
27-Sep-2024 11:05:03 AM EDT

Speech Accessibility Project’s three newest partners are dedicated to people with cerebral palsy

The Speech Accessibility Project is partnering with several organizations that serve people with cerebral palsy as it recruits more participants for its speech recognition technology work. They include ADAPT Community Network, the Cerebral Palsy Foundation and CP Unlimited.
09-Jul-2024 01:05:03 PM EDT

Speech Accessibility Project now sharing recordings, data

The Speech Accessibility Project, which aims to make automatic speech recognition technology more accessible to people with speech differences and disabilities, is now sharing some of its voice recordings and related data with universities, nonprofits and companies.
22-Apr-2024 08:00:34 AM EDT

During National CP Awareness Month, a voice recognition project recruits U.S., Puerto Rican adults with cerebral palsy

The Speech Accessibility Project, which aims to train voice recognition technologies to understand people with diverse speech patterns and disabilities, is recruiting U.S. and Puerto Rican adults with cerebral palsy.
12-Mar-2024 12:05:54 PM EDT

Speech Accessibility Project begins recruiting people who have had a stroke

The Speech Accessibility Project has begun recruiting U.S. and Puerto Rican adults who have had a stroke.
02-Feb-2024 11:05:04 AM EST

Voice recognition project recruiting adults with cerebral palsy

The Speech Accessibility Project is now recruiting U.S. and Puerto Rican adults with cerebral palsy.
09-Jan-2024 12:05:09 PM EST

Speech Accessibility Project begins recruiting people with ALS

The Speech Accessibility Project has expanded its recruitment and is inviting U.S. and Puerto Rican adults living with amyotrophic lateral sclerosis to participate.
05-Jan-2024 10:05:47 AM EST

Speech Accessibility Project now recruiting adults with Down syndrome

The Speech Accessibility Project is now recruiting U.S. adults with Down syndrome. The project aims to make voice recognition technology more useful for people with diverse speech patterns and different disabilities.
09-Nov-2023 03:05:29 PM EST

"Speech technology works really well for people whose voices are homogeneous. It works less well for people who have neuromotor disorders that cause differences in their speech patterns, or for people who are speaking English as a second language or have a regional or socioeconomic dialect that’s less represented in the samples used to train the technology. A lot of my research now is trying to better understand how we can compensate for differences in speaking patterns in a way that will enable speech technology to be usable by everyone."


"A disability is not a physical fact about you. A disability is the interaction between physical differences in the way your body works and the things you’re able to do. And the things that you’re able to do are governed by how buildings are designed and how devices and organizations are created. If those buildings, devices, and organizations knew in advance that somebody with your physical abilities wanted to make use of them, they could create an accommodation that would allow you to access that building or device. What we would like to do with speech is make those accommodations standard, so that physical differences don’t exclude anyone from using any functionality that’s available."


“We are able to do this work because the Beckman Institute has fostered, over several decades, a close working relationship among scientists and engineers studying speech communication, linguistics, and artificial intelligence."


“The option to communicate and operate devices with speech is crucial for anyone interacting with technology or the digital economy today. Speech interfaces should be available to everybody, and that includes people with disabilities. This task has been difficult because it requires a lot of infrastructure, ideally the kind that can be supported by leading technology companies, so we’ve created a uniquely interdisciplinary team with expertise in linguistics, speech, AI, security, and privacy to help us meet this important challenge.”

