
The rapid advancement of artificial intelligence in recent years has created new opportunities for research at the intersection of linguistics, neuroscience, and AI. In speech recognition, for example, Transformer models and the pre-training and fine-tuning paradigm have significantly improved recognition accuracy, including the recognition of tones in connected speech. Building on my research in tone recognition, this talk introduces a novel method for speech analysis that leverages model representations, and presents three studies on model-brain alignment: learning mechanisms, speaker normalization, and the neural decoding of tones.
Speaker
Prof. YUAN Jiahong
Jiahong Yuan is a professor in the School of Humanities and Social Sciences at the University of Science and Technology of China (USTC). He received his Ph.D. in Linguistics from Cornell University in 2004. Before joining USTC, he held positions in the Department of Linguistics and the Linguistic Data Consortium at the University of Pennsylvania, the Liulishuo Silicon Valley AI Lab, and Baidu Research Institute.
His main research areas include phonetics and the integration of linguistics and artificial intelligence. He has led multiple research projects funded by the U.S. National Science Foundation, the UK Economic and Social Research Council, and the National Social Science Fund of China, among others. His current research focuses on the early detection and prediction of Alzheimer's disease through spoken language, the origin and evolution of Chinese tones investigated with machine learning methods, and model-brain alignment from the perspective of tone recognition.
For inquiries, please contact the General Office at 39433219.