Uncovering the algorithmic foundations of language learning and processing
Human language is extraordinarily complex. Nevertheless, we readily acquire language as children, when our cognitive resources are most limited, and we comprehend language as adults with striking efficiency. My research seeks to understand the mental algorithms that make this feat possible, with a particular focus on how memory and prediction mechanisms are recruited to overcome the bottlenecks of real-time language processing. In this talk, I will review results from three lines of inquiry into this question. First, using fMRI measures of naturalistic story listening, I will show evidence that memory and prediction processes are dissociable in the brain's response to language, that syntactic structure building plays a major role in everyday language comprehension, and that the neural resources responsible for structure building are largely specialized for language. Second, using diverse naturalistic reading datasets, I will show evidence that prediction is both a central concern of the human language processing system and dissociable from memory-related processes. Third, I will show computational modeling evidence that memory and prediction pressures independently encourage the discovery of phonological regularities in natural speech. Together, these results support an intricate coordination of memory and prediction abilities in language learning and comprehension. I will conclude by outlining planned directions for my future lab, which will integrate neuroimaging, behavioral methods, natural language processing, and computational modeling to study language learning and processing.