Abstract: Each individual constantly changes the way they use language, creating and reflecting larger-scale language change. These changes can be motivated by internal, cognitive factors, or by external, social factors. Existing theories of language change target one or the other motivation, but not both.
In this talk, I argue that both cognitive and social motivations can be unified under a theory of language change in which perceptual biases in the listener are central. I integrate experimentally supported perceptual biases into computational modeling and corpus analysis of two case studies, one cognitive and one social. In the cognitive case, I build an empirically grounded computational model to simulate word-frequency effects in sound change. I show that different word-frequency effects in different kinds of sound change follow from a single perceptual bias, whereby acoustically ambiguous high-frequency words are recognized more easily than low-frequency ones. In the social case, I apply novel data-science methods to a large corpus to show that the tag eh has spread across ethnic groups in New Zealand. I then use qualitative methods to show that this spread was facilitated by the de-stigmatization of indigenous Māori, in accordance with a perceptual bias that reduces the memorability of words spoken in stigmatized accents.
Taken together, these two case studies highlight how passive but powerful perceptual biases in the listener can shape language change, whether it be motivated cognitively or socially.