Abstract: Speech errors obey the phonotactic constraints of the language being spoken. An English speaker, for example, might slip and say "singing nung" for "singing nun," but would never say "singing ngun." I will describe experiments in which speakers are exposed to syllables following new artificial phonotactic constraints. The learning of the new constraints is then revealed in their speech errors. I will describe constraints that are easily learned, constraints that are never learned, and constraints that are learned only after a period of sleep. The ultimate (not yet reached) goal of the research is to link theories of consolidation in learning (e.g., the need for sleep to consolidate learning) with theories of critical periods in sound-pattern learning (e.g., the difficulty that mature speakers have learning a second language's phonology).
Speaker Bio: Gary Dell's work deals with how people produce and understand sentences, and how these processes can be modelled using neural networks. For example, his research on language production attempts to understand production errors, or "slips of the tongue." He has developed a neural network model that makes predictions about the qualitative and quantitative properties of speech errors. These predictions are tested using experimental procedures in which subjects produce words and sentences under controlled conditions. A particularly interesting aspect of the model is that it can be used to understand patterns of behavior resulting from brain damage: by changing the processing characteristics of the model, one can produce speech error patterns that are characteristic of certain types of aphasic patients.