Department of Linguistics Calendar

Linguistics Seminar Series - Stephanie Shih, Assistant Professor of Linguistics, USC Dornsife College of Letters, Arts and Sciences: "Gradience for lexically-conditioned phonology"

Event Type
Lecture
Sponsor
Department of Linguistics
Date
Sep 28, 2020, 4:00 pm
Speaker
Stephanie Shih, Assistant Professor of Linguistics, USC Dornsife College of Letters, Arts and Sciences, Los Angeles, California.
Contact
Daniel Stelzer
E-Mail
stelzer3@illinois.edu
Originating Calendar
School of Literatures, Cultures and Linguistics Calendar

Abstract: There are many approaches to modeling lexically-conditioned phonology in current formal theories, including lexically-indexed constraints and cophonologies. Nearly all of these existing approaches assume categorical membership in the lexical classes that condition differential phonotactics or phonological behaviors: for example, a lexical item is either a noun or a verb, or of one gender class or another. In this talk, I present evidence from sound-symbolic patterns that demonstrates the need for gradient membership in the lexical classes that condition phonological patterns. Case studies include cross-linguistic Pokémon names and English baseball player names and nicknames.

From these cases, I propose an implementation of Maximum Entropy Harmonic Grammar with lexically-indexed constraints and gradient symbolic activations over classes that allows us to model differences in phonological patterns over both discrete and gradient class membership. This theoretical implementation is a natural extension of the scales and gradient activations that have been shown to be necessary in recent phonological theory: sound-symbolic evidence highlights the necessity for such increased explanatory power in our phonological models. Crucially, we find gradient lexically-conditioned patterns not only in sound symbolism—where they are often most obvious—but also in what is considered "core" language (e.g., morphosyntactic classes), and allowing gradient class structures in our phonological models may ultimately make for cleaner interfaces with other parts of grammar such as morphosyntax.
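
For readers unfamiliar with the formalism, the following is a minimal sketch of the general shape of such a model, assuming a standard MaxEnt Harmonic Grammar in which a candidate's probability is proportional to the exponential of its negated weighted violation counts, and in which violations of a class-indexed constraint are scaled by the lexical item's gradient activation in that class. The constraint names, weights, and scaling scheme are illustrative assumptions, not the analysis presented in the talk.

    import math

    # Sketch of MaxEnt Harmonic Grammar with one lexically-indexed
    # constraint ('*X-class') whose violations are scaled by gradient
    # class membership. All names and weights here are hypothetical.

    def maxent_probs(candidates, weights, class_activation):
        """candidates: dict of candidate -> {constraint: violation count}.
        weights: dict of constraint -> nonnegative weight.
        class_activation: the item's gradient membership (0.0-1.0) in the
        class targeted by the indexed constraint.
        Returns candidate -> probability."""
        harmonies = {}
        for cand, violations in candidates.items():
            h = 0.0
            for constraint, v in violations.items():
                # The indexed constraint's penalty is modulated by the
                # item's activation in the class; a categorical member
                # (activation 1.0) incurs the full penalty.
                if constraint.endswith('-class'):
                    v *= class_activation
                h -= weights[constraint] * v
            harmonies[cand] = h
        z = sum(math.exp(h) for h in harmonies.values())
        return {cand: math.exp(h) / z for cand, h in harmonies.items()}

    # Two hypothetical output candidates for one input: 'faithful'
    # violates the indexed markedness constraint; 'repaired' instead
    # violates faithfulness.
    candidates = {'faithful': {'*X-class': 1, 'Faith': 0},
                  'repaired': {'*X-class': 0, 'Faith': 1}}
    weights = {'*X-class': 3.0, 'Faith': 2.0}

    for a in (1.0, 0.5, 0.0):  # full member, partial member, non-member
        print(f"activation={a}: {maxent_probs(candidates, weights, a)}")

With activation 1.0 the indexed constraint applies at full strength and the repaired candidate is preferred; at 0.0 it is inert and the faithful candidate wins; intermediate activations shift probability gradiently between the two, which is the kind of gradient lexical conditioning at issue.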
