Zoom Link: https://illinois.zoom.us/j/83020187587?pwd=cVZxK1NlU2I0M0krMk5IdE0vZGVIQT09
Reception following program
This talk presents challenges, risks, and opportunities for Natural Language Processing (NLP) applications, with an emphasis on the need for explainability in the era of ChatGPT. Examples include: machine translation of human languages, ask detection for defending against social engineering attacks, and stance detection for extracting attitudes from social media. Past, current, and future projects face several challenges: (a) brittleness of rule-based linguistic principles for large-scale processing; (b) shallowness of statistical methods and neural language models for understanding implicit information; and (c) lack of “explainability” amidst ever-increasing numbers of black-box models. A case is made for hybrid approaches that combine linguistic generalizations with statistical and neural models to handle implicitly conveyed information (e.g., beliefs and intentions), and also for the implementation of an “explainable” propositional representation that supports the ability of developers and end users to understand what is going on inside the AI system. Questions of interest range from “What is the social engineer’s underlying goal in a two-way interaction?” to “What beliefs support individuals’ attitudes regarding pandemic interventions?” to “How does targeted influence impact attitudes online?”. Such information is generally not extractable from large language models alone; moreover, such models are hampered in that they are too large for the average researcher, developer, or customer to retrain on a regular basis. Representative examples of ChatGPT output are provided to illustrate areas where more exploration is needed, particularly with respect to task-specific goals.
Professor Dorr joined the Department of Computer and Information Science and Engineering at the University of Florida in 2022, where she directs the Natural Language Processing (NLP) Research Group. Her research focuses on deep language understanding, semantics, language processing using linguistically informed machine learning models, large-scale multilingual processing, explainable artificial intelligence (AI), social computing, and detection of underlying mental states. Her recent contributions have fallen squarely in the realm of cyber-NLP, for example, responding to social engineering attacks and detecting indicators of influence. She has an affiliate appointment at the Institute for Human and Machine Cognition, is Professor Emerita at the University of Maryland, a former program manager at the Defense Advanced Research Projects Agency (DARPA), and a former president of the Association for Computational Linguistics. She is a Sloan Fellow, NSF Presidential Faculty (PECASE) Fellow, AAAI Fellow, ACL Fellow, and ACM Fellow. In 2020 she was named by DARPA to the Information Science and Technology (ISAT) Study Group. She holds a Master's degree and a Ph.D. in computer science from the Massachusetts Institute of Technology and a Bachelor's degree in computer science from Boston University.
Part of the Illinois Computer Science Speakers Series. Faculty Host: Heng Ji
Meeting ID: 830 2018 7587
If accommodation is required, please email <email@example.com> or <firstname.lastname@example.org>. Someone from our staff will contact you to discuss your specific needs.