National Center for Supercomputing Applications WordPress Master Calendar
DAIS Distinguished Alumni Seminar: Dr. Jiaming Shen, "Identifying and Incentivizing Grounded Natural Language Generation"
Event Type: Seminar/Symposium
Sponsor: Prof. Jiawei Han and Prof. Chengxiang Zhai
Location: 4124 Siebel Center and virtual (join online)
Date: Oct 17, 2025, 11:00 am
Speaker: Dr. Jiaming Shen
Contact: Allison Mette
E-Mail: agk@illinois.edu
Phone: 217-300-0256
Originating Calendar: Siebel School Speakers Calendar
Abstract: The rapid advancement of Large Language Models (LLMs) has revolutionized natural language generation, yet their propensity to "hallucinate"—generating non-factual or unfaithful content—remains a critical challenge to their reliable deployment. This talk presents a research journey aimed at building more truthful AI, charting a course from granular detection to proactive mitigation. We begin with a detailed exploration of hallucination detection in the news domain, presenting a framework for identifying hallucinations in generated news headlines. This work is extended through a fine-grained, multilingual typology of hallucinations, providing a more nuanced understanding of how models fail across different languages. Building on these diagnostic insights, we then shift to a new training-time hallucination mitigation framework; specifically, we examine how search-augmented multi-step reinforcement learning can post-train LLMs for improved internal factual consistency. The talk concludes by outlining future research directions for building fundamentally more reliable generative systems.
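For readers unfamiliar with the detection side of this problem, the sketch below illustrates a generic entailment-based baseline for flagging possibly hallucinated headlines: score whether the source article entails the generated headline with an off-the-shelf NLI model, and flag low-scoring headlines. This is not the framework presented in the talk; the model checkpoint and decision threshold are assumptions made purely for illustration.

# Illustrative sketch only -- NOT the framework from this talk.
# The NLI checkpoint and threshold below are assumptions for the example.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "microsoft/deberta-large-mnli"  # assumed public NLI checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

# Look up the "entailment" class index from the model config rather than hardcoding it.
ENTAIL_IDX = next(i for i, label in model.config.id2label.items()
                  if label.lower().startswith("entail"))

def entailment_score(article: str, headline: str) -> float:
    """P(article entails headline); low values flag a possibly hallucinated headline."""
    inputs = tokenizer(article, headline, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = torch.softmax(model(**inputs).logits, dim=-1)[0]
    return probs[ENTAIL_IDX].item()

article = "The city council approved a new bike lane on Main Street on Tuesday."
headline = "Council bans all cars from Main Street."
score = entailment_score(article, headline)
print(f"entailment score = {score:.3f}")
if score < 0.5:  # threshold is an assumption, not from the talk
    print("Headline is likely not grounded in the source article.")

The underlying idea is simply that a headline not supported (entailed) by its source article is a candidate hallucination; the fine-grained multilingual typology and the search-augmented reinforcement-learning mitigation described in the abstract go well beyond this baseline.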
Bio: Jiaming Shen is a senior research scientist at Google DeepMind, specializing in natural language processing, data mining, and machine learning. His research focuses on assisting humans with LLM agents for knowledge acquisition, decision making, and creative thinking. His work has been recognized with several awards, including the ACL Outstanding Paper Honorable Mention in 2023, the Yunni & Maxine Pao Memorial Fellowship in 2019, and the Brian Totty Graduate Fellowship in 2016. He earned his Ph.D. in Computer Science from the University of Illinois at Urbana-Champaign in 2021.