Research Seminars @ Illinois

View Full Calendar

Tailored for undergraduate researchers, this calendar is a curated list of research seminars at the University of Illinois. Explore the diverse world of research and expand your knowledge through engaging sessions designed to inspire and enlighten.

To have your events added or removed from this calendar, please contact OUR at ugresearch@illinois.edu

ISE Graduate Seminar Series - Yunzong Xu

Event Type
Seminar/Symposium
Sponsor
ISE Graduate Programs
Location
2310 EVRT - 1406 W Green St, Urbana IL 61801
Date
Sep 13, 2024, 10:00 - 10:50 am
Originating Calendar
ISE Seminar Calendar

Offline Reinforcement Learning: Fundamental Barriers for Value Function Approximation

Yunzong Xu
Assistant Professor, Industrial & Enterprise Systems Engineering
University of Illinois at Urbana-Champaign 

Abstract: We consider the offline reinforcement learning problem, where the aim is to learn a decision-making policy from logged data. Offline RL -- particularly when coupled with (value) function approximation to allow for generalization in large or continuous state spaces -- is becoming increasingly relevant in practice, because it avoids costly and time-consuming online data collection and is well suited to safety-critical domains. Existing sample complexity guarantees for offline value function approximation methods typically require both (1) distributional assumptions (i.e., good coverage) and (2) representational assumptions (i.e., the ability to represent some or all Q-value functions) stronger than what is required for supervised learning. However, despite decades of research, the necessity of these conditions and the fundamental limits of offline RL are not well understood. This led Chen and Jiang (2019) to conjecture that concentrability (the most standard notion of coverage) and realizability (the weakest representation condition) alone are not sufficient for sample-efficient offline RL. We resolve this conjecture in the positive by proving that, in general, even if both concentrability and realizability are satisfied, any algorithm requires sample complexity polynomial in the size of the state space to learn a non-trivial policy.
Our results show that sample-efficient offline reinforcement learning requires either restrictive coverage conditions or representation conditions that go beyond supervised learning, and they highlight a phenomenon called over-coverage, which serves as a fundamental barrier for offline value function approximation methods. A consequence of our results for reinforcement learning with linear function approximation is that the separation between online and offline RL can be arbitrarily large, even in constant dimension.
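For attendees unfamiliar with the setup, a minimal sketch of one standard offline value function approximation method (fitted Q-iteration on logged data) may help ground the abstract's terminology. The toy MDP, dataset, and all numbers below are made up purely for illustration; they are not from the talk.

```python
import numpy as np

# Hypothetical toy example: tabular fitted Q-iteration on logged transitions
# from a 2-state, 2-action MDP. All values are illustrative, not from the talk.

# Logged dataset: (state, action, reward, next_state) tuples.
# The learner never interacts with the environment, only with this data.
dataset = [
    (0, 0, 0.0, 1),
    (0, 1, 1.0, 0),
    (1, 0, 0.0, 0),
    (1, 1, 2.0, 1),
]

n_states, n_actions, gamma = 2, 2, 0.9
Q = np.zeros((n_states, n_actions))

# Fitted Q-iteration: repeatedly regress Q(s, a) onto the Bellman backup
# r + gamma * max_a' Q(s', a'), computed from the logged data alone.
for _ in range(200):
    Q_new = Q.copy()
    for s, a, r, s_next in dataset:
        Q_new[s, a] = r + gamma * Q[s_next].max()
    Q = Q_new

policy = Q.argmax(axis=1)  # greedy policy from the learned Q-values
print(policy)
```

In this tabular case every (state, action) pair is covered by the data, so the iteration converges to the true Q-values; the talk's results concern what goes wrong when large state spaces force function approximation and coverage assumptions like concentrability no longer suffice.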

Biography: Yunzong Xu joined the University of Illinois Urbana-Champaign in 2024 as an assistant professor in the Department of Industrial and Enterprise Systems Engineering. He is interested in the foundations of machine learning and decision making; their connections to operations research, optimization, and probability; and their applications to the social, economic, and management sciences. Before joining Illinois, he was a postdoctoral researcher in the Machine Learning & AI group at Microsoft Research. He received his Ph.D. from MIT in 2023 and his B.S. from Tsinghua University in 2018.
