We are excited to feature our own UIUC student speakers, Alexander Smith and Neeloy Chakraborty, in person for this week's Robotics Seminar! Join us this Friday, October 31st, at 1 pm CT in CSL Studio 1232 to learn about their work!
Talk 1
Title: Medical and Surgical Robotic Systems for Increased Access to Healthcare
Abstract: Diagnostic and interventional clinical systems require novel combinations of technology to meet clinical needs. In this talk, we'll explore two medical problems, access to vision screening and access to traumatic brain injury treatment, and how we might approach them with novel systems. First, a fully automated retinal imaging system is shown to produce screening-quality retinal images that can be used to assess the health of the eye, serving as a foundation upon which future eye-examination techniques can be built. Second, a novel and compact image-guided surgery system is designed to improve the workflow for bedside neurosurgical guidance.
Bio: Alexander Smith is a sixth-year MD-PhD student at the Carle Illinois College of Medicine and the Siebel School of Computing and Data Science at the University of Illinois at Urbana-Champaign. He received his B.S. degree with a dual major in Biomedical Engineering and Computer Science from Saint Louis University in 2019. From 2019 to 2020, he worked as a researcher at Johns Hopkins University in the Carnegie Center for Surgical Innovation.
Talk 2
Title: Adaptive Stress Testing Black-Box LLM Planners
Abstract: Large language models (LLMs) have recently demonstrated success in generalizing across decision-making tasks including planning, control, and prediction, but their tendency to hallucinate unsafe and undesired outputs poses risks. We argue that detecting such failures is necessary, especially in safety-critical scenarios. Existing methods for black-box models often detect hallucinations by identifying inconsistencies across multiple samples. Many of these approaches introduce prompt perturbations, such as randomizing detail order or generating adversarial inputs, with the intuition that a confident model should produce stable outputs. We first perform a manual case study showing that other forms of perturbation (e.g., adding noise, removing sensor details) cause LLMs to hallucinate in a multi-agent driving environment. We then propose a novel method for efficiently searching the space of prompt perturbations using adaptive stress testing (AST) with Monte Carlo tree search (MCTS). Our AST formulation enables discovery of scenarios and prompts that cause language models to act with high uncertainty or even crash. By generating MCTS prompt perturbation trees across diverse scenarios, we show through extensive experiments that offline analyses can be used at runtime to automatically generate prompts that influence model uncertainty, and to inform real-time trust assessments of an LLM. We further characterize LLMs deployed as planners in a single-agent lunar lander environment and in a multi-agent robot crowd navigation simulation. Overall, ours is one of the first hallucination intervention algorithms to pave a path towards rigorous characterization of black-box LLM planners.
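For attendees curious what an MCTS-driven search over prompt perturbations might look like in code, below is a minimal, self-contained Python sketch. It is not the speakers' implementation: the perturbation set, the query_llm_uncertainty placeholder, and the uncertainty-as-reward definition are illustrative assumptions standing in for a real black-box LLM planner.

```python
import math
import random

# Hypothetical perturbation actions; the actual perturbation set used in the talk is not specified here.
PERTURBATIONS = ["add_noise", "remove_sensor_detail", "shuffle_detail_order", "paraphrase"]

def query_llm_uncertainty(prompt):
    """Placeholder for querying a black-box LLM planner and scoring its uncertainty.
    A real system might sample the planner several times and measure inconsistency."""
    random.seed(hash(prompt) % (2 ** 32))
    return random.random()

def apply_perturbation(prompt, action):
    """Toy perturbation: tag the prompt; a real system would edit the scenario text."""
    return f"{prompt} [{action}]"

class Node:
    def __init__(self, prompt, parent=None):
        self.prompt = prompt
        self.parent = parent
        self.children = {}          # action -> Node
        self.visits = 0
        self.total_reward = 0.0

    def ucb_score(self, action, c=1.4):
        """Upper-confidence bound used to pick among already-expanded children."""
        child = self.children[action]
        if child.visits == 0:
            return float("inf")
        exploit = child.total_reward / child.visits
        explore = c * math.sqrt(math.log(self.visits + 1) / child.visits)
        return exploit + explore

def mcts_stress_test(base_prompt, iterations=200, depth=3):
    """Search for a chain of perturbations that maximizes planner uncertainty."""
    root = Node(base_prompt)
    best_prompt, best_reward = base_prompt, 0.0
    for _ in range(iterations):
        node = root
        # Selection / expansion: walk down the tree, expanding untried perturbations first.
        for _ in range(depth):
            untried = [a for a in PERTURBATIONS if a not in node.children]
            if untried:
                action = random.choice(untried)
                node.children[action] = Node(apply_perturbation(node.prompt, action), node)
                node = node.children[action]
                break
            action = max(PERTURBATIONS, key=node.ucb_score)
            node = node.children[action]
        # Simulation: reward is the planner's uncertainty on the perturbed prompt.
        reward = query_llm_uncertainty(node.prompt)
        if reward > best_reward:
            best_prompt, best_reward = node.prompt, reward
        # Backpropagation: update visit counts and rewards along the path.
        while node is not None:
            node.visits += 1
            node.total_reward += reward
            node = node.parent
    return best_prompt, best_reward

if __name__ == "__main__":
    prompt, reward = mcts_stress_test("Navigate the robot through the crowded hallway.")
    print(f"Most stressing prompt found: {prompt!r} (uncertainty {reward:.2f})")
```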
Bio: Neeloy Chakraborty is a fifth-year PhD candidate at the University of Illinois working in the Human-Centered Autonomy Lab. Prior to that, he completed his M.S. in 2023 and B.S. in 2021 at the University of Illinois in electrical and computer engineering. His primary research revolves around developing reliable technologies to enable safer interactions between humans and automation. Throughout his time at Illinois, he has studied modular approaches for instruction-following embodied AI robots, developed real-time video generation models to allow users to remotely control robots under challenging constraints, designed scalable multi-modal algorithms to detect anomalies in multi-agent settings, and identified hallucinations in large language model generations to proactively avoid failures in decision-making. Neeloy has also conducted research through internships at Dolby Labs, Ford, and Brunswick, tackling problems in generative modeling and reinforcement learning.
We will meet in CSL Studio 1232 — if you do not have card-swipe access to CSL Studio, please enter through the center doors on the south face of the parking garage.
Looking forward to seeing you!
Robotics Seminar Team