Siebel School Speakers Calendar

FM/SE seminar

Event Type
Seminar/Symposium
Sponsor
PL/FM/SE
Location
0222 Siebel Center and Zoom
Date
Dec 6, 2024, 2:00 - 3:00 pm
Speaker
Xinyi Wei, UIUC, and Isha Chaudhary, UIUC
Contact
Kristin Irle
E-Mail
kirle@illinois.edu
Phone
217-244-0229
Talk 1 (2:00 - 2:30 pm):
Title: Fine-grained Distributed Data Plane Verification with Intent-based Slicing
Speaker: Xinyi Wei, UIUC

Abstract: Data plane verification has grown into a powerful tool for ensuring network correctness. However, existing methods with monolithic models have memory requirements tied to network size, and the existing approach to scaling out is too limited in expressiveness to capture practical network features. In this paper, we describe Scylla, a general data plane verifier that provides fine-grained scale-out without the need for a monolithic network model. Scylla creates models for what we call intent-based slices, each constructed at rule-level granularity with only enough state to verify a given set of intents. The sliced models are retained and incrementally updated in memory across a distributed compute cluster in response to network updates. Our experiments show that Scylla makes the scaling problem more granular -- tied to the size of the intent-based slices rather than that of the overall network. This enables Scylla to verify large, complex networks in units of work that are significantly smaller (in both memory and time) than those of past techniques, enabling fast scale-out verification with minimal resource requirements.
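To make the slicing idea concrete, here is a minimal sketch of constructing and checking an intent-based slice. It is illustrative only, not Scylla's implementation: the Rule and Intent types, the exact-prefix relevance test, and the reachability check are hypothetical simplifications of what a real data plane verifier would use.

```python
# Illustrative sketch of intent-based slicing (not Scylla's actual code).
# Rule/Intent and the relevance test are hypothetical simplifications:
# a "rule" forwards one destination prefix at one device, and an
# "intent" asks whether that prefix can travel from src to dst.

from dataclasses import dataclass
from collections import defaultdict

@dataclass(frozen=True)
class Rule:
    device: str      # switch/router holding the rule
    prefix: str      # destination prefix it matches (exact match, for simplicity)
    next_hop: str    # device the packet is forwarded to

@dataclass(frozen=True)
class Intent:
    src: str         # where traffic enters
    dst: str         # where it should be deliverable
    prefix: str      # destination prefix the intent is about

def build_slice(rules: list[Rule], intent: Intent) -> set[Rule]:
    """Collect only the rules matching the intent's prefix that are
    reachable from its source -- the rule-level slice the abstract describes."""
    by_device = defaultdict(list)
    for r in rules:
        if r.prefix == intent.prefix:        # keep only rules relevant to this intent
            by_device[r.device].append(r)

    slice_, frontier, seen = set(), [intent.src], {intent.src}
    while frontier:
        device = frontier.pop()
        for r in by_device[device]:
            slice_.add(r)
            if r.next_hop not in seen:
                seen.add(r.next_hop)
                frontier.append(r.next_hop)
    return slice_

def verify_reachability(slice_: set[Rule], intent: Intent) -> bool:
    """Check the intent against the slice alone: does some rule in the
    slice deliver the prefix to the intended destination?"""
    return any(r.next_hop == intent.dst for r in slice_)

rules = [
    Rule("s1", "10.0.0.0/24", "s2"),
    Rule("s2", "10.0.0.0/24", "s3"),
    Rule("s1", "192.168.0.0/16", "s4"),  # irrelevant to the intent: excluded from the slice
]
intent = Intent(src="s1", dst="s3", prefix="10.0.0.0/24")
s = build_slice(rules, intent)
print(len(s), verify_reachability(s, intent))  # 2 True
```

Even in this toy, the property the experiments highlight is visible: the slice for the intent contains two rules no matter how many unrelated rules the network holds, so verification cost tracks slice size rather than network size.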

Talk 2 (2:30 - 3:00 pm):
Title: Quantitative Certification of Bias in Large Language Models
Speaker: Isha Chaudhary, UIUC

Abstract: Large Language Models (LLMs) can produce biased responses that cause representational harms. However, conventional studies are insufficient to thoroughly evaluate LLM bias, as they cannot scale to a large number of inputs and provide no guarantees. We therefore propose the first framework, QuaCer-B (Quantitative Certification of Bias), that certifies LLMs for bias over distributions of prompts. A certificate consists of high-confidence bounds on the probability of unbiased LLM responses for any set of prompts mentioning various demographic groups, sampled from a distribution. We illustrate bias certification for distributions of prompts created by applying varying prefixes, drawn from a prefix distribution, to a given set of prompts. We consider prefix distributions over random token sequences, mixtures of manual jailbreaks, and jailbreaks in the LLM's embedding space. We obtain non-trivial certified bounds on the probability of unbiased responses of SOTA LLMs, exposing their vulnerabilities over distributions of prompts generated from computationally inexpensive distributions of prefixes.
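To illustrate what such a certificate could look like, below is a minimal sketch that bounds the probability of an unbiased response from i.i.d. sampled prompts using the standard Clopper-Pearson binomial interval. Treating this as QuaCer-B's exact construction is an assumption, and sample_prompt, query_llm, and is_unbiased are hypothetical stand-ins for the prompt distribution, the model under test, and the bias judge.

```python
# Illustrative sketch of certifying an LLM's unbiased-response probability
# from sampled prompts. Clopper-Pearson is a standard exact binomial bound;
# assuming it matches QuaCer-B's construction is a simplification, and the
# three callables below are hypothetical stand-ins, not the paper's code.

import random
from scipy.stats import beta

def clopper_pearson(successes: int, trials: int, confidence: float = 0.95):
    """Exact two-sided binomial confidence interval for the success probability."""
    alpha = 1.0 - confidence
    lo = beta.ppf(alpha / 2, successes, trials - successes + 1) if successes > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, successes + 1, trials - successes) if successes < trials else 1.0
    return lo, hi

def certify_bias(sample_prompt, query_llm, is_unbiased, n: int = 500, confidence: float = 0.95):
    """Sample prompts i.i.d. from the distribution (e.g., random prefixes
    applied to seed prompts), judge each response, and bound the
    probability of an unbiased response at the given confidence level."""
    unbiased = sum(is_unbiased(query_llm(sample_prompt())) for _ in range(n))
    return clopper_pearson(unbiased, n, confidence)

# Toy usage with stand-in components (no real LLM call):
seeds = ["Describe a typical nurse.", "Describe a typical engineer."]
sample_prompt = lambda: random.choice(seeds) + " " + "x" * random.randint(0, 8)  # crude prefix-distribution stand-in
query_llm = lambda p: p                           # stand-in for the model under test
is_unbiased = lambda resp: random.random() < 0.9  # stand-in judge: flags ~10% of responses as biased
lo, hi = certify_bias(sample_prompt, query_llm, is_unbiased)
print(f"P(unbiased) in [{lo:.3f}, {hi:.3f}] with 95% confidence")
```

The resulting interval is the kind of high-confidence bound the abstract describes: with 95% confidence, the model's true probability of an unbiased response over the sampled prompt distribution lies within it.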
We look forward to your participation!