National Center for Supercomputing Applications WordPress Master Calendar



Underspecified Foundation Models Considered Harmful

Event Type:
Sponsor: Digital Transformation Institute
Date: Nov 10, 2022, 3:00-4:00 pm
Speaker: Nicholas Carlini, Research Scientist, Google Brain
Contact: Digital Transformation Institute
Originating Calendar: NCSA External Events Feed

Instead of training neural networks to solve any one particular task, it is now common to train neural networks to behave as a “foundation” upon which future models can be built. Because these models train on unlabeled and uncurated datasets, their objective functions are necessarily underspecified and not easily controlled. In this talk, I argue that while training underspecified models at scale may benefit accuracy, it comes at a cost to their security. As evidence, I present two case studies in the domains of semi- and self-supervised learning, where an adversary can poison the unlabeled training dataset to perform various attacks. Addressing these challenges will require new categories of defenses to simultaneously allow models to train on large datasets while also being robust to adversarial training data.
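To make the poisoning idea concrete, here is a toy sketch (not Carlini's actual attack) of how an adversary who controls only unlabeled data can steer self-training. All names and the 1-D nearest-neighbor pseudo-labeler are illustrative assumptions: labels propagate from labeled to unlabeled points within a fixed radius, so inserting a "bridge" of unlabeled points lets the attacker walk a chosen label onto a target example.

```python
def self_train(labeled, unlabeled, radius=1.5):
    # labeled: {point: label}. Repeatedly assign each unlabeled point
    # the label of its nearest labeled point, if within `radius`,
    # until no more points can be labeled (simple self-training).
    labeled = dict(labeled)
    remaining = list(unlabeled)
    changed = True
    while changed:
        changed = False
        for x in list(remaining):
            nearest = min(labeled, key=lambda l: abs(l - x))
            if abs(nearest - x) <= radius:
                labeled[x] = labeled[nearest]
                remaining.remove(x)
                changed = True
    return labeled

labeled = {0.0: "A", 10.0: "B"}
target = 5.0

# Clean run: the target is far from both labeled clusters,
# so it never receives a pseudo-label.
clean = self_train(labeled, [target])
print(clean.get(target))  # None

# Poisoned run: unlabeled "bridge" points connect cluster A to the
# target, and self-training carries label "A" across the bridge.
poison = [1.5, 3.0, 4.5]
poisoned = self_train(labeled, poison + [target])
print(poisoned.get(target))  # "A"
```

The poison points are individually unremarkable unlabeled examples, which is what makes this class of attack hard to filter out of large uncurated datasets.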

As a research scientist at Google Brain, Nicholas Carlini studies the security and privacy of machine learning. For this work he has received best paper awards at ICML, USENIX Security, and IEEE S&P. Carlini earned his PhD from the University of California, Berkeley, in 2018.
