Underspecified Foundation Models Considered Harmful

Event Type
Seminar/Symposium
Sponsor
C3.ai Digital Transformation Institute
Date
Nov 10, 2022, 3:00 - 4:00 pm
Speaker
Nicholas Carlini, Research Scientist, Google Brain
Registration
Required
Contact
C3.ai Digital Transformation Institute

Instead of training neural networks to solve any one particular task, it is now common to train them as a “foundation” upon which future models can be built. Because these models train on unlabeled and uncurated datasets, their objective functions are necessarily underspecified and not easily controlled. In this talk, I argue that while training underspecified models at scale may improve accuracy, it comes at a cost to their security. As evidence, I present two case studies in the domains of semi- and self-supervised learning, where an adversary can poison the unlabeled training dataset to mount various attacks. Addressing these challenges will require new categories of defenses that simultaneously allow models to train on large datasets while remaining robust to adversarial training data.
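To make the threat model concrete, the toy sketch below illustrates one way an adversary who controls only unlabeled data can subvert a semi-supervised learner. It is not code from the talk: the naive nearest-neighbor self-training routine and all names (make_poison_chain, self_train, radius, etc.) are illustrative assumptions. The idea is that a chain of interpolated unlabeled points bridges a labeled source example to an arbitrary target, so pseudo-labels propagate along the chain until the target inherits the source's label.

    import numpy as np

    def make_poison_chain(source_x, target_x, n_steps=20):
        # Interpolate from a labeled "source" example toward an arbitrary
        # "target" input, yielding a chain of unlabeled poison points.
        alphas = np.linspace(0.0, 1.0, n_steps)
        return [(1 - a) * source_x + a * target_x for a in alphas]

    def self_train(labeled_x, labeled_y, unlabeled_x, radius=0.6, rounds=30):
        # Naive self-training: repeatedly pseudo-label any unlabeled point
        # within `radius` of an already-labeled point, copying the nearest
        # label. (A stand-in for a real semi-supervised learner.)
        xs, ys = list(labeled_x), list(labeled_y)
        pool = list(unlabeled_x)
        for _ in range(rounds):
            adopted = set()
            for i, u in enumerate(pool):
                dists = [np.linalg.norm(u - x) for x in xs]
                j = int(np.argmin(dists))
                if dists[j] < radius:
                    xs.append(u)
                    ys.append(ys[j])
                    adopted.add(i)
            if not adopted:
                break
            pool = [u for i, u in enumerate(pool) if i not in adopted]
        return xs, ys

    labeled_x = [np.array([0.0, 0.0]), np.array([10.0, 0.0])]
    labeled_y = ["class_A", "class_B"]
    target = np.array([10.0, 5.0])  # sits right next to the class_B example

    # Poison the unlabeled pool with a chain from the class_A example
    # to the target; each step is smaller than the adoption radius.
    poison = make_poison_chain(labeled_x[0], target)
    xs, ys = self_train(labeled_x, labeled_y, poison)

    # Pseudo-labels walk down the chain, so the target is labeled class_A
    # even though its nearest labeled neighbor is class_B.
    print(ys[-1])  # -> class_A

The sketch also hints at why defending is hard: every individual poison point looks like a plausible unlabeled example, so filtering the pool without also discarding useful data is nontrivial.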

As a research scientist at Google Brain, Nicholas Carlini studies the security and privacy of machine learning. For this work, he has received best paper awards at ICML, USENIX Security, and IEEE S&P. Carlini earned his PhD from the University of California, Berkeley, in 2018.
