Center for Artificial Intelligence Innovation


CAII Fall Seminar Series: "The Geometry of Data: An Interpretation of the Challenges of Training Models at Scale"

Event Type
Seminar/Symposium
Sponsor
Center for Artificial Intelligence Innovation
Virtual
Date
Oct 19, 2020, 11:00 am
Speaker
Aaron Saxton, Data Engineer, National Center for Supercomputing Applications
Registration

Aaron Saxton, Data Engineer at the National Center for Supercomputing Applications, will present "The Geometry of Data: An Interpretation of the Challenges of Training Models at Scale" on Monday, October 19 at 11:00 a.m.

Abstract: There are many ways to parallelize computational workflows on HPC systems like Blue Waters, and they all come with risks and benefits. The holy grail of parallelizing ML workflows is distributed training. In this process, the model is copied, the data is partitioned, and both are loaded onto multiple nodes to increase the scale at which we can train. In this talk, we start with the hypothesis that all data can be embedded on some lower-dimensional manifold; this is known as the geometric interpretation of data. Since gradient methods are the primary algorithms for optimizing ML models on training data, the geometric interpretation lets us visualize and gain insight into the challenges optimization faces. In particular, we will explore how these challenges manifest when training at scale. There is no good general theory of ML training, but this talk will give practitioners some intuitive tools to improve their models and push the limits of scaling.
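For readers unfamiliar with the data-parallel scheme the abstract describes (copy the model, partition the data, average gradients across nodes), the following is a minimal illustrative sketch, not material from the talk. The "nodes" are simulated in a single process, and the toy least-squares model, shard counts, and learning rate are assumptions chosen for illustration.

```python
import numpy as np

# Sketch of data-parallel SGD on a toy linear least-squares problem.
# Each simulated node holds a full copy of the weights and a disjoint
# shard of the data; per-shard gradients are averaged every step, so
# all replicas apply the same update (the role an all-reduce plays on
# a real cluster).

rng = np.random.default_rng(0)
X = rng.normal(size=(1024, 8))                 # full dataset
w_true = rng.normal(size=8)
y = X @ w_true + 0.01 * rng.normal(size=1024)

n_nodes = 4
shards = np.array_split(np.arange(len(X)), n_nodes)   # partition the data
w = np.zeros(8)                                        # replicated model
lr = 0.1

for step in range(200):
    # Each "node" computes a gradient on its own shard.
    grads = []
    for idx in shards:
        Xi, yi = X[idx], y[idx]
        grads.append(2 * Xi.T @ (Xi @ w - yi) / len(idx))
    # Average the gradients and apply the identical update everywhere.
    w -= lr * np.mean(grads, axis=0)

print("parameter error:", np.linalg.norm(w - w_true))
```

Scaling this scheme to many nodes is where the challenges discussed in the talk arise: larger effective batch sizes change the geometry of the optimization problem the gradient steps traverse.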

Register to attend this webinar.

Seminar Zoom link.
