Grainger College of Engineering Seminars & Speakers

Special Seminar: Rainer Engelken, "Harnessing Neural Dynamics: Principled Algorithms for Efficient AI and Large-Scale Computational Neuroscience"

Event Type
Seminar/Symposium
Sponsor
Siebel School of Computing and Data Science
Virtual
Date
Apr 16, 2025   10:00 am  
Originating Calendar
Siebel School Special Seminar Series

Zoom: https://illinois.zoom.us/j/87649116180?pwd=AdizgS8nu1IxjsU3jD8XHfjhItx3Tg.1

Abstract: 
Modern AI systems exhibit remarkable capabilities but face challenges in efficiency, robustness, and trainability, consuming vast resources unlike the highly efficient biological brain. Concurrently, neuroscience generates massive datasets whose complex dynamics remain challenging to interpret. This talk presents a unified approach, leveraging Dynamical Systems Theory to extract fundamental computational principles from neural dynamics, leading to novel algorithms for AI and powerful tools for Neuro-Informatics.

First, I address the critical problem of training stability in deep and recurrent networks. I introduce Gradient Flossing, a novel algorithm that improves training by dynamically regularizing the network's Lyapunov exponents—key indicators of trajectory stability and chaos. By controlling these exponents based on analytical gradients, Gradient Flossing mitigates exploding/vanishing gradients and improves the gradient's condition number, demonstrably enhancing performance on tasks requiring long-range dependencies.
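To make the role of Lyapunov exponents concrete, here is a minimal sketch of how they are typically estimated for a recurrent map, by evolving an orthonormal frame of perturbation directions and re-orthonormalizing with QR at each step. This is only the standard estimation procedure, not the Gradient Flossing algorithm itself; the network form (h → tanh(Wh)), parameter names, and gain values below are illustrative assumptions. Gradient Flossing's idea, per the abstract, is to regularize such exponents toward values that keep gradients well-conditioned.

```python
import numpy as np

def lyapunov_spectrum(W, n_exp=3, n_steps=500, seed=0):
    """Estimate the leading Lyapunov exponents of the map h -> tanh(W h)
    by evolving an orthonormal frame and re-orthonormalizing via QR."""
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    h = 0.1 * rng.standard_normal(n)
    Q, _ = np.linalg.qr(rng.standard_normal((n, n_exp)))
    le_sum = np.zeros(n_exp)
    for _ in range(n_steps):
        h = np.tanh(W @ h)
        J = (1.0 - h**2)[:, None] * W         # Jacobian at the current state
        Q, R = np.linalg.qr(J @ Q)
        le_sum += np.log(np.abs(np.diag(R)))  # log growth along each direction
    return le_sum / n_steps                   # time-averaged exponents

# Random networks with coupling gain g: g < 1 is contracting, g > 1 chaotic.
rng = np.random.default_rng(42)
N = 200
W_stable = 0.5 / np.sqrt(N) * rng.standard_normal((N, N))
W_chaotic = 3.0 / np.sqrt(N) * rng.standard_normal((N, N))
les_stable = lyapunov_spectrum(W_stable)    # all exponents negative
les_chaotic = lyapunov_spectrum(W_chaotic)  # leading exponent positive
```

A negative leading exponent signals contracting (vanishing-gradient-prone) dynamics, a positive one signals chaos (exploding gradients); keeping the spectrum near zero is the regime in which long-range credit assignment stays tractable.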

Second, inspired by the brain's efficiency, I explore computation in spiking neural networks (SNNs), which communicate via discrete events (spikes) and are promising for energy-efficient AI. I present SparseProp, an O(log N) event-based simulation algorithm that dramatically accelerates large-scale SNN simulations. This computational advance enables the application of dynamical systems analysis to connectome-scale models relevant for Computational Systems Neuroscience and neuromorphic computing. I further show how controlling SNN dynamics enhances computational reliability and information transmission.
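The payoff of event-based simulation can be illustrated with a generic sketch: between spikes, each leaky integrate-and-fire neuron evolves in closed form, so its next threshold crossing can be computed exactly and held in a binary heap, and each spike touches only its K postsynaptic targets rather than all N neurons. This is a simplified stand-in, not the SparseProp algorithm itself (whose change of variables and neuron models differ); the model (dV/dt = −V + I with threshold 1, reset 0, inhibitory pulses) and all parameter values are assumptions for illustration.

```python
import heapq
import math
import numpy as np

def simulate_event_driven(N=100, K=5, I=1.3, J=-0.1, t_end=5.0, seed=0):
    """Event-driven simulation of a sparse pulse-coupled LIF network.
    Between spikes each neuron follows dV/dt = -V + I (closed-form
    solution), so spike times are exact; a heap with lazy invalidation
    gives O(K log N) work per network spike."""
    rng = np.random.default_rng(seed)
    targets = [rng.choice(N, size=K, replace=False) for _ in range(N)]
    V = rng.uniform(0.0, 0.9, N)       # membrane potentials (threshold = 1)
    t_last = np.zeros(N)               # last update time per neuron
    version = np.zeros(N, dtype=int)   # counters for lazy heap invalidation

    def next_spike(i, t_now):
        if V[i] >= 1.0:
            return t_now
        return t_now + math.log((I - V[i]) / (I - 1.0))  # requires I > 1

    heap = [(next_spike(i, 0.0), i, 0) for i in range(N)]
    heapq.heapify(heap)
    spikes = []
    while heap:
        t, i, ver = heapq.heappop(heap)
        if t > t_end:
            break
        if ver != version[i]:
            continue                   # stale heap entry, skip
        spikes.append((t, i))
        V[i], t_last[i] = 0.0, t       # reset at threshold
        version[i] += 1
        heapq.heappush(heap, (next_spike(i, t), i, version[i]))
        for j in targets[i]:
            # advance j analytically to time t, then apply the pulse
            V[j] = I + (V[j] - I) * math.exp(-(t - t_last[j]))
            t_last[j] = t
            V[j] += J                  # inhibitory kick (J < 0)
            version[j] += 1
            heapq.heappush(heap, (next_spike(j, t), j, version[j]))
    return spikes

spikes = simulate_event_driven()
```

The heap operations dominate the per-spike cost, which is why sparse connectivity (K ≪ N) plus exact between-spike solutions yields the logarithmic scaling that makes connectome-scale simulations feasible.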

Finally, I demonstrate the power of this framework for Big Data Neuro-Informatics, applying dynamical systems concepts and dimensionality reduction techniques to analyze large-scale neural recordings (mouse hippocampus). This reveals emergent low-dimensional manifolds encoding behavior and learning, offering new insights into neural representation dynamics.
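As a toy illustration of manifold recovery (not the specific methods used in the talk), the sketch below applies PCA to a synthetic "recording" of 200 neurons whose rates are driven by a two-dimensional circular latent variable; the projection recovers the hidden ring and two components capture nearly all the variance. All names, dimensions, and the generative model are assumptions.

```python
import numpy as np

def pca_project(X, n_components=2):
    """Project mean-centered data onto its leading principal components
    and report the fraction of variance each component captures."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = S**2 / np.sum(S**2)
    return Xc @ Vt[:n_components].T, explained[:n_components]

# Synthetic recording: T time bins x N neurons, rates driven by a 2-D
# circular latent plus observation noise.
T, N = 500, 200
rng = np.random.default_rng(1)
phase = np.linspace(0, 4 * np.pi, T)
latent = np.stack([np.sin(phase), np.cos(phase)], axis=1)       # (T, 2)
X = latent @ rng.standard_normal((2, N)) + 0.1 * rng.standard_normal((T, N))
Z, frac = pca_project(X)   # Z traces the 2-D ring; frac is dominated by 2 PCs
```

Real analyses of hippocampal data involve nonlinear methods and behavioral alignment, but the same logic applies: high-dimensional population activity often concentrates on a low-dimensional manifold that encodes task variables.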

By bridging dynamical systems, AI, and computational neuroscience, this research provides principled methods for building more efficient and robust AI and for deciphering the computational underpinnings of learning in complex neural systems.

Bio:
Rainer Engelken is a Kavli Scholar at Columbia University's Center for Theoretical Neuroscience (advised by Larry Abbott). His research integrates Dynamical Systems Theory, Machine Learning, and Computational Neuroscience to uncover principles of efficient and robust computation in biological and artificial neural networks. He focuses on developing novel algorithms for stable AI training (Gradient Flossing), efficient simulation of large-scale neural systems (SparseProp), and computational methods for analyzing large neural datasets, directly contributing to the field of Neuro-Informatics. He earned his Ph.D. studying neural dynamics at the Max Planck Institute for Dynamics and Self-Organization (Göttingen) with Fred Wolf, following studies in Physics at the University of Tübingen, University College London, and the University of Cambridge (Part III Maths), supported by a German National Merit Scholarship.

Faculty Host: Julia Hockenmaier  

Meeting ID: 876 4911 6180 
Password: csillinois
