In many applications, traditional software development is being replaced by machine-learning-generated models, yielding both accuracy improvements and deployment advantages. This fundamental shift in how we develop software is known as Software 2.0. The continued success of Software 2.0 will require efficient and flexible computer hardware optimized for dataflow computational graphs. In this talk, we will discuss the design of high-performance dataflow computer architectures for accelerating Software 2.0 Natural Language Processing workloads. Our vertically integrated approach to machine learning performance combines new machine learning algorithms, new domain-specific languages, advanced compilation technology, and software-defined hardware.
Urmish is a Principal Engineer at SambaNova Systems working on large language models. He works on efficient training and inference algorithms for deep learning applications and has 25+ publications and patents in this domain. He received his Master's degree from the University of Wisconsin–Madison and has 6+ years of experience in HW-SW co-design of ML applications at companies including Arm, AMD, and Texas Instruments.