Its “click-bait” title notwithstanding (sorry, “fake news”), this talk is about a more serious question: given an algorithm and a computer system, can we estimate or bound the amount of physical energy (Joules) or power (Watts) the algorithm might require, in the same way that we do for time and storage? These physical measures of performance are relevant to nearly every class of computing device, from embedded mobile systems to power-constrained datacenters and supercomputers (the latter being the focus of my interests). Armed with models of such measures, we can try to answer many interesting questions. For instance, what is an “optimal machine” for a given algorithm and system power budget? How might systems be better balanced in energy or power for certain classes of algorithms? Can we design algorithms and software with tuning knobs that could be used to control energy or power as they execute? I will describe how we are approaching these questions and give simple case studies in the context of numerical and graph computations, including both theoretical predictions and early empirical validation of our algorithmic energy and power models on real software and systems.
(**) P.S.: To be clear, I strongly believe that many of the emerging applications of deep learning are a meaningful and worthwhile part of how computing should spend its capital. The message of this talk is simply that we ought to set the bar for what we regard as “efficient” much, much higher, echoing similar rallying cries under the banner of “GreenAI.”
I am a Professor in the School of Computational Science and Engineering at Georgia Tech and serve as Director of its Center for High-Performance Computing. My research lab, The HPC Garage (@hpcgarage on Twitter and Instagram), is interested in “fast code,” which encompasses a variety of problems in performance analysis and engineering, with an emphasis on applications in scientific and engineering computing. (Does that include data? Of course — it’s science!) In my previous lives, I received my Ph.D. from UC Berkeley and worked as a postdoctoral scholar at Lawrence Livermore National Laboratory. For the usual litany of other claimed academic honors (e.g., NSF CAREER, DARPA CSSG, Gordon Bell Prize, best papers), please visit vuduc.org.