Abstract: In this seminar I will present some of my recent work on automatic acoustic sensing, with applications ranging from entertainment to surveillance and more. I will show how many existing signal processing tools can be ill-equipped to solve various real-life audio problems, and how one can use a machine learning mindset to get better results. In the process I will discuss new models for source separation, beamforming methods, and new modes of audio processing (e.g., can we make a microphone array from all the cell phones in a concert?) that wouldn't be tractable with traditional DSP.
About the speaker: I'm an assistant professor in the CS and ECE departments at the University of Illinois at Urbana-Champaign. My primary research interests revolve around making machines that can listen. I've done plenty of work on signal processing, machine learning, and statistics as they relate to artificial perception, and in particular computational audition. I also love working on anything related to audio! The bulk of my audio work is on source separation and on various machine learning approaches to traditional signal processing problems. I am fortunate to have been associated with some amazing research labs. I completed my master's, Ph.D., and a postdoc at the Machine Listening Group at the MIT Media Lab under the supervision of Barry Vercoe. I work with Adobe Systems' Advanced Technology Labs, used to be at MERL, and have spent some time at Interval Research and Starlab. I was also a visiting scientist at MIT's McGovern Institute for Brain Research. In 2006 I was selected by MIT's Technology Review as one of the year's top young technology innovators. I'm a descendant of a long musical lineage dating to the early 1600s. My Erdős number is 4.