Processing and analyzing large datasets drives demand for more computation and places even greater demands on the memory and storage infrastructure. At the same time, the performance gap between processing units and memory continues to widen, and it is a major bottleneck for memory-bound, data-intensive applications. Addressing these challenges and enabling the next generation of data-intensive applications requires performing computation as close to where the data reside as possible, in order to exploit massive bandwidth and scalable parallelism.
To filter and extract information from collected data (in domains such as bioinformatics, network security, natural language processing, and data mining), complex patterns and variants of base patterns need to be identified quickly and efficiently. These tasks are memory-bound, and even high-throughput off-the-shelf von Neumann architectures struggle to meet today's big-data and streaming line-rate pattern-processing requirements. In this talk, I will describe my work on (1) developing near-data accelerators and the associated software stack to accelerate complex pattern recognition and processing, and (2) mapping applications from big-data domains onto the proposed architectures. Together, these form an efficient hardware/software methodology that enables high-performance and energy-efficient complex pattern processing. I will discuss how our open-source software stack enables design-space exploration for memory-centric architectures, yielding high-throughput and area-efficient pattern-processing solutions. I will conclude by describing future directions toward efficient data-centric computation in other big-data domains.
Elaheh Sadredini is a postdoctoral researcher at the University of Virginia in the Center for Research on Intelligent Storage and Processing in Memory (CRISP). She received her Ph.D. in Computer Science from the University of Virginia in May 2019. Her research lies at the intersection of computer architecture, algorithms, compilers, data mining, and machine learning, and focuses on developing specialized, near-data hardware accelerators for big-data applications, including natural language processing, data mining, and bioinformatics. Her research has resulted in several publications at top-tier venues (such as MICRO, ASPLOS, HPCA, ICS, and KDD) and several patents and patent applications. Elaheh is the recipient of several awards, including the John A. Stankovic Graduate Research Award from the UVA Department of Computer Science for outstanding research in 2019 and the UVA International Students Office Graduation Award for Academic Excellence in 2019. She also received the best paper award at the ACM International Conference on Computing Frontiers in 2016, the "Best of CAL" award in 2019, and a best paper nomination at HPCA 2020.
Faculty Host: Chris Fletcher