CAP Seminar: Jianming Tong, "FEATHER: A Reconfigurable Accelerator with Data Reordering Support for Low-Cost On-Chip Dataflow Switching."
Feb 17, 2026 4:00 - 5:00 pm
2405 Siebel Center

- Sponsor: Architecture, Compilers, and Parallel Computing Research Area
- Speaker: Jianming Tong
- Contact: CAP Seminar Planners (cap-seminar-planning@lists.cs.illinois.edu)
- Originating Calendar: Siebel School Speakers Calendar
Abstract: The inference efficiency of diverse ML models on spatial accelerators boils down to the execution of different dataflows (i.e., different tiling, ordering, parallelism, and shapes). Using the optimal dataflow for every layer of a workload can reduce latency by up to two orders of magnitude compared to a suboptimal dataflow. Unfortunately, reconfiguring hardware for different dataflows involves on-chip data layout reordering and datapath reconfiguration, leading to non-trivial overhead that prevents ML accelerators from exploiting different dataflows and results in suboptimal performance. To address this challenge, we propose FEATHER, an accelerator that leverages a novel spatial array termed NEST and a novel multi-stage reduction network called BIRRD to perform flexible data reduction with layout reordering under the hood, enabling seamless switching between optimal dataflows with negligible latency and resource overhead. To systematically evaluate the performance interaction between dataflows and layouts, we enhance Timeloop, a state-of-the-art dataflow cost-modeling and search framework, with layout-assessment capabilities, and term the result Layoutloop. We model FEATHER in Layoutloop and also deploy FEATHER end-to-end on the edge ZCU104 FPGA. In Layoutloop, FEATHER delivers 1.27-2.89x inference latency speedup and 1.3-6.43x energy efficiency improvement over state-of-the-art designs such as NVDLA, SIGMA, and Eyeriss on ResNet-50 and MobileNet-V3. On practical FPGA devices, FEATHER achieves 2.65x/3.91x higher throughput than the Xilinx DPU/Gemmini. Remarkably, these performance and energy efficiency gains come at only 6% area overhead over a fixed-dataflow Eyeriss-like accelerator.
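As background for the abstract's central term, a "dataflow" is essentially the loop order and tiling an accelerator uses to execute a layer, which determines which operand is reused close to the compute units. The minimal Python sketch below (an illustration only, not FEATHER's NEST/BIRRD design) shows two loop orderings, often called output-stationary and weight-stationary, that compute the same matrix product while keeping different operands in the innermost loop:

```python
# Illustrative sketch of two "dataflows" (loop orderings) for C = A x B.
# Both produce identical results; they differ in which operand stays
# resident during the inner loop, which drives on-chip reuse and layout.

def matmul_output_stationary(A, B):
    # Output-stationary: each C[i][j] accumulates fully before moving on.
    M, K, N = len(A), len(A[0]), len(B[0])
    C = [[0] * N for _ in range(M)]
    for i in range(M):
        for j in range(N):
            acc = 0
            for k in range(K):
                acc += A[i][k] * B[k][j]
            C[i][j] = acc
    return C

def matmul_weight_stationary(A, B):
    # Weight-stationary: each weight B[k][j] is held fixed while the
    # corresponding column of A streams past it.
    M, K, N = len(A), len(A[0]), len(B[0])
    C = [[0] * N for _ in range(M)]
    for k in range(K):
        for j in range(N):
            w = B[k][j]  # the "stationary" operand
            for i in range(M):
                C[i][j] += A[i][k] * w
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
assert matmul_output_stationary(A, B) == matmul_weight_stationary(A, B)
```

Because the best ordering differs per layer, switching between such dataflows at runtime (and reordering the data layout each one expects) is the overhead the talk's accelerator aims to eliminate.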
Bio: Jianming Tong is a 5th-year PhD candidate at Georgia Tech, advised by Tushar Krishna. He is a computer architect focusing on systems for AI and cryptography, i.e., enabling today's AI systems to work in a privacy-preserving manner without sacrificing performance. Representative highlights include the CROSS Compiler (HPCA'26 with Google, MLSys'24) and the FEATHER Reconfigurable Accelerator (ISCA'24). His work has been deployed at NVIDIA (NV Labs) and Google (Jaxite), and has been recognized with 2nd place in the University Demo at DAC, a Qualcomm Innovation Fellowship, the Machine Learning and Systems Rising Star award, and the GT NEXT Award.
