Fuzzing is a promising method for discovering vulnerabilities. Recently, various techniques have been developed to improve the efficiency of fuzzing, and impressive gains have been reported in evaluation results. However, evaluation is complex, as many factors affect the results, such as the benchmark, the baseline, and the metrics. In order to restore the comparability and authenticity of existing fuzzing work, in this talk we present an empirical evaluation of fuzzing techniques. First, we systematically evaluate typical fuzzers on a unified test suite with carefully selected metrics. By analyzing the results, we summarize common pitfalls in optimizing a fuzzer. Furthermore, to understand the root causes behind these pitfalls, we conduct experiments, propose directions for overcoming the problems, and demonstrate how to customize fuzzing for different domains such as deep learning, blockchain, and industrial control.
Yu Jiang received his Ph.D. degree in computer science from Tsinghua University in 2015, worked as a postdoc at the University of Illinois at Urbana-Champaign in 2016, and is currently an assistant professor at Tsinghua University. His research focuses on the safety and security assurance of software systems, and he has proposed systematic methods for their trustworthy design and analysis. The proposed techniques have been applied by several industry partners such as Alibaba and Tencent, and have detected many safety-critical bugs; 100+ vulnerabilities found in widely used software such as the Linux kernel, libjpeg, and the IEC 61850 protocol were accepted into the USA National Vulnerability Database as CVEs. He has published 60+ papers in international journals (TPDS, TC, TCPS, etc.) and conferences (Security, ICSE, EMSOFT, ICCAD, etc.). He received the best paper award at CPScom 2019 and was a best paper candidate at EMSOFT 2019. He has also won the China Computer Association outstanding doctoral dissertation award (2015), the Young Rising Star award of Microsoft Research Asia (2018), and the China Association of Science and Technology young talent award (2018).