In this talk, I will summarize and analyze four paradigms in the development of statistical natural language processing techniques and argue that prompt-based learning is a promising new paradigm that may represent another major change in the way we approach NLP. I will then try to organize the current state of knowledge in this rapidly developing field by providing an overview and formal definition of prompting methods. Finally, returning to the summary of the four paradigms of NLP research, I hope to highlight the commonalities and differences among them, helping to make research within each paradigm more systematic and potentially providing a catalyst to inspire work toward the next paradigm shift.
Pengfei Liu obtained his Ph.D. from Fudan University in 2019 and is currently a postdoc at CMU. His research focuses on information extraction, text generation, interpretable evaluation, and diagnostics for NLP systems. He has published more than 40 papers at flagship NLP conferences and has served as an area chair for ACL, EMNLP, NeurIPS, and other venues. His honors include the Best Demo Paper Award at ACL 2021, the CAAI Outstanding Doctoral Dissertation award, a Baidu Scholarship, and a Microsoft Scholar Award, among others. Several recent projects he has led (e.g., ReviewAdvisor, ExplainaBoard) have attracted attention from both academia and industry.