A key challenge for Artificial Intelligence is to design intelligent agents that can reason with heterogeneous representations. In this talk, I will describe
our recent work on teaching machines to reason over semi-structured tables and unstructured text. More specifically, I will introduce:
(1) TabFact, a large benchmark dataset for table-based fact-checking;
(2) HybridQA, a multi-hop question answering framework on tables and text;
(3) LogicNLG, which builds on TabFact to facilitate logical natural language generation. I will also describe some other work at UCSB's NLP Group on learning to reason with multiple modalities.
William Wang is the Duncan and Suzanne Mellichamp Chair in Artificial Intelligence and Designs, and an Assistant Professor in the Department of Computer Science at the University of California, Santa Barbara. He is the Director of UC Santa Barbara's Natural Language Processing Group and its Center for Responsible Machine Learning. He received his PhD from Carnegie Mellon University. He has broad interests in machine learning and natural language processing, including statistical relational learning, information extraction, computational social science, and language and vision. He has published more than 100 papers at leading NLP/AI/ML/Vision conferences and journals, and has received best paper awards (or nominations) at ASRU 2013, CIKM 2013, EMNLP 2015, and CVPR 2019, a DARPA Young Faculty Award (Class of 2018), IEEE Intelligent Systems' AI's 10 to Watch (2020), an NSF CAREER Award (2021), and many other faculty research awards from Google, Facebook, IBM, Amazon, JP Morgan Chase, Adobe, and Intel. His work and opinions appear in major tech media outlets such as Wired, VICE, Scientific American, Fortune, Fast Company, and NPR.