College of Engineering Seminars & Speakers


Zhijian Ou "Semi-Supervised Task-Oriented Dialog Systems and Natural Language Labeling"

Event Type
Seminar/Symposium
Sponsor
The Department of Computer Science, University of Illinois, BLENDER Lab
Location
https://illinois.zoom.us/j/8167899060?pwd=YkZrQ09zODRzL0txRGF5bnhWdmk0UT09
Virtual
Date
Sep 24, 2021, 10:00 am
Speaker
Dr. Zhijian Ou, Associate Professor, Department of Electronic Engineering, Tsinghua University
Contact
Candice Steidinger
E-Mail
steidin2@illinois.edu
Phone
217-300-8564
Originating Calendar
Computer Science Speakers Calendar

Abstract: 

It is important for intelligent systems to learn in a data-efficient manner, reducing over-reliance on labeled data. There is increasing interest in developing semi-supervised learning (SSL) for various natural language processing (NLP) tasks, which aims to leverage both labeled and unlabeled data. In general, there are two SSL approaches: joint-training and pre-training. Joint-training estimates the joint distribution of observations and labels, while pre-training is performed over observations only and is followed by fine-tuning. Pre-training can be based on masked language models, auto-regressive language models, or random-field language models. The models used in joint-training can be directed (also known as latent-variable models, LVMs) or undirected (also known as energy-based models, EBMs). The two approaches can also be combined or compared with each other. Many open questions remain in designing semi-supervised methods for particular NLP tasks. In this talk, we present some of our recent efforts toward answering these questions. First, we propose the Variational Latent-State GPT model (VLS-GPT), which is the first to combine the strengths of the two approaches of pre-training and joint-training for task-oriented dialog systems. Second, we systematically evaluate and compare joint-training and pre-training for EBM-based SSL across various natural language labeling tasks (POS tagging, NER, and chunking). We find that joint-training outperforms pre-training marginally but nearly consistently.
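The joint-training objective described above can be sketched in a toy example. The following is a minimal illustration (not the speaker's models, and all names here are hypothetical): a tiny discrete latent-variable model p(x, y) where labeled pairs contribute log p(x, y) and unlabeled observations contribute the marginal log p(x) = log Σ_y p(x, y).

```python
import numpy as np

# Minimal sketch of a joint-training SSL objective on a toy discrete
# latent-variable model p(x, y), with binary x (observation) and binary
# y (label). Labeled data contribute log p(x, y); unlabeled data
# contribute the marginal log p(x) = log sum_y p(x, y).

rng = np.random.default_rng(0)

# Parameters: a 2x2 joint probability table over (y, x), normalized.
logits = rng.normal(size=(2, 2))
joint = np.exp(logits) / np.exp(logits).sum()  # p(y, x), sums to 1

def joint_training_loss(labeled, unlabeled, joint):
    """Negative log-likelihood combining labeled and unlabeled terms."""
    ll = sum(np.log(joint[y, x]) for x, y in labeled)        # log p(x, y)
    ll += sum(np.log(joint[:, x].sum()) for x in unlabeled)  # log p(x)
    return -ll

labeled = [(0, 1), (1, 0)]  # (x, y) pairs
unlabeled = [0, 1, 1]       # observations x without labels

loss = joint_training_loss(labeled, unlabeled, joint)
print(loss)
```

Pre-training, by contrast, would first fit an observation-only model (the unlabeled term alone) and then fine-tune a label predictor on the labeled pairs; the talk compares the two regimes.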

Bio:

Zhijian Ou received his Ph.D. from Tsinghua University in 2003. Since then, he has been with the Department of Electronic Engineering at Tsinghua University, where he is currently an associate professor. From August 2014 to July 2015, he was a visiting scholar at the Beckman Institute, University of Illinois at Urbana-Champaign, USA. He has led national research projects and received research funding from Intel, Panasonic, IBM, Toshiba, and Apple. He currently serves as an associate editor of IEEE/ACM Transactions on Audio, Speech, and Language Processing and as a member of the IEEE Speech and Language Processing Technical Committee, and he was General Chair of SLT 2021 and Tutorial Chair of INTERSPEECH 2020. His research interests are speech and language processing (particularly speech recognition and dialog) and machine intelligence (particularly probabilistic graphical models and deep learning).
