Computer Science Speaker Series Master Calendar


Yangfeng Ji: "Bringing Structural Information into Neural Network Design"

Event Type
Department of Computer Science
2405 Thomas M. Siebel Center for Computer Science
Mar 29, 2018   10:00 am  
Dr. Yangfeng Ji, Postdoctoral Researcher, University of Washington
Alice Needham
Originating Calendar
Computer Science Speakers Calendar

Abstract: Deep learning is one of the most popular learning techniques used in natural language processing (NLP). A central question in deep learning for NLP is how to design a neural network that can fully utilize the information in training data and make accurate predictions. A key to solving this problem is designing a better network architecture.

In this talk, I will present two examples from my work on how structural information from natural language helps design better neural network models. The first example shows that adding coreference structures of entities not only helps different aspects of text modeling, but also improves the performance of language generation; the second example demonstrates that the structures organizing sentences into coherent texts can help neural networks build better representations for various text classification tasks. Along these lines, I will also propose some ideas for future work and discuss the potential challenges.

Bio: Yangfeng Ji is a postdoctoral researcher at the University of Washington, working with Noah Smith. His research interests lie in the interaction of natural language processing and machine learning. He is interested in designing machine learning models and algorithms for language processing, and he is also fascinated by how linguistic knowledge helps build better machine learning models. He completed his Ph.D. in Computer Science at the Georgia Institute of Technology in 2016, advised by Jacob Eisenstein. He was an area co-chair for Discourse and Pragmatics at ACL 2017.
