To enable robots to operate after environmental disasters in locations too dangerous for humans to explore, our team is developing human-robot dialogue systems for robots that collaborate with human teammates in search operations. As natural language researchers working with roboticists, we face a "cold start" problem: collecting training data for this scenario and developing a dialogue system for robots that do not yet exist.
In this talk, I will present results of our multi-phase, data-driven approach to this problem through a progression of four experiments, all of which involve human participants speaking freely with a robot to complete navigation-based tasks. The participants rely on three types of visual information to track the robot's responses: natural language text in a chat window; movements of a robot icon on a floor plan (generated from the robot's LIDAR); and photos from the robot's camera (taken when requested by the participant). The talk will conclude with preliminary observations from the most recent experiment and a brief overview of ongoing research using the collected data.
Clare R. Voss is a senior research computer scientist at the Army Research Laboratory (ARL) in Adelphi, Maryland. She has been actively involved in natural language processing (NLP) for over twenty years, starting with her education (B.A. in Linguistics, University of Michigan; M.A. in Psychology, University of Pennsylvania; Ph.D. in Computer Science, University of Maryland) and continuing as a founding member of the multilingual computing group at ARL. There she now leads an interdisciplinary team working on multilingual and multimodal information extraction in support of event analysis for decision makers, as well as on joint navigation and exploration using natural language dialogue between humans and robots. She is a member of the Advisory Board for the Computational Linguistics Program at the University of Washington and a past member of the Board of Directors of the Association for Machine Translation in the Americas (AMTA).