Autonomy Incubator Seminar Series:
LANGUAGE AND ROBOTS
Jonathan Connell, IBM T.J. Watson Research Center
June 4, 2014, 10:00 am, NASA Langley, Reid Center
Hosts: Danette Allen (NASA) and Fred Brooks (NIA)
Abstract:
IBM has built a speech-controlled fetch-and-carry robot named ELI. While the project's focus has largely been on natural language and learning, Dr. Connell will cover the infrastructure that supports those capabilities. In particular, he will explain how the robot separates objects from the background, determines properties such as shape and color, builds visual models for later recognition, and determines where to reach for objects. In addition, he will describe some simple gesture interpretation that enhances the user's interaction with the robot. A video of the integrated system in operation will be presented.
Bio:
Jonathan Connell received his Ph.D. in AI from MIT in 1989 and then went to work at IBM's T.J. Watson Research Center. His projects include robot navigation, reinforcement learning, natural language processing, audio-visual speech recognition, video browsing, fingerprint identification, iris recognition, and cancelable biometrics. He has done extensive work in real-world computer vision, including recognizing items in retail stores, object detection for video surveillance, and vehicle spotting for automotive controls. Most recently he has developed a multi-modal instructional dialog system for use with speech-guided eldercare mobile robots. In addition, he has taught in the Psychology Department at Vassar College, is an IEEE Fellow, has authored three books, and holds 48 US patents.