Position: Ph.D. Candidate

Current Institution: University of Pennsylvania

Abstract:
Toward Improved Robotic Perception via Online Machine Learning

Perceiving the world is one of the long-standing problems in computer vision and robotic intelligence. Whereas human perception works effortlessly, even state-of-the-art algorithms struggle to perform the same tasks. The two questions that I will address in my talk are as follows:

(1) While many learning techniques emphasize the quantity of data, the underlying difficulty lies in identifying the subset of data relevant to the problem at hand. With advances in sensing and computing technology, the amount of real-time information a robot can receive and process is enormous. How can a robot estimate a non-stationary environment from this rich data? How can a robot learn online and selectively use noisy information?
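As a toy illustration of the kind of online update this question points at (a minimal sketch of my own in Python, not the method presented in the talk), the estimator below tracks a drifting scalar by forgetting stale data exponentially and gating out gross outliers; the forgetting factor, gate threshold, and noise scale are illustrative assumptions:

    import numpy as np

    def online_update(estimate, measurement, forgetting=0.95, gate=3.0, noise_scale=0.2):
        # Selective use of noisy data: ignore measurements that look like gross outliers.
        if abs(measurement - estimate) > gate * noise_scale:
            return estimate
        # Exponential forgetting lets the estimate track a non-stationary environment.
        return forgetting * estimate + (1.0 - forgetting) * measurement

    # Toy usage: a slowly drifting signal corrupted by noise and occasional outliers.
    rng = np.random.default_rng(0)
    estimate = 0.0
    for t in range(200):
        truth = 0.01 * t                      # the environment drifts over time
        z = truth + rng.normal(0.0, 0.1)      # noisy measurement
        if rng.random() < 0.05:
            z += 5.0                          # occasional gross outlier
        estimate = online_update(estimate, z)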

(2) To enable high-level or interactive tasks in the real 3D world, both geometric and semantic scene understanding are critical. Humans are known to have two distinct visual processing systems, the dorsal and ventral streams, often called the “where” and “what” pathways, respectively. As opposed to conventional approaches in computer vision that have treated the two problems separately, my study is motivated by the crosstalk between the two systems: can a robotic visual system bootstrap the learning of both the spatial information and the attributes of an object of interest?

With these questions in mind, my research has focused on how robotic perception can be improved via online learning. In this talk, I will discuss the combined problem of estimating 3D geometric parameters and learning appearance-based features of objects in an online learning framework, and I will present two case studies.

First, I will present a study on monocular vision-based ground surface estimation and classification. The ground (or floor) is the most important background object, appearing almost everywhere on land. Being ubiquitous, the ground exhibits diverse visual features depending on where and when it is observed. In this study, simultaneous online geometric estimation and appearance-based classification of the ground is demonstrated on the KITTI benchmark, a large-scale dataset developed for autonomous driving research.

Second, I will talk about a learning approach for efficient model-based 3D object pose estimation. Knowing the precise 3D pose of an object is crucial for interactive robotic tasks such as grasping and manipulation. However, dealing with 3D models and running a 3D registration algorithm on noisy image data is typically expensive. By predicting the visibility of the geometric model and learning a discriminative appearance model of the object online, the proposed method selects only the relevant part of the data stream, which results in efficient and robust 3D registration.

I will conclude the talk with ongoing projects and future work on improving 3D robotic perception via online learning.
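To make the bootstrapping idea of the first case study concrete, here is a minimal Python sketch (an illustrative stand-in of my own, not the algorithm from the talk): RANSAC fits a ground plane to noisy 3D points, and the resulting inlier/outlier labels supervise an online logistic-regression classifier that plays the role of the appearance model; the feature vector and all thresholds are assumptions:

    import numpy as np

    rng = np.random.default_rng(0)

    def fit_plane_ransac(points, iters=100, thresh=0.05):
        # Estimate plane parameters (n, d), with n·p + d = 0, from noisy 3D points.
        best_inliers, best_model = None, None
        for _ in range(iters):
            sample = points[rng.choice(len(points), 3, replace=False)]
            n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
            norm = np.linalg.norm(n)
            if norm < 1e-9:
                continue  # degenerate (collinear) sample
            n = n / norm
            d = -n @ sample[0]
            inliers = np.abs(points @ n + d) < thresh
            if best_inliers is None or inliers.sum() > best_inliers.sum():
                best_inliers, best_model = inliers, (n, d)
        return best_model, best_inliers

    def sgd_step(w, x, y, lr=0.1):
        # One online logistic-regression update: the geometry-derived label y
        # supervises the appearance-style classifier (geometric/semantic crosstalk).
        p = 1.0 / (1.0 + np.exp(-w @ x))
        return w + lr * (y - p) * x

    # Toy usage: points scattered near the plane z = 0 plus off-plane clutter.
    ground = np.column_stack([rng.uniform(-5, 5, 400),
                              rng.uniform(-5, 5, 400),
                              rng.normal(0.0, 0.02, 400)])
    clutter = rng.uniform(-5, 5, (100, 3))
    points = np.vstack([ground, clutter])
    (n, d), inliers = fit_plane_ransac(points)
    w = np.zeros(3)
    for p_i, is_ground in zip(points, inliers):
        # Stand-in "appearance" features; real features would be color or texture.
        feat = np.array([1.0, p_i[2], abs(p_i[2])])
        w = sgd_step(w, feat, float(is_ground))

The same loop structure, with richer appearance features and a full geometric model, conveys the flavor of the geometry-supervises-appearance bootstrapping described above.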

Bio:
Bhoram Lee is a Ph.D. candidate at the GRASP (General Robotics, Automation, Sensing, and Perception) Lab, University of Pennsylvania, under the supervision of Prof. Daniel D. Lee. Before coming to Penn, she worked as a researcher at SAIT (Samsung Advanced Institute of Technology) from 2007 to 2013. She received her B.S. in mechanical and aerospace engineering in 2005 and her M.S. in aerospace engineering in 2007 from Seoul National University (SNU), Korea. Her previous research experience includes visual navigation of UAVs, sensor fusion, and mobile user interaction. Bhoram Lee was a member of the GNSS (Global Navigation Satellite Systems) Lab at SNU, and her team won second prize at the 6th Korean Robot Aircraft Competition in 2007. During her years at Samsung, she (co-)authored more than 20 patent applications and received the Bronze Prize of the 2012 Samsung Best Paper Award as first author. At SAIT, she was involved in many research projects, including human pose estimation for AR (augmented reality) environments and the development of mobile motion and haptic UIs (user interfaces). In 2015, Bhoram Lee participated in the DARPA Robotics Challenge as a member of Team THOR, one of the finalists, where she worked on 3D perception. She has also served as a teaching assistant for offline and online robotics courses at Penn over the past two years. Her current academic interests include probabilistic estimation, robot vision, machine learning, and general robotics, with a focus on improving robotic perception via online learning techniques. She currently resides in Havertown, PA, with her husband and their two daughters.