My research is at the intersection of embodied AI, robotics, and human-robot interaction.
My recent work focuses on (1) comprehensive 3D scene understanding;
(2) world models for anticipation and evaluation of embodied state changes;
(3) robot learning that combines multimodal reasoning with action policy learning;
(4) integrated task and motion planning for embodied assistance and human-robot collaboration.
We introduce PartInstruct, the first large-scale benchmark for training and evaluating fine-grained robot manipulation policies using part-level instructions.
International Conference on Automation Science and Engineering (CASE), 2023
paper / arXiv
We present a robust, markerless, image-based visual servoing method that enables precise robot control in 1, 3, and 5 degrees of freedom without hand-eye or camera calibration.