Projects

Social Crowd Navigation

Many crowd navigation methods are short-sighted and prone to the freezing robot problem. To address these problems, we propose a novel reinforcement learning planner that reasons about the spatial and temporal relationships between the robot and the crowd. In addition, we incorporate human trajectory prediction into the planner to improve the robot's safety and social awareness, as sketched below.
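
For intuition, here is a minimal sketch of the prediction-augmented observation, assuming a simple constant-velocity predictor and a flat observation layout; `predict_cv` and `build_observation` are hypothetical names, and this is an illustration rather than our actual architecture:

```python
import numpy as np

def predict_cv(positions, velocities, horizon=5, dt=0.25):
    """Constant-velocity rollout: predicted future waypoints per human.

    positions, velocities: (n_humans, 2) arrays.
    Returns an (n_humans, horizon, 2) array of predicted positions.
    """
    steps = dt * np.arange(1, horizon + 1).reshape(1, -1, 1)
    return positions[:, None, :] + velocities[:, None, :] * steps

def build_observation(robot_state, positions, velocities, horizon=5):
    """Concatenate robot state, current crowd state, and predicted
    trajectories so the policy can reason about both the spatial layout
    and the crowd's future (temporal) motion."""
    preds = predict_cv(positions, velocities, horizon)
    return np.concatenate([robot_state.ravel(), positions.ravel(),
                           velocities.ravel(), preds.ravel()])

robot = np.zeros(4)                 # robot x, y, vx, vy
pos = np.random.randn(3, 2)         # three humans' positions
vel = 0.5 * np.random.randn(3, 2)   # their velocities
obs = build_observation(robot, pos, vel)
print(obs.shape)                    # flat input vector for the RL policy
```

Feeding predicted waypoints alongside current states is what lets the planner look beyond the next step instead of reacting only to the crowd's instantaneous positions.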

Relevant papers:

Autonomous Driving

Understanding the interactions and behaviors of surrounding drivers is essential for autonomous vehicles (AVs). To this end, we propose novel networks that detect abnormal drivers and predict driving styles in an unsupervised fashion, improving AV navigation. We also build realistic multi-agent traffic simulations.
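
As a rough illustration of unsupervised abnormality scoring (with PCA reconstruction error standing in for our learned networks; the trajectory features and threshold here are illustrative assumptions):

```python
import numpy as np

def fit_pca(features, k=3):
    """Fit a low-rank model of typical driving features: the mean plus
    the top-k principal directions."""
    mean = features.mean(axis=0)
    _, _, vt = np.linalg.svd(features - mean, full_matrices=False)
    return mean, vt[:k]

def abnormality_score(features, mean, components):
    """Reconstruction error after projecting onto the 'normal' subspace;
    drivers the low-rank model cannot explain score high."""
    centered = features - mean
    recon = centered @ components.T @ components
    return np.linalg.norm(centered - recon, axis=1)

rng = np.random.default_rng(0)
normal = rng.normal(size=(200, 6))                # typical driver features
mean, comps = fit_pca(normal, k=3)
test = np.vstack([rng.normal(size=(5, 6)),        # typical behavior
                  rng.normal(5.0, 1.0, (2, 6))])  # aggressive outliers
threshold = np.percentile(abnormality_score(normal, mean, comps), 95)
print(abnormality_score(test, mean, comps) > threshold)  # expect the last two flagged
```

The key property is that no abnormality labels are needed: the model is fit only on the bulk of the data, and deviation from it is the score.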

Relevant papers:

Instruction-Following Robots & Vision-Language Grounding

To serve people in everyday environments, including those with disabilities, robots must understand spoken language and ground commands to entities in their surroundings. We pursue two directions toward speech-command-following robots:

  1. Learning a visual-audio representation for RL skill learning without hand-engineered rewards (see the sketch after this list);
  2. Building a system with vision-language models to guide persons with visual impairments from place to place and enhance their knowledge of the environment.
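
A minimal sketch of direction 1, assuming a shared embedding space in which visual and audio encoders have been aligned; random projections stand in for the trained encoders, so this shows the interface rather than our model:

```python
import numpy as np

class RandomProjectionEncoder:
    """Placeholder for a trained encoder: maps raw input to a unit vector."""
    def __init__(self, in_dim, out_dim=32, seed=0):
        self.w = np.random.default_rng(seed).normal(size=(in_dim, out_dim))

    def __call__(self, x):
        z = x @ self.w
        return z / (np.linalg.norm(z) + 1e-8)

def similarity_reward(vision_enc, audio_enc, image, command_audio):
    """Cosine similarity in the shared space: high when the observed scene
    matches the spoken command, so it can replace a hand-engineered reward."""
    return float(vision_enc(image) @ audio_enc(command_audio))

vision_enc = RandomProjectionEncoder(in_dim=64, seed=1)
audio_enc = RandomProjectionEncoder(in_dim=20, seed=2)
image = np.random.randn(64)      # stand-in for visual features
command = np.random.randn(20)    # stand-in for audio features
print(similarity_reward(vision_enc, audio_enc, image, command))
```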

User studies with real human subjects show that our systems are intuitive and easy to use.

Relevant papers:

Manipulation

Real-world, long-horizon manipulation is a longstanding challenge in robotics. We model object dynamics with graph networks and use motion primitives to achieve effective manipulation, and we demonstrate our methods on real-world tasks such as stowing and scooping.
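
For intuition, a minimal sketch of one message-passing step in a graph-network dynamics model; the object state layout, the relation graph, and the random weights (placeholders for learned parameters) are illustrative assumptions:

```python
import numpy as np

def message_passing_step(states, edges, w_edge, w_node, dt=0.1):
    """One graph-network step: compute edge messages from sender/receiver
    states, sum messages per receiving node, then apply a node update that
    predicts a state delta."""
    senders, receivers = edges
    msg_in = np.concatenate([states[senders], states[receivers]], axis=1)
    messages = np.tanh(msg_in @ w_edge)
    agg = np.zeros((states.shape[0], messages.shape[1]))
    np.add.at(agg, receivers, messages)        # sum messages per node
    delta = np.tanh(np.concatenate([states, agg], axis=1) @ w_node)
    return states + dt * delta                 # predicted next states

rng = np.random.default_rng(0)
states = rng.normal(size=(4, 4))               # 4 objects: x, y, vx, vy
edges = (np.array([0, 1, 2, 3]),
         np.array([1, 2, 3, 0]))               # a ring of pairwise relations
w_edge = rng.normal(size=(8, 16))
w_node = rng.normal(size=(4 + 16, 4))
print(message_passing_step(states, edges, w_edge, w_node))
```

In a learned model, `w_edge` and `w_node` would be trained to minimize prediction error against observed object motion, and the rollout would be chained over many steps for long-horizon planning.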

Relevant papers:

Machine Learning

While reinforcement learning (RL) is an appealing way to learn robot policies, it is prone to performance degradation under adversarial perturbations and sim2real gaps. We pursue two directions to address this problem:

  1. We improve the robustness of RL algorithms with adversarial training;
  2. We evaluate RL policies in novel domains with minimal rollout data, using importance sampling and optimization (a basic importance-sampling estimator is sketched below).
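
For reference, a minimal sketch of the standard per-trajectory importance-sampling estimator behind direction 2, in a toy two-action setting (not our full evaluation method):

```python
import numpy as np

def is_estimate(trajectories, pi_new, pi_old, gamma=0.99):
    """Estimate the new policy's value from behavior-policy rollouts.

    Each trajectory is a list of (state, action, reward). The weight is
    the product of per-step probability ratios pi_new / pi_old, applied
    to the trajectory's discounted return."""
    values = []
    for traj in trajectories:
        weight, ret = 1.0, 0.0
        for t, (s, a, r) in enumerate(traj):
            weight *= pi_new(a, s) / pi_old(a, s)
            ret += gamma ** t * r
        values.append(weight * ret)
    return float(np.mean(values))

# Toy example: fixed action probabilities, constant reward of 1 per step.
pi_old = lambda a, s: 0.5
pi_new = lambda a, s: 0.8 if a == 1 else 0.2
rng = np.random.default_rng(0)
trajs = [[(0, int(rng.random() < 0.5), 1.0) for _ in range(5)]
         for _ in range(1000)]
print(is_estimate(trajs, pi_new, pi_old))  # near the true value, ~4.90
```

In practice, per-trajectory weights have high variance, so weighted or per-decision variants are often preferred, trading a small bias for much lower variance.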

Relevant papers: