Shen Li

I am a research specialist at the Interactive Robotics Group, MIT, supervised by Prof. Julie Shah. I work on planning and learning techniques to achieve safe, efficient, and fluid human-robot collaboration.

Previously, I obtained an M.S. degree in Robotics at the Robotics Institute, CMU, where I conducted research at the Personal Robotics Lab, co-advised by Prof. Siddhartha Srinivasa and Prof. Stephanie Rosenthal. My Master's research focused on making robot behaviors understandable and predictable for humans via natural language-based and demonstration-based explanations.

I obtained B.S. degrees in Computer Science and Psychology from the Pennsylvania State University. There, I worked on LiDAR-based obstacle avoidance for autonomous wheelchairs at the Intelligent Vehicles and Systems Laboratory, advised by Prof. Sean Brennan.

Publications

Peer-Reviewed Conference Papers

Workshop Papers

Peer-Reviewed Journal Articles



To enable safe and efficient human-robot collaboration, robots and humans need to understand and adapt to each other. From the robot's perspective, it needs to recognize and predict human behaviors in order to plan motions around humans, and to behave expressively in order to convey its intentions and reasoning to humans.

Online activity recognition and planning
  • I integrated a robotic system that assists human workers in automotive final assembly. The robot first recognizes human activities through a fast online segmentation and classification algorithm, then navigates to the part bins, securely grasps and fetches the parts, and gracefully hands them over to the worker when they are needed. We successfully presented a live demo at a Honda manufacturing plant in Marysville, OH.

Safe and efficient replanning in space-time
  • I developed a trajectory replanning algorithm that plans safe and efficient robot trajectories in real time to avoid fast-moving obstacles in human-robot collaborative environments, given time-parameterized motion predictions. Our algorithm substantially reduces the search space, enabling it to reason about novel obstacle-avoidance strategies in both configuration space and the time domain.
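As a rough illustration of planning in space-time (not the actual algorithm above, which operates in a robot's full configuration space): the key idea is to search over (configuration, time) states so that "waiting" for a predicted obstacle to pass becomes a valid avoidance strategy. Below is a minimal sketch on a 1-D corridor with a hypothetical, made-up obstacle prediction function.

```python
from collections import deque

def predicted_obstacle(t):
    # Hypothetical time-parameterized prediction:
    # the obstacle blocks cell 5 until t = 6, then leaves the corridor.
    return 5 if t < 6 else -1

def plan_in_space_time(start, goal, horizon=20):
    """Breadth-first search over (position, time) states on a 1-D
    corridor of cells 0..9, avoiding a moving obstacle whose position
    is known as a function of time. A real planner would also check
    edge (swap) collisions and use an informed search such as A*."""
    frontier = deque([(start, 0, [start])])
    visited = {(start, 0)}
    while frontier:
        pos, t, path = frontier.popleft()
        if pos == goal:
            return path  # path[i] is the position at time i
        if t >= horizon:
            continue
        for nxt in (pos - 1, pos, pos + 1):  # move left, wait, move right
            if (0 <= nxt <= 9
                    and nxt != predicted_obstacle(t + 1)
                    and (nxt, t + 1) not in visited):
                visited.add((nxt, t + 1))
                frontier.append((nxt, t + 1, path + [nxt]))
    return None  # no collision-free path within the horizon

path = plan_in_space_time(start=0, goal=9)
```

Because time is part of the state, the search naturally discovers plans that pause in front of the obstacle and proceed once the prediction says the cell is free.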

Communicating robot intentions via referring expressions
  • In a scenario with many blocks of different features on a table, we crowdsourced a corpus of instructions that people gave to a partner sitting across the table to pick up a particular block. [ RO-MAN'16, R:SS'16, IJRR'18 ]
  • "Pick up the yellow block."

    "Pick up the leftmost orange block from your perspective."


  • Based on the collected data, we developed a graph-based algorithm to automatically generate referring expressions that identify a block clearly and concisely. [ M.S. Thesis'17 ]
  • We applied our algorithm to enable a robot to announce, in natural language, which block it was about to pick up before actually doing so. The goal was to avoid collaborative conflicts and make the robot's motions more predictable.
  • Ada (Assistive Dexterous Arm) is describing the target block while picking it up!
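To give a flavor of referring-expression generation, here is a greedy attribute-selection sketch in the spirit of the classic incremental algorithm — not the graph-based method from the thesis. The blocks, attributes, and attribute ordering below are all made up for illustration.

```python
def generate_referring_expression(target, distractors, attribute_order):
    """Greedily add attributes of the target that rule out distractor
    blocks, until the target is uniquely identified (or we run out
    of attributes, in which case no unique description exists)."""
    description = {}
    remaining = list(distractors)
    for attr in attribute_order:
        ruled_out = [d for d in remaining if d.get(attr) != target[attr]]
        if ruled_out:
            # This attribute is discriminating: keep it.
            description[attr] = target[attr]
            remaining = [d for d in remaining if d.get(attr) == target[attr]]
        if not remaining:
            break
    return description if not remaining else None

# Hypothetical tabletop scene with three blocks.
blocks = [
    {"color": "yellow", "size": "small", "position": "left"},
    {"color": "orange", "size": "small", "position": "left"},
    {"color": "orange", "size": "large", "position": "right"},
]
target = blocks[1]
others = [b for b in blocks if b is not target]
expr = generate_referring_expression(
    target, others, ["color", "size", "position"]
)
# expr → {"color": "orange", "size": "small"},
# i.e. "Pick up the small orange block."
```

The attribute ordering encodes a preference (e.g., color before spatial relations), which is one way to trade off clarity against conciseness.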

Communicating robot reasoning via demonstrations
  • In a navigation scenario with roads, grass, rocks, trees, and humans, we investigated how well demonstrations could help humans understand a robot's cost function, such as preferring grass while avoiding rocks. Through crowdsourcing, we identified two types of critical points that shape human understanding and proposed an algorithm to automatically generate demonstrations containing such critical points. [ RO-MAN'17 ]

"The robot is avoiding grass."

"The robot is avoiding rocks."

"The robot is avoiding dirt roads."

"The robot doesn't care :)"

Course Projects at CMU

15-781: Robots and Humans Teaming Up in Pursuit-Evasion Games

Advised by Prof. Gianni Di Caro

  • In a human-swarm collaborative pursuit-evasion game, we proposed an algorithm that enables robot pursuers to develop behaviors that compensate for the sub-optimality introduced by the human-controlled pursuer. The human model was parameterized by the human's capabilities in gathering information from agents, planning ahead, and processing multiple tasks simultaneously. [Proposal, Report]

Yellow = human pursuer, purple = robot pursuer, red = robot evader


I built a robot for the Trinity College Fire Fighting Home Robot Contest at Penn State!


I am working on "Robbie & Yuri" at MIT!


I worked on "Herb" (Home Exploring Robotic Butler) at CMU!


I am working on "Abbie" at MIT!

Media Publicity

Prof. Stephanie Rosenthal's article about our work, "Why Did the Robot Do That?", is on Y Combinator!


Building 31, Floor 2M, 70 Vassar St, Cambridge, MA 02142

+1 (814) 777 7988

  • FB
  • Git
  • LinkedIn
  • YouTube

© Shen Li 2018