SHEN LI
Biography

I am a research specialist at the Interactive Robotics Group, MIT, supervised by Prof. Julie Shah. My research focuses on developing planning and learning algorithms to enable robots to safely and efficiently assist and collaborate with humans on physical tasks.

Previously, I obtained my M.S. degree in Robotics at the Robotics Institute, CMU. I was a research assistant at the Personal Robotics Lab (which has since moved to UW), co-advised by Prof. Siddhartha Srinivasa and Prof. Stephanie Rosenthal. I also worked with Prof. Katia Sycara in summer 2017. I obtained my B.S. degrees in Computer Science and Psychology from Penn State.

Publications
Peer-Reviewed Journal Articles

  1. Ankit Shah, Pritish Kamath, Shen Li, Patrick Craven, Kevin Landers, Kevin Oden, and Julie Shah. Supervised Bayesian Specification Inference from Demonstrations. IJRR. 2019. (In review)
  2. Shen Li and Julie Shah. Safe and Efficient High Dimensional Motion Planning in Space-Time with Time Parameterized Prediction. RA-L/ICRA. 2019. (In review)
  3. Tariq Iqbal, Shen Li, Christopher Fourie, Bradley Hayes, and Julie Shah. Fast Online Segmentation of Activities from Partial Trajectories. RA-L/ICRA. 2019. (In review)
  4. Rosario Scalise*, Shen Li*, Henny Admoni, Stephanie Rosenthal, and Siddhartha Srinivasa. Natural Language Instructions for Human-Robot Collaborative Manipulation. IJRR. 2018.

Peer-Reviewed Conference Papers

  1. Ankit J. Shah, Pritish Kamath, Shen Li, and Julie A. Shah. Bayesian Inference of Temporal Task Specifications from Demonstrations. NeurIPS. 2018.
  2. Changjoo Nam, Huao Li, Shen Li, Michael Lewis, and Katia Sycara. Trust of Humans in Supervisory Control of Swarm Robots with Varied Levels of Autonomy. SMC. 2018.
  3. Shen Li*, Rosario Scalise*, Henny Admoni, Stephanie Rosenthal, and Siddhartha Srinivasa. Evaluating Critical Points in Trajectories. RO-MAN. 2017.
  4. Shen Li*, Rosario Scalise*, Henny Admoni, Stephanie Rosenthal, and Siddhartha Srinivasa. Spatial References and Perspective in Natural Language Instructions for Collaborative Manipulation. RO-MAN. 2016.

Workshop Papers

  1. Shen Li*, Rosario Scalise*, Henny Admoni, Stephanie Rosenthal, and Siddhartha Srinivasa. Perspective in Natural Language Instructions for Collaborative Manipulation. R:SS Workshop on Model Learning for Human-Robot Communication. 2016.

Thesis

  1. Shen Li. Automatically Evaluating and Generating Clear Robot Explanations. Master's thesis. Carnegie Mellon University. 2017.

* Both authors contributed equally.

Research

From the robot's perspective, I develop planning and learning algorithms that enable robots to physically interact with humans. From the human's perspective, I draw inspiration from psychology to investigate the understandability, predictability, and trustworthiness of robot behaviors.

6. Motion-level POMDP Planning with Macro Actions (Ongoing)

We are investigating planning under uncertainty about the latent states in human models. We constructed a POMDP and trained a human model at the motion level, rather than the task level, to synchronize robot and human motion speeds. We augmented motion primitives as macro actions with extended durations to improve motion smoothness under kinodynamic constraints, and modeled gripper actions as macro actions to capture their durations precisely. We resolved the computational overhead caused by the fine-grained motions via adaptive resolutions for the state and action spaces.
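The sketch below is a minimal, hypothetical illustration of the macro-action idea, not our planner: motion primitives and gripper commands are treated as temporally extended actions inside a Monte-Carlo rollout over a belief about the human's latent motion speed. All names, durations, and rewards are made up for illustration.

```python
# Hypothetical sketch: macro actions (motion primitives / gripper commands with
# durations) evaluated by a Monte-Carlo rollout under a belief over the human's
# latent motion speed. Not the actual planner; all numbers are illustrative.
import random
from dataclasses import dataclass

@dataclass
class MacroAction:
    name: str
    duration: int          # number of primitive time steps the action spans

MACRO_ACTIONS = [
    MacroAction("move_slow", duration=5),
    MacroAction("move_fast", duration=2),
    MacroAction("gripper_close", duration=8),   # gripper modeled with its own duration
]

def sample_human_speed(belief):
    """Sample a latent human speed from a categorical belief {speed: prob}."""
    speeds, probs = zip(*belief.items())
    return random.choices(speeds, probs)[0]

def rollout_value(belief, depth=3):
    """Toy rollout: reward favors macro actions whose pace matches the human's."""
    if depth == 0:
        return 0.0
    human_speed = sample_human_speed(belief)
    best = float("-inf")
    for a in MACRO_ACTIONS:
        robot_speed = 1.0 / a.duration
        reward = -abs(robot_speed - human_speed) * a.duration
        best = max(best, reward + rollout_value(belief, depth - 1))
    return best

if __name__ == "__main__":
    belief_over_human_speed = {0.1: 0.6, 0.5: 0.4}   # belief over latent states
    print(rollout_value(belief_over_human_speed))
```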

5. Safe and Efficient Motion Planning within Space-Time

We developed a motion planner that enables a robot arm to avoid fast-moving obstacles by searching in real time for a safe trajectory with minimal execution time, given time-parameterized motion predictions of the obstacles. Our algorithm drastically reduces the search space while reasoning about novel obstacle-avoidance strategies in the robot configuration space and the time domain.

(Submitted to RA-L/ICRA 2019)
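As a rough, hypothetical illustration of planning in configuration-time space (heavily simplified to a 1-D point robot; not the published algorithm), the sketch below searches over (configuration, time) pairs, checks safety against an assumed time-parameterized obstacle prediction, and returns the earliest arrival time.

```python
# Hypothetical sketch: search over a (configuration, time) grid with a
# time-parameterized obstacle prediction; cost is elapsed time. A 1-D toy,
# not the published planner.
import heapq

def obstacle_position(t):
    """Assumed time-parameterized prediction of a moving obstacle."""
    return 5 + 0.5 * t

def is_safe(q, t, clearance=1.0):
    return abs(q - obstacle_position(t)) >= clearance

def plan(start_q, goal_q, t_max=40):
    """Uniform-cost search over (time, configuration); returns minimal arrival time."""
    frontier = [(0, start_q)]
    visited = set()
    while frontier:
        t, q = heapq.heappop(frontier)
        if (t, q) in visited or t > t_max:
            continue
        visited.add((t, q))
        if q == goal_q:
            return t
        for dq in (-1, 0, 1):            # move backward, wait, or move forward
            nq, nt = q + dq, t + 1
            if is_safe(nq, nt) and (nt, nq) not in visited:
                heapq.heappush(frontier, (nt, nq))
    return None

if __name__ == "__main__":
    print("arrival time:", plan(start_q=0, goal_q=10))
```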
4. Navigation Planning with Motion Prediction and Activity Recognition and Segmentation

We developed a human motion predictor for human-robot collaborative navigation. The system recognizes the goal the human is currently heading toward and predicts the time-parameterized human motion in the near future. The motion planner then uses the predicted human trajectory to plan an optimal robot trajectory in space and time that avoids the human and reaches the goal.
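The following is a toy sketch (not our system) of the two pieces such a pipeline needs: inferring which of several goals the human is heading toward from a partial trajectory, then extrapolating a time-parameterized prediction toward the most likely goal. The goal locations, progress-based likelihood, and constant-speed model are all assumptions for illustration.

```python
# Toy sketch: goal recognition from a partial trajectory plus a
# time-parameterized motion prediction toward the most likely goal.
# Goals, likelihood model, and speeds are illustrative assumptions.
import math

GOALS = {"workbench": (10.0, 0.0), "shelf": (0.0, 10.0)}

def goal_likelihood(traj, goal):
    """Score a goal by how much the observed motion reduced the distance to it."""
    return math.exp(math.dist(traj[0], goal) - math.dist(traj[-1], goal))

def infer_goal(traj):
    scores = {name: goal_likelihood(traj, g) for name, g in GOALS.items()}
    z = sum(scores.values())
    return {name: s / z for name, s in scores.items()}

def predict(traj, horizon, dt=0.5, speed=1.0):
    """Predict positions at future time steps, heading toward the likeliest goal."""
    posterior = infer_goal(traj)
    gx, gy = GOALS[max(posterior, key=posterior.get)]
    x, y = traj[-1]
    prediction = []
    for k in range(1, horizon + 1):
        dx, dy = gx - x, gy - y
        dist = math.hypot(dx, dy) or 1.0
        step = min(speed * dt, dist)
        x, y = x + step * dx / dist, y + step * dy / dist
        prediction.append((k * dt, (x, y)))
    return prediction

if __name__ == "__main__":
    observed = [(0.0, 0.0), (0.8, 0.1), (1.6, 0.2)]   # partial human trajectory
    print(infer_goal(observed))
    print(predict(observed, horizon=4))
```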

3. Planning with Activity Recognition and Segmentation

We developed Fast Online Segmentation of Activities from Partial Trajectories (FOSAPT) to recognize and segment human activities in real time. We integrated navigation, manipulation, and grasping planners with FOSAPT to enable a robot to fetch and deliver the correct parts to humans at the appropriate times in factory settings. We successfully demonstrated the system at a Honda manufacturing plant in Marysville, OH.

(Submitted to RA-L/ICRA 2019) · Video on YouTube
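To convey the flavor of online activity segmentation, here is a toy illustration only (not FOSAPT): each incoming frame of a partial trajectory is matched against per-activity templates, and a segment boundary is emitted whenever the most likely activity changes. The activities, templates, and 1-D feature are made up.

```python
# Toy illustration only (not FOSAPT): label each incoming frame against
# per-activity templates and emit a segment whenever the label changes.
# The 1-D "hand height" templates are illustrative assumptions.
TEMPLATES = {"reach_bin": 0.2, "fasten_bolt": 0.8}

def classify(frame):
    return min(TEMPLATES, key=lambda activity: abs(frame - TEMPLATES[activity]))

def segment_online(stream):
    """Yield (start_index, end_index, activity) segments as frames arrive."""
    current, start, last = None, 0, -1
    for i, frame in enumerate(stream):
        label = classify(frame)
        if current is None:
            current = label
        elif label != current:
            yield (start, i - 1, current)
            current, start = label, i
        last = i
    if current is not None:
        yield (start, last, current)

if __name__ == "__main__":
    frames = [0.25, 0.22, 0.30, 0.75, 0.82, 0.78, 0.21]
    for segment in segment_online(frames):
        print(segment)
```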
2. Communicating Robot Preferences via Demonstrations

In a GridWorld navigation scenario with roads, grass, rocks, trees, and humans, we found that critical points in demonstrations of robot paths can shape humans' understanding of robot cost functions and their predictions of future robot paths.

RO-MAN'17

"The robot is avoiding rocks."

1. Communicating Robot Intentions via Referring Expressions

In a tabletop manipulation scenario, we investigated how a robot can convey its intention to pick up a block from a table to human collaborators using referring expressions, with the goal of improving the understandability of robot behaviors. We crowdsourced a corpus of referring expressions from a user study in which participants instructed a partner sitting across the table to pick up a specific block.

IJRR'18 · RO-MAN'16 · R:SS:WS'16

We extracted visual and spatial attributes, together with their visual and linguistic saliences, from the corpus and applied them in a referring-expression generation algorithm, which we further expedited via pruning and heuristics.

M.S. Thesis'17
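A heavily simplified sketch of the generation idea (assumed attributes, blocks, and salience weights; not the thesis algorithm verbatim): greedily add the attribute that, weighted by its salience, rules out the most distractor blocks, until the target is uniquely described.

```python
# Simplified sketch of salience-weighted referring-expression generation.
# Attributes, blocks, and salience weights are illustrative assumptions.
BLOCKS = [
    {"id": 0, "color": "yellow", "position": "left"},
    {"id": 1, "color": "orange", "position": "left"},
    {"id": 2, "color": "orange", "position": "right"},
]
SALIENCE = {"color": 1.0, "position": 0.6}   # assumed visual/linguistic salience

def generate_expression(target, scene):
    """Greedily pick attributes until only the target matches the expression."""
    distractors = [b for b in scene if b["id"] != target["id"]]
    expression = {}
    while distractors:
        best = max(
            (a for a in SALIENCE if a not in expression),
            key=lambda a: SALIENCE[a] * sum(b[a] != target[a] for b in distractors),
            default=None,
        )
        if best is None:
            break                       # attributes exhausted; target not unique
        expression[best] = target[best]
        distractors = [b for b in distractors if b[best] == target[best]]
    return expression

if __name__ == "__main__":
    print(generate_expression(BLOCKS[2], BLOCKS))   # -> {'position': 'right'}
```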

We demonstrated our system on "Ada" (a Kinova Mico arm), enabling it to specify the target blocks in natural language before picking them up.

Video on YouTube

    "Pick up the yellow block."

    "Pick up the leftmost orange block from your perspective."


Media Publicity

  1. PBS NewsHour: The robots are coming. Will they work with us? (12/18)

  2. IEEE - The Institute: IEEE Members Build Robots to Help People with Disabilities Live Independently (06/17)

  3. Y Combinator: Why Did the Robot Do That? Increasing Trust in Autonomous Robots (12/16)

Robots

We built "Red Dragon" for the Trinity College Fire Fighting Home Robot Contest at Penn State!

I am working on "Robbie & Yuri" at MIT!

I worked on "Herb" (Home Exploring Robotic Butler) at CMU!

I am working on "Abbie" at MIT!

Contact

Building 31, Floor 2M, 70 Vassar St, Cambridge, MA 02142

+1 (814) 777 7988

shenli@mit.edu

  • FB
  • Git
  • LinkedIn
  • YouTube

© Shen Li 2018
(Design & CSS courtesy: Kevin Smith, Rui Zhu, and Academic theme for Hugo)