Shen Li
Biography

I am a second-year Master's student at the Robotics Institute, Carnegie Mellon University. I am conducting research at the Personal Robotics Lab, co-advised by Dr. Siddhartha Srinivasa and Dr. Stephanie Rosenthal. My research focuses on enabling robots to gain human trust by building a shared understanding with human partners and enhancing the predictability of robot behavior in human-robot interaction and collaboration.

Previously, I obtained B.S. degrees in Computer Science and Psychology from the Pennsylvania State University. There, I developed obstacle-avoidance algorithms for an autonomous wheelchair using LiDAR data at the Intelligent Vehicles and Systems Laboratory, advised by Dr. Sean Brennan. I also ran user studies on the effects of biological rhythms on human performance at the Human Performance Rhythms Laboratory, advised by Dr. Frederick Brown and Dr. Cynthia LaJambe.

Publications

Peer-Reviewed Conference Papers

Workshop Papers

Peer-Reviewed Journal Articles

Thesis

Research
  • I model adaptation in ad-hoc heterogeneous teams as a cooperative game, in which each player optimizes its strategies by acquiring mutual trust through adequate exploration and queries, and by adapting to the capabilities of, and synergies with, other players. At equilibrium, each agent reaches optimal behavioral adaptation, which emerges somewhere between compliance as a participative follower and influence as a delegative leader, with mutual trust determining the threshold for delegation.
    • Following other players helps a player gain trust from others but may compromise the optimality of its own payoff.
    • Leading others to their highest payoffs helps a player reinforce its trust in others but may cause collaboration conflicts.
  • As researchers, we can design a mechanism that achieves this equilibrium of rational adaptation and seamless collaboration by making each agent optimize its strategies bidirectionally (a minimal sketch of the resulting decision rule follows this list):
    • Making each agent gain trust from other agents
      • Enabling this agent to share two types of information with others to enhance behavioral predictability and trustworthiness
        • Intention (declarative knowledge) [See current work on What Did the Robot Do?]
        • Reasoning (procedural knowledge) [See current work on Why Did the Robot Do That?]
    • Making each agent reinforce its trust in other agents
      • Enabling this agent to exert influence on other agents and optimize their behaviors to their highest payoff
        • Sample selection [See current work on Demonstration Selection]
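The bidirectional optimization above can be illustrated with a small decision rule. This is a hedged, minimal sketch: the [0, 1] trust scale, the delegation threshold, and the function names are hypothetical illustrations, not the actual game-theoretic model.

```python
# Hypothetical sketch of trust-thresholded adaptation between compliance
# (following the partner) and influence (leading the partner). The trust
# scale and the threshold value are illustrative assumptions.

def adapt(my_optimal_action, partner_preferred_action,
          trust_in_partner, delegation_threshold=0.5):
    """Choose between complying with the partner and leading the partner.

    trust_in_partner: my current estimate, in [0, 1], that the partner's
    preference is close to optimal for the team.
    """
    if trust_in_partner >= delegation_threshold:
        # Comply: follow the partner's preference. This gains their trust
        # but may sacrifice some of my own payoff.
        return partner_preferred_action
    # Influence: act on my own optimum and try to lead the partner there.
    # This can raise the team payoff but risks collaboration conflicts.
    return my_optimal_action
```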

What Did the Robot Do?
  • Motivation: A shared awareness of robot intention can help avoid collaboration conflicts with participative collaborators, enhance behavioral predictability, convey purposefulness, and gain trust from delegative collaborators in seamless collaboration.
  • Task: In a tabletop manipulation task, a robot picks up blocks that have the same shape and size but different colors. We are developing algorithms that enable the robot to automatically describe the target blocks in natural language. An expression that identifies an individual object is called a referring expression.
  • Method: To tackle referring expression generation, we crowdsourced a corpus of instructions that people gave to a partner sitting across the table to pick up a block. [IJRR 2016 data paper]
  • Results: An interesting finding is that people have a preference ordering over the features they use to refer to objects; for example, among visual features, color is used more frequently than density. With regard to spatial references, people are significantly better at understanding perspective-independent references than perspective-dependent ones, such as "on the right," because of the ambiguity caused by unspecified perspectives. [RO-MAN 2016 paper, R:SS 2016 Workshop paper]
  • "Pick up the yellow block."

    "Pick up the leftmost orange block from your perspective."

    :(

  • Demo: We are developing efficient and scalable graph-based referring expression generation algorithms to grant robots such as Herb (Home Exploring Robot Butler) and Ada (Assistive Dexterous Arm) the capability to describe the block they are about to manipulate! (A minimal illustrative baseline appears after this list.)
  • Ada is describing the target block while picking it up!
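To make the idea concrete, here is a minimal illustrative baseline: the classic incremental referring-expression algorithm (in the style of Dale and Reiter), which adds attributes in a human-preferred order, such as the color-first ordering found in our corpus, until the target is uniquely identified. This is not our graph-based method; the attribute ordering and block data below are made up.

```python
# Illustrative baseline for referring expression generation: add attributes
# in a preferred order until they rule out every distractor. The ordering
# and the example blocks are hypothetical.

PREFERENCE_ORDER = ["color", "size", "row", "column"]

def generate_referring_expression(target, blocks):
    """Return attribute-value pairs that uniquely identify the target."""
    distractors = [b for b in blocks if b is not target]
    expression = {}
    for attr in PREFERENCE_ORDER:
        value = target[attr]
        if any(b[attr] != value for b in distractors):  # discriminates
            expression[attr] = value
            distractors = [b for b in distractors if b[attr] == value]
        if not distractors:  # target uniquely identified
            return expression
    return None  # no distinguishing description exists

blocks = [
    {"color": "yellow", "size": "small", "row": 1, "column": 1},
    {"color": "orange", "size": "small", "row": 1, "column": 2},
    {"color": "orange", "size": "large", "row": 2, "column": 2},
]
print(generate_referring_expression(blocks[1], blocks))
# -> {'color': 'orange', 'size': 'small'}, i.e. "the small orange block"
```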

Why Did the Robot Do That?
  • Motivation: A shared awareness of robot reasoning can enhance the understandability and predictability of robot behavior, help humans avoid fundamental attribution errors, and satisfy the human desire for authority.
  • Task: In a navigation task, an autonomous mobile robot plans optimal paths on tilemaps containing roads, grass, rocks, trees, and people, based on varying cost functions. We are granting the robot the capability to automatically explain to human partners, in natural language, the reasoning (the actual cost function) behind its paths.
  • Method: Through crowdsourcing, we are investigating how humans explain the robot's reasoning behind demonstrated paths.
  • Results: We find that people use different approaches to express their understanding of the robot's reasoning. Some use behavioral descriptions (what path did the robot follow?) while others use rationale explanations (why did the robot follow that path?). One hypothesis is that rationale explanations are more effective at gaining human trust than behavioral descriptions.
  • Examples: Rationale explanations of four paths planned with four different cost functions on the same tilemap. (A minimal template-based sketch follows the examples.)

"The robot is avoiding grass."

"The robot is avoiding rocks."

"The robot is avoiding dirt roads."

"The robot doesn't care :)"

Demonstration Selection
  • Motivation: In the user study for the [Why Did the Robot Do That?] project, participants interpreted and explained robot reasoning from observations of robot navigation behavior. We found that incompleteness and ambiguity in the paths could prevent participants from understanding the robot's reasoning. To mitigate such ambiguity, we are enabling a robot teacher to incrementally query human confusion about its reasoning and to selectively demonstrate a series of paths that optimize the human's estimate of that reasoning.
  • Method: We formulate demonstration selection as an active learning problem, sketched below.
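As a hedged illustration of the active-learning formulation, the sketch below greedily picks the demonstration path whose observation most reduces the entropy of the human's belief over candidate cost functions. The hypotheses, paths, and likelihood values are all made up.

```python
# Hypothetical sketch: greedy demonstration selection by expected
# reduction in the entropy of the human's belief about the robot's
# cost function.

import math

def entropy(belief):
    return -sum(p * math.log2(p) for p in belief.values() if p > 0)

def posterior(belief, path, likelihood):
    """Bayes update of the human's belief after observing one path."""
    unnorm = {h: p * likelihood(path, h) for h, p in belief.items()}
    z = sum(unnorm.values()) or 1.0
    return {h: p / z for h, p in unnorm.items()}

def select_demonstration(paths, belief, likelihood):
    """Choose the path that leaves the human least uncertain."""
    return min(paths,
               key=lambda path: entropy(posterior(belief, path, likelihood)))

# Toy example with two candidate cost functions and two candidate paths.
belief = {"avoids grass": 0.5, "avoids rocks": 0.5}

def likelihood(path, h):
    """P(path | hypothesis); the values are made up."""
    table = {
        ("detour around grass", "avoids grass"): 0.9,
        ("detour around grass", "avoids rocks"): 0.1,
        ("straight line", "avoids grass"): 0.5,
        ("straight line", "avoids rocks"): 0.5,
    }
    return table[(path, h)]

print(select_demonstration(["detour around grass", "straight line"],
                           belief, likelihood))
# -> detour around grass (it discriminates between the hypotheses)
```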
Course Projects

15781: Robots and Humans Teaming up in Pursuit-Evasion Games (ongoing)

Advised by Dr. Gianni Di Caro

  • Motivation: We are enabling robots to develop behavior that compensates for the sub-optimality introduced by their human partner in human-swarm collaboration, in a dynamic environment with multiple goals.
  • Task: We use a pursuit-evasion game in which a human player controls a pursuer and collaborates with robot pursuers to chase multiple robot evaders.
  • Method: The robot pursuers incorporate the randomness and uncertainty introduced by the human player into an MDP so that they can compute an optimal policy that compensates for the human's sub-optimality (a minimal sketch follows). The human model is factored into the human's capabilities for gathering information from agents, planning ahead, and processing multiple tasks simultaneously, all measured from the human's past behavior. [Project Proposal] [Project Report]
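As a hedged sketch of this idea, the code below marginalizes over an epsilon-noisy model of the human's action when computing the robot pursuers' Q-values. The noise rate, the joint transition function, and all names are illustrative assumptions, not the project's actual human model.

```python
# Hypothetical sketch: fold a noisy human model into the pursuers' MDP
# by taking an expectation over the modeled human action.

EPSILON = 0.3  # assumed probability mass on sub-optimal human actions
GAMMA = 0.9    # discount factor

def human_action_distribution(state, actions, human_greedy):
    """Human usually picks the greedy action, but is noisy otherwise."""
    greedy = human_greedy(state)
    n = len(actions)
    return {a: (1 - EPSILON) + EPSILON / n if a == greedy else EPSILON / n
            for a in actions}

def q_value(state, robot_action, actions, step, reward, values, human_greedy):
    """Expected return of a robot action, marginalizing over the human."""
    total = 0.0
    for human_action, p in human_action_distribution(
            state, actions, human_greedy).items():
        next_state = step(state, robot_action, human_action)  # joint move
        total += p * (reward(next_state) + GAMMA * values[next_state])
    return total
```

With these Q-values, standard value iteration yields a robot policy that hedges against the human's modeled noise instead of assuming a perfectly rational partner.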

Yellow = human pursuer, Purple = robot pursuer, Red = robot evader

Media Publicity

Dr. Stephanie Rosenthal's article about our work Why Did the Robot Do That? is on Y Combinator!

Contact

Robotics Institute, Carnegie Mellon University, Pittsburgh, PA 15213

+1 (814) 777 7988

shenli@cmu.edu

  • FB
  • Git
  • LinkedIn
  • YouTube

The Firefighting RoBoT I built at Penn State!


The RoBoT I am working on at CMU!

© Shen Li 2017