In assistive robotics, manual teleoperation of a robotic arm is typically mediated by control interfaces such as joysticks, switch-based head arrays, and sip-and-puff devices. These interfaces are of much lower dimensionality than the robotic arm, so the end-user can activate the robot's motion in only a subset of the spatial dimensions at a time, which makes it harder for the robot to estimate user intent (in terms of reaching goals). To enhance the autonomy's ability to estimate intent, in this work I developed a human-machine interaction algorithm that nudges the robot to states from which subsequent human teleoperation is guaranteed (in an information-gain sense) to yield more accurate intent estimation by the autonomy. By indirectly influencing the user to generate control signals that are more informative about their intent, the autonomy can assist the user more effectively, thereby reducing overall task effort.
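The disambiguation idea can be illustrated with a small sketch: maintain a Bayesian belief over candidate reaching goals, and score candidate robot states by the expected reduction in belief entropy once the user teleoperates from there. This is only a minimal illustration; the function names, the directional likelihood model, and the constant `beta` are assumptions and are not taken from the linked repositories.

```python
import numpy as np

def goal_posterior(prior, u, x, goals, beta=5.0):
    """Bayesian update of the belief over K candidate reaching goals.

    prior: (K,) current belief over goals
    u:     low-dimensional user control command (d,)
    x:     current robot end-effector position (d,)
    goals: (K, d) candidate goal locations
    """
    # Likelihood: commands pointing toward a goal are more probable under it.
    directions = goals - x
    directions /= np.linalg.norm(directions, axis=1, keepdims=True) + 1e-8
    u_norm = u / (np.linalg.norm(u) + 1e-8)
    likelihood = np.exp(beta * directions @ u_norm)
    posterior = prior * likelihood
    return posterior / posterior.sum()

def entropy(p):
    return -np.sum(p * np.log(p + 1e-12))

def disambiguation_score(belief, x_candidate, goals, control_samples):
    """Expected reduction in goal-belief entropy if the user were to
    teleoperate from candidate robot state x_candidate (larger is better)."""
    expected_entropy = np.mean([
        entropy(goal_posterior(belief, u, x_candidate, goals))
        for u in control_samples
    ])
    return entropy(belief) - expected_entropy

# Toy usage: pick the candidate state that best disambiguates the goals.
goals = np.array([[0.5, 0.2], [0.4, -0.3], [-0.2, 0.6]])
belief = np.ones(3) / 3
controls = [np.array([1.0, 0.0]), np.array([0.0, 1.0]),
            np.array([-1.0, 0.0]), np.array([0.0, -1.0])]
candidates = [np.zeros(2), np.array([0.3, 0.0])]
best = max(candidates, key=lambda s: disambiguation_score(belief, s, goals, controls))
```

The autonomy would then nudge the robot toward the highest-scoring candidate state before handing control back to the user.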
Code available at https://github.com/deepakgopinath/adaptive_assistance_sim and https://github.com/deepakgopinath/jaco_adaptive_assistance
Flow chart depicting user action sequence during teleoperation with intent estimation and disambiguation.
In the context of human-machine interaction, inferring a human's visual awareness of their surroundings while they perform a task is valuable for machines to facilitate seamless interactions and effective interventions. In this work, we propose a computational model to estimate a person's attended awareness of their environment during driving. The model takes as input a video of the scene as seen by the person performing the task, along with noisily registered ego-centric gaze sequences from that person, and estimates a) a saliency heat map, b) a refined estimate of the gaze, and c) an estimate of the subject's attended awareness of the scene.
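As a schematic illustration of this input-output structure (not the actual architecture from the repository below), a per-frame model could encode the scene, fuse it with a rasterized gaze heat map, and decode the three outputs. Everything in this sketch, including the class name and layer sizes, is assumed for illustration.

```python
import torch
import torch.nn as nn

class AttendedAwarenessNet(nn.Module):
    """Schematic three-headed model: scene saliency, refined gaze,
    and a per-pixel attended-awareness estimate."""

    def __init__(self, hidden=32):
        super().__init__()
        # Shared encoder over a single video frame (B, 3, H, W).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
        )
        # The noisy gaze is rasterized into a heat-map channel (B, 1, H, W)
        # and fused with the scene features.
        self.fuse = nn.Conv2d(hidden + 1, hidden, 3, padding=1)
        self.saliency_head = nn.Conv2d(hidden, 1, 1)   # a) saliency heat map
        self.gaze_head = nn.Conv2d(hidden, 1, 1)       # b) refined gaze estimate
        self.awareness_head = nn.Conv2d(hidden, 1, 1)  # c) attended awareness

    def forward(self, frame, gaze_map):
        feats = self.encoder(frame)
        fused = torch.relu(self.fuse(torch.cat([feats, gaze_map], dim=1)))
        return (torch.sigmoid(self.saliency_head(fused)),
                torch.sigmoid(self.gaze_head(fused)),
                torch.sigmoid(self.awareness_head(fused)))
```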
Code and dataset available at https://github.com/ToyotaResearchInstitute/att-aware
(work done at Toyota Research Institute, Cambridge, MA, jointly with Guy Rosman, Simon Stent, Katsuya Terahata, Luke Fletcher, John Leonard)
Time evolution of gaze (left) and attended awareness (right) when the observer is distracted by a text message.
High-dimensional robot control using low-dimensional control interfaces such as joysticks, switch-based head arrays, or sip-and-puff devices poses a unique challenge for end-users. During teleoperation, the end-user has to perform mode switches, which can be cognitively and physically taxing. Due to fatigue and various other external factors, the interface-level action the user intended to generate and the action that actually gets generated can differ. It is therefore important for the autonomous agent to distinguish between the human's conceptual and physical understanding of interface usage. Reasoning about how the intended action differs from the measured action is necessary to perform interface-level intent inference and to intervene appropriately when unintended interface-level actions occur.
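A minimal way to reason about the gap between intended and measured interface-level actions is a Bayesian correction using a confusion model p(measured | intended). The sketch below uses a sip-and-puff-style action set; the action names, confusion-matrix values, and prior are made up for illustration and are not from the linked repository.

```python
import numpy as np

# Illustrative interface-level actions for a sip-and-puff-style interface.
ACTIONS = ["hard_puff", "soft_puff", "soft_sip", "hard_sip"]

# p(measured | intended): a confusion matrix capturing the user's *physical*
# interface operation, e.g., a hard puff sometimes registering as a soft puff.
# Rows are intended actions, columns are measured actions (values are made up).
P_MEASURED_GIVEN_INTENDED = np.array([
    [0.80, 0.15, 0.04, 0.01],
    [0.10, 0.80, 0.08, 0.02],
    [0.02, 0.08, 0.80, 0.10],
    [0.01, 0.04, 0.15, 0.80],
])

def infer_intended_action(measured_idx, prior_over_intended):
    """Posterior over the intended interface-level action given what the
    interface actually measured and a task-driven prior over intentions."""
    likelihood = P_MEASURED_GIVEN_INTENDED[:, measured_idx]
    posterior = likelihood * prior_over_intended
    return posterior / posterior.sum()

# Example: the task context suggests the user most likely wants a hard puff,
# but the interface measured a soft puff; the posterior shifts probability
# mass back toward the intended hard puff, letting the autonomy intervene.
posterior = infer_intended_action(measured_idx=1,
                                  prior_over_intended=np.array([0.7, 0.1, 0.1, 0.1]))
print(dict(zip(ACTIONS, posterior.round(3))))
```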
Code available at https://github.com/deepakgopinath/customized_interface_aware_assistance
(work done jointly with Mahdieh Nejati Javaremi)
Probabilistic Graphical Model depicting user-robot interaction via a control interface.
In shared autonomy, at one end of the spectrum we have fully manual teleoperation, in which the robot is just a passive device, and at the other end we have fully autonomous robots, in which the human becomes a passive observer. One of the primary challenges in shared autonomous systems is determining how to arbitrate between the human's and the autonomous partner's control signals. A common approach is to assume some overall objective that the human-autonomy team is trying to optimize and to design control algorithms that achieve that objective. In this work, we approach the problem from a different perspective by directly involving the end-user in the optimization process. We use an interactive optimization procedure in which verbal commands from the user are mapped to changes in the parameters of the arbitration function, eventually converging to the overall behavior that the user prefers.
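As a concrete, hypothetical example of such an arbitration function and its user-driven customization, consider a confidence-based blend of the human's and the autonomy's commands whose parameters are nudged by coarse verbal feedback. The parameter names and the feedback-to-update mapping below are assumptions for illustration, not the procedure used in this work.

```python
import numpy as np

class Arbitrator:
    """Blend human and autonomy commands with a confidence-based weight
    parameterized by (threshold, scale); both parameters are illustrative."""

    def __init__(self, threshold=0.4, scale=5.0):
        self.threshold = threshold  # confidence at which autonomy starts helping
        self.scale = scale          # how aggressively assistance ramps up

    def alpha(self, confidence):
        """Weight on the autonomy's command, in [0, 1]."""
        return 1.0 / (1.0 + np.exp(-self.scale * (confidence - self.threshold)))

    def blend(self, u_human, u_autonomy, confidence):
        a = self.alpha(confidence)
        return a * np.asarray(u_autonomy) + (1.0 - a) * np.asarray(u_human)

    def apply_feedback(self, command):
        """Map coarse verbal feedback onto small parameter updates."""
        if command == "more help":
            self.threshold = max(0.0, self.threshold - 0.05)
        elif command == "less help":
            self.threshold = min(1.0, self.threshold + 0.05)
        elif command == "smoother":
            self.scale = max(1.0, self.scale - 1.0)
        elif command == "sharper":
            self.scale += 1.0

# Toy usage: the user asks for more help, then the commands are blended.
arb = Arbitrator()
arb.apply_feedback("more help")
print(arb.blend(u_human=[0.0, 1.0], u_autonomy=[1.0, 0.0], confidence=0.6))
```

Repeated rounds of such feedback gradually reshape the arbitration behavior toward what the user prefers.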
Core component of a shared autonomy system with user-driven customization of arbitration parameters.
RemoteHRI is a JavaScript framework for crowd-sourced human-robot interaction experiments. Evaluating human-robot systems requires thorough experiments; however, designing and conducting HRI experiments on real robotic systems comes with many challenges, especially in the academic setting. In light of the COVID-19 pandemic in particular, running in-person studies in academic labs has become more difficult due to social distancing protocols. Built with HRI researchers in mind, RemoteHRI provides a flexible set of software tools that allow for rapid prototyping and quick deployment of a wide range of laboratory-like experiments that can be run online. RemoteHRI uses the state-of-the-art ReactJS framework to build standard HRI stimulus environments such as grid worlds, differential-drive cars, and robotic arms. As a result, the researcher can focus solely on the experimental design, thereby saving valuable time and effort.
Code for RemoteHRI is available at https://github.com/argallab/RemoteHRI.
(work done jointly with Finley Lau, as a part of Lau's undergraduate summer internship)
Multi-linked planar robotic arm environment for performing reaching tasks.