Benjamin A. Newman

I am a PhD student in the Robotics Institute at Carnegie Mellon University, where I work on assistive human-robot interaction.

At CMU I am a member of the Human And Robot Partners (HARP) Lab where I am advised by Henny Admoni and Kris Kitani. Broadly, I am interested in developing seamlessly assistive home robots. Specifically, I want to create assistive robots that are able to understand and reason about the consequences of their actions on the people they assist. I am fortunate to be funded in part by the NSF GRFP.

Recently, I completed an internship at Meta's Reality Labs, where I studied the effect of visually presented, optimal assistance on people as they completed a complex house-cleaning task in an XR simulation built in Habitat. There, I worked with Ruta Desai and Kevin Carlberg.

I completed my undergrad at Indiana University, Bloomington, in 2016, where I obtained a BS in Computer Science and a BS in Cognitive Science. While there, I was fortunate to work with David Crandall, Chen Yu, and Kris Hauser.

Email  /  CV  /  Google Scholar  /  Github  /  LinkedIn

Research

I'm interested in human-robot interaction, assistive technologies, machine learning, and reinforcement learning. My research focuses on how we can develop human-robot systems that lead to successful assistive interactions.

Helping People Through Space and Time: Assistance as a Perspective on Human-Robot Interaction
Benjamin A. Newman, Reuben Aronson, Kris Kitani, and Henny Admoni
Frontiers in Robotics and AI (accepted), 2021
pdf / bibtex

We define assistance as a perspective on human-robot interaction and provide cross-domain design axes that are critical to consider when developing assistive robots. We support these axes with a broad review of recent assistive robotics research.

HARMONIC: A Multimodal Data Set of Assistive Human-Robot Collaboration
Benjamin A. Newman*, Reuben Aronson*, Kris Kitani, and Henny Admoni
IJRR, 2021
pdf / bibtex / Project Page

We present a multi-modal dataset of eye gaze, joystick activation, egocentric video, robot motion, and arm electromyography taken during a human-robot co-manipulation task under varying degrees of robotic assistance.

* denotes equal contribution

Examining the Effects of Anticipatory Robot Assistance on Human Decision Making
Benjamin A. Newman*, Abhijat Biswas*, Sarthak Ahuja, Siddharth Girdhar, Kris Kitani, and Henny Admoni
ICSR, 2020
pdf / bibtex

We explore how robot motion expressed in advance of an expected event (e.g., a robot reaching for an object it expects the user will want) affects the decision the person eventually makes.

* denotes equal contribution

Visual Assistance for Object-Rearrangement Tasks in Augmented Reality
Benjamin A. Newman, Kevin Carlberg, and Ruta Desai
arXiv, 2020
pdf / bibtex

We examine how presenting users with optimal routing assistance through a visual display affects their task performance and sense of agency when completing a complex object-rearrangement task.

In-Sight: Tension-Based Haptic Feedback to Improve Navigation for People who are Blind
Alexander Baikovitz*, Jonathan Duffy*, Zachary Sussman*, Benjamin A. Newman, and Henny Admoni
CHI 2019 Workshop on Hacking Blind Navigation, 2019
pdf / bibtex

We develop a portable haptic device that helps visually impaired users navigate real-world environments.

* denotes equal contribution

Global and Local Statistical Regularities Control Visual Attention to Object Sequences
Alexa Romberg, Yayun Zhang, Benjamin A. Newman, Jochen Triesch, and Chen Yu
ICDL-EpiRob, 2016
pdf / bibtex

We study how cross-situational statistics drive visual attention. Specifically, we examine how attention to infrequently displayed objects differs from attention to frequently displayed ones.

Projects
Hand-Eye Coordination Primitives for Assistive Robotic Co-Manipulation
Benjamin A. Newman, Kris Kitani, and Henny Admoni
pdf

We attempt to discover joint hand and eye gaze primitives for human-robot co-manipulation in an assisted eating task, which could be useful for user goal recognition.


Thank you, Jon Barron, for creating and open-sourcing a fantastic website!