I am a fifth-year Ph.D. candidate at the Robotics Institute, Carnegie Mellon University, working in the RoboTouch Lab under the mentorship of Prof. Wenzhen Yuan and Prof. Ioannis Gkioulekas. My research focuses on computer graphics and tactile sensing. I completed my master's in Robotics at CMU, advised by Prof. Katerina Fragkiadaki and Prof. Katharina Muelling. I received my B.Tech. in Electrical Engineering from IIT Kanpur.
Find my CV here.
Email: arpita1 [at] andrew [dot] cmu [dot] edu
Address: A423 NSH, Carnegie Mellon University, Pittsburgh, PA 15213, USA
Research Publications
Robotic Defect Inspection with Visual and Tactile Perception for Large-scale Components
IEEE International Conference on Intelligent Robots and Systems (IROS), 2023
Arpit Agarwal, Abhiroop Ajith², Chengtao Wen², Veniamin Stryzheus³, Brian Miller³, Matthew Chen³, Micah K. Johnson⁴, Jose Luis Susa Rincon², Justinian Rosca² and Wenzhen Yuan
Affiliations: ² Siemens Corporation, ³ Boeing, ⁴ GelSight Inc.
PDF
Abstract
In manufacturing processes, surface inspection is a key requirement for quality assessment and damage localization. As a result, automated surface anomaly detection has become a promising area of research in various industrial inspection systems. A particular challenge in industries with large-scale components, such as aircraft and heavy machinery, is inspecting large parts with very small defect dimensions; moreover, these parts can have curved shapes. To address this challenge, we present a two-stage multi-modal inspection pipeline with visual and tactile sensing. Our approach combines the best of both modalities by identifying and localizing defects in a global view (vision) and then scanning the localized areas with a tactile sensor to find the remaining defects. To benchmark our approach, we propose a novel real-world dataset with multiple metallic defect types per image, collected in production environments on real aerospace manufacturing parts, as well as online robot experiments in two environments. Our approach identifies 85% of the defects in Stage I and 100% of the defects after Stage II.
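As a rough illustration of the control flow (global visual detection in Stage I, local tactile confirmation in Stage II), here is a minimal Python sketch; the `vision_detector` and `tactile_scanner` callables are hypothetical stand-ins, not the system from the paper.

```python
# Minimal sketch of the two-stage inspection logic; all interfaces are
# hypothetical placeholders, not the released implementation.
from typing import Callable, List, Tuple

Region = Tuple[float, float]  # (x, y) location of a candidate defect in mm

def two_stage_inspection(
    part_image,
    vision_detector: Callable[[object], List[Region]],
    tactile_scanner: Callable[[Region], bool],
) -> List[Region]:
    """Stage I finds candidate regions globally; Stage II confirms them locally."""
    # Stage I: global visual detection over the whole (possibly curved) part.
    candidates = vision_detector(part_image)
    # Stage II: move the tactile sensor to each candidate and confirm the
    # defect with a high-resolution local scan.
    return [region for region in candidates if tactile_scanner(region)]

# Toy usage with stand-in callables (replace with real models/hardware drivers).
if __name__ == "__main__":
    fake_detector = lambda img: [(10.0, 25.0), (42.5, 7.0)]
    fake_scanner = lambda region: region[0] > 20.0  # pretend only one confirms
    print(two_stage_inspection(None, fake_detector, fake_scanner))
```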
Simulation-driven vision-based tactile sensor design
IEEE International Conference on Computational Photography (ICCP), 2022
Arpit Agarwal, Timothy Man, Edward Adelson, Ioannis Gkioulekas, Wenzhen Yuan.
PDF
Abstract
We leverage physics-based rendering techniques to simulate the light transport process inside curved vision-based tactile sensor designs. We use physically grounded models of light and material in our simulation, together with lightweight and fast calibration methods for fitting these analytical models to our prototype. Given a calibrated simulation framework, we propose a tactile sensor shape optimization pipeline. Towards this goal, we introduce a low-dimensional tactile sensor shape parameterization and automatically generate the full sensor prototype and an indenter surface virtually, which allows us to validate sensor performance across the sensor surface. Our main technical results include a) accurately matching RGB images between simulation and a physical prototype, b) generating improved tactile sensor shapes, and c) characterizing the design parameter space using appropriate light-piping metrics. Our physically accurate simulation framework can generate accurate RGB images for arbitrary vision-based tactile sensors. The parameter space exploration gives high-level guidelines on the design of tactile sensors for specific applications. Lastly, our system allows us to characterize various tactile sensor designs in terms of their 3D shape reconstruction ability on different parts of the sensor surface.
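The sketch below is only a toy illustration of this kind of shape characterization: a one-parameter curved cross-section scored by a simple light-piping style metric (the fraction of surface points hit beyond the critical angle from an assumed edge-mounted light source). The geometry, refractive index, and metric are assumptions, not the paper's parameterization or optimizer.

```python
# Toy characterization of a curved sensor window with a light-piping metric.
import numpy as np

N_GEL = 1.49                           # assumed refractive index (acrylic-like)
CRITICAL_ANGLE = np.arcsin(1.0 / N_GEL)

def light_piping_fraction(radius_mm, half_width_mm=10.0, n_samples=200,
                          source=np.array([-10.0, -5.0])):
    """Fraction of the curved surface hit beyond the critical angle."""
    t_max = np.arcsin(min(half_width_mm / radius_mm, 1.0))
    t = np.linspace(-t_max, t_max, n_samples)
    # Circular-arc cross-section with its apex at the origin.
    points = np.stack([radius_mm * np.sin(t),
                       radius_mm * np.cos(t) - radius_mm], axis=1)
    normals = np.stack([np.sin(t), np.cos(t)], axis=1)   # outward normals
    rays = points - source                               # source -> surface rays
    rays /= np.linalg.norm(rays, axis=1, keepdims=True)
    incidence = np.arccos(np.clip(np.sum(rays * normals, axis=1), -1.0, 1.0))
    return float(np.mean(incidence > CRITICAL_ANGLE))

# Coarse sweep over the one-dimensional shape space.
for r in [15.0, 25.0, 50.0, 100.0]:
    print(f"radius {r:6.1f} mm -> piping fraction {light_piping_fraction(r):.2f}")
```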
Simulation of Vision-based Tactile Sensors using Physics based Rendering
IEEE International Conference on Robotics and Automation (ICRA), 2021
Arpit Agarwal, Tim Man, Wenzhen Yuan.
Conference Version | Extended Version | project page | code
Abstract
Tactile sensing has seen rapid adoption with the advent of vision-based tactile sensors. Vision-based tactile sensors are compact and inexpensive, and provide high-resolution data for precise in-hand manipulation and human-robot interaction. However, simulating tactile sensors remains a challenge. In this paper, we build the first fully general optical tactile simulation system for a GelSight sensor using physics-based rendering techniques. We propose physically accurate light models and present an in-depth analysis of the individual components of our simulation pipeline. Our system outperforms previous simulation techniques both qualitatively and quantitatively on image similarity metrics.
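For intuition only, the toy sketch below shows the basic heightmap → surface normals → RGB data flow behind vision-based tactile images, using plain Lambertian shading under three assumed colored directional lights; the paper itself uses full physics-based rendering rather than this simplification.

```python
# Toy heightmap -> normals -> RGB shading for a GelSight-like image.
import numpy as np

def sphere_heightmap(size=128, radius=40.0, depth=6.0):
    """Heightmap of a spherical indenter pressing into a flat gel (toy units)."""
    y, x = np.mgrid[0:size, 0:size] - size / 2.0
    r2 = radius**2 - x**2 - y**2
    h = np.where(r2 > 0, np.sqrt(np.maximum(r2, 0.0)) - (radius - depth), 0.0)
    return np.clip(h, 0.0, None)

def shade(heightmap):
    """Lambertian shading under three colored directional lights (assumed setup)."""
    gy, gx = np.gradient(heightmap)
    normals = np.dstack([-gx, -gy, np.ones_like(heightmap)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    # One light per color channel, coming from three different directions.
    lights = np.array([[1.0, 0.0, 0.5], [-0.5, 0.87, 0.5], [-0.5, -0.87, 0.5]])
    lights /= np.linalg.norm(lights, axis=1, keepdims=True)
    return np.clip(normals @ lights.T, 0.0, 1.0)   # (H, W, 3) image in [0, 1]

image = shade(sphere_heightmap())
print(image.shape, image.min(), image.max())
```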
Grasp Stability Prediction with Sim-to-Real Transfer from Tactile Sensing
IEEE International Conference on Intelligent Robots and Systems (IROS), 2022
Zilin Si, Zirui Zhu, Arpit Agarwal, Wenzhen Yuan.
PDF
Abstract
Robot simulation has been an essential tool for data-driven manipulation tasks. However, most existing simulation frameworks lack either efficient and accurate models of physical interactions with tactile sensors or realistic tactile simulation. As a result, sim-to-real transfer for tactile-based manipulation tasks remains challenging. In this work, we integrate simulation of robot dynamics and vision-based tactile sensors by modeling the physics of contact. This contact model uses simulated contact forces at the robot's end-effector to inform the generation of realistic tactile outputs. To close the sim-to-real gap, we calibrate our physics simulator of robot dynamics, contact model, and tactile optical simulator with real-world data, and then demonstrate the effectiveness of our system on a zero-shot sim-to-real grasp stability prediction task, where we achieve an average accuracy of 90.7% on various objects. Experiments reveal the potential of applying our simulation framework to more complicated manipulation tasks. We open-source our simulation framework at https://github.com/CMURoboTouch/Taxim/tree/taxim-robot
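As a hedged illustration of the final step of such a pipeline, here is a minimal stand-in classifier that maps a tactile image to a grasp-stability probability; the architecture, input resolution, and training setup are assumptions for illustration and are not the model used in the paper (see the linked repository for the actual code).

```python
# Minimal stand-in grasp stability classifier over tactile images (assumed design).
import torch
import torch.nn as nn

class StabilityNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 1)

    def forward(self, tactile_rgb):
        # tactile_rgb: (batch, 3, H, W) tactile images from simulation or sensor.
        h = self.features(tactile_rgb).flatten(1)
        return torch.sigmoid(self.classifier(h))   # probability that the grasp is stable

model = StabilityNet()
fake_batch = torch.rand(4, 3, 240, 320)             # assumed image resolution
print(model(fake_batch).shape)                       # torch.Size([4, 1])
```

In the zero-shot setting described above, such a classifier would be trained only on simulated tactile images and evaluated directly on real sensor data.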
Improving Grasp Stability with Rotation Measurement from Tactile Sensing
IEEE International Conference on Intelligent Robots and Systems (IROS), 2021
Raj Kolamuri, Zilin Si, Yufan Zhang, Arpit Agarwal, Wenzhen Yuan.
PDF | project page
Abstract
Rotational displacement about the grasping point is a common grasp failure when an object is grasped at a location away from its center of gravity. Tactile sensors with soft surfaces, such as GelSight sensors, can detect the rotation patterns on the contacting surfaces when the object rotates. In this work, we propose a model-based algorithm that detects those rotational patterns and measures rotational displacement using the GelSight sensor. We also integrate the rotation detection feedback into a closed-loop regrasping framework, which detects rotational grasp failure at an early stage and drives the robot to a stable grasp pose. We validate our proposed rotation detection algorithm and grasp-regrasp system on a self-collected dataset and in online experiments, showing that our approach accurately detects rotation and increases grasp stability.
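The generic estimator below sketches one way to measure in-plane rotation from tracked marker positions on the gel surface, via a least-squares (Kabsch/Procrustes-style) rotation fit on synthetic data; it is not necessarily the exact algorithm used in the paper.

```python
# Estimate in-plane rotation from tracked marker displacements (generic method).
import numpy as np

def estimate_rotation_deg(markers_before, markers_after):
    """Best-fit 2D rotation (degrees) mapping markers_before to markers_after."""
    p = markers_before - markers_before.mean(axis=0)
    q = markers_after - markers_after.mean(axis=0)
    h = p.T @ q                              # 2x2 cross-covariance
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))   # guard against reflections
    r = vt.T @ np.diag([1.0, d]) @ u.T
    return np.degrees(np.arctan2(r[1, 0], r[0, 0]))

# Synthetic check: markers rotated by 5 degrees about their centroid.
rng = np.random.default_rng(0)
before = rng.uniform(-1, 1, size=(20, 2))
theta = np.radians(5.0)
rot = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
after = (before - before.mean(0)) @ rot.T + before.mean(0)
print(estimate_rotation_deg(before, after))   # ~5.0
```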
Model learning for look-ahead exploration in continuous control
AAAI Conference on Artificial Intelligence (AAAI), 2019
Arpit Agarwal, Katharina Muelling, Katerina Fragkiadaki.
PDF | project page | code
Abstract
We propose an exploration method that incorporates look-ahead search over basic learnt skills and their dynamics, and use it for reinforcement learning (RL) of manipulation policies. Our skills are multi-goal policies learned in isolation in simpler environments using existing multi-goal RL formulations, analogous to options or macro-actions. Coarse skill dynamics, i.e., the state transition caused by a (complete) skill execution, are learnt and unrolled forward during look-ahead search. Policy search benefits from temporal abstraction during exploration, though it itself operates over low-level primitive actions, and thus the resulting policies do not suffer from the suboptimality and inflexibility caused by coarse skill chaining. We show that the proposed exploration strategy results in effective learning of complex manipulation policies faster than current state-of-the-art RL methods, and converges to better policies than methods that use options or parameterized skills as building blocks of the policy itself, as opposed to guiding exploration.
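The toy sketch below illustrates the look-ahead idea with hand-written stand-ins for the learned coarse skill dynamics: each skill exposes a one-step model of the state after the whole skill executes, and a short exhaustive search over skill sequences picks the one predicted to land closest to the goal. The skills, dynamics, and search depth here are assumptions for illustration only.

```python
# Look-ahead search over coarse skill dynamics (toy stand-ins, not learned models).
import itertools
import numpy as np

# Coarse skill dynamics: state -> predicted state after the whole skill finishes.
SKILLS = {
    "move_+x": lambda s: s + np.array([1.0, 0.0]),
    "move_-x": lambda s: s + np.array([-1.0, 0.0]),
    "move_+y": lambda s: s + np.array([0.0, 1.0]),
    "move_-y": lambda s: s + np.array([0.0, -1.0]),
}

def lookahead(state, goal, depth=3):
    """Return the skill sequence whose predicted terminal state is nearest the goal."""
    best_seq, best_dist = None, np.inf
    for seq in itertools.product(SKILLS, repeat=depth):
        s = state
        for name in seq:                      # unroll coarse dynamics forward
            s = SKILLS[name](s)
        dist = np.linalg.norm(s - goal)
        if dist < best_dist:
            best_seq, best_dist = seq, dist
    return best_seq, best_dist

seq, dist = lookahead(np.zeros(2), goal=np.array([2.0, 1.0]))
print(seq, round(dist, 3))   # e.g. ('move_+x', 'move_+x', 'move_+y') 0.0
```

In the paper's setting, the chosen sequence only biases exploration; the policy being learned still acts over low-level primitive actions.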
Deep Reinforcement Learning with Skill Library: Exploring with Temporal Abstractions and coarse approximate Dynamics Models
Technical Report CMU-RI-TR-18-31, July 2018
Arpit Agarwal
PDF | code
Abstract
Reinforcement learning is a computational approach to learning from interaction. However, learning from scratch using reinforcement learning requires an exorbitant number of interactions with the environment, even for simple tasks. One way to alleviate this problem is to reuse previously learned skills, as humans do. This thesis provides frameworks and algorithms to build and reuse a Skill Library. First, we extend the Parameterized Action Space formulation with our Skill Library to the multi-goal setting and show improvements in learning by using hindsight at a coarse level. Second, we use our Skill Library to explore at a coarser level and learn the optimal policy for continuous control. We demonstrate the benefits, in terms of speed and accuracy, of the proposed approaches on a set of complex real-world robotic manipulation tasks on which some state-of-the-art methods completely fail.
Reinforcement Learning of Active Vision for Manipulating Objects under Occlusions
Conference on Robot Learning (CoRL), 2018
Ricson Cheng, Arpit Agarwal, Katerina Fragkiadaki.
PDF
Abstract
We consider artificial agents that learn to jointly control their gripper and camera in order to reinforcement-learn manipulation policies in the presence of occlusions from distractor objects. Distractors often occlude the object of interest and cause it to disappear from the field of view. We propose hand/eye controllers that learn to move the camera to keep the object within the field of view and visible, in coordination with manipulating it to achieve the desired goal, e.g., pushing it to a target location. We incorporate structural biases of object-centric attention within our actor-critic architectures, which our experiments suggest is key to good performance. Our results further highlight the importance of curriculum with regard to environment difficulty. The resulting active vision / manipulation policies outperform static camera setups for a variety of cluttered environments.
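As a toy illustration of the object-centric attention bias (not the paper's architecture), the sketch below simply crops a fixed-size window around the estimated object location before the frame would be passed to the policy network.

```python
# Object-centric attention as a fixed-size crop around the estimated object location.
import numpy as np

def object_centric_crop(frame, center_xy, size=64):
    """Return a size x size crop of `frame` centered on the object,
    zero-padded when the window falls outside the image."""
    half = size // 2
    pad = ((half, half), (half, half)) + ((0, 0),) * (frame.ndim - 2)
    padded = np.pad(frame, pad, mode="constant")
    cx, cy = int(round(center_xy[0])), int(round(center_xy[1]))
    # After padding, original pixel (cx, cy) sits at (cx + half, cy + half).
    return padded[cy:cy + size, cx:cx + size]

frame = np.zeros((120, 160, 3), dtype=np.float32)
crop = object_centric_crop(frame, center_xy=(150.0, 10.0))
print(crop.shape)   # (64, 64, 3)
```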
Other Projects
Position-Free Monte Carlo for Layered Material Rendering
Physics-Based Rendering, CMU, Spring 2021
Arpit Agarwal
Report
Exploration with Expert Policy Advice
Technical Report, CMU, 2018
Ashwin Khadke, Arpit Agarwal, Anahita Mohseni-Kabir, Devin Schwab
PDF
Software