Using tactile sensors and machine learning to improve how robots manipulate fabrics


by Ingrid Fadelli, Tech Xplore

In recent years, roboticists have been trying to improve how robots interact with the wide variety of objects found in real-world settings. While some of these efforts have yielded promising results, the manipulation skills of most existing robotic systems still lag behind those of humans.

Fabrics are among the types of objects that have proved most challenging for robots to interact with. The main reason is that pieces of cloth and other fabrics can be stretched, moved and folded in many different ways, which can result in complex material dynamics and self-occlusions.

Researchers at Carnegie Mellon University's Robotics Institute have recently proposed a new computational technique that could allow robots to better understand and handle fabrics. This technique, introduced in a paper set to be presented at the International Conference on Intelligent Robots and Systems and pre-published on arXiv, is based on the use of a tactile sensor and a simple machine-learning algorithm, known as a classifier.

"We are interested in fabric manipulation because fabrics and deformable objects in general are challenging for robots to manipulate, as their deformability means that they can be configured in so many different ways," Daniel Seita, one of the researchers who carried out the study, told TechXplore. "When we began this project, we knew that there had been a lot of recent work in robots manipulating fabric, but most of that work involves manipulating a single piece of fabric. Our paper addresses the relatively less-explored directions of learning to manipulate a pile of fabric using tactile sensing."

Most existing approaches for enabling fabric manipulation in robots rely solely on vision sensors, such as cameras or imagers that collect only visual data. While some of these methods have achieved good results, their reliance on visual data alone may limit their applicability to simpler tasks that involve the manipulation of a single piece of cloth.

The new method devised by Seita and his colleagues Sashank Tirumala and Thomas Weng, on the other hand, uses data collected by a tactile sensor called ReSkin, which can infer information related to a material's texture and its interaction with the environment. Using this tactile data, the team trained a classifier to determine the number of layers of fabric grasped by a robot.
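As a rough illustration of this idea, the sketch below trains a small classifier to predict the number of grasped layers from tactile readings. The feature window, the placeholder data, and the model architecture are assumptions made for illustration; the paper's actual data pipeline and classifier are not described in this article.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Hypothetical dataset: each row stands in for a flattened window of
# ReSkin readings captured during a pinch grasp; each label is the
# number of fabric layers held (0, 1, or 2). In a real setup, features
# and labels would come from the robot's data-collection runs.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 15 * 10))  # 15 channels x 10 time steps (assumed)
y = rng.integers(0, 3, size=600)     # placeholder labels: 0, 1, or 2 layers

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# A small multilayer perceptron stands in for the paper's classifier,
# whose exact architecture is not detailed here.
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```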

"Our tactile data came from the ReSkin sensor, which was recently developed at CMU last year," Weng explained. "We use this classifier to adjust the height of a gripper in order to grasp one or two top-most fabric layers from a pile of fabrics."

To evaluate their technique, the team carried out 180 experimental trials in a real-world setting, using a robotic system consisting of a Franka robotic arm, a mini-Delta gripper and a ReSkin sensor (integrated on the gripper's "finger") to grasp one or two pieces of cloth from a pile. Their approach achieved promising results, outperforming baseline methods that do not consider tactile feedback.

"Compared to prior approaches that only use cameras, our tactile-sensing-based approach is not affected by patterns on the fabric, changes in lighting, and other visual discrepancies," Tirumala said. "We were excited to see that tactile sensing from electromagnetic devices like the ReSkin sensor can provide a sufficient signal for a fine-grained manipulation task, like grasping one or two fabric layers. We believe that this will motivate future research in tactile sensing for cloth manipulation by robots."

In the future, Tirumala, Weng, Seita, and their colleagues hope that this manipulation approach could help enhance the capabilities of robots deployed in fabric manufacturing facilities, laundry services, or homes. Specifically, it could improve these robots' ability to handle complex textiles, multiple pieces of cloth, laundry, blankets, clothes, and other fabric-based objects.

"Our plan is to continue to explore the use of tactile sensing to grasp an arbitrary number of fabric layers, instead of the one or two layers that we focused on in this work," Weng added. "Furthermore, we are investigating multi-modal approaches that combine both vision and tactile sensing so we can leverage the advantages of both sensor modalities." Explore further Generating cross-modal sensory data for robotic visual-tactile perception More information: Sashank Tirumala et al, Learning to singulate layers using tactile feedback. arXiv:2207.11196v1 [cs.RO]. arxiv.org/abs/2207.11196
