MultiSCOPE: Disambiguating in-hand object poses with proprioception and sequential interactions

The International Journal of Robotics Research, Ahead of Print.
Joint estimation of grasped object pose and external contacts is central to robust and dexterous manipulation. In this paper, we present MultiSCOPE, a state-estimation algorithm that leverages sequential frictional contacts (e.g., pokes) to jointly estimate contact locations and grasped object poses using only proprioception and tactile feedback. Our method addresses the problem of reducing object pose uncertainty by using two complementary particle filters over a sequence of actions: one to estimate contact location (CPFGrasp) and another to estimate object poses (SCOPE). Our method handles uncertainty in both robot proprioception and force-torque measurements, which is important for estimating in-hand object pose in the real world. We implement and evaluate our method on simulated and real-world single-arm and dual-arm robotic systems. We demonstrate that by bringing two objects into contact multiple times, the robots can infer contact locations and object poses simultaneously.
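As a concrete illustration of the filtering idea described in the abstract, the sketch below runs a single particle filter that refines a planar in-hand pose belief from simulated wrist force-torque readings over a sequence of pokes. It is a minimal sketch under assumed models: the planar stick geometry, poke parameters, noise levels, and all function names are illustrative and are not the authors' CPFGrasp/SCOPE implementation.

```python
# Minimal sketch (assumed planar model, not the authors' implementation):
# a particle filter that narrows an in-hand pose belief from force-torque
# readings gathered over sequential pokes on a grasped stick.
import numpy as np

rng = np.random.default_rng(0)

def wrench_from_poke(pose, poke_dist, force):
    """Predicted wrist wrench [fx, fy, tau] when a stick held with in-hand
    offset pose = (dx, dy, dtheta) is poked at distance poke_dist along
    its axis with a known planar contact force."""
    dx, dy, dtheta = pose
    contact = np.array([dx + poke_dist * np.cos(dtheta),
                        dy + poke_dist * np.sin(dtheta)])
    tau = contact[0] * force[1] - contact[1] * force[0]  # planar torque r x f
    return np.array([force[0], force[1], tau])

def update(particles, poke_dist, force, measured, sigma=0.05, jitter=0.002):
    """One filter step: jitter particles to model proprioceptive noise,
    weight them by force-torque agreement, then resample."""
    particles = particles + rng.normal(0.0, jitter, particles.shape)
    errors = np.array([np.linalg.norm(wrench_from_poke(p, poke_dist, force) - measured)
                       for p in particles])
    weights = np.exp(-0.5 * (errors / sigma) ** 2)
    weights /= weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

# Unknown true in-hand offset (dx, dy, dtheta) that the filter should recover.
true_pose = np.array([0.03, -0.01, 0.15])

# Initial belief: particles spread over the plausible grasp uncertainty.
particles = np.column_stack([rng.uniform(-0.05, 0.05, 1000),
                             rng.uniform(-0.05, 0.05, 1000),
                             rng.uniform(-0.30, 0.30, 1000)])

# Each poke probes a different point and direction, so successive contacts
# constrain different combinations of translation and rotation.
pokes = [(0.05, np.array([0.0, -1.0])),
         (0.10, np.array([-1.0, 0.0])),
         (0.15, np.array([0.0, -1.0])),
         (0.20, np.array([-1.0, 0.0]))]

for poke_dist, force in pokes:
    measured = wrench_from_poke(true_pose, poke_dist, force) + rng.normal(0.0, 0.02, 3)
    particles = update(particles, poke_dist, force, measured)

print("estimated pose:", particles.mean(axis=0))
print("true pose:     ", true_pose)
```

In MultiSCOPE itself, a second, complementary filter estimates the contact location, and both filters consume real proprioceptive and force-torque signals rather than a simulated measurement model.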

Published by: Andrea Sipos. Please credit the source when reposting: https://robotalks.cn/multiscope-disambiguating-in-hand-object-poses-with-proprioception-and-sequential-interactions/
