Estimating the pose of hand-held objects is an essential and challenging problem in robotics and computer vision. While leveraging multi-modal RGB and depth information is a promising solution, existing approaches still face difficulties due to hand-induced occlusions and the fusion of multimodal data. In a new study, researchers developed a deep learning framework that addresses these issues by introducing a novel vote-based fusion module and a hand-aware pose estimation module.
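To give a sense of the general idea behind vote-based methods (this is an illustrative sketch, not the paper's implementation), each observed 3D point can cast a "vote" — a predicted offset toward an object keypoint such as its center — and the votes are aggregated so that the object can still be localized even when the hand occludes part of it. The function names and toy data below are assumptions for illustration; in practice the offsets would be predicted by a trained network.

```python
import numpy as np

def cast_votes(points, offsets):
    """Each point plus its predicted offset yields a vote for the object center."""
    return points + offsets

def aggregate_votes(votes, visible_mask):
    """Aggregate votes from visible (non-occluded) points, e.g. by averaging."""
    return votes[visible_mask].mean(axis=0)

# Toy scene: object centered at (1, 2, 3); points sampled around it.
rng = np.random.default_rng(0)
center = np.array([1.0, 2.0, 3.0])
points = center + rng.normal(scale=0.1, size=(100, 3))

# Ideal offsets for illustration; a learned model would predict these.
offsets = center - points

# Pretend the hand occludes 30 of the 100 points.
visible = np.ones(100, dtype=bool)
visible[:30] = False

votes = cast_votes(points, offsets)
estimate = aggregate_votes(votes, visible)
```

Because occluded points are simply excluded from aggregation, a partial view can still yield a usable center estimate — which is the intuition behind making such methods robust to hand occlusion.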
Published by: Dr. Durant. Please credit the source when reposting: https://robotalks.cn/researchers-develop-a-novel-vote-based-model-for-more-accurate-hand-held-object-pose-estimation/