Abstract
Natural Human-Robot Interaction (HRI) is a key capability for service robots working in human-centric environments. In such dynamic environments, the robot needs to understand the user's intention to accomplish a task successfully. To address this, we propose a software architecture that segments a target object, indicated verbally by a human user, from a crowded scene. At the core of our system, we employ a multi-modal deep neural network for visual grounding. Unlike most grounding methods, which tackle the challenge in two steps using pre-trained object detectors, we develop a single-stage zero-shot model that can make predictions on unseen data. We evaluate the performance of the proposed model on real RGB-D data collected from public scene datasets. Experimental results show that the proposed model performs well in terms of accuracy and speed, while being robust to variations in the natural language input.
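The abstract does not specify the network's internals, but the single-stage idea it describes can be illustrated with a minimal sketch: encode the RGB-D frame and the referring expression, fuse the two modalities, and predict a per-pixel mask in one forward pass, with no separate pre-trained detector producing region proposals. Everything below (module names, layer sizes, the GRU text encoder, the toy tokenisation) is a hypothetical PyTorch illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn

class SingleStageGrounder(nn.Module):
    """Illustrative single-stage visual grounding model: fuses image and
    language features and predicts a segmentation mask directly, without
    a two-step detect-then-match pipeline. All sizes are assumptions."""

    def __init__(self, vocab_size=10000, d=256):
        super().__init__()
        # Visual backbone: a small conv encoder over 4-channel RGB-D input.
        self.visual = nn.Sequential(
            nn.Conv2d(4, d, kernel_size=7, stride=4, padding=3), nn.ReLU(),
            nn.Conv2d(d, d, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        # Language encoder: embedding + GRU over the referring expression.
        self.embed = nn.Embedding(vocab_size, d)
        self.gru = nn.GRU(d, d, batch_first=True)
        # Fusion + per-pixel mask head (single forward pass, no proposals).
        self.fuse = nn.Conv2d(2 * d, d, kernel_size=1)
        self.mask_head = nn.Conv2d(d, 1, kernel_size=1)

    def forward(self, rgbd, tokens):
        v = self.visual(rgbd)                 # (B, d, H', W') visual map
        _, h = self.gru(self.embed(tokens))   # final hidden state (1, B, d)
        # Broadcast the sentence embedding over the spatial grid and fuse.
        l = h[-1][:, :, None, None].expand(-1, -1, v.shape[2], v.shape[3])
        fused = torch.relu(self.fuse(torch.cat([v, l], dim=1)))
        return self.mask_head(fused)          # per-pixel mask logits

# Usage: one RGB-D frame and a tokenised expression such as "the red mug".
model = SingleStageGrounder()
rgbd = torch.randn(1, 4, 480, 640)           # 3 RGB channels + 1 depth
tokens = torch.randint(0, 10000, (1, 6))     # hypothetical token ids
mask_logits = model(rgbd, tokens)
print(mask_logits.shape)                     # torch.Size([1, 1, 60, 80])
```

The point of the sketch is the control flow: language conditions the dense prediction directly, so the referred object is segmented in one pass rather than first detecting all objects and then ranking them against the expression.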
| Original language | English |
| --- | --- |
| Title of host publication | IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC) |
| Publisher | IEEE |
| Pages | 50-56 |
| Number of pages | 6 |
| ISBN (Electronic) | 978-1-6654-3198-9 |
| ISBN (Print) | 978-1-6654-3199-6 |
| DOIs | |
| Publication status | Published - 18-May-2021 |
| Event | 2021 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC) - Feira, Portugal. Duration: 28-Apr-2021 → 29-Apr-2021 |
Conference
| Conference | 2021 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC) |
| --- | --- |
| Country/Territory | Portugal |
| City | Feira |
| Period | 28/04/2021 → 29/04/2021 |