Few-Shot Visual Grounding for Natural Human-Robot Interaction

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

6 Citations (Scopus)
110 Downloads (Pure)

Abstract

Natural Human-Robot Interaction (HRI) is one of the key components for service robots to be able to work in human-centric environments. In such dynamic environments, the robot needs to understand the intention of the user to accomplish a task successfully. To address this point, we propose a software architecture that segments a target object, indicated verbally by a human user, from a crowded scene. At the core of our system, we employ a multi-modal deep neural network for visual grounding. Unlike most grounding methods that tackle the challenge with pre-trained object detectors via a two-step process, we develop a single-stage zero-shot model that is able to provide predictions on unseen data. We evaluate the performance of the proposed model on real RGB-D data collected from public scene datasets. Experimental results showed that the proposed model performs well in terms of accuracy and speed, while showcasing robustness to variation in the natural language input.
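The abstract describes a single-stage, multi-modal grounding network that fuses a verbal referring expression with visual features instead of relying on a pre-trained object detector. The snippet below is a minimal illustrative sketch of that general idea, not the authors' architecture: it broadcasts a language embedding over a CNN feature map and predicts a per-location grounding heatmap. The class name `SingleStageGrounder`, all layer sizes, and the use of plain RGB input (the paper evaluates on RGB-D data) are assumptions made purely for illustration.

```python
# Minimal sketch (not the paper's implementation) of a single-stage
# visual-grounding network: a language embedding is broadcast over the
# visual feature map and fused with 1x1 convolutions to score each
# spatial location as belonging to the referred target object.
import torch
import torch.nn as nn


class SingleStageGrounder(nn.Module):
    def __init__(self, vocab_size=1000, txt_dim=128, vis_dim=128):
        super().__init__()
        # Text branch: token embeddings pooled by an LSTM into one vector.
        self.embed = nn.Embedding(vocab_size, txt_dim)
        self.lstm = nn.LSTM(txt_dim, txt_dim, batch_first=True)
        # Visual branch: a tiny CNN standing in for a real backbone.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, vis_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Fusion head: concatenate the broadcast language vector with the
        # visual map and predict one grounding logit per spatial cell.
        self.head = nn.Sequential(
            nn.Conv2d(vis_dim + txt_dim, 128, 1), nn.ReLU(),
            nn.Conv2d(128, 1, 1),
        )

    def forward(self, image, tokens):
        vis = self.backbone(image)                    # (B, C, H, W)
        _, (txt, _) = self.lstm(self.embed(tokens))   # (1, B, D)
        txt = txt[-1][:, :, None, None]               # (B, D, 1, 1)
        txt = txt.expand(-1, -1, vis.size(2), vis.size(3))
        return self.head(torch.cat([vis, txt], dim=1))  # (B, 1, H, W)


# Example: ground a tokenised verbal query in a single RGB image.
model = SingleStageGrounder()
image = torch.randn(1, 3, 128, 128)        # dummy RGB input
tokens = torch.randint(0, 1000, (1, 6))    # dummy referring expression
heatmap = model(image, tokens)
print(heatmap.shape)                        # torch.Size([1, 1, 32, 32])
```

Because the grounding scores are produced directly from the fused features, no separate region-proposal or object-detection stage is needed, which is the essence of the single-stage formulation the abstract contrasts with two-step detector-based pipelines.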
Original language: English
Title of host publication: IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC)
Publisher: IEEE
Pages: 50-56
Number of pages: 6
ISBN (Electronic): 978-1-6654-3198-9
ISBN (Print): 978-1-6654-3199-6
DOIs
Publication status: Published - 18-May-2021
Event: 2021 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC) - Feira, Portugal
Duration: 28-Apr-2021 → 29-Apr-2021

Conference

Conference: 2021 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC)
Country/Territory: Portugal
City: Feira
Period: 28/04/2021 → 29/04/2021
