Call for Papers: Ambient Understanding for Mobile Autonomous Robots

A Special Issue with the Journal of Ambient Intelligence and Humanized Computing
Open for submissions until February 28, 2022
Submission code: AutoRob

Ambient understanding is a crucial capability for a mobile, intelligent and autonomous cognitive robot. Nowadays, advanced machine learning, computer vision, audio analysis and multimedia systems can be widely used to improve the perceptive and reasoning capabilities of robots. Thanks to these algorithms, robots can see, hear and sense the world, process the data in real time and adapt their behaviour accordingly. The combination of artificial intelligence and robotics has the disruptive potential to make robots fully aware of their environment and able to perform autonomous navigation and intelligent interaction with humans and objects in the scene. As an example, a cognitive robot in a clothing shop may autonomously understand the customer it is talking with, by means of video and audio analytics, recognizing gender, age, ethnicity and clothing, and then suggest specific clothes of interest in the shop, personalized for that person; moreover, it can evaluate customer satisfaction through facial and acoustic emotion analysis and answer unknown questions using natural language processing algorithms.

Similarly, a robot for elderly assistance can entertain its patients with personalized conversations, visually verify nutrition and therapies, detect dangerous situations by means of image and audio analysis, and report them to healthcare professionals or family members.

Following the impressive progress in intelligent robot systems, the proposed Special Issue aims to address emerging topics such as advanced robotic perception and reasoning for autonomous navigation and human-robot interaction, which have attracted widespread interest from multi-disciplinary research groups in academia and industry all over the world.

Call for Papers
In recent years, the combination of robotics and artificial intelligence has led to outstanding developments in the fields of cognitive robotics and human-robot interaction. Nowadays, several academic and industrial research groups are engaged in the design of intelligent robots able to act autonomously by means of advanced deep learning algorithms, applied to the analysis of data acquired from heterogeneous sensors (camera, 3D camera, stereo camera, microphone, LIDAR, sonar, etc.) for ambient understanding (people, objects, scenes) and for dynamically adapting the interaction with humans and the environment.

Despite recent deep learning achievements and the impressive advances in technology for intelligent robot systems, there are still several challenges to be addressed in computer vision, audio analysis, sensor fusion, autonomous navigation and human-robot interaction. Such challenges arise from the inherent difficulty of designing algorithms that are effective in the complex real world, but are also due to the acquisition of low-quality or corrupted images, noisy audio and inaccurate data from environmental sensors, and to the necessity of performing large numbers of operations in real time.

The aim of the Special Issue is to provide a collection of innovative algorithms, theories and applications related to all aspects of perception, reasoning and navigation of a mobile autonomous robot, with contributions from both academia and industry. The submission of datasets and benchmarks collected in challenging real-world conditions, as well as methods optimized for real-time perception and reasoning, is also encouraged.

Topics of Interest
Topics of interest include, but are not limited to:

  • Face Analysis (Detection, Recognition, Re-Identification, Gender, Age, Ethnicity, Emotion)
  • People detection
  • 2D and 3D object detection and recognition for ambient understanding and reconstruction
  • 6D Pose Estimation
  • Gesture and action recognition
  • Voice and audio event detection and localization
  • Speech Analysis for human-robot interaction (Speaker Recognition, Gender, Age, Language, Emotion, Speech to text, Natural Language Processing)
  • Multi-modal perception for human-robot interaction and autonomous navigation
  • Ambient understanding for SLAM, dynamic motion planning and autonomous navigation
  • Reinforcement learning for cognitive robotics
  • Datasets and benchmarks in the wild
  • Software optimization for real-time perception, reasoning and navigation
  • Embedded systems for cognitive robotics

Paper Submission
Submitted papers should not have been previously published nor be currently under consideration for publication elsewhere. Before preparing a submission, authors should carefully follow the author guidelines from:

The maximum length is 12 pages, with the manuscript formatted as it will appear in its final published form on the journal website. Two additional pages are allowed during revision, so final accepted papers cannot be longer than 14 pages. Papers that are not in the correct format will be returned to the authors by the journal staff.

To properly format your paper, please use the LaTeX template that can be found here: and use the following LaTeX commands in the proper place of your manuscript:


Submitted papers will go through a strict peer-review procedure. Prospective authors should submit an electronic copy of their complete manuscript through the Springer submission system at: by clicking on "Submit a manuscript". At the beginning of your submission, please select "Original Research" in "Select Article Type"; in the "Additional Information" tab, select "Yes" for the question "Does this manuscript belong to a special issue?", then select "S.I. : AutoRob". After that, you can upload all of your manuscript files following the instructions given on the screen.

Guest Editors
Antonio Greco, Università degli Studi di Salerno, Italy
Mario Vento, Università degli Studi di Salerno, Italy
Sean Ryan Fanello, Google, USA

Important Dates
Submissions open: 1st October 2021
Submissions close: 30th April 2022
First decision: 30th June 2022
First revision: 31st July 2022
Second revision: 31st August 2022
Final decisions: 30th October 2022
Publication: December 2022