Call for Papers: Multimodal Interaction and IoT Applications 
Multimodal interaction exploits the synergistic use of different modalities to optimize the interactive tasks users accomplish. It allows a user to employ several input modes, such as speech, touch, and gesture, to interact with a multimedia system that responds through outputs such as text, audio, and graphics. IoT applications have become a fundamental part of modern society. Despite significant progress on multimodal interaction systems and IoT applications in recent years, much work remains to be done before sophisticated multimodal interaction with IoT applications becomes a commonplace, indispensable part of computing.
Multimodal interaction with IoT applications depends on multimedia system input/output, which may limit scalability and expressiveness. For instance, while intelligent voice assistants (e.g., Amazon's Alexa, Google Home) are adept at processing a wide range of natural-language commands, they may struggle to interpret and execute complex intents. If a person reading a book wants to dim all the ambient lights except the overhead reading light, expressing this through simple voice commands alone is quite a challenge, and such verbose commands would be difficult for the assistant to recognize and execute. Voice input alone is also limited for issuing commands to spatially distributed IoT applications. These limitations call for designing and investigating more convenient ways for humans to interact with the physical environment.
The major challenges of such multimodal interaction with IoT applications include input/output integration, security, privacy, affordability, portability, and practicality. For example, combinations such as touch and eye gaze, speech and eye gaze, facial expression and haptic input, or touch-based gesture and prosody-based affect may have no obvious points of similarity or straightforward ways to connect. One solution is to design multimodal interaction with IoT applications that uses a synergistic combination of input modalities based on user intent, need, comfort, and adaptability. This special issue invites articles that address these challenges, with topics of interest including, but not limited to:
- Multimedia systems
- Internet-of-Things (IoT)
- Multimodal interaction
- Security and privacy issues
- Facial expression
- Application of multimodal interaction in healthcare
- Machine learning models for multimodal interaction
- AI-based multimodal interaction with/without IoT applications
- Neurophysiological and physiological signals
Prof. Karm Veer Arya (Lead Guest Editor)
ABV-Indian Institute of Information Technology & Management, Gwalior, India
Dr. Yogesh Kumar Meena
Swansea University, UK
Paper submission deadline: extended to 31 March, 2021
Authors’ first notification date: 30 June, 2021
Tentative publication date: Autumn 2021
Authors should prepare their manuscripts according to the Instructions for Authors available on the Multimedia Tools and Applications website. Authors should submit through the online submission site at https://www.editorialmanager.com/mtap/default.aspx and select “SI 1215 - Multimodal Interaction and IoT Applications” when they reach the “Article Type” step in the submission process. Submitted papers should present original, unpublished work relevant to one of the topics of the special issue. All submitted papers will be evaluated by at least three independent reviewers on the basis of relevance, significance of contribution, technical quality, scholarship, and quality of presentation. It is the policy of the journal that no submission, or substantially overlapping submission, be published or under review at another journal or conference at any time during the review process.
The special issue will consider papers extending previously published conference papers, provided the journal submission presents a significant contribution beyond the conference paper. Authors must explain in the introduction to the paper the new contribution to the field made by the submission, and the original conference publication should be cited in the text. Note that neither verbatim transfer of large parts of the conference paper nor wholesale reproduction of already published figures is acceptable.