Multimedia Tools and Applications - Call for Papers: New Methods of AI and Deep Learning in Multimedia [1244]

Aims and Scope

The rise of machine learning approaches, and in particular deep learning, has led to a significant increase in the performance of AI-based systems. Many current multimedia applications also include at least one AI-based component. Traditional multimedia data (images, video, sound, and text) are now complemented by a multitude of multimodal sources, to which the fundamental principles of multimedia research are being applied.

Multimedia data often arrive on the fly, via social networks, human observation, and other applications. For many applications, DNNs therefore have to be trained on multimedia data incrementally, and incremental learning approaches are becoming increasingly popular in multimedia. Furthermore, when distributed systems analyze multimedia data locally, e.g. for data privacy protection, the incremental adjustments of the local models have to be integrated into a global model. The global model has better generalization power, as it is trained on a larger amount of data than each local model. How best to integrate local model updates into the global model is the subject of federated learning. Finally, when applying multimedia fusion principles in domains supplying multimodal data, such as remote sensing, astrophysics, mechanics, or medical and healthcare applications, domain-dependent constraints or physical models of the data may be integrated into model training, yielding more reliable solutions.
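The integration of local model updates into a global model mentioned above can be sketched, for instance, as simple federated averaging (a FedAvg-style aggregation; the weighting by local sample count and the flat parameter vectors are illustrative assumptions, not prescriptions of this call):

```python
import numpy as np

def federated_average(local_weights, local_sizes):
    """Combine local model parameter vectors into a global model.

    Each client trains incrementally on its own multimedia data and
    shares only its parameters; the server averages them, weighted by
    the number of local samples (a common FedAvg-style choice).
    """
    total = float(sum(local_sizes))
    stacked = np.stack(local_weights)              # shape: (n_clients, n_params)
    weights = np.array(local_sizes, dtype=float) / total
    return weights @ stacked                       # weighted average per parameter

# Toy example: three clients with different amounts of local data
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [10, 10, 20]
global_model = federated_average(clients, sizes)   # -> array([3.5, 4.5])
```

In a real system, each round would redistribute the averaged parameters to the clients before their next incremental update.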

However, the use of AI has also raised questions about the reliability and explainability of these systems' predictions for decision-making (e.g., the black-box nature of deep models). Such shortcomings undermine the trustworthiness of AI-based multimedia applications. It is therefore critical to understand how the predictions of AI-based systems correlate with information perception and expert decision-making. This is also an important prerequisite for the successful teaming of human experts and AI-based tools. The objective of eXplainable AI (XAI) is to propose methods for understanding and explaining how these systems produce their decisions.
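As an illustration of the kind of method XAI covers, the following is a minimal occlusion-sensitivity sketch: mask image regions one patch at a time and record how much the model's score drops. The `predict` callable, patch size, and toy "model" are placeholders; production XAI tooling is considerably more elaborate:

```python
import numpy as np

def occlusion_map(image, predict, patch=4, baseline=0.0):
    """Occlusion sensitivity: slide a patch over the image and record
    how much the model's score drops when that region is masked.
    `predict` is any callable mapping an image to a scalar score
    (stand-in for a trained model)."""
    h, w = image.shape
    base_score = predict(image)
    heat = np.zeros((h, w))
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = baseline
            # High values mark regions the score depends on
            heat[y:y + patch, x:x + patch] = base_score - predict(occluded)
    return heat

# Toy "model": score is the mean of the top-left quadrant, so only
# occluding that quadrant should change the score.
img = np.full((8, 8), 0.5)
score = lambda im: im[:4, :4].mean()
heat = occlusion_map(img, score)   # nonzero only in the top-left quadrant
```

The resulting heat map is a sample-centric explanation: it attributes the prediction for one input to spatial regions of that input.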

Topics of interest include, but are not limited to:
● Incremental learning in multimedia
● Federated learning in multimedia
● New DNN architectures for multimedia, in particular, supporting multimodal data
● Physical models in AI with applications in multimedia
● Explainable/interpretable machine learning for multimedia
● XAI Evaluation and Benchmarks for multimedia data
● Information visualization for models or their predictions
● Sample-centric and dataset-centric explanations in multimedia applications


Guest Editors:
Werner Bailer - Joanneum Research Institute
Email: werner.bailer@joanneum.at

Aladine Chetouani - Polytech’Orléans, université d’Orléans, France
Email: aladine.chetouani@univ-orleans.fr

Cathal Gurrin - Dublin City University
Email: cathal.gurrin@dcu.ie

Alexandre Benoit - Polytech Annecy-Chambéry, France
Email: alexandre.benoit@univ-smb.fr


Important Dates:
Submission Deadline: December 15, 2023
Revised Paper Deadline: March 15, 2024

Submission Guidelines:
Authors should prepare their manuscript according to the Instructions for Authors available from the Multimedia Tools and Applications website. Authors should submit through the online submission site at https://www.editorialmanager.com/mtap/default.aspx and select “SI 1244 - New Methods of AI and Deep Learning in Multimedia” when they reach the “Article Type” step in the submission process. Submitted papers should present original, unpublished work relevant to one of the topics of the special issue. All submitted papers will be evaluated on the basis of relevance, significance of contribution, technical quality, scholarship, and quality of presentation by at least three independent reviewers. It is the policy of the journal that no submission, or substantially overlapping submission, be published or under review at another journal or conference at any time during the review process.

Please note that the authors of selected papers presented at CBMI 2023 are invited to submit an extended version of their contributions, taking into consideration both the reviewers’ comments on their conference paper and the feedback received during presentation at the conference. The extended version is expected to contain a substantial scientific contribution, e.g., in the form of new algorithms, experiments, or qualitative/quantitative comparisons; neither verbatim transfer of large parts of the conference paper nor reproduction of already published figures will be tolerated. The extended versions of CBMI papers will undergo the standard, rigorous journal review process and will be accepted only if they are well-suited to the topic of this special issue and meet the scientific level of the journal. Final decisions on all papers are made by the Editor-in-Chief.