
Complex & Intelligent Systems - SPECIAL ISSUE

Interpretation of Deep Learning: Prediction, Representation, Quantification and Visualization


Aim and Scope:

While Big Data offers great potential for revolutionizing all aspects of our society, harvesting valuable knowledge from Big Data is an extremely challenging task. The large-scale and rapidly growing information hidden in unprecedented volumes of non-traditional data requires the development of new decision-making algorithms. Deep learning is currently an extremely active research area in the machine learning and pattern recognition community. In contrast to conventional classification methods, deep learning models learn a hierarchy of features by building high-level features from low-level ones, thereby automating feature construction for the problem at hand. The deep learning approach exploits stacked layers to develop representations of data at increasing levels of abstraction. It has demonstrated best-in-class performance in a range of applications, including image classification, and has been successfully applied in industry products that take advantage of large volumes of digital data. Companies such as Google, Apple, and Facebook, which collect and analyze massive amounts of data on a daily basis, have been aggressively pushing forward deep learning techniques.
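To make the layer-by-layer abstraction concrete, the following is a minimal, purely illustrative Python (PyTorch) sketch, not part of the call itself: a small fully connected network whose stacked layers map raw inputs to progressively more abstract features. The layer widths and the dummy input are assumptions chosen only for illustration.

import torch
import torch.nn as nn

# Illustrative layer widths only; they are not prescribed by this call for papers.
model = nn.Sequential(
    nn.Linear(784, 256),  # low-level features computed directly from raw input (e.g., pixels)
    nn.ReLU(),
    nn.Linear(256, 64),   # mid-level features composed from the layer below
    nn.ReLU(),
    nn.Linear(64, 10),    # high-level, task-specific representation (e.g., class scores)
)

x = torch.randn(1, 784)   # dummy input standing in for one flattened 28x28 image
logits = model(x)
print(logits.shape)       # torch.Size([1, 10])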


While deep learning has achieved unprecedented prediction capabilities, it is often criticized as a black box because it lacks interpretability, which is critically important in real-world applications such as healthcare and cybersecurity. For example, healthcare professionals will appropriately trust and effectively manage prediction results only if they can understand why and how a patient is diagnosed with prediabetes. There has recently been an explosion of interest in related research directions, such as a) analyzing the information bottleneck for efficient learning, b) inferring and regularizing the network structure for stable and robust prediction, and c) interpreting the learned representations and generated decisions.


This special issue will focus on the interpretability of deep learning from the perspectives of representation, modeling, and prediction, as well as the deployment of interpretability in various applications. Potential topics include, but are not limited to, the following:

Interpretability of deep learning models
Quantifying or visualizing the interpretability of deep neural networks (see the sketch after this list)
Neural networks, fuzzy logic, and evolutionary-based interpretable control systems
Dimensionality expansion and sparse modeling
Optimization of big data in complex systems
Applications in:

o  Image/video processing
o  Audio/speech
o  Robotics, navigation, control 
o  Games
o  Cognitive architectures
o  Natural language processing
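
As one concrete, purely illustrative example of the "quantifying or visualizing the interpretability" topic above, the sketch below computes a simple gradient-based saliency map: the gradient of the top class score with respect to the input indicates which input features most influenced the decision. The tiny network, its layer sizes, and the random input are all assumptions made only for this sketch; submissions are not restricted to this technique.

import torch
import torch.nn as nn

# Tiny stand-in network; layer sizes are arbitrary assumptions for illustration only.
model = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 10))

x = torch.randn(1, 784, requires_grad=True)  # dummy input; gradients will flow back to it
score = model(x)[0].max()                    # score of the top-scoring class
score.backward()                             # d(score)/d(input) for every input feature
saliency = x.grad.abs().squeeze()            # larger magnitude = more influence on the decision
print(saliency.topk(5).indices)              # indices of the five most influential input features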


Important Dates:

Paper submission deadline: August 30, 2020
Author notification: October 30, 2020
Revised paper submission: December 30, 2020
Final acceptance: February 28, 2021


Guest Editors:

Dr. Nian Zhang, University of the District of Columbia, Washington DC, USA, nzhang@udc.edu (Lead Guest Editor)

Dr. Zhaojie Ju, University of Portsmouth, Portsmouth, UK, zhaojie.ju@port.ac.uk

Dr. Chenguang Yang, University of the West of England, Bristol, UK, charlie.yang@uwe.ac.uk

Dr. Dingguo Zhang, University of Bath, Bath, UK, d.zhang@bath.ac.uk

Dr. Jinguo Liu, Shenyang Institute of Automation, China, liujinguo@sia.cn