Complex & Intelligent Systems - Special Issue on Secure Learning

There has been growing interest in rectifying machine learning vulnerabilities and preserving privacy. Adversarial machine learning and privacy preservation have attracted tremendous attention in the machine learning community over the past few years. Recent research has studied the vulnerabilities of machine learning algorithms and various defense mechanisms against them. The questions surrounding this space are more pressing and relevant than ever: How can we make a system robust to novel or potentially adversarial inputs? How can machine learning systems detect and adapt to changes in the environment over time? When can we trust that a system that has performed well in the past will continue to do so in the future? These questions are essential to consider when designing systems for high-stakes applications such as self-driving cars and automated surgical assistants.

We aim to bring together researchers from diverse areas such as reinforcement learning, human-robot interaction, game theory, cognitive science, and security to advance the field of reliable and trustworthy machine learning. We will focus on robustness, trustworthiness, privacy preservation, and scalability. Robustness refers to the ability to withstand the effects of adversaries, including adversarial examples and data poisoning, as well as distributional shift, model misspecification, and corrupted data. Trustworthiness is ensured by transparency, explainability, and privacy preservation. Scalability refers to the ability to generalize to novel situations and objectives.
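
As a concrete illustration of the adversarial examples mentioned above, the sketch below crafts a fast gradient sign method (FGSM) perturbation against a toy logistic-regression classifier. This is a minimal sketch under stated assumptions, not a reference implementation: the model parameters and input are hypothetical placeholders, and only numpy is assumed.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def fgsm_perturb(x, w, b, y, eps=0.1):
        """Shift x by eps in the direction that increases the loss on label y."""
        p = sigmoid(w @ x + b)        # predicted probability of class 1
        grad_x = (p - y) * w          # gradient of cross-entropy loss w.r.t. x
        return x + eps * np.sign(grad_x)

    rng = np.random.default_rng(0)
    w, b = rng.normal(size=4), 0.0    # hypothetical "trained" parameters
    x, y = rng.normal(size=4), 1.0    # a clean input with true label 1

    x_adv = fgsm_perturb(x, w, b, y, eps=0.25)
    print("clean score:", sigmoid(w @ x + b))
    print("adversarial score:", sigmoid(w @ x_adv + b))

Even this tiny linear model illustrates the core idea: a small, sign-aligned perturbation of the input can move the model's score sharply in the attacker's chosen direction.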

This special issue aims to promote the most recent advances in secure AI from both theoretical and empirical perspectives, as well as novel applications. The goal is to build reliable machine learning and computational intelligence models that are resilient in adversarial settings.

Topics of the special issue include, but are not limited to:

  • Machine learning reliability
  • Adversarial machine learning (attack and defense)
  • Privacy-preserving machine learning (see the sketch after this list)
  • Learning over encrypted data
  • Homomorphic encryption techniques for machine learning
  • Secure multi-party computation techniques for machine learning
  • Explainable and transparent artificial intelligence
  • Neural architecture search for secure learning
  • Security intelligence in malware, network intrusion, web security, and authentication
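
As one concrete example of the privacy-preserving techniques solicited above, the sketch below applies the classic Laplace mechanism: releasing a query answer plus Laplace noise with scale sensitivity/eps yields eps-differential privacy. The dataset and counting query are hypothetical placeholders, and only numpy is assumed.

    import numpy as np

    def laplace_mechanism(true_value, sensitivity, eps, rng):
        """Return an eps-differentially-private estimate of true_value."""
        scale = sensitivity / eps     # noise scale required by the mechanism
        return true_value + rng.laplace(loc=0.0, scale=scale)

    rng = np.random.default_rng(42)
    ages = np.array([34, 29, 51, 46, 38])  # hypothetical private records
    true_count = int(np.sum(ages > 40))    # counting query: L1 sensitivity is 1

    private_count = laplace_mechanism(true_count, sensitivity=1.0, eps=0.5, rng=rng)
    print("true count:", true_count, "| private release:", round(private_count, 2))

A smaller eps gives stronger privacy at the cost of noisier answers; the same pattern underlies differentially private training procedures for full machine learning models.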


Schedule:
Manuscript Submission:      July 15, 2021
Notification to Authors:    August 15, 2021
Revised Manuscript Due:     September 15, 2021
Decision Notification:      October 15, 2021


Guest Editors:

Dr. Catherine Huang (Managing Guest Editor)
McAfee LLC, USA
Catherine_huang@mcafee.com

Prof. Yew-Soon Ong
School of Computer Science and Engineering, Nanyang Technological University, Singapore
asysong@ntu.edu.sg

Dr. Celeste Fralick
McAfee LLC, USA
celeste_fralick@mcafee.com
