[agents] Neural Computing and Applications (NCAA) Special Issue on Interpretation of Deep Learning

Pub Conference pubconference at gmail.com
Mon Jun 28 20:58:48 EDT 2021


Neural Computing and Applications (NCAA) Special Issue on Interpretation
of Deep Learning: Prediction, Representation, Modeling and Utilization

https://www.springer.com/journal/521/updates/19187658

Aims, Scope and Objective

While Big Data offers great potential for revolutionizing all aspects of
our society, harvesting valuable knowledge from Big Data is an extremely
challenging task. The large-scale, rapidly growing information hidden in
unprecedented volumes of non-traditional data demands new
decision-making algorithms. Recent successes in machine learning,
particularly deep learning, have led to breakthroughs in real-world
applications such as autonomous driving, healthcare, cybersecurity,
speech and image recognition, personalized news feeds, and financial
markets.

While these models may achieve state-of-the-art prediction accuracy,
they usually offer little insight into the inner workings of the model
and how its decisions are made. Decision-makers cannot obtain
human-intelligible explanations for model decisions, which impedes
adoption in mission-critical applications. The situation is even worse
in complex data analytics. It is therefore imperative to develop
explainable computational intelligence models that combine excellent
predictive accuracy with a safe, reliable, and scientific basis for
decision-making.

Numerous recent works have addressed this issue but leave many important
questions unresolved. The first challenge is how to construct
self-explanatory models, or how to improve the understanding and
explainability of an existing model without loss of accuracy. In
addition, high-dimensional and ultra-high-dimensional data are common in
large and complex data analytics, and in these settings constructing
interpretable models becomes especially difficult. Further, there is as
yet no consistent and clear way to evaluate and quantify the
explainability of a model. Moreover, an auditable, repeatable, and
reliable computational process is crucial to decision-makers: they need
explicit explanation and analysis of the intermediate features a model
produces, so interpretation of intermediate processes is also required.
Finally, explainable computational intelligence models raise the problem
of efficient optimization. Together, these pose essential questions
about how to develop explainable data analytics in computational
intelligence.

This Topical Collection aims to bring together original research
articles and review articles presenting the latest theoretical and
technical advances in machine learning and deep learning models. We hope
that this Topical Collection will: 1) improve the understanding and
explainability of machine learning and deep neural networks; 2) enhance
the mathematical foundations of deep neural networks; and 3) increase
the computational efficiency and stability of the machine and deep
learning training process with new algorithms that scale.

Potential topics include but are not limited to the following:

   - Interpretability of deep learning models
   - Quantifying or visualizing the interpretability of deep neural networks
   - Interpretable control systems based on neural networks, fuzzy logic,
   and evolutionary computation
   - Supervised, unsupervised, and reinforcement learning
   - Extracting understanding from large-scale and heterogeneous data
   - Dimensionality reduction and sparse modeling of large-scale and
   complex data
   - Stability improvement of deep neural network optimization
   - Optimization methods for deep learning
   - Privacy-preserving machine learning (e.g., federated machine learning,
   learning over encrypted data)
   - Novel deep learning approaches in the applications of image/signal
   processing, business intelligence, games, healthcare, bioinformatics, and
   security

Guest Editors
Nian Zhang (Lead Guest Editor), University of the District of Columbia,
Washington, DC, USA, nzhang at udc.edu
Jian Wang, China University of Petroleum (East China), Qingdao, China,
wangjiannl at upc.edu.cn
Leszek Rutkowski, Czestochowa University of Technology, Poland,
leszek.rutkowski at pcz.pl

Important Dates

Deadline for Submissions: March 31, 2022
First Review Decision: May 31, 2022
Revisions Due: June 30, 2022
Deadline for 2nd Review: July 31, 2022
Final Decisions: August 31, 2022
Final Manuscript: September 30, 2022

Peer Review Process

All papers will undergo peer review by at least three reviewers. A
thorough check will be completed: the guest editors will screen for any
significant similarity between the manuscript under consideration and
published papers or submitted manuscripts of which they are aware, and
in such cases the article will be rejected without further
consideration. The guest editors will make every reasonable effort to
obtain reviewers' comments and recommendations on time.

Submitted papers must present original research that has not been
published and is not currently under review elsewhere. Previously
published conference papers should be clearly identified by the authors
at the submission stage, along with an explanation of how such papers
have been extended for this special issue (with at least 30% new
material relative to the original work).

Submission Guidelines

Paper submissions for the special issue should strictly follow the
submission format and guidelines (
https://www.springer.com/journal/521/submission-guidelines). Each
manuscript should not exceed 16 pages in length (inclusive of figures and
tables).

Manuscripts must be submitted to the journal online system at
https://www.editorialmanager.com/ncaa/default.aspx.
Authors should select “TC: Interpretation of Deep Learning” during the
submission step ‘Additional Information’.

