[agents] CFP Explainable AI (XAI-19) Workshop @ IJCAI 19

Timothy Miller tmiller at unimelb.edu.au
Tue May 14 20:30:38 EDT 2019


*Deadline*: 19 May 2019 – still a few days to go!

*Call for papers*

Explainable AI (XAI) Workshop @ IJCAI 19
Macau, China
11 August 2019

https://sites.google.com/view/xai2019/home

Topics and Objectives
*******************
As AI becomes more ubiquitous, complex, and consequential, the need for people to understand how decisions are made and to judge their correctness becomes increasingly crucial due to concerns about ethics and trust. The field of Explainable AI (XAI) aims to address this problem by designing methods and tools that enable people to understand decisions made by AI, such as methods for generating explanations or for making decisions that are more intuitive to people. This workshop brings together researchers working in explainable AI to share and learn about recent research, with the hope of fostering meaningful connections between researchers from diverse backgrounds, including but not limited to artificial intelligence, human-computer interaction, human factors, philosophy, and cognitive and social psychology.

While AI researchers have enjoyed many recent successes, those successes have been demonstrated through measures of the accuracy and correctness of decisions (e.g., AUC, F-scores, mAP, or accuracy), rather than through measures associated with the understanding of the users who are the recipients of those decisions. Success in this respect could instead be demonstrated through the explainability (and justification) of decisions (e.g., via user satisfaction, mental model alignment, or human-system task performance). This gap is problematic for applications in which users seek to understand a decision before committing to it and its inherent risk. For example, a delivery drone should explain (to its remote operator) why it is operating normally or why it suspends its behavior (e.g., to avoid placing its fragile package in an unsafe location), and an intelligent decision aid should explain its recommendation of an aggressive medical intervention (e.g., in reaction to a patient’s recent health patterns). The need for explainable models increases as AI systems are deployed in critical applications.

The need for explainability exists independently of how models are acquired (e.g., they may have been hand-crafted or interactively elicited, rather than learned with machine learning techniques). This raises several questions, such as: how should explainable models be designed? What queries should AI systems be able to answer about their models and decisions? How should user interfaces communicate decision making? What types of user interactions should be supported? And how should explanation quality be assessed?

This workshop will provide a forum for discussing recent research on interactive XAI methods, highlighting and documenting promising approaches, and encouraging further work, thereby fostering connections among researchers interested in AI, human-computer interaction, and cognitive theories of explanation and transparency.

In addition to encouraging descriptions of original or recent contributions to XAI (e.g., theory, simulation studies, human-subject studies, demonstrations, applications), we welcome contributions that survey related work, describe key issues that require further research, or highlight relevant challenges of interest to the AI community and plans for addressing them.

Topics of interest include but are not limited to:

Technologies and Theories
* Machine learning (e.g., deep, reinforcement, statistical, relational, transfer, case-based)
* Planning
* Cognitive architectures
* Commonsense reasoning
* Decision making
* Episodic reasoning
* Intelligent agents (e.g., planning and acting, goal reasoning, multiagent architectures)
* Knowledge acquisition
* Narrative intelligence
* Temporal reasoning
* Human-agent explanation
* Psychological and philosophical foundations
* Interaction design
* Evaluation for XAI

Applications/Tasks
* After action reporting
* Ambient intelligence
* Autonomous control
* Caption generation
* Computer games
* Explanatory dialog design and management
* Image processing (e.g., security/surveillance tasks)
* Information retrieval and reuse
* Intelligent decision aids
* Intelligent tutoring
* Legal reasoning
* Recommender systems
* Robotics
* User modeling
* Visual question-answering (VQA)

This meeting will provide attendees with an opportunity to learn about progress on XAI, to share their own perspectives, and to learn about potential approaches for solving key XAI research challenges. This should result in effective cross-fertilization among research on ML, AI more generally, intelligent user interaction (interfaces, dialogue), and cognitive modeling.

Important Dates
******************
Paper submission: 19 May 2019
Notification: 15 June 2019
Camera-ready submission: 10 July 2019
Workshop date: 11 August 2019

Submission Details
**********************
Authors may submit *long papers* (6 pages plus up to one page of references) or *short papers* (4 pages plus up to one page of references).
All papers should be typeset in the IJCAI style (https://www.ijcai.org/authors_kit). Accepted papers will be published on the workshop website.
Papers must be submitted in PDF format via the EasyChair system: https://easychair.org/conferences/?conf=xai19

Organizing Chairs
*********************
Tim Miller (University of Melbourne, Australia); primary contact: tmiller at unimelb.edu.au
Rosina Weber (Drexel University, USA)
David Aha (Naval Research Laboratory, USA)
Daniele Magazzeni (King’s College London, UK)

