[agents] XAIP 2020
Tathagata Chakraborti
tchakra2 at asu.edu
Thu Sep 10 10:35:44 EDT 2020
Hi all,
Hope you had a fun AAAI deadline!
This is the final call for papers for XAIP @ ICAPS 2020, the 3rd
International Workshop on Explainable AI Planning. Submissions close on
Sept 11 UTC-12.
More details about the workshop and submission instructions can be found
here: ibm.biz/xaip2020. You can also explore the program from previous
iterations of the workshop here <http://xaip.mybluemix.net/#/explore> and
read more about the emerging landscape of XAIP here
<http://xaip.mybluemix.net/#/landscape>.
This year, in addition to traditional topics in explainable planning, we are
exploring a special theme on the *UX of XAIP*, recognizing the inseparable
role of user interfaces in the explanation process. We are also exploring
opportunities to cross-pollinate with the XAI Workshop at IJCAI 2020
<https://sites.google.com/view/xai2020/home>. As part of this effort, authors
of *relevant* accepted papers in either venue will have the option to present
in the poster session of the other venue as well.
The detailed CFP is available below.
---------------------------------------------------------------------------------------------
3rd International Workshop on Explainable AI Planning (XAIP)
---------------------------------------------------------------------------------------------
Collocated with ICAPS 2020, Nancy, France. 21-23 Oct 2020.
Home: ibm.biz/xaip2020
---------------------------------------------------------------------------------------------
COVID-19 Update: As you might know, ICAPS 2020 has been postponed to 21-30
October and will be fully virtual, in light of the COVID-19 outbreak. XAIP
2020 (collocated with ICAPS 2020) has also been pushed back accordingly.
Submissions are still open with a revised deadline of September 11, UTC-12.
XAI-20 Update: This year we are exploring opportunities to cross-pollinate
with the XAI Workshop at IJCAI 2020
<https://sites.google.com/view/xai2020/home>. As part of this effort,
authors of accepted papers in either venue will have an option to present
in the other venue as well. More details about this will be made available
after the respective paper acceptance notifications.
---------------------------------------------------------------------------------------------
As Artificial Intelligence (AI) is increasingly being adopted into
application solutions, the challenge of supporting interactions with humans
is becoming more apparent. Partly this is to support integrated working
styles, in which humans and intelligent systems cooperate in
problem-solving, but it is also a necessary step in the process of building
trust as humans delegate greater competence and responsibility to such
systems. The challenge is to find effective ways to characterize, and to
communicate, the foundations of AI-driven behavior when the algorithms and
the knowledge on which those algorithms operate are far from transparent to
humans. While XAI at large is primarily concerned with black-box
learning-based approaches, model-based approaches are well suited (arguably
better suited) for explanation, and Explainable AI Planning (XAIP) can play
an important role in helping users interface with AI technologies in complex
decision-making procedures.
After the success of previous workshops on XAI and XAIP (e.g. at IJCAI 2017
<http://home.earthlink.net/~dwaha/research/meetings/ijcai17-xai/>, at IJCAI
2018 <http://home.earthlink.net/~dwaha/research/meetings/faim18-xai/>, and
at ICAPS 2018-2019 <https://kcl-planning.github.io/XAIP-Workshops/>), the
mission of this workshop is to mature and broaden this community, fostering
continued exchange on XAIP topics at ICAPS. Apart from XAI
<https://sites.google.com/view/xai2020/home>@IJCAI, the planning-specific
XAIP workshop also runs parallel to sister venues like EXTRAAMAS
<https://extraamas.ehealth.hevs.ch/>@AAMAS and XLoKR
<https://lat.inf.tu-dresden.de/XLoKR20/>@KR, as part of this broader
community around explainable AI. In order to broaden the XAIP community at
ICAPS, this year we include an additional set of topics on the role of user
interfaces in XAIP, acknowledging the inseparable role of interfacing in
explanations.
Topics
The workshop includes – but is not limited to – the following topics:
Core XAIP
- representation, organization, and memory content used in an explanation
- the creation of such content during plan generation or understanding
- generation and evaluation of explanations
- contrastive explanations
- the way in which explanations are communicated and personalized to
humans (e.g., plan summaries, answers to questions)
- the role of knowledge and learning in explainable planners
- human vs AI models in explanations
- links between explainable planning and other disciplines (e.g., social
science, argumentation)
- use cases and applications of explainable planning
The UX of XAIP
- User interfaces for explainable automated planning and scheduling
- Plan and schedule visualization
- Mixed initiative planning and scheduling
- Emerging technology for human-planner interaction
- Metrics for human readability or comprehensibility of plans and
schedules
- Explainable automated planning and scheduling for user interfaces
- Representing and solving planning domains for user interface
creation and design tasks
- Plan, activity, and intent recognition of users’ interactions with
interfaces
- Developing user (mental) models with description languages and
decision processes
Here are a few recent surveys on topics in XAIP (newest first):
- The Emerging Landscape of Explainable AI Planning and Decision Making.
Tathagata Chakraborti, Sarath Sreedharan, Subbarao Kambhampati.
[link <https://www.ijcai.org/Proceedings/2020/669>]
- Explainable AI Planning (XAIP): Overview and the Case of Contrastive
Explanation. Jörg Hoffmann and Daniele Magazzeni.
[link <https://www.springerprofessional.de/en/explainable-ai-planning-xaip-overview-and-the-case-of-contrastiv/17181492>]
- Explainable Agents and Robots: Results from a Systematic Literature
Review. Sule Anjomshoae, Amro Najjar, Davide Calvaresi, and Kary Främling.
[link <https://dl.acm.org/doi/10.5555/3306127.3331806>]
- Explanation in Artificial Intelligence: Insights from the Social
Sciences. Tim Miller.
[link <https://www.sciencedirect.com/science/article/abs/pii/S0004370218305988>]
Important Dates
- Paper submission deadline: September 11, 2020, UTC-12
- Notification of acceptance: September 30, 2020, UTC-12
- Camera-ready paper submissions: October 14, 2020, UTC-12
- Workshop date: October 21-23, 2020
Submission Instructions
We invite submissions of the following types:
- Full technical papers making an original contribution; up to 9 pages
including references;
- Short technical papers making an original contribution; up to 5 pages
including references;
- Position papers proposing XAIP challenges, outlining XAIP ideas,
debating issues relevant to XAIP; up to 5 pages including references.
Submissions must be made through the following EasyChair link:
https://easychair.org/conferences/?conf=xaip2020
Papers must be prepared according to the instructions for ICAPS 2020 (in
AAAI format) available at
https://www.aaai.org/Publications/Templates/AuthorKit20.zip. Authors who
are considering submitting papers rejected from the main conference should
ensure they do their utmost to address the comments given by the ICAPS
reviewers. *Please do not submit papers that are already accepted for the
main conference to the workshop.*
Every submission will be reviewed by members of the program committee
according to the usual criteria, such as relevance to the workshop,
significance of the contribution, and technical quality. Authors can choose
whether they want their submissions reviewed single-blind or double-blind
(recommended for AAAI or NeurIPS dual submissions) at the time of
submission.
The workshop is meant to be an open and inclusive forum, and we encourage
papers that report on work in progress or that do not fit the mold of a
typical conference paper.
At least one author of each accepted paper must attend the workshop in
order to present the paper.
Organizers
- Tathagata Chakraborti <http://tchakra2.com/> IBM Research AI
- Jeremy Frank <https://ti.arc.nasa.gov/profile/frank/> NASA Ames
- Rick Freedman <https://www.sift.net/staff/richard-freedman> SIFT
- Claudia V. Goldman <https://il.linkedin.com/in/claudiagoldman> General
Motors
- Daniele Magazzeni <https://nms.kcl.ac.uk/daniele.magazzeni/> King’s
College London
--
Tathagata Chakraborti
IBM Research AI | Home <http://tchakra2.com/>
Tweet at tchakra2 <https://twitter.com/tchakra2>