<div dir="ltr"><table><tbody><tr><td><div id="gmail-m_-465054055878900581gmail-cfpdateplace"><font size="4"><b>DeceptECAI2020: 1st International Workshop on Deceptive AI @ECAI2020</b>, Santiago de Compostela, Spain, June 9, 2020</font></div><div><br></div></td></tr></tbody></table><table><tbody><tr><td><font size="2">Conference website</font></td><td><font size="2"><a href="https://sites.google.com/view/deceptecai2020/home" target="_blank">https://sites.google.com/view/deceptecai2020/home</a></font></td></tr><tr><td><font size="2">Submission link</font></td><td><font size="2"><a href="https://easychair.org/conferences/?conf=deceptecai2020" target="_blank">https://easychair.org/conferences/?conf=deceptecai2020</a></font></td></tr><tr><td><font size="2">Submission deadline</font></td><td><font size="2"><b>March 12, 2020</b></font></td></tr><tr><td><font size="2">Notification of acceptance</font></td><td><font size="2"><b>April 5, 2020</b></font></td></tr></tbody></table><p dir="ltr"><font size="2"><span style="background-color:transparent;color:rgb(0,0,0);font-family:Arial">There
is no dominant theory of deception. The literature on deception treats
different aspects and components of deception separately, sometimes
offering contradictory evidence and opinions on these components.
Emerging AI techniques offer an exciting and novel opportunity to expand
our understanding of deception from a computational perspective.
However, the design, modelling, and engineering of deceptive machines is
not trivial from a conceptual, engineering, scientific, or ethical
perspective. The aim of DeceptECAI is to bring together people from
academia, industry, and policy-making to discuss and disseminate
the current and future threats, risks, and even benefits of designing
deceptive AI. The workshop proposes a multidisciplinary approach
(Computer Science, Psychology, Sociology, Philosophy &amp; Ethics,
Military Studies, Law, etc.) to discuss the following aspects of
deceptive AI:</span></font></p><font size="2">
</font><p dir="ltr"><font size="2"><strong>1) Behaviour</strong><span style="background-color:transparent;color:rgb(0,0,0);font-family:Arial">
- What type of machine behaviour should be considered deceptive? How do
we study deceptive behaviour in machines as opposed to humans?</span></font></p><font size="2">
</font><p dir="ltr"><font size="2"><strong>2) Reasoning</strong><span style="background-color:transparent;color:rgb(0,0,0);font-family:Arial">
- What kinds of reasoning mechanisms lie behind deceptive behaviour?
Also, which types of reasoning mechanisms are more prone to deception?</span></font></p><font size="2">
</font><p dir="ltr"><font size="2"><strong>3) Cognition</strong><span style="background-color:transparent;color:rgb(0,0,0);font-family:Arial">
- How does cognition affect deception, and how does deception affect
cognition? Also, what role, if any, do agent cognitive architectures
play in deception?</span></font></p><font size="2">
</font><p dir="ltr"><font size="2"><strong>4) AI & Society</strong><span style="background-color:transparent;color:rgb(0,0,0);font-family:Arial">
- How does the ability of machines to deceive influence society? What
kinds of measures do we need to take in order to neutralise or mitigate
the negative effects of deceptive AI?</span></font></p><font size="2">
</font><p><font size="2"><strong>5) Engineering Principles</strong><span style="background-color:transparent;color:rgb(0,0,0);font-family:Arial">
- How should we engineer autonomous agents such that we are able to
know why and when they deceive? Also, why should or shouldn’t we
engineer or model deceptive machines?</span></font></p><font size="2">
</font><h2><font size="2">Submission Guidelines</font></h2><font size="2">
</font><p><font size="2">All papers must be original and not simultaneously submitted to another journal or conference.</font></p><font size="2">
</font><p dir="ltr" style="text-align:justify"><font size="2"><span style="background-color:transparent;color:rgb(0,0,0);font-family:Arial">Submissions are NOT anonymous. The names and affiliations of the authors should be stated in the manuscript.</span></font></p><font size="2">
</font><p><font size="2"><span style="background-color:transparent;color:rgb(0,0,0);font-family:Arial">All papers should be formatted following the Springer</span><a style="text-decoration:none" href="https://www.springer.com/gb/computer-science/lncs/conference-proceedings-guidelines" target="_blank"><span style="background-color:transparent;color:rgb(0,0,0);font-family:Arial"> </span><u>LNCS/LNAI guidelines</u></a><span style="background-color:transparent;color:rgb(0,0,0);font-family:Arial"> and submitted through EasyChair.</span></font></p><font size="2">
</font><p><font size="2">The following paper categories are welcome:</font></p><font size="2">
</font><ul><li><font size="2"><strong>Long papers </strong><strong>(12 pages + 1 page references):</strong><span style="background-color:transparent;color:rgb(0,0,0);font-family:Arial"> Long papers should </span><span style="background-color:transparent;color:rgb(0,0,0);font-family:Arial">present original research work and be no longer than thirteen pages in </span><span style="background-color:transparent;color:rgb(0,0,0);font-family:Arial">total: twelve pages for the main text of the paper (including all </span><span style="background-color:transparent;color:rgb(0,0,0);font-family:Arial">figures but excluding references), and one additional page for </span><span style="background-color:transparent;color:rgb(0,0,0);font-family:Arial">references. </span></font></li><li><font size="2"><strong>Short papers </strong><strong>(7 pages + 1 page references):</strong><span style="background-color:transparent;color:rgb(0,0,0);font-family:Arial"> Short papers may </span><span style="background-color:transparent;color:rgb(0,0,0);font-family:Arial">report on works in progress. Short paper submissions should be no </span><span style="background-color:transparent;color:rgb(0,0,0);font-family:Arial">longer than eight pages in total: seven pages for the main text of the </span><span style="background-color:transparent;color:rgb(0,0,0);font-family:Arial">paper (including all figures but excluding references), and one </span><span style="background-color:transparent;color:rgb(0,0,0);font-family:Arial">additional page for references. </span></font></li><li><font size="2"><strong>Position papers</strong><span style="background-color:transparent;color:rgb(0,0,0);font-family:Arial"> regarding potential </span><span style="background-color:transparent;color:rgb(0,0,0);font-family:Arial">research challenges are also welcomed in either long or short paper </span><span style="background-color:transparent;color:rgb(0,0,0);font-family:Arial">format.</span></font></li></ul><font size="2">
</font><h2><font size="2">List of Topics</font></h2><font size="2">
</font><ul><li><font size="2">Deceptive Machines</font></li><li><font size="2"><span style="background-color:transparent;color:rgb(0,0,0);font-family:Arial">Multi-Agent Systems and Agent-Based Models</span></font></li><li><font size="2"><span style="background-color:transparent;color:rgb(0,0,0);font-family:Arial">Trust and Security in AI</span></font></li><li><font size="2"><span style="background-color:transparent;color:rgb(0,0,0);font-family:Arial">Machine Behaviour</span></font></li><li><font size="2"><span style="background-color:transparent;color:rgb(0,0,0);font-family:Arial">Argumentation</span></font></li><li><font size="2"><span style="background-color:transparent;color:rgb(0,0,0);font-family:Arial">Machine Learning</span></font></li><li><font size="2"><span style="background-color:transparent;color:rgb(0,0,0);font-family:Arial">Explainable AI - XAI</span></font></li><li><font size="2"><span style="background-color:transparent;color:rgb(0,0,0);font-family:Arial">Human-Computer (Agent) Interaction - HCI/HAI</span></font></li><li><font size="2"><span style="background-color:transparent;color:rgb(0,0,0);font-family:Arial">Philosophical, Psychological, and Sociological aspects</span></font></li><li><font size="2"><span style="background-color:transparent;color:rgb(0,0,0);font-family:Arial">Ethical, Moral, Political, Economic, and Legal aspects</span></font></li><li><font size="2"><span style="background-color:transparent;color:rgb(0,0,0);font-family:Arial">Storytelling and Narration in AI</span></font></li><li><font size="2"><span style="background-color:transparent;color:rgb(0,0,0);font-family:Arial">Computational Social Science</span></font></li><li><font size="2"><span style="background-color:transparent;color:rgb(0,0,0);font-family:Arial">Applications related to deceptive AI</span></font></li></ul><font size="2">
</font><h2><font size="2">Organizing Committee</font></h2><font size="2">
</font><ul><li><font size="2"><span style="background-color:transparent;color:rgb(0,0,0);font-family:Arial">Stefan Sarkadi - King’s College London, UK</span></font></li><li><font size="2"><span style="background-color:transparent;color:rgb(0,0,0);font-family:Arial">Peter McBurney - King’s College London, UK</span></font></li><li><font size="2"><span style="background-color:transparent;color:rgb(0,0,0);font-family:Arial">Liz Sonenberg - University of Melbourne, Australia</span></font></li><li><font size="2"><span style="background-color:transparent;color:rgb(0,0,0);font-family:Arial">Iyad Rahwan - Max Planck Center for Humans and Machines, Germany </span></font></li></ul><font size="2">
</font><h2><font size="2">Publication</font></h2><font size="2">
</font><p><font size="2">DeceptECAI2020 proceedings will be <span style="background-color:transparent;color:rgb(0,0,0);font-family:Arial">submitted to Springer CCIS for publication.</span></font></p><p><font size="2">We are also planning a Special Issue on the topic of "Deceptive AI" in a highly ranked AI journal. Authors of selected papers will be invited to submit extended versions of their papers to this special issue.</font></p><font size="2">
</font><h2><font size="2">Contact</font></h2><font size="2">
</font><p><font size="2">All questions about submissions should be emailed to <span style="background-color:transparent;color:rgb(0,0,0);font-family:Arial"><a href="mailto:stefan.sarkadi@kcl.ac.uk" target="_blank">stefan.sarkadi@kcl.ac.uk</a></span>.</font></p></div>