[agents] CfP - 2nd International Workshop on Deceptive AI

Stefan Sarkadi stefansarkadi at gmail.com
Wed Apr 7 09:28:03 EDT 2021


DeceptAI 2021: 2nd International Workshop on Deceptive AI @ IJCAI 2021
Montreal, Canada, August 21-23 (TBD), 2021
Workshop website: https://sites.google.com/view/deceptai2021
Submission link: https://easychair.org/conferences/?conf=deceptai2021
Submission deadline: May 2, 2021
Topics: deception, artificial intelligence

There is no dominant theory of deception. The literature on deception treats
different aspects and components of deception separately, sometimes offering
contradictory evidence and opinions on these components. Emerging AI
techniques offer an exciting and novel opportunity to expand our
understanding of deception from a computational perspective. However, the
design, modelling, and engineering of deceptive machines is non-trivial from
conceptual, engineering, scientific, and ethical perspectives.

The aim of DeceptAI is to bring together people from academia, industry, and
policy-making to discuss and disseminate the current and future threats,
risks, benefits, and challenges of designing deceptive AI. The workshop
proposes a multidisciplinary approach (Computer Science, Psychology,
Sociology, Philosophy & Ethics, Military Studies, Law, etc.) to discussing
the following aspects of deceptive AI:

- Behaviour
  - What type of machine behaviour should be considered deceptive?
  - How do we study deceptive behaviour in machines as opposed to humans?
- Reasoning
  - What kinds of reasoning mechanisms lie behind deceptive behaviour?
  - What types of reasoning mechanisms are more prone to deception?
- Cognition
  - How does cognition affect deception, and how does deception affect cognition?
  - What role, if any, do agent cognitive architectures play in deception?
- AI, Ethics & Society
  - How does the ability of machines to deceive influence society?
  - What measures do we need to take to neutralise or mitigate the negative
    effects of deceptive AI?
- Engineering Principles
  - How should we engineer autonomous agents so that we can know why and
    when they deceive?
  - Why should or shouldn't we engineer or model deceptive machines?

Submission Guidelines

All papers must be original and not simultaneously submitted to another
journal or conference. The following paper categories are welcome:

- Full papers (16 pages + references) describing novel work in the area of
  Deceptive AI.
- Short papers (8 pages + references) describing novel work in the area of
  Deceptive AI; this may include work in progress.
- Position papers (2-6 pages) describing research challenges related to
  Deceptive AI.

Note that the LNCS formatting has wide margins, so papers run longer than
the page limits might suggest: 5 pages in LNCS format are roughly equivalent
to 2 pages in IJCAI's formatting.

All papers will be reviewed by at least two members of the Program
Committee. The review process will be double-blind, so please remove author
names and affiliations. All papers should be formatted following the
Springer Lecture Notes in Computer Science (LNCS/LNAI) style and submitted
through the EasyChair link below.

EasyChair submission link: https://easychair.org/conferences/?conf=deceptai2021
LNCS LaTeX template: ftp://ftp.springernature.com/cs-proceeding/llncs/llncs2e.zip
LNCS Word template: ftp://ftp.springernature.com/cs-proceeding/llncs/word/splnproc1703.zip

List of Topics (Non-Exhaustive)

- Deceptive Machines
- Multi-Agent Systems and Agent-Based Models
- Trust and Security in AI
- Machine Behaviour
- Argumentation
- Machine Learning
- Explainable AI (XAI)
- Human-Computer/Agent Interaction (HCI/HAI)
- Human-Robot Interaction (HRI)
- Philosophical, Psychological, and Sociological aspects
- Ethical, Moral, Political, Economic, and Legal aspects
- Storytelling and Narration in AI
- Computational Social Science
- Applications related to Deceptive AI (Cybersecurity, Red Teams, Social
  Media, Social Engineering, etc.)
Organizing Committee

- Peta Masters, University of Melbourne, Australia
  (https://findanexpert.unimelb.edu.au/profile/30453-peta-masters)
- Stefan Sarkadi, INRIA CNRS, France (https://stefansarkadi.com/)
- Ben Wright, US Naval Research Laboratory, USA
  (https://sythmaster.github.io/)

Contact

All questions about submissions should be emailed to the DeceptAI chairs at
deceptai.organisers at gmail.com; if there is an issue, you can contact Ben
Wright at benjamin.wright.ctr at nrl.navy.mil.
