[agents] The Last Call for Papers -- TRUST: Workshop at AAMAS/IJCAI/ECAI/ICML 2018

Murat Sensoy murat.sensoy at ozyegin.edu.tr
Fri Apr 13 04:25:21 EDT 2018


———————————————————————————————————————
The Last Call for Papers -- TRUST: Workshop at AAMAS/IJCAI/ECAI/ICML 2018
              Stockholm, Sweden -- July 14, 2018 or July 15, 2018
               https://sites.google.com/site/trustworkshop2018
------------------------------------------------------------------------------

Trust is important in many kinds of interactions, including computer-mediated
human interaction, human-computer interaction and among social agents;
it characterizes those elements that are essential in social reliability.
It also informs the selection of partners for successful multiagent
coordination. Trust is more than communication that is robust against
repudiation or interference. Increasingly, there is concern for human
users to trust the AI systems which have been designed to act on their behalf.
This trust can be engendered through effective transparency and lack of 
bias, as well as through successful attention to user needs. Mistrust has also
emerged as a current theme, especially within online settings where
misinformation may abound. AI approaches to addressing this concern have
thus come into focus.

This workshop aims to bring together researchers working on related 
issues regarding trust and artificial intelligence, expanding the discussion
beyond the borders of multiagent trust modeling, where research and dialogue
have been very active over the past twenty years.

Many computational and theoretical models and approaches to reputation
have been developed recently. Further, identity and associated
trustworthiness must be ascertained for reliable interactions or transactions.
Trust is foundational for the notion of agency and for its defining
relation of acting "on behalf of". It is also critical for modeling
and supporting groups and teams, for both organization and coordination,
with the related trade-off between individual utility and collective interest.
The electronic medium seems to weaken the usual bonds of social control
and the disposition to mislead grows stronger; this is yet another context
where trust modeling is critical.

The aim of the workshop is to bring together computer science 
researchers with a vested interest in exploring artificial intelligence
modeling of trust (ideally from different subdisciplines). We welcome
submissions of high-quality research addressing issues that are clearly
relevant to trust, deception, reputation, security and control,
from theoretical, applied and interdisciplinary perspectives. Submitted
contributions should be original and not submitted elsewhere. Papers
accepted for presentation must be relevant to the workshop, demonstrate
clear exposition, and offer new ideas in suitable depth and detail.
Papers are to be formatted in Springer format and be no more than 12 pages.

The scope of the workshop includes (but is not limited to):

Trust modeling in multiagent systems
Addressing misinformation in online systems
Engendering trust in AI systems from human users

with more specific subtopics including (but not limited to):
Trust and risk-aware decision making
Game-theoretic models of trust
Trust in the context of adversarial environments
Deception and fraud, and its detection and prevention
Intrusion resilience in trusted computing
Reputation mechanisms
Trust within socio-technical systems and organizations
Socio-cognitive models of trust
Trust within service-oriented architectures
Human or agent trust in agent partners
Trust within social networks
AI solutions to improve online fact checking and critical thinking
Detecting and preventing collusion
Improving transparency in AI systems
Addressing bias in AI systems
Detecting and addressing mistrust of AI systems from human users
Real-world applications of multiagent trust modeling

We are currently in discussion with the Editor-in-Chief of
ACM TOIT (Transactions on Internet Technology) about a special issue
of the journal (with a 2018 call for papers), to which authors of
outstanding workshop papers will be invited to submit an expanded
version of their work.

Motivation

This workshop continues the annual AAMAS tradition of bringing together
researchers working on modeling trust in multiagent systems, but with an
expanded vision for the gathering: encouraging participation from
researchers working on related issues regarding trust in AI systems (and
the need to address possible mistrust of these systems), and regarding
mistrust in applications where AI solutions may be of use (such as the
web and online social networks).

The workshop will consist of paper presentations, invited talks
and panel discussions, with the latter aimed at fostering discussion
of how the theme of trust is pervading not only the multiagent systems
community but also the more general AI community, with respect to
the additional topic areas outlined above.

This will be a one-day workshop.


Submission

Papers can be up to 12 pages single-sided (Springer format) and will be
submitted through the Easychair system.

Papers must be original: not previously published and not in submission.
Reviewing will be single-blind (i.e., author names are included on the first page).


Expected Dates (to be finalized once we hear back from conference organizers):

Papers due Apr 17
Papers to PC by Apr 20
PC Reviews due May 13
Notification of Acceptances May 16
Camera-Ready due May 31

Submissions should be made through EasyChair: https://easychair.org/conferences/?conf=trust2018