[agents] ACL 2020 Second Grand-Challenge and Workshop on Multimodal Language (Challenge-HML)

Amirali Bagher Zadeh abagherz at ANDREW.CMU.EDU
Fri Jan 31 08:00:00 EST 2020


ACL 2020 Second Grand-Challenge and Workshop on Multimodal Language
(Challenge-HML)

Website: http://multicomp.cs.cmu.edu/acl2020multimodalworkshop/

Keynotes:

   - Rada Mihalcea – University of Michigan (USA)
   - Ruslan Salakhutdinov – Carnegie Mellon University (USA)
   - M. Ehsan Hoque – University of Rochester (USA)
   - Yejin Choi – University of Washington (USA)

Important Dates

   - Paper Deadline: April 25th (Workshop) and May 1st (Grand-Challenge)
   - Grand-challenge test data release: February 15th
   - Notification of Acceptance: May 9th
   - Camera-ready: May 21st
   - Workshop location: ACL 2020, Seattle, USA

**All deadlines are at 11:59 pm, Anywhere on Earth (AoE), in 2020.**

Supported by:

   - National Science Foundation (NSF)
   - Intel

=================================================================

The ACL 2020 Second Grand-Challenge and Workshop on Multimodal Language
(Challenge-HML) offers a unique opportunity for interdisciplinary
researchers to study and model interactions between the language, vision,
and acoustic modalities. Modeling multimodal language is a growing research
area in NLP. This research area pushes the boundaries of multimodal
learning and requires advanced neural modeling of all three constituent
modalities. Advances in this area allow the field of NLP to take a leap
towards better generalization to real-world communication (as opposed to
being limited to textual applications) and better downstream performance in
Conversational AI, Virtual Reality, Robotics, HCI, Healthcare, and
Education.

There are two tracks for submission: Grand-Challenge and Workshop (the
workshop allows archival and non-archival submissions). The Grand-Challenge
focuses on multimodal sentiment and emotion recognition on the CMU-MOSEI
and MELD datasets, with a grand prize worth over $1k for the winner. The
workshop accepts publications in the research areas listed below. Papers in
the archival track will be published in the ACL workshop proceedings, while
papers in the non-archival track will only be presented during the workshop
(not published in the proceedings). We invite researchers from NLP,
Computer Vision, Speech Processing, Robotics, HCI, and Affective Computing
to submit their papers.

   - Neural Modeling of Multimodal Language
   - Multimodal Dialogue Modeling and Generation
   - Multimodal Sentiment Analysis and Emotion Recognition
   - Language, Vision, and Speech
   - Multimodal Artificial Social Intelligence Modeling
   - Multimodal Commonsense Reasoning
   - Multimodal RL and Control
   - Multimodal Healthcare
   - Multimodal Educational Systems
   - Multimodal Affective Computing
   - Multimodal Robot/Computer Interaction
   - Multimodal and Multimedia Resources
   - Creative Applications of Multimodal Learning in E-commerce, Art, and
     other Impactful Areas

We accept the following types of submissions:

   - Grand-challenge papers: 6-8 pages, with unlimited pages for
     references.
   - Full and short workshop papers: 6-8 and 4 pages respectively, with
     unlimited pages for references.

Submissions must be formatted according to the ACL 2020 style files:
https://acl2020.org/calls/papers/#paper-submission-and-templates

Workshop Organizers

   - Amir Zadeh (Language Technologies Institute, Carnegie Mellon
     University)
   - Louis-Philippe Morency (Language Technologies Institute, Carnegie
     Mellon University)
   - Paul Pu Liang (Machine Learning Department, Carnegie Mellon University)
   - Soujanya Poria (Singapore University of Technology and Design)