<div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><span id="gmail-docs-internal-guid-24e96bac-7fff-5da2-3e0e-4ffaf0dba675" style="color:rgb(0,0,0)"><h1 dir="ltr" style="line-height:1.32;margin-top:15pt;margin-bottom:0pt;padding:0pt 0pt 8pt"><span style="font-family:Arial;background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"><font size="4">CFP - NAACL 2021 Third Workshop on Multimodal Artificial Intelligence (MAI-Workshop) - June 6, 2021</font></span></h1><h1 dir="ltr" style="line-height:1.32;margin-top:0pt;margin-bottom:0pt;padding:7pt 0pt 8pt"><span style="font-family:Arial;background-color:transparent;font-weight:400;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"><font size="2">The workshop is an extension of Workshop on Multimodal Language (Challenge-HML) @ ACL 2018, 2020</font></span></h1><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-family:Arial;background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Website: </span><span style="text-decoration:underline;font-family:Arial;background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;text-decoration-skip:none;vertical-align:baseline;white-space:pre-wrap"><a href="http://multicomp.cs.cmu.edu/naacl2021multimodalworkshop/" style="text-decoration:none">http://multicomp.cs.cmu.edu/naacl2021multimodalworkshop/</a></span></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span id="gmail-docs-internal-guid-4237d142-7fff-869b-7b1d-d2ecae4de005"><span style="font-family:Arial;color:rgb(51,51,51);background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Submissions: </span><a href="https://www.softconf.com/naacl2021/MAIWorkshop/" style="text-decoration:none"><span style="font-family:Arial;background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;text-decoration:underline;text-decoration-skip:none;vertical-align:baseline;white-space:pre-wrap">https://www.softconf.com/naacl2021/MAIWorkshop/</span></a></span><br></p><br><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-family:Arial;color:rgb(51,51,51);background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">COVID-19 UPDATE:</span></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt;padding:8pt 0pt 10pt"><span style="font-family:Arial;color:rgb(51,51,51);background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">We hope everyone and their loved ones are staying safe during the COVID-19 pandemic. MAI-workshop will be held online. </span></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-family:Arial;color:rgb(51,51,51);background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">The NAACL 2021 Workshop on Multimodal Artificial Intelligence (MAI-Workshop) offers a unique opportunity for interdisciplinary researchers to study and model interactions between (but not limited to) modalities of language, vision, and acoustic. 
Advances in multimodal learning allows the field of NLP to take the leap towards better generalization to real-world (as opposed to limitation to textual applications), and better downstream performance in Conversational AI, Virtual Reality, Robotics, HCI, Healthcare, and Education.</span></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:10pt"><span style="font-family:Arial;color:rgb(51,51,51);background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">We invite researchers from NLP, Computer Vision, Speech Processing, Robotics, HCI, and Affective Computing to submit their papers.</span></p><ul style="margin-top:0px;margin-bottom:0px"><li dir="ltr" style="list-style-type:disc;font-family:Arial;color:rgb(51,51,51);background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Neural Modeling of Multimodal Language</span></p></li><li dir="ltr" style="list-style-type:disc;font-family:Arial;color:rgb(51,51,51);background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Multimodal Dialogue Modeling and Generation</span></p></li><li dir="ltr" style="list-style-type:disc;font-family:Arial;color:rgb(51,51,51);background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Multimodal Sentiment Analysis and Emotion Recognition</span></p></li><li dir="ltr" style="list-style-type:disc;font-family:Arial;color:rgb(51,51,51);background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Language, Vision, and Speech</span></p></li><li dir="ltr" style="list-style-type:disc;font-family:Arial;color:rgb(51,51,51);background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Multimodal Artificial Social Intelligence Modeling</span></p></li><li dir="ltr" style="list-style-type:disc;font-family:Arial;color:rgb(51,51,51);background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span 
style="background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Multimodal Commonsense Reasoning</span></p></li><li dir="ltr" style="list-style-type:disc;font-family:Arial;color:rgb(51,51,51);background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Multimodal RL and Control </span></p></li><li dir="ltr" style="list-style-type:disc;font-family:Arial;color:rgb(51,51,51);background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Multimodal Healthcare</span></p></li><li dir="ltr" style="list-style-type:disc;font-family:Arial;color:rgb(51,51,51);background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Multimodal Educational Systems</span></p></li><li dir="ltr" style="list-style-type:disc;font-family:Arial;color:rgb(51,51,51);background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Multimodal Affective Computing</span></p></li><li dir="ltr" style="list-style-type:disc;font-family:Arial;color:rgb(51,51,51);background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Multimodal Robot/Computer Interaction</span></p></li><li dir="ltr" style="list-style-type:disc;font-family:Arial;color:rgb(51,51,51);background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Multimodal and Multimedia Resources</span></p></li><li dir="ltr" style="list-style-type:disc;font-family:Arial;color:rgb(51,51,51);background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:12pt"><span style="background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Creative Applications of Multimodal Learning in E-commerce, Art, and 
other Impactful Areas.</span></p></li></ul><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"> </p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:10pt"><span style="font-family:Arial;color:rgb(51,51,51);background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Keynotes:</span></p><ul style="margin-top:0px;margin-bottom:0px"><li dir="ltr" style="list-style-type:disc;font-family:Arial;color:rgb(51,51,51);background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre"><p dir="ltr" style="line-height:1.38;margin-top:8pt;margin-bottom:0pt"><span style="background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Kristen Grauman – University of Texas at Austin (USA)</span></p></li><li dir="ltr" style="list-style-type:disc;font-family:Arial;color:rgb(51,51,51);background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Anind Dey – University Washington (USA)</span></p></li><li dir="ltr" style="list-style-type:disc;font-family:Arial;color:rgb(51,51,51);background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:10pt"><span style="background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Emily Mower Provost - University of Michigan (USA)</span></p></li></ul><p dir="ltr" style="line-height:1.38;margin-top:8pt;margin-bottom:10pt"><span style="font-family:Arial;color:rgb(51,51,51);background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Important Dates </span></p><ul style="margin-top:0px;margin-bottom:0px"><li dir="ltr" style="list-style-type:disc;font-family:Arial;background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="color:rgb(51,51,51);background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">March 15th: Deadline for all submissions</span></p></li></ul><ul style="margin-top:0px;margin-bottom:0px"><li dir="ltr" style="list-style-type:disc;font-family:Arial;color:rgb(51,51,51);background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">April 15, 2021: Notification of Acceptance</span></p></li></ul><ul style="margin-top:0px;margin-bottom:0px"><li dir="ltr" style="list-style-type:disc;font-family:Arial;color:rgb(51,51,51);background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre"><p 
dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">April 26, 2021: Camera-ready papers </span></p></li></ul><ul style="margin-top:0px;margin-bottom:0px"><li dir="ltr" style="list-style-type:disc;font-family:Arial;color:rgb(51,51,51);background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:12pt"><span style="background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">June 6, 2021: Workshop @ NAACL</span></p></li></ul><p dir="ltr" style="line-height:1.38;margin-top:8pt;margin-bottom:0pt;padding:0pt 0pt 10pt"><span style="font-family:Arial;color:rgb(51,51,51);background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">**All deadlines @11:59 pm anywhere on Earth- year 2021)**</span></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"> </p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-family:Arial;color:rgb(51,51,51);background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">=================================================================</span></p><p dir="ltr" style="line-height:1.38;margin-top:8pt;margin-bottom:0pt;padding:0pt 0pt 10pt"><span style="font-family:Arial;color:rgb(51,51,51);background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">There are two tracks for submission: archival and non-archival submissions. Archival track will be published in NAACL workshop proceedings and non-archival track will be only presented during the workshop (but not published in proceedings). Full and short workshop papers 6-8 and 4 pages respectively with infinite references. 
</span></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-family:Arial;color:rgb(51,51,51);background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Submission must be formatted according to NAACL 2021 style files: <a href="https://2021.naacl.org/">https://2021.naacl.org/</a></span></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:10pt"><span style="font-family:Arial;color:rgb(51,51,51);background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Workshop Organizers</span></p><ul style="margin-top:0px;margin-bottom:0px"><li dir="ltr" style="list-style-type:disc;font-family:Arial;color:rgb(51,51,51);background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre"><p dir="ltr" style="line-height:1.38;margin-top:8pt;margin-bottom:0pt"><span style="background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Amir Zadeh – Language Technologies Institute, Carnegie Mellon University</span></p></li><li dir="ltr" style="list-style-type:disc;font-family:Arial;color:rgb(51,51,51);background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Louis-Philippe Morency – Language Technologies Institute, Carnegie Mellon University</span></p></li><li dir="ltr" style="list-style-type:disc;font-family:Arial;color:rgb(51,51,51);background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Paul Pu Liang – Machine Learning Department, Carnegie Mellon University</span></p></li><li dir="ltr" style="list-style-type:disc;font-family:Arial;color:rgb(51,51,51);background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Candace Ross – Massachusetts Institute of Technology</span></p></li><li dir="ltr" style="list-style-type:disc;font-family:Arial;color:rgb(51,51,51);background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Ruslan Salakhutdinov – Carnegie Mellon University</span></p></li><li dir="ltr" style="list-style-type:disc;font-family:Arial;color:rgb(51,51,51);background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre"><p dir="ltr" 
style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Soujanya Poria – Singapore University of Technology and Design</span></p></li><li dir="ltr" style="list-style-type:disc;font-family:Arial;color:rgb(51,51,51);background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Erik Cambria – Nanyang Technological University</span></p></li><li dir="ltr" style="list-style-type:disc;font-family:Arial;color:rgb(51,51,51);background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:10pt"><span style="background-color:transparent;font-variant-ligatures:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Kelly Shi – Carnegie Mellon University</span></p></li></ul><br><br></span></div></div></div></div>