<div dir="ltr">Hello everyone.<br>We have received a number of requests to extend the paper submission deadline.<br><br>[Call for Papers]<br>Advanced Robotics Special Issue on<br>Multimodal Processing and Robotics for Dialogue Systems<br><br>Co-Editors: <br>Prof. David Traum (University of Southern California, USA)<br>Prof. Gabriel Skantze (KTH Royal Institute of Technology, Sweden)<br>Prof. Hiromitsu Nishizaki (University of Yamanashi, Japan)<br>Prof. Ryuichiro Higashinaka (Nagoya University, Japan)<br>Dr. Takashi Minato (RIKEN/ATR, Japan)<br>Prof. Takayuki Nagai (Osaka University, Japan)<br> <br>Publication in Vol. 37, Issue 21 (Nov 2023)<br>SUBMISSION DEADLINE: 28 Feb 2023<br>In recent years, as seen in smart speakers such as Google Home and Amazon<br>Alexa, there has been remarkable progress in spoken dialogue system<br>technology for conversing with users in human-like utterances. In the future,<br>such dialogue systems are expected to support our daily activities in<br>various ways. However, dialogue in daily activities is more complex than<br>dialogue with smart speakers; even with current spoken dialogue technology, it<br>is still difficult to maintain a successful dialogue in many<br>situations. For example, in customer service through dialogue, operators<br>must respond appropriately to the different ways of speaking and the<br>requests of various customers. In such cases, we humans adjust our manner<br>of speaking to the type of customer, and we carry out the dialogue<br>successfully by using not only our voice but also our gaze and facial<br>expressions.<br>This type of human-like interaction is far beyond the capabilities of existing<br>spoken dialogue systems. 
Humanoid robots have the potential to realize<br>such interaction: they can recognize not only the user's voice<br>but also facial expressions and gestures using various sensors, and they can<br>express themselves in many ways, such as through gestures and facial<br>expressions, using their bodies. These rich means of expression may allow<br>them to sustain a dialogue in a manner different from conventional<br>dialogue systems.<br>Combining such robots with dialogue systems can greatly expand the<br>possibilities of dialogue systems while at the same time posing a<br>variety of new challenges. Various research and development efforts are<br>currently underway to address these new challenges, including the "Dialogue<br>Robot Competition" at IROS 2022.<br>For this special issue, we invite a wide range of papers on multimodal<br>dialogue systems and dialogue robots, their applications, and fundamental<br>research. Prospective contributed papers are invited to cover, but are not<br>limited to, the following topics on multimodal dialogue systems and robots:<br>*Spoken dialogue processing<br>*Multimodal processing<br>*Speech recognition<br>*Text-to-speech<br>*Emotion recognition<br>*Motion generation<br>*Facial expression generation<br>*System architecture<br>*Natural language processing<br>*Knowledge representation<br>*Benchmarking<br>*Evaluation methods<br>*Ethics<br>*Dialogue systems and robots for competitions<br>Submission:<br>The full-length manuscript (either a PDF or an MS Word file) should be sent<br>by 28 Feb 2023 to the office of Advanced Robotics, the Robotics Society of<br>Japan, through the journal's online submission system<br>(<a href="https://www.rsj.or.jp/AR/submission">https://www.rsj.or.jp/AR/submission</a>). Sample manuscript templates and<br>detailed instructions for authors are available on the journal's<br>website.<br>Note that the word count includes references. 
Captions and author bios are not counted.<br>For special issues, longer papers can be accepted with the editors' approval.<br>Please contact the editors before submission if your manuscript exceeds<br>the word limit.<br clear="all"><div><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><pre cols="72">--
=====================================================================
Shogo Okada, Ph.D.
Associate Professor, School of Information Science,
Japan Advanced Institute of Science and Technology (JAIST)
1-1 Asahidai, Nomi, Ishikawa, 923-1292 Japan
TEL +81-76-151-1205, FAX +81-76-151-1149
E-mail : <a href="mailto:okada-s@jaist.ac.jp" target="_blank">okada-s@jaist.ac.jp</a>
=====================================================================</pre></div></div></div></div>