<div dir="ltr"><font size="2"><span style="font-size:11pt"><b>ACM Transactions on Human-Robot Interaction (T-HRI)</b><br><br>CALL FOR PAPERS <b><span style="color:rgb(0,0,255)">(Extended Deadline)</span></b><br><br>** Apologies for cross-posting **<br><br>We are happy to announce the call for papers for the journal special issue:</span></font><div><div><h3>"Representation Learning for Human and Robot Cognition"</h3></div><div><b>Webpage: <span style="color:rgb(0,0,255)"><a href="https://thri.acm.org/CFP-RLHRC.cfm" target="_blank">https://thri.acm.org/CFP-RLHRC.cfm</a></span></b></div>
<br>
<b><font size="2">I. Aim and Scope</font><br><br></b>Intelligent robots are rapidly moving into human environments, where they collaborate with human users in applications that require high-level cognitive functions for understanding and learning from human behavior across different Human-Robot Interaction (HRI) contexts. To this end, a persistent challenge that attracts much attention in artificial intelligence is representation learning, which refers to learning representations of data that allow probabilistic, non-probabilistic, or connectionist classifiers to efficiently extract relevant features. This active area of research spans many fields and applications, including speech recognition, object recognition, emotion recognition, natural language processing, and language emergence and development, in addition to mirroring human cognitive processes through appropriate computational modeling.<br><br>Learning constitutes a basic operation in the human cognitive system and developmental process: perceptual information, acquired through interaction with the environment, enhances the ability of the sensory system to respond to external stimuli. This learning process depends on the optimality of features (representations of data), which allows humans to make sense of everything they feel, hear, touch, and see in their environment. Intelligent robots could open the door to studying the underlying mechanisms of representation learning and its associated cognitive processes, taking a step closer towards robots that collaborate better with human users in shared spaces.<br><br>This special issue aims to highlight cutting-edge lines of interdisciplinary research in artificial intelligence, cognitive science, neuroscience, cognitive robotics, and human-robot interaction, focusing on representation learning with the objective of creating natural and intelligent interaction between humans and robots. Recent advances and future research directions in representation learning will be discussed in detail in this special issue. <strong><br><br>II. Potential Topics</strong></div><div>
<p>
Topics relevant to this special issue include, but are not limited to: </p><ul style="list-style-type:disc">
<li>Language learning, embodiment, and social intelligence</li>
<li>Human symbol system and symbol emergence in robotics</li>
<li>Computational modeling for high-level human cognitive functions</li>
<li>Predictive learning from sensorimotor information</li>
<li>Multimodal interaction and concept formation</li>
<li>Language and action development</li>
<li>Learning, reasoning, and adaptation in collaborative human-robot tasks</li>
<li>Affordance learning</li>
<li>Cross-situational learning</li>
<li>Learning by demonstration and imitation</li>
<li>Language and grammar induction in robots</li>
</ul><p></p></div><div><p>
<strong>III. Submission</strong></p>
<p>ACM Transactions on Human-Robot Interaction is a peer-reviewed, interdisciplinary, open-access journal using an online submission and manuscript tracking system. To submit your paper, please:<br></p><ul><li>Go to <a href="https://mc.manuscriptcentral.com/thri" target="_blank">https://mc.manuscriptcentral.com/thri</a> and log in, or follow the "Create an account" link to register.</li><li>After logging in, click the "Author" tab.</li><li>Follow the instructions to "Start New Submission".</li><li>Choose the submission category "<b>SI: Representation Learning for Human and Robot Cognition</b>".<br></li></ul></div><div><p>
<strong>IV. Timeline<br></strong></p>
<ul><li>Deadline for paper submission: <span style="color:rgb(0,0,255)"><b>August 1</b></span>, 2018</li><li>First notification to authors: September 15, 2018</li><li>Deadline for submission of revised papers: November 15, 2018</li><li>Final notification to authors: January 15, 2019</li><li>Deadline for submission of camera-ready manuscripts: March 1, 2019</li><li>Expected publication date: May 2019<br></li></ul><p></p></div><p>
<strong>V. Guest Editors</strong></p>Takato Horii, The University of Electro-Communications, Japan
(<a href="mailto:takato@uec.ac.jp" target="_blank">takato@uec.ac.jp</a>).
<br>Dr. Amir Aly, Ritsumeikan University, Japan (<a href="mailto:amir.aly@em.ci.ritsumei.ac.jp" target="_blank">amir.aly@em.ci.ritsumei.ac.jp</a>).
<br>Dr. Yukie Nagai, National Institute of Information and Communications Technology (NICT), Japan
(<a href="mailto:yukie@nict.go.jp" target="_blank">yukie@nict.go.jp</a>).
<br>Prof. Takayuki Nagai, The University of Electro-Communications, Japan
(<a href="mailto:nagai@ee.uet.at.jp" target="_blank">nagai@ee.uet.at.jp</a>).<br clear="all"><br>--------------------- <br><div class="gmail_signature"><div dir="ltr"><font size="2"><b>Amir Aly, Ph.D.</b><br>Senior Researcher<br>Emergent Systems Laboratory<br>College of Information Science and Engineering<br>Ritsumeikan University<br>1-1-1 Noji Higashi, Kusatsu, Shiga 525-8577<br>Japan</font></div></div>
</div>