[agents] CFP: IVA workshop on methodology and/of evaluation of intelligent virtual agents

Willem-Paul Brinkman - EWI W.P.Brinkman at tudelft.nl
Tue May 14 03:51:20 EDT 2019


---
Date: 2 July 2019
Location: Paris
Website: iva2019methodologyworkshop.wordpress.com 

Conference: ACM Intelligent Virtual Agents (iva2019.sciencesconf.org)

---

The aim of the 2nd workshop on methodology and evaluation is to critically but constructively discuss the empirical evaluation methods used in Human-Computer Interaction (HCI), specifically in the area of Intelligent Virtual Agents. The social and life sciences are in a crisis of methodology, as the results of many scientific studies are difficult or impossible to replicate in subsequent investigations (e.g. Pashler & Wagenmakers, 2012). The Open Science Collaboration (2015) observed, for example, that the effect size of replications was about half of the originally reported effect size, and that where 97% of the original studies had significant results, only 39% of the replication studies did. In fact, it has been suggested that more than 50% of psychological research results might be false (i.e. the theories hold no or very low verisimilitude) (Ioannidis, 2005). Many of the methods employed by HCI researchers come from fields that are currently in a replication crisis. Hence, do our studies have similar issues?

Long before the replication crisis hit psychology, Meehl (1990) suggested ten obfuscating factors that often make research on psychological theories uninterpretable. Reviewing these factors gives us an idea of the scope of the problems our research methodology might face:
1. Loose derivation chain: very few derivation chains running from the theoretical premises to the predicted observational relation are deductively tight;
2. Problematic auxiliary theories: each auxiliary theory is itself nearly as problematic as the main theory we are testing;
3. Problematic ceteris paribus clause;
4. Experimenter error;
5. Inadequate statistical power;
6. Crud factor: everything correlates with everything;
7. Pilot studies: a true pilot study is a main study in the small, but these are often not published, which can lead to a line of research being dropped;
8. Selective bias in submitting reports;
9. Selective editorial bias;
10. Detached validation claim for psychometric instruments: claiming a measure is 'valid' without further consideration.

A variety of ideas to improve research practices have been proposed, and it is likely these ideas can benefit the methods used in the field of HCI. Some actionable points leading to open and reproducible science are pre-registration of experiments, replication of findings, and collaboration and education of researchers. For our field this could mean replicating our stimuli (such as an intelligent virtual agent) and the effects they have on users. The replication crisis needs our attention, and as we reflect on our methods it makes sense to discuss our scientific methods and practices in general.

A workshop aimed at improving the quality of IVA research and methods should be welcomed by all IVA researchers. During the workshop we will discuss the methodological challenges identified in other fields and how they relate to the methods we use in our own. Additionally, we will discuss the proposed remedies and whether they are applicable to the research we conduct. We will discuss whether questions such as those posed above are relevant and, if so, how to go about answering them. This workshop is intended as a starting point and as part of a continuing series of workshops (at IVA and other conferences in the field) on this topic.

The goal is to embrace a positive, proactive approach that is sustainable and will lead to better science (no naming and shaming). The idea is to foster discussion, and one way to achieve this is by having provocative statements to respond to. We invite participants to submit thought-provoking statements about methodology in HCI and/or to respond to statements that we propose. Additionally, we invite (junior & senior) researchers to submit research ideas. Together with the participants and panel, we will offer practical support to improve the quality of their empirical work. Participants can present their statements and/or discussions in an extended abstract (max 3 pages, excluding references).

Papers can be submitted via e-mail to: m.bruijnes at utwente.nl

Submission (extended) Deadline: 1 June 2019


For more information on the workshop see iva2019methodologyworkshop.wordpress.com

Organisation
Merijn Bruijnes (University of Twente)
Ulysses Bernardet (Aston University)
Willem-Paul Brinkman (Delft University of Technology)
Deborah Richards (Macquarie University)
Annika Silvervarg (Linköping University)
Jelte van Waterschoot (University of Twente)


