Workshop Goals

As a community of HCI researchers, we need to steer research toward the slippery problem of eliciting and capturing emotions in the moment. Together with the Affective Computing community, we need to concretely define the capture of valid emotion ground truths as an agenda and goal for the CHI community. Thus, the overarching goal of this first edition of the MEEC workshop is to establish lasting and meaningful connections across research communities concerned with affective computing, and to bring together students, researchers, and practitioners from various disciplines who study, design, build, and/or evaluate the elicitation, capture, and prediction of human emotions.

The workshop will be highly interactive (see Workshop Program) and will involve sketching out and deeply examining the challenges of elicitation and capture. Specifically, we consider the following questions:

Elicitation:

  • Which multi-modal (e.g., film, music) and multi-sensory (e.g., auditory, taste, olfactory) elicitation methods are most suitable for which contexts?
  • What are the peculiarities across domains (e.g., understanding mobile interaction with and within automated vehicles)?
  • How can we leverage the immersiveness of VR technologies for use as an elicitation method, and what limits does this impose on capture?
  • How can we elicit emotional states that unfold over time (e.g., mood)?
  • What ethical considerations in elicitation need to be addressed to ensure we respect the users’ personal, cognitive, and emotional boundaries?

Capture:

  • How can we capture a wider range of human emotions, feelings, and moods in the moment they occur? While methods are being developed to collect in situ affect data, challenges remain in the range of emotions and moods we can capture.
  • Which emotions should we capture and how do cross-cultural differences impact this?
  • Which emotion models do we draw upon: discrete (e.g., Ekman’s six basic emotions) or dimensional (e.g., Russell’s circumplex model)?
  • Which annotation modalities (e.g., speech, gestures) and tools (e.g., questionnaires, experience sampling methods (ESMs)) are most apt?
  • Which devices (e.g., mobile, wearable) and sensors (e.g., RGB / thermal cameras, EEG) provide a good trade-off between unobtrusiveness and accurate measurements?
  • How can we factor in attentional considerations (e.g., interruptions) to lower drop-off rates and improve the quality of self-reports in ESMs?
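The tension between discrete and dimensional emotion models raised above can be made concrete with a small sketch. The snippet below places Ekman’s six basic emotions at illustrative (not empirically calibrated) coordinates in Russell’s valence–arousal plane and snaps a dimensional self-report to the nearest discrete label; the coordinates and the nearest-neighbour rule are our assumptions for illustration, not an established mapping.

```python
import math

# Illustrative (not empirically calibrated) valence/arousal coordinates
# for Ekman's six basic emotions on Russell's circumplex, each in [-1, 1].
EKMAN_ON_CIRCUMPLEX = {
    "happiness": (0.8, 0.5),
    "surprise": (0.4, 0.8),
    "fear": (-0.6, 0.7),
    "anger": (-0.7, 0.6),
    "disgust": (-0.7, 0.2),
    "sadness": (-0.7, -0.4),
}

def nearest_discrete_label(valence: float, arousal: float) -> str:
    """Map a dimensional self-report to the closest discrete emotion label."""
    return min(
        EKMAN_ON_CIRCUMPLEX,
        key=lambda label: math.dist((valence, arousal), EKMAN_ON_CIRCUMPLEX[label]),
    )

# Example: a high-valence, moderate-arousal self-report.
print(nearest_discrete_label(0.7, 0.4))  # → happiness
```

Such a projection is lossy in both directions, which is precisely why the choice of model matters for what ground truth a capture tool can record.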

After the workshop, we will publish a summary report on the workshop website, contribute an article to ACM Interactions, and put the proceedings online.