ICMI'13 Table of Contents

ICMI 2013 Chairs' Welcome
Julien Epps (The University of New South Wales)
Fang Chen (National ICT Australia)
Sharon Oviatt (Incaa Designs)
Kenji Mase (Nagoya University)
Andrew Sears (Rochester Institute of Technology)
Kristiina Jokinen (University of Helsinki)
Björn Schuller (Technische Universität München)

ICMI 2013 Conference Organization

ICMI 2013 Additional Reviewers

ICMI 2013 Sponsor & Supporters

Keynote 1

Behavior Imaging and the Study of Autism (Page 1)
James M. Rehg (Georgia Institute of Technology)

Oral Session 1: Personality

On the Relationship between Head Pose, Social Attention and Personality Prediction for Unstructured and Dynamic Group Interactions (Page 3)
Ramanathan Subramanian (Advanced Digital Sciences Center)
Yan Yan (University of Trento)
Jacopo Staiano (University of Trento)
Oswald Lanz (Fondazione Bruno Kessler)
Nicu Sebe (University of Trento)

One of a Kind: Inferring Personality Impressions in Meetings (Page 11)
Oya Aran (Idiap Research Institute)
Daniel Gatica-Perez (Idiap Research Institute & École Polytechnique Fédérale de Lausanne)

Who Is Persuasive? The Role of Perceived Personality and Communication Modality in Social Multimedia (Page 19)
Gelareh Mohammadi (Idiap Research Institute & École Polytechnique Fédérale de Lausanne)
Sunghyun Park (University of Southern California)
Kenji Sagae (University of Southern California)
Alessandro Vinciarelli (University of Glasgow & Idiap Research Institute)
Louis-Philippe Morency (University of Southern California)

Going Beyond Traits: Multimodal Classification of Personality States in the Wild (Page 27)
Kyriaki Kalimeri (University of Trento & Bruno Kessler Foundation)
Bruno Lepri (Bruno Kessler Foundation & Massachusetts Institute of Technology)
Fabio Pianesi (Bruno Kessler Foundation)

Oral Session 2: Communication

Implementation and Evaluation of a Multimodal Addressee Identification Mechanism for Multiparty Conversation Systems (Page 35)
Yukiko I. Nakano (Seikei University)
Naoya Baba (Seikei University)
Hung-Hsuan Huang (Ritsumeikan University)
Yuki Hayashi (Seikei University)

Managing Chaos: Models of Turn-taking in Character-multichild Interactions (Page 43)
Iolanda Leite (Disney Research, Pittsburgh & INESC-ID)
Hannaneh Hajishirzi (Disney Research, Pittsburgh & University of Washington)
Sean Andrist (Disney Research, Pittsburgh & University of Wisconsin-Madison)
Jill Lehman (Disney Research, Pittsburgh)

Speaker-Adaptive Multimodal Prediction Model for Listener Responses (Page 51)
Iwan de Kok (University of Twente)
Dirk Heylen (University of Twente)
Louis-Philippe Morency (University of Southern California)

User Experiences of Mobile Audio Conferencing with Spatial Audio, Haptics and Gestures (Page 59)
Jussi Rantala (University of Tampere)
Sebastian Müller (University of Tampere)
Roope Raisamo (University of Tampere)
Katja Suhonen (Tampere University of Technology)
Kaisa Väänänen-Vainio-Mattila (Tampere University of Technology)
Vuokko Lantz (Nokia Research Center)

Demo Session 1

A Framework for Multimodal Data Collection, Visualization, Annotation and Learning (Page 67)
Anne Loomis Thompson (Microsoft Research)
Dan Bohus (Microsoft Research)

Demonstration of Sketch-Thru-Plan: A Multimodal Interface for Command and Control (Page 69)
Philip R. Cohen (Adapx Inc.)
Cecelia Buchanan (Adapx Inc.)
Edward J. Kaiser (Adapx Inc.)
Michael Corrigan (Adapx Inc.)
Scott Lind (Adapx Inc.)
Matt Wesson (Adapx Inc.)

Robotic Learning Companions for Early Language Development (Page 71)
Jacqueline M. Kory (Massachusetts Institute of Technology)
Sooyeon Jeong (Massachusetts Institute of Technology)
Cynthia L. Breazeal (Massachusetts Institute of Technology)

WikiTalk Human-Robot Interactions (Page 73)
Graham Wilcock (University of Helsinki)
Kristiina Jokinen (University of Helsinki)

Poster Session 1

Saliency-Guided 3D Head Pose Estimation on 3D Expression Models (Page 75)
Peng Liu (State University of New York at Binghamton)
Michael Reale (State University of New York at Binghamton)
Xing Zhang (State University of New York at Binghamton)
Lijun Yin (State University of New York at Binghamton)

Predicting Next Speaker and Timing from Gaze Transition Patterns in Multi-Party Meetings (Page 79)
Ryo Ishii (NTT Corporation)
Kazuhiro Otsuka (NTT Corporation)
Shiro Kumano (NTT Corporation)
Masafumi Matsuda (NTT Corporation)
Junji Yamato (NTT Corporation)

A Semi-Automated System for Accurate Gaze Coding in Natural Dyadic Interactions (Page 87)
Kenneth A. Funes-Mora (Idiap Research Institute & École Polytechnique Fédérale de Lausanne)
Laurent Nguyen (Idiap Research Institute & École Polytechnique Fédérale de Lausanne)
Daniel Gatica-Perez (Idiap Research Institute & École Polytechnique Fédérale de Lausanne)
Jean-Marc Odobez (Idiap Research Institute & École Polytechnique Fédérale de Lausanne)

Evaluating the Robustness of an Appearance-based Gaze Estimation Method for Multimodal Interfaces (Page 91)
Nanxiang Li (University of Texas at Dallas)
Carlos Busso (University of Texas at Dallas)

A Gaze-based Method for Relating Group Involvement to Individual Engagement in Multimodal Multiparty Dialogue (Page 99)
Catharine Oertel (KTH The Royal Institute of Technology)
Giampiero Salvi (KTH The Royal Institute of Technology)

Leveraging the Robot Dialog State for Visual Focus of Attention Recognition (Page 107)
Samira Sheikhi (Idiap Research Institute)
Vasil Khalidov (Idiap Research Institute)
David Klotz (Bielefeld University)
Britta Wrede (Bielefeld University)
Jean-Marc Odobez (Idiap Research Institute)

CoWME: A General Framework to Evaluate Cognitive Workload During Multimodal Interaction (Page 111)
Davide Maria Calandra (University of Naples "Federico II")
Antonio Caso (University of Naples "Federico II")
Francesco Cutugno (University of Naples "Federico II")
Antonio Origlia (University of Naples "Federico II")
Silvia Rossi (University of Naples "Federico II")

Hi YouTube! Personality Impressions and Verbal Content in Social Video (Page 119)
Joan-Isaac Biel (Idiap Research Institute & École Polytechnique Fédérale de Lausanne)
Vagia Tsiminaki (Idiap Research Institute)
John Dines (Idiap Research Institute)
Daniel Gatica-Perez (Idiap Research Institute & École Polytechnique Fédérale de Lausanne)

Cross-Domain Personality Prediction: From Video Blogs to Small Group Meetings (Page 127)
Oya Aran (Idiap Research Institute)
Daniel Gatica-Perez (Idiap Research Institute & École Polytechnique Fédérale de Lausanne)

Automatic Detection of Deceit in Verbal Communication (Page 131)
Rada Mihalcea (University of Michigan)
Verónica Pérez-Rosas (University of North Texas)
Mihai Burzo (University of Michigan - Flint)

Audiovisual Behavior Descriptors for Depression Assessment (Page 135)
Stefan Scherer (University of Southern California)
Giota Stratou (University of Southern California)
Louis-Philippe Morency (University of Southern California)

A Markov Logic Framework for Recognizing Complex Events from Multimodal Data (Page 141)
Young Chol Song (University of Rochester)
Henry Kautz (University of Rochester)
James Allen (University of Rochester)
Mary Swift (University of Rochester)
Yuncheng Li (University of Rochester)
Jiebo Luo (University of Rochester)
Ce Zhang (University of Wisconsin-Madison)

Interactive Relevance Search and Modeling: Support for Expert-Driven Analysis of Multimodal Data (Page 149)
Chreston Miller (Virginia Tech)
Francis Quek (Virginia Tech)
Louis-Philippe Morency (University of Southern California)

Predicting Speech Overlaps from Speech Tokens and Co-occurring Body Behaviours in Dyadic Conversations (Page 157)
Costanza Navarretta (University of Copenhagen)

Interaction Analysis and Joint Attention Tracking in Augmented Reality (Page 165)
Alexander Neumann (Bielefeld University)
Christian Schnier (Bielefeld University)
Thomas Hermann (Bielefeld University)
Karola Pitsch (Bielefeld University)

Mo!Games: Evaluating Mobile Gestures in the Wild (Page 173)
Julie R. Williamson (University of Glasgow)
Stephen Brewster (University of Glasgow)
Rama Vennelakanti (Hewlett-Packard Labs India)

Timing and Entrainment of Multimodal Backchanneling Behavior for an Embodied Conversational Agent (Page 181)
Benjamin Inden (Bielefeld University)
Zofia Malisz (Bielefeld University)
Petra Wagner (Bielefeld University)
Ipke Wachsmuth (Bielefeld University)

Video Analysis of Approach-Avoidance Behaviors of Teenagers Speaking with Virtual Agents (Page 189)
David Antonio Gómez Jáuregui (LIMSI-CNRS)
Léonor Philip (LIMSI-CNRS)
Céline Clavel (LIMSI-CNRS)
Stéphane Padovani (POWOWBOX)
Mahin Bailly (BORDAS)
Jean-Claude Martin (LIMSI-CNRS)

A Dialogue System for Multimodal Human-Robot Interaction (Page 197)
Lorenzo Lucignano (Università di Napoli "Federico II")
Francesco Cutugno (Università di Napoli "Federico II")
Silvia Rossi (Università di Napoli "Federico II")
Alberto Finzi (Università di Napoli "Federico II")

The Zigzag Paradigm: A New P300-based Brain Computer Interface (Page 205)
Qasem Obeidat (North Dakota State University)
Tom Campbell (Universität Oldenburg)
Jun Kong (North Dakota State University)

Oral Session 3: Intelligent & Multimodal Interfaces

Interfaces for Thinkers: Computer Input Capabilities that Support Inferential Reasoning (Page 221)
Sharon Oviatt (Incaa Designs)

Adaptive Timeline Interface to Personal History Data (Page 229)
Antti Ajanki (Aalto University School of Science)
Markus Koskela (Aalto University School of Science)
Jorma Laaksonen (Aalto University School of Science)
Samuel Kaski (Aalto University School of Science)

Learning a Sparse Codebook of Facial and Body Microexpressions for Emotion Recognition (Page 237)
Yale Song (Massachusetts Institute of Technology)
Louis-Philippe Morency (University of Southern California)
Randall Davis (Massachusetts Institute of Technology)

Keynote 2

Giving Interaction a Hand - Deep Models of Co-speech Gesture in Multimodal Systems (Page 245)
Stefan Kopp (Bielefeld University)

Oral Session 4: Embodied Interfaces

Five Key Challenges in End-User Development for Tangible and Embodied Interaction (Page 247)
Daniel Tetteroo (Eindhoven University of Technology)
Iris Soute (Eindhoven University of Technology)
Panos Markopoulos (Eindhoven University of Technology)

How Can I Help You? Comparing Engagement Classification Strategies for a Robot Bartender (Page 255)
Mary Ellen Foster (Heriot-Watt University)
Andre Gaschler (TU München)
Manuel Giuliani (TU München)

Comparing Task-Based and Socially Intelligent Behaviour in a Robot Bartender (Page 263)
Manuel Giuliani (fortiss GmbH)
Ronald P. A. Petrick (University of Edinburgh)
Mary Ellen Foster (Heriot-Watt University)
Andre Gaschler (fortiss GmbH)
Amy Isard (University of Edinburgh)
Maria Pateraki (Foundation for Research and Technology - Hellas)
Markos Sigalas (Foundation for Research and Technology - Hellas)

A Dynamic Multimodal Approach for Assessing Learners' Interaction Experience (Page 271)
Imène Jraidi (Université de Montréal)
Maher Chaouachi (Université de Montréal)
Claude Frasson (Université de Montréal)

Oral Session 5: Hand and Body

Relative Accuracy Measures for Stroke Gestures (Page 279)
Radu-Daniel Vatavu (University Stefan cel Mare of Suceava)
Lisa Anthony (University of Maryland Baltimore County)
Jacob O. Wobbrock (University of Washington)

Aiding Human Discovery of Handwriting Recognition Errors (Page 295)
Ryan Stedman (University of Waterloo)
Michael Terry (University of Waterloo)
Edward Lank (University of Waterloo)

Context-based Conversational Hand Gesture Classification in Narrative Interaction (Page 303)
Shogo Okada (Tokyo Institute of Technology)
Mayumi Bono (National Institute of Informatics)
Katsuya Takanashi (Kyoto University)
Yasuyuki Sumi (Future University Hakodate)
Katsumi Nitta (Tokyo Institute of Technology)

Demo Session 2

A Haptic Touchscreen Interface for Mobile Devices (Page 311)
Jong-Uk Lee (Electronics and Telecommunications Research Institute)
Jeong-Mook Lim (Electronics and Telecommunications Research Institute)
Heesook Shin (Electronics and Telecommunications Research Institute)
Ki-Uk Kyung (Electronics and Telecommunications Research Institute)

A Social Interaction System for Studying Humor with the Robot NAO (Page 313)
Laurence Devillers (Université Paris-Sorbonne)
Mariette Soury (Université Paris-Sud)

TaSST: Affective Mediated Touch (Page 315)
Aduén Darriba Frederiks (Amsterdam University of Applied Sciences)
Dirk Heylen (University of Twente)
Gijs Huisman (University of Twente)

Talk ROILA to your Robot (Page 317)
Omar Mubin (University of Western Sydney)
Joshua Henderson (University of Western Sydney)
Christoph Bartneck (University of Canterbury)

NEMOHIFI: An Affective HiFi Agent (Page 319)
Syaheerah Lebai Lutfi (Universiti Sains Malaysia)
Fernando Fernández-Martínez (Universidad Carlos III de Madrid)
Jaime Lorenzo-Trueba (Universidad Politécnica de Madrid)
Roberto Barra-Chicote (Universidad Politécnica de Madrid)
Juan Manuel Montero (Universidad Politécnica de Madrid)

Poster Session 2: Doctoral Spotlight

Persuasiveness in Social Multimedia: The Role of Communication Modality and the Challenge of Crowdsourcing Annotations (Page 321)
Sunghyun Park (University of Southern California)

Towards a Dynamic View of Personality: Multimodal Classification of Personality States in Everyday Situations (Page 325)
Kyriaki Kalimeri (University of Trento & Bruno Kessler Foundation)

Designing Effective Multimodal Behaviors for Robots: A Data-Driven Perspective (Page 329)
Chien-Ming Huang (University of Wisconsin-Madison)

Controllable Models of Gaze Behavior for Virtual Agents and Humanlike Robots (Page 333)
Sean Andrist (University of Wisconsin-Madison)

The Nature of the Bots: How People Respond to Robots, Virtual Agents and Humans as Multimodal Stimuli (Page 337)
Jamy Li (Stanford University)

Adaptive Virtual Rapport for Embodied Conversational Agents (Page 341)
Ivan Gris (The University of Texas at El Paso)

3D Head Pose and Gaze Tracking and Their Application to Diverse Multimodal Tasks (Page 345)
Kenneth Alberto Funes Mora (Idiap Research Institute and École Polytechnique Fédérale de Lausanne)

Towards Developing a Model for Group Involvement and Individual Engagement (Page 349)
Catharine Oertel (KTH Royal Institute of Technology)

Gesture Recognition Using Depth Images (Page 353)
Bin Liang (Charles Sturt University)

Modeling Semantic Aspects of Gaze Behavior While Catalog Browsing (Page 357)
Erina Ishikawa (Kyoto University)

Computational Behaviour Modelling for Autism Diagnosis (Page 361)
Shyam Sundar Rajagopalan (University of Canberra)

Grand Challenge Overviews

ChaLearn Multi-Modal Gesture Recognition 2013: Grand Challenge and Workshop Summary (Page 365)
Sergio Escalera (Universitat Autònoma de Barcelona)
Jordi Gonzàlez (Universitat de Barcelona)
Xavier Baró (Open University of Catalonia)
Miguel Reyes (Universitat de Barcelona)
Isabelle Guyon (ChaLearn)
Vassilis Athitsos (University of Texas)
Hugo J. Escalante (INAOE)
Leonid Sigal (Disney Research, Pittsburgh)
Antonis Argyros (FORTH)
Cristian Sminchisescu (Lund University)
Richard Bowden (University of Surrey)
Stan Sclaroff (Boston University)

Emotion Recognition in the Wild Challenge (EmotiW) Challenge and Workshop Summary (Page 371)
Abhinav Dhall (Australian National University)
Roland Goecke (University of Canberra & Australian National University)
Jyoti Joshi (University of Canberra & Australian National University)
Michael Wagner (University of Canberra & Australian National University)
Tom Gedeon (Australian National University)

ICMI 2013 Grand Challenge Workshop on Multimodal Learning Analytics (Page 373)
Louis-Philippe Morency (University of Southern California)
Sharon Oviatt (Incaa Designs)
Stefan Scherer (University of Southern California)
Nadir Weibel (University of California, San Diego)
Marcelo Worsley (Stanford University)

Keynote 3

Hands and Speech in Space: Multimodal Interaction with Augmented Reality Interfaces (Page 379)
Mark Billinghurst (University of Canterbury)

Oral Session 6: AR, VR & Mobile

Evaluating Dual-view Perceptual Issues in Handheld Augmented Reality: Device vs. User Perspective Rendering (Page 381)
Klen Čopič Pucihar (Lancaster University)
Paul Coulton (Lancaster University)
Jason Alexander (Lancaster University)

MM+Space: n x 4 Degree-of-Freedom Kinetic Display for Recreating Multiparty Conversation Spaces (Page 389)
Kazuhiro Otsuka (NTT)
Shiro Kumano (NTT)
Ryo Ishii (NTT)
Maja Zbogar (NTT)
Junji Yamato (NTT)

Inferring Social Activities with Mobile Sensor Networks (Page 405)
Trinh Minh Tri Do (Idiap Research Institute)
Kyriaki Kalimeri (Fondazione Bruno Kessler)
Bruno Lepri (Fondazione Bruno Kessler)
Fabio Pianesi (Fondazione Bruno Kessler)
Daniel Gatica-Perez (Idiap Research Institute & EPFL)

Oral Session 7: Eyes & Body

Effects of Language Proficiency on Eye-gaze in Second Language Conversations: Toward Supporting Second Language Collaboration (Page 413)
Ichiro Umata (National Institute of Information and Communications Technology)
Seiichi Yamamoto (Doshisha University)
Koki Ijuin (Doshisha University)
Masafumi Nishida (Doshisha University)

Predicting Where We Look from Spatiotemporal Gaps (Page 421)
Ryo Yonetani (Kyoto University)
Hiroaki Kawashima (Kyoto University)
Takashi Matsuyama (Kyoto University)

Automatic Multimodal Descriptors of Rhythmic Body Movement (Page 429)
Marwa Mahmoud (University of Cambridge)
Louis-Philippe Morency (University of Southern California)
Peter Robinson (University of Cambridge)

Multimodal Analysis of Body Communication Cues in Employment Interviews (Page 437)
Laurent Son Nguyen (Idiap Research Institute & EPFL)
Alvaro Marcos-Ramiro (University of Alcala)
Martha Marrón Romera (University of Alcala)
Daniel Gatica-Perez (Idiap Research Institute & EPFL)

ChaLearn Challenge and Workshop on Multi-modal Gesture Recognition

Multi-modal Gesture Recognition Challenge 2013: Dataset and Results (Page 445)
Sergio Escalera (Universitat de Barcelona)
Jordi Gonzàlez (Universitat de Barcelona)
Xavier Baró (Open University of Catalonia & Universitat de Barcelona)
Miguel Reyes (Universitat de Barcelona)
Oscar Lopes (Universitat de Barcelona)
Isabelle Guyon (ChaLearn)
Vassilis Athitsos (University of Texas)
Hugo J. Escalante (INAOE)

Fusing Multi-modal Features for Gesture Recognition (Page 453)
Jiaxiang Wu (Chinese Academy of Sciences)
Jian Cheng (Chinese Academy of Sciences)
Chaoyang Zhao (Chinese Academy of Sciences)
Hanqing Lu (Chinese Academy of Sciences)

A Multi Modal Approach to Gesture Recognition from Audio and Video Data (Page 461)
Immanuel Bayer (University of Konstanz)
Thierry Silbermann (University of Konstanz)

Online RGB-D Gesture Recognition with Extreme Learning Machines (Page 467)
Xi Chen (Aalto University School of Science)
Markus Koskela (Aalto University School of Science)

A Multi-modal Gesture Recognition System Using Audio, Video, and Skeletal Joint Data (Page 475)
Karthik Nandakumar (A*STAR)
Kong Wah Wan (A*STAR)
Siu Man Alice Chan (A*STAR)
Wen Zheng Terence Ng (A*STAR)
Jian Gang Wang (A*STAR)
Wei Yun Yau (A*STAR)

ChAirGest - A Challenge for Multimodal Mid-Air Gesture Recognition for Close HCI (Page 483)
Simon Ruffieux (University of Applied Sciences of Western Switzerland)
Denis Lalanne (University of Fribourg)
Elena Mugellini (University of Applied Sciences of Western Switzerland)

Gesture Spotting and Recognition Using Salience Detection and Concatenated Hidden Markov Models (Page 489)
Ying Yin (Massachusetts Institute of Technology)
Randall Davis (Massachusetts Institute of Technology)

Multi-modal Social Signal Analysis for Predicting Agreement in Conversation Settings (Page 495)
Víctor Ponce-López (Open University of Catalonia & University of Barcelona)
Sergio Escalera (University of Barcelona)
Xavier Baró (Open University of Catalonia)

Multi-modal Descriptors for Multi-class Hand Pose Recognition in Human Computer Interaction Systems (Page 503)
Jordi Abella (Universitat Autònoma de Barcelona)
Raúl Alcaide (Universitat Autònoma de Barcelona)
Anna Sabaté (Universitat Autònoma de Barcelona)
Joan Mas (Universitat Autònoma de Barcelona)
Sergio Escalera (Universitat Autònoma de Barcelona)
Jordi Gonzàlez (Universitat Autònoma de Barcelona)
Coen Antens (Universitat Autònoma de Barcelona)

Emotion Recognition In The Wild Challenge and Workshop

Emotion Recognition in the Wild Challenge 2013 (Page 509)
Abhinav Dhall (Australian National University)
Roland Goecke (University of Canberra & Australian National University)
Jyoti Joshi (University of Canberra & Australian National University)
Michael Wagner (University of Canberra & Australian National University)
Tom Gedeon (Australian National University)

Multiple Kernel Learning for Emotion Recognition in the Wild (Page 517)
Karan Sikka (University of California, San Diego)
Karmen Dykstra (University of California, San Diego)
Suchitra Sathyanarayana (University of California, San Diego)
Gwen Littlewort (University of California, San Diego)
Marian Bartlett (University of California, San Diego)

Partial Least Squares Regression on Grassmannian Manifold for Emotion Recognition (Page 525)
Mengyi Liu (Chinese Academy of Sciences)
Ruiping Wang (Chinese Academy of Sciences)
Zhiwu Huang (Chinese Academy of Sciences)
Shiguang Shan (Chinese Academy of Sciences)
Xilin Chen (Chinese Academy of Sciences)

Emotion Recognition with Boosted Tree Classifiers (Page 531)
Matthew Day (University of York)

Distribution-based Iterative Pairwise Classification of Emotions in the Wild Using LGBP-TOP (Page 535)
Timur R. Almaev (The University of Nottingham)
Anıl Yüce (École Polytechnique Fédérale de Lausanne)
Alexandru Ghitulescu (The University of Nottingham)
Michel F. Valstar (The University of Nottingham)

Combining Modality Specific Deep Neural Networks for Emotion Recognition in Video (Page 543)
Samira Ebrahimi Kahou (Université de Montréal)
Christopher Pal (Université de Montréal)
Xavier Bouthillier (Université de Montréal)
Pierre Froumenty (Université de Montréal)
Çağlar Gülçehre (Université de Montréal)
Roland Memisevic (Université de Montréal)
Pascal Vincent (Université de Montréal)
Aaron Courville (Université de Montréal)
Yoshua Bengio (Université de Montréal)
Raul Chandias Ferrari (Université de Montréal)
Mehdi Mirza (Université de Montréal)
Sébastien Jean (Université de Montréal)
Pierre-Luc Carrier (Université de Montréal)
Yann Dauphin (Université de Montréal)
Nicolas Boulanger-Lewandowski (Université de Montréal)
Abhishek Aggarwal (Université de Montréal)
Jeremie Zumer (Université de Montréal)
Pascal Lamblin (Université de Montréal)
Jean-Philippe Raymond (Université de Montréal)
Guillaume Desjardins (Université de Montréal)
Razvan Pascanu (Université de Montréal)
David Warde-Farley (Université de Montréal)
Atousa Torabi (Université de Montréal)
Arjun Sharma (Université de Montréal)
Emmanuel Bengio (Université de Montréal)
Kishore Reddy Konda (Goethe Universität Frankfurt)
Zhenzhou Wu (McGill University)

Multi Classifier Systems and Forward Backward Feature Selection Algorithms to Classify Emotional Coloured Speech (Page 551)
Sascha Meudt (University of Ulm)
Dimitri Zharkov (University of Ulm)
Markus Kächele (University of Ulm)
Friedhelm Schwenker (University of Ulm)

Emotion Recognition using Facial and Audio Features (Page 557)
Tarun Krishna (The LNM Institute of Information Technology)
Ayush Rai (The LNM Institute of Information Technology)
Shubham Bansal (The LNM Institute of Information Technology)
Shubham Khandelwal (The LNM Institute of Information Technology)
Shubham Gupta (The LNM Institute of Information Technology)
Dushyant Goel (Stony Brook University)

Multimodal Learning Analytics Challenge

Multimodal Learning Analytics: Description of Math Data Corpus for ICMI Grand Challenge Workshop (Page 563)
Sharon Oviatt (Incaa Designs)
Adrienne Cohen (University of Washington)
Nadir Weibel (University of California, San Diego)

Problem Solving, Domain Expertise and Learning: Ground-truth Performance Results for Math Data Corpus (Page 569)
Sharon Oviatt (Incaa Designs)

Automatic Identification of Experts and Performance Prediction in the Multimodal Math Data Corpus through Analysis of Speech Interaction (Page 575)
Saturnino Luz (Trinity College Dublin)

Expertise Estimation Based on Simple Multimodal Features (Page 583)
Xavier Ochoa (Escuela Superior Politécnica del Litoral)
Katherine Chiluiza (Escuela Superior Politécnica del Litoral)
Gonzalo Méndez (Escuela Superior Politécnica del Litoral)
Gonzalo Luzardo (Escuela Superior Politécnica del Litoral)
Bruno Guamán (Escuela Superior Politécnica del Litoral)
Jaime Castells (Escuela Superior Politécnica del Litoral)

Using Micro-Patterns of Speech to Predict the Correctness of Answers to Mathematics Problems: An Exercise in Multimodal Learning Analytics (Page 591)
Kate Thompson (The University of Sydney)

Written and Multimodal Representations as Predictors of Expertise and Problem-solving Success in Mathematics (Page 599)
Sharon Oviatt (Incaa Designs)
Adrienne Cohen (University of Washington)

Workshop Overview

ERM4HCI 2013 - The 1st Workshop on Emotion Representation and Modelling in Human-Computer-Interaction-Systems (Page 607)
Kim Hartmann (Otto von Guericke University)
Ronald Böck (Otto von Guericke University)
Christian Becker-Asano (Albert-Ludwigs-Universität Freiburg)
Jonathan Gratch (University of Southern California)
Björn Schuller (Imperial College London)
Klaus R. Scherer (University of Geneva)

GazeIn'13 - The 6th Workshop on Eye Gaze in Intelligent Human Machine Interaction: Gaze in Multimodal Interaction (Page 609)
Roman Bednarik (University of Eastern Finland)
Hung-Hsuan Huang (Ritsumeikan University)
Yukiko Nakano (Seikei University)
Kristiina Jokinen (University of Helsinki)

Smart Material Interfaces: "Another Step to a Material Future" (Page 611)
Manuel Kretzer (Swiss Federal Institute of Technology)
Andrea Minuto (University of Twente)
Anton Nijholt (University of Twente)