
Keynotes

Several keynote talks will be presented at HBU. This year's keynote speakers include:

Prof. Rachael Jack, University of Glasgow

Rachael Jack is a Professor at the Institute of Neuroscience & Psychology, University of Glasgow, Scotland. Jack’s research has produced significant advances in understanding facial expressions of emotion within and across cultures using a novel interdisciplinary approach that combines psychophysics, social psychology, dynamic 3D computer graphics, and mathematical psychology. Most notably, Jack’s work has revealed cultural specificities in facial expressions of emotion; that four, not six, expressive patterns are common across cultures; and that facial expressions transmit information in a hierarchical structure over time. Together, Jack’s work has challenged the dominant view that six basic facial expressions of emotion are universal, leading to a new theoretical framework of facial expression communication that Jack’s team is now transferring to digital agents to synthesize culturally sensitive social avatars and robots. Jack’s work has featured in several high-profile scientific outlets (e.g., Annual Review of Psychology, Current Biology, Psychological Science, PNAS, TICS). Jack is currently funded by the European Research Council (ERC) to lead the Computing the Face Syntax of Social Face Signals research program, which will deliver a formal model of human social face signalling with transference to social robotics. Jack is the recipient of several awards, including the American Psychological Association (APA) New Investigator award, the Social and Affective Neuroscience Society (SANS) Innovation award, and the British Psychological Society (BPS) Spearman Medal. Jack is Associate Editor at the Journal of Experimental Psychology: General and the Journal of Experimental Social Psychology, serves on several editorial boards (e.g., Emotion), and holds several roles on conference committees/boards, including the Society for Affective Sciences, IEEE Automatic Face & Gesture Recognition, Intelligent Virtual Agents, and the Vision Science Society. Jack is also Chair of the Association for Psychological Science (APS) Internationalization Committee.

Title: Modelling Dynamic Facial Expressions Using Psychological Science

Understanding how facial expressions communicate the myriad social messages used in human social interaction remains challenging due to the number and complexity of the expressions the face can make. Traditional approaches, which relied primarily on theory-driven methods and hypothesis testing, advanced knowledge but also restricted understanding through Western-centric biases. Now, new technologies and data-driven methods developed in interdisciplinary teams alleviate these constraints, providing real traction and novel insights. Here, we showcase one such approach, which combines social and cultural psychology, vision science, mathematical psychology, and 3D dynamic computer graphics to objectively model dynamic facial expressions of a wide range of nuanced social messages, including across cultures. Analyses of these facial expression models further enable precise characterization of the face movements that are cross-cultural and those that are culture-specific, and of the social information they convey, including broad information (e.g., valence, arousal) and specific messages (e.g., delighted, irritated). Together, our work both challenges longstanding views of universality and enables the building of generative, syntactical models of social face signaling that can inform the design of digital agents and affective computing applications.
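
To make the data-driven approach concrete, the sketch below illustrates the basic reverse-correlation logic in Python: random combinations of hypothetical facial action units (AUs) are shown to an observer, and the AUs most diagnostic of a given judgment are estimated from the responses. The trial count, AU indices, and the simulated observer are illustrative assumptions, not the actual experimental setup.

```python
# Minimal, illustrative sketch of the data-driven "reverse correlation" idea:
# sample random facial action unit (AU) combinations, collect (here, simulated)
# perceiver judgments, and estimate which AUs drive a given category.
# All names and numbers are hypothetical, not the speaker's actual stimuli.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_aus = 2000, 42           # hypothetical numbers of trials and AUs

# Each trial activates a random subset of AUs (1 = present, 0 = absent).
stimuli = rng.integers(0, 2, size=(n_trials, n_aus))

# Simulated observer: responds "happy" when hypothetical AUs 6 and 12 co-occur.
responses = (stimuli[:, 6] & stimuli[:, 12]).astype(float)

# Reverse correlation: compare AU activation rates on "happy" vs. other trials.
p_given_yes = stimuli[responses == 1].mean(axis=0)
p_given_no = stimuli[responses == 0].mean(axis=0)
diagnostic = p_given_yes - p_given_no  # large values = AUs diagnostic of "happy"

print(np.argsort(diagnostic)[-2:])    # recovers AUs 6 and 12 in this toy setup
```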

Assoc. Prof. Louis-Philippe Morency, Carnegie Mellon University

I am a tenure-track faculty member at the CMU Language Technologies Institute, where I lead the Multimodal Communication and Machine Learning Laboratory (MultiComp Lab). I was previously Research Faculty at the USC Computer Science Department. I received my Ph.D. in Computer Science from the MIT Computer Science and Artificial Intelligence Laboratory. My research focuses on building the computational foundations that enable computers to analyze, recognize, and predict subtle human communicative behaviors during social interactions. Central to this research effort is the technical challenge of multimodal machine learning: the mathematical foundations for studying heterogeneous multimodal data and the contingencies often found between modalities. This multidisciplinary research topic spans the fields of multimodal interaction, social psychology, computer vision, machine learning, and artificial intelligence, and has many applications in areas as diverse as medicine, robotics, and education.

Title: Multimodal AI: Understanding Human Behaviors

Human face-to-face communication is a little like a dance: participants continuously adjust their behaviors based on verbal and nonverbal cues from the social context. Today’s computers and interactive devices still lack many of these human-like abilities to hold fluid and natural interactions. Leveraging recent advances in machine learning, audio-visual signal processing, and computational linguistics, my research focuses on creating computational technologies able to analyze, recognize, and predict subtle human communicative behaviors in social context. Central to this research effort is the introduction of new probabilistic models able to learn the temporal and fine-grained latent dependencies across behaviors, modalities, and interlocutors. In this talk, I will present some of our recent achievements in multimodal machine learning, addressing five core challenges: representation, alignment, fusion, translation, and co-learning.
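
As a simplified illustration of the fusion challenge mentioned above, the sketch below contrasts early fusion (concatenating aligned per-modality features) with late fusion (combining per-modality decisions). The feature dimensions, random weights, and alignment assumptions are hypothetical stand-ins for learned models, not the speaker's actual systems.

```python
# Illustrative early vs. late fusion of per-modality features for one utterance.
import numpy as np

rng = np.random.default_rng(1)
audio = rng.normal(size=(8, 40))     # 8 time steps of hypothetical audio features
visual = rng.normal(size=(8, 20))    # aligned visual features (same 8 steps)
text = rng.normal(size=(8, 30))      # aligned language embeddings

# Early fusion: concatenate aligned modality features, then apply one model.
early = np.concatenate([audio, visual, text], axis=1)   # shape (8, 90)
w_early = rng.normal(size=90)                           # stand-in for a trained model
score_early = early.mean(axis=0) @ w_early              # single utterance-level score

# Late fusion: score each modality separately, then combine the decisions.
scores = [audio.mean(axis=0) @ rng.normal(size=40),
          visual.mean(axis=0) @ rng.normal(size=20),
          text.mean(axis=0) @ rng.normal(size=30)]
score_late = float(np.mean(scores))

print(score_early, score_late)
```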

Assoc. Prof. Sidney D’Mello, University of Colorado Boulder

Sidney D’Mello (PhD in Computer Science, 2009) is an Associate Professor in the Institute of Cognitive Science and the Department of Computer Science at the University of Colorado Boulder. His work lies at the intersection of the computing, cognitive, affective, social, and learning sciences. D’Mello is interested in the dynamic interplay between cognition and emotion while individuals and groups engage in complex real-world activities. He applies insights gleaned from this basic research program to develop intelligent technologies that help people achieve their fullest potential by coordinating what they think and feel with what they know and do. D’Mello has co-edited seven books and published more than 300 journal papers, book chapters, and conference proceedings (16 of which received awards at international conferences; four others were award finalists). His work has been funded by numerous grants; he serves or has served as associate editor for six journals and on the editorial boards of four others, and he was elected to the leadership teams of three professional organizations.

Title: Understanding Human Functioning & Enhancing Human Potential through Computational Methods

It is generally accepted that computational methods can complement traditional approaches to understanding human functioning, including thoughts, feelings, behaviors, and social interactions. I suggest that their utility extends beyond a mere complementary role. They serve a necessary role when data is too large for manual analysis, an opportunistic role by addressing questions that are beyond the purview of traditional methods, and a promissory role in facilitating change when fully automated computational models are embedded in closed-loop intelligent systems. Multimodal computational approaches provide further benefits by affording analysis of disparate constructs emerging across multiple types of interactions in diverse contexts. To illustrate, I will discuss a research program that uses linguistic, paralinguistic, behavioral, and physiological signals for the analysis of individual, small-group, multi-party, and human-computer interactions in the lab and in the wild, with the goals of understanding cognitive, noncognitive, and socio-affective-cognitive processes while improving human efficiency, engagement, and effectiveness. I will also discuss how these ideas align with our new NSF National AI Institute on Student-AI Teaming and how you can get involved in the research.

Assoc. Prof. Lisa Anthony, University of Florida

Lisa Anthony is presently an associate professor with tenure in the Department of Computer & Information Science & Engineering (CISE) at the University of Florida in Gainesville, FL. She holds a BS and MS in Computer Science (Drexel University, 2002) and a PhD in Human-Computer Interaction (Carnegie Mellon University, 2008). After her PhD, Lisa spent two years in an industry research and development laboratory working on DARPA- and ONR-funded user-centered interface projects, followed by two years working on multimodal interaction at the University of Maryland, Baltimore County. Her current research interests include understanding how children can make use of advanced interaction techniques and how to develop technology to support them in a variety of contexts, including education, healthcare, and serious games. She has received funding from industry and government sources, including the prestigious National Science Foundation CAREER award. Her PhD dissertation investigated the use of handwriting input for middle school math tutoring software, and her simple and accurate multistroke gesture recognizers, called $N and $P, are well known in the field of interactive surface gesture recognition (a simplified sketch of the $P idea appears below). Her work has been recognized by the research community with four best paper awards, including one in 2013 at the top HCI conference, the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI).
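
For readers unfamiliar with $P, the sketch below gives a simplified, illustrative version of its core idea: gestures are treated as unordered point clouds and compared by greedy nearest-point matching. It omits the published algorithm's resampling, multiple matching orders, and weighting, and the templates are hypothetical; it is not the authors' reference implementation.

```python
# Simplified sketch of the point-cloud matching idea behind the $P recognizer.
import math

def normalize(points):
    """Toy normalization: translate the centroid to the origin, scale to a unit box.
    (The published $P also resamples every gesture to a fixed number of points.)"""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    pts = [(x - cx, y - cy) for x, y in points]
    size = max(max(abs(x) for x, _ in pts), max(abs(y) for _, y in pts)) or 1.0
    return [(x / size, y / size) for x, y in pts]

def greedy_cloud_distance(a, b):
    """Match each point in a to its nearest unmatched point in b and sum distances.
    Assumes both clouds have the same number of points (real $P resamples to ensure this)."""
    unmatched = list(b)
    total = 0.0
    for x, y in a:
        j = min(range(len(unmatched)),
                key=lambda k: (unmatched[k][0] - x) ** 2 + (unmatched[k][1] - y) ** 2)
        px, py = unmatched.pop(j)
        total += math.hypot(px - x, py - y)
    return total

def recognize(candidate, templates):
    """Return the template name with the smallest cloud distance to the candidate."""
    c = normalize(candidate)
    return min(templates, key=lambda name: greedy_cloud_distance(c, normalize(templates[name])))

# Hypothetical templates: a horizontal line vs. a diagonal stroke.
templates = {"line": [(i, 0) for i in range(10)], "diag": [(i, i) for i in range(10)]}
print(recognize([(i, 0.1 * i) for i in range(10)], templates))  # closer to "line"
```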

Title: Understanding, Designing, and Developing Natural User Interactions for Children

The field of Natural User Interaction (NUI) focuses on allowing users to interact with technology through the range of human abilities, such as touch, voice, vision, and motion. Children are still developing their cognitive and physical capabilities, which creates unique design challenges and opportunities for interacting in these modalities. In this talk, Lisa Anthony will describe her research projects on (a) understanding children’s expectations and abilities with respect to NUIs and (b) designing and developing new multimodal NUIs for children in a variety of contexts. Examples of projects she will present include her NSF-funded work on understanding touch and gesture input behaviors by children on mobile devices, multimodal interaction, and the design of interactive spherical displays for learning.

Prof. Mohamed Chetouani, Institute of Intelligent Systems and Robotics

Mohamed Chetouani is presently a full professor at the Institute of Intelligent Systems and Robotics, Sorbonne University. He received the M.S. degree in Robotics and Intelligent Systems from UPMC, Paris, in 2001, and the PhD degree in Speech Signal Processing from the same university in 2004. In 2005, he was an invited Visiting Research Fellow at the Department of Computer Science and Mathematics of the University of Stirling (UK). Prof. Chetouani was also an invited researcher at the Signal Processing Group of Escola Universitaria Politecnica de Mataro, Barcelona (Spain). His research activities, carried out at the Institute for Intelligent Systems and Robotics, cover the areas of social signal processing and personal robotics through non-linear signal processing, feature extraction, pattern classification, and machine learning. He is also the co-chairman of the French Working Group on Human-Robots/Systems Interaction (GDR Robotique CNRS) and a Deputy Coordinator of the Topic Group on Natural Interaction with Social Robots (euRobotics). He is the Deputy Director of the Laboratory of Excellence SMART Human/Machine/Human Interactions in the Digital Society.

Title: Social Learning Agents: Role of Human Behaviors