by Steve DiPaola, Nilay Yalcin, Ulysses Bernardet, Maryam Saberi, Michael Nixon

AI Affective Virtual Human




About
Our open-source toolkit and cognitive research platform for an AI 3D virtual human (an embodied IVA: Intelligent Virtual Agent): a real-time system that responds emotionally (via voice, facial animation, gesture, etc.) to the user in front of it, drawing on a host of gestural, motion, and bio-sensor systems. Several in-lab AI systems detect the user's speech and produce coherent answers through speech, expression, and gesture. The system uses the SmartBody (USC) API, whose developers we have collaborated with for years.
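As a rough illustration of how the character layer can be driven (this is a minimal sketch assuming SmartBody's embedded Python scripting interface, not our production code; 'ChrBrad' is one of SmartBody's sample characters):

    # Minimal sketch: send BML through SmartBody's scripting interface to
    # make a character speak and emote. The SmartBody runtime provides the
    # global 'scene' object in its embedded Python environment.

    bml = scene.getBmlProcessor()

    # Speech plus a smile, expressed as standard BML
    bml.execBML('ChrBrad', '<speech type="text/plain">Nice to meet you.</speech>')
    bml.execBML('ChrBrad', '<face type="facs" au="12" amount="0.8"/>')  # AU 12: lip-corner pull (smile)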

The Research
Our affective, real-time 3D AI virtual human setup combines face emotion recognition, movement recognition, and data-glove recognition.
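As an illustration of how these recognition channels could feed the character, here is a hedged sketch of fusing per-channel affect estimates into a single valence/arousal pair (the function name, channel names, weights, and value ranges are all hypothetical, for illustration only):

    # Hypothetical sketch: fuse affect estimates from the three input
    # channels (face, movement, data glove) into one valence/arousal pair
    # in [-1, 1]. Channel names and weights are illustrative only.

    def fuse_affect(channels, weights=None):
        """channels: dict mapping channel name -> (valence, arousal).
        Returns a weighted-average (valence, arousal) pair."""
        weights = weights or {'face': 0.5, 'movement': 0.3, 'glove': 0.2}
        total = sum(weights[c] for c in channels)
        valence = sum(weights[c] * v for c, (v, a) in channels.items()) / total
        arousal = sum(weights[c] * a for c, (v, a) in channels.items()) / total
        return valence, arousal

    # Example: a smiling face, calm movement, a relaxed glove reading
    print(fuse_affect({'face': (0.7, 0.4), 'movement': (0.2, 0.1), 'glove': (0.1, 0.0)}))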

Setup and Results
The virtual character's behaviour is controlled by modular controllers for three modalities: facial expressions, postures/gestures, and gaze movements. For each modality, three kinds of behaviour are defined: 1) idle, 2) communicative, and 3) reactive. Idle and communicative behaviours are generated by the event-based system, while reactive behaviour is controlled by the emotionally-continuous system, as sketched below.
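A minimal sketch of this 3-by-3 controller layout (class, function, and string names here are ours, for illustration; the real system runs the event-based and emotionally-continuous subsystems described above):

    # Illustrative sketch of the 3 x 3 controller layout: three modalities
    # (face, posture/gesture, gaze), each with idle, communicative, and
    # reactive behaviours. Idle and communicative behaviours come from the
    # event-based subsystem; reactive behaviour comes from the
    # emotionally-continuous subsystem. All names are hypothetical.

    MODALITIES = ('face', 'gesture', 'gaze')

    class EventBasedSystem:
        def behaviour(self, modality, kind):        # kind: 'idle' or 'communicative'
            return f'{modality}:{kind} (event-based)'

    class EmotionallyContinuousSystem:
        def behaviour(self, modality, affect):      # affect: e.g. (valence, arousal)
            return f'{modality}:reactive {affect} (continuous)'

    def control_step(event_sys, continuous_sys, user_event, affect):
        """One update: pick behaviours per modality from the right subsystem."""
        commands = []
        for m in MODALITIES:
            kind = 'communicative' if user_event else 'idle'
            commands.append(event_sys.behaviour(m, kind))
            commands.append(continuous_sys.behaviour(m, affect))
        return commands

    for cmd in control_step(EventBasedSystem(), EmotionallyContinuousSystem(),
                            user_event=True, affect=(0.4, 0.2)):
        print(cmd)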

Downloads and Links


Papers / Posters
  PDF Poster: AI Avatar. Poster from the Stanford VR and Behavioral Change conference, 2017.
  PDF: IVA 2016a. Simulink Toolbox for Real-time Virtual Character Control
  PDF: IVA 2016b. An Architecture for Biologically Grounded Real-time Reflexive Behavior
  PDF: IVA 2015. A Framework for Exogenous and Endogenous Reflexive Behavior in Virtual Characters

Additional Media and Code
  Media / Code Repository: code and media for our RealAct system

Contact:

Steve DiPaola :: sdipaola @ sfu.ca
cell phone (Vancouver, BC) 604.719.6579