The iVizLab ("Expression Based Interactive or Intelligent Visualization") is grounded in AI techniques, computer graphics, visualization, and user interfaces, but also strives to encompass work from cognition, perception science, and knowledge spaces, with the aim of creating more socially engaging systems that enhance communication, collaboration, and learning.
It is an interdisciplinary research lab that strives to make computational systems bend more to the human experience by incorporating biological, cognitive, and behavioural knowledge models.
Please see the research pages, as well as the publications and people pages, for an overview of our work.
Much of the iVizLab research uses 'knowledge domain' or 'artificial intelligence' approaches.
The lab's work is focused on computational models of human characteristics such as expression, emotion, behaviour, and creativity, including computer facial/character systems and AI-based cognitive modelling systems.
Using Cognitive Science as a basis for our work, we attempt to model aspects of human creativity in AI. Specifically, we use neural networks (and evolutionary systems) in the form of Deep Learning, CNNs, RNNs, and other modern techniques to model aspects of human expression and creativity. We are known for modelling expression semantics and generation of visual art (stills, videos, VR), but have extended our work into expressive forms of linguistic (word-based) narrative. http://ivizlab.sfu.ca/research/deepai
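To make the idea of a neural mapping concrete, here is a deliberately toy sketch: a tiny two-layer network that maps an input feature vector to a pair of bounded expression parameters. The dimensions, weights, and the valence/arousal interpretation are all illustrative assumptions, not the lab's actual models, which use much deeper CNN/RNN architectures.

```python
import numpy as np

# Toy sketch only: a tiny two-layer network standing in for the kind of
# neural mapping described above (the real work uses deep CNNs/RNNs).
rng = np.random.default_rng(0)

# Hypothetical dimensions: 8 input features -> 4 hidden units -> 2 outputs
# (e.g. valence and arousal of a generated expression).
W1 = rng.normal(size=(8, 4))
W2 = rng.normal(size=(4, 2))

def forward(x):
    """Map an input feature vector to two expression parameters in (-1, 1)."""
    h = np.tanh(x @ W1)      # hidden representation
    return np.tanh(h @ W2)   # bounded expression parameters

out = forward(rng.normal(size=8))
print(out.shape)  # prints (2,)
```

The tanh nonlinearity keeps the outputs in (-1, 1), a convenient range for driving continuous expression controls.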
Our lab has extensive experience with different sensing technologies, including eye tracking and facial emotion recognition (DiPaola et al., 2013), as well as gesture tracking and bio-sensing of heart rate and EDA (Song & DiPaola, 2015), which both drive the generative system and can be used to gauge viewers' reception of the generated graphics (still, video, VR). http://ivizlab.sfu.ca/research/biosense
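A minimal sketch of how bio-signals might feed a generative system: normalize raw heart-rate and EDA readings into bounded control parameters. The resting ranges and the arousal/intensity names are assumptions for illustration, not the lab's actual calibration.

```python
def clamp(v, lo=0.0, hi=1.0):
    """Restrict a value to the [lo, hi] range."""
    return max(lo, min(hi, v))

def bio_to_params(heart_rate_bpm, eda_microsiemens):
    """Hypothetical mapping (not the lab's actual formulas): normalize
    heart rate and electrodermal activity into arousal/intensity controls
    that a generative graphics system could consume."""
    # Assumed ranges: 60-120 bpm heart rate, 1-20 microsiemens EDA.
    arousal = clamp((heart_rate_bpm - 60.0) / 60.0)
    intensity = clamp((eda_microsiemens - 1.0) / 19.0)
    return {"arousal": arousal, "intensity": intensity}

params = bio_to_params(90, 10.5)  # mid-range readings -> mid-range controls
```

Clamping keeps noisy sensor spikes from pushing the generative parameters out of their valid range.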
Our open-source toolkit and cognitive research centre on an AI 3D virtual human (an embodied IVA: Intelligent Virtual Agent), a real-time system that can respond emotionally (voice, facial animation, gesture, etc.) to a user in front of it via a host of gestural, motion, and bio-sensor systems. The system uses SmartBody (USC) and MATLAB Simulink as its control and AI system. http://ivizlab.sfu.ca/research/virthuman
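The real-time response loop above can be sketched as an emotional state that blends each sensed user event into the agent's current state while decaying toward neutral. The valence/arousal representation and the linear-blend update are illustrative assumptions, not the SmartBody/Simulink control scheme itself.

```python
from dataclasses import dataclass

@dataclass
class EmotionState:
    valence: float = 0.0  # negative..positive, in [-1, 1]
    arousal: float = 0.0  # calm..excited, in [0, 1]

def update(state, event_valence, event_arousal, decay=0.8):
    """Blend a sensed user event into the agent's emotional state
    (hypothetical scheme: exponential smoothing toward the event)."""
    v = decay * state.valence + (1 - decay) * event_valence
    a = decay * state.arousal + (1 - decay) * event_arousal
    return EmotionState(max(-1.0, min(1.0, v)), max(0.0, min(1.0, a)))

s = EmotionState()
s = update(s, event_valence=1.0, event_arousal=0.9)  # e.g. user smiles
```

Running this update once per sensor frame gives the smooth, gradual emotional shifts a believable embodied agent needs, rather than abrupt jumps on every input event.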