Using Cognitive Science as a basis for our work, we attempt to model aspects of human creativity in AI. Specifically, we use neural networks and evolutionary systems, in the form of Deep Learning, CNNs, RNNs and other modern techniques, to model aspects of human expression and creativity. We are known for modelling expression semantics and the generation of visual art (stills, video, VR), and have extended our work into expressive forms of linguistic (word-based) narrative.
Our lab has extensive experience in using different sensing technologies, including eye tracking and facial emotion recognition (DiPaola et al., 2013), as well as gesture tracking, heart rate and EDA bio-sensing (Song & DiPaola, 2015), to affect generative computer graphics systems. These bio-feedback systems can be used to further understand the body's response to generated stimuli (photos, video, VR). They can also be used in conjunction with other methods, such as physical testing and psychological evaluation, to help visualize the body's systems and responses.
What is abstraction? Can AI techniques model the semantics of an idea, object, or entity, such that this understanding allows the meaning itself to be abstracted? We use several AI techniques, including genetic programming, neural nets and Deep Learning, to explore abstraction in its many forms, mainly in the visual and narrative arts.
This research uses creative evolutionary systems to explore computer creativity for various applications (in our first pass, evolving a family of abstract portrait painter programs). We use a relatively new form of Genetic Programming (GP) called Cartesian Genetic Programming (CGP), first developed by Julian Miller.
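To make the CGP representation concrete, here is a minimal illustrative sketch (not our actual painter system): a genotype is a fixed-length list of integers encoding, for each node in a feed-forward grid, a function gene and two connection genes, plus a final output gene. All names and the tiny function set below are hypothetical choices for illustration.

```python
import random

# Candidate node functions (protected division avoids divide-by-zero).
FUNCS = [lambda a, b: a + b,
         lambda a, b: a - b,
         lambda a, b: a * b,
         lambda a, b: a if abs(b) < 1e-9 else a / b]

N_INPUTS = 2   # program inputs
N_NODES = 6    # internal CGP nodes, in a single feed-forward row

def random_genotype(rng):
    """Build a random genotype: 3 genes per node + 1 output gene."""
    genes = []
    for node in range(N_NODES):
        max_src = N_INPUTS + node  # nodes may only connect backwards
        genes += [rng.randrange(len(FUNCS)),   # function gene
                  rng.randrange(max_src),      # first input connection
                  rng.randrange(max_src)]      # second input connection
    genes.append(rng.randrange(N_INPUTS + N_NODES))  # output gene
    return genes

def evaluate(genes, inputs):
    """Decode and execute the genotype on a tuple of input values."""
    values = list(inputs)
    for node in range(N_NODES):
        f, a, b = genes[3 * node: 3 * node + 3]
        values.append(FUNCS[f](values[a], values[b]))
    return values[genes[-1]]

rng = random.Random(42)
genotype = random_genotype(rng)
result = evaluate(genotype, (3.0, 4.0))
```

A key CGP property visible even in this sketch: the output gene may ignore most nodes, so much of the genotype is neutral "junk" that mutation can later activate, which is thought to help evolvability.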
Portrait artists, and painters in general, have over centuries developed a little-understood, intuitive and open methodology that exploits cognitive mechanisms in the human perception and visual system.
Using new visual computer modelling techniques, we show that artists use vision-based techniques (lost-and-found edges, centre-of-focus techniques) to guide the viewer's eye path through their paintings in significant ways.
Our long-range research project is a visual development system for exploring face space, both in terms of facial types and animated expressions. This development toolkit is based on a hierarchical parametric approach, which gives us an additive language of hierarchical expressions, emotions and lip-sync sequences.
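The additive parametric idea can be sketched as follows. This is a hypothetical illustration, not our toolkit's API: named parameter sets for expressions and lip-sync visemes are layered additively on top of a neutral base, with values clamped to a normalized range.

```python
# Neutral base pose: all facial parameters at rest (names are illustrative).
BASE = {"brow_raise": 0.0, "mouth_open": 0.0, "lip_corner": 0.0}

# Expression and viseme layers, each a sparse set of parameter offsets.
SMILE = {"lip_corner": 0.75, "mouth_open": 0.25}
SURPRISE = {"brow_raise": 0.9, "mouth_open": 0.5}
VISEME_AH = {"mouth_open": 0.5}   # lip-sync layer for an "ah" sound

def blend(base, *layers):
    """Additively combine parameter layers onto a base pose, clamped to [0, 1]."""
    pose = dict(base)
    for layer in layers:
        for name, offset in layer.items():
            pose[name] = min(1.0, max(0.0, pose[name] + offset))
    return pose

# A smiling face speaking "ah": expression and lip-sync layers compose.
pose = blend(BASE, SMILE, VISEME_AH)
```

Because layers are sparse and additive, an emotion and a lip-sync sequence can be authored independently and composed at runtime, which is the practical appeal of this kind of hierarchical scheme.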
The rapid growth of visual communication systems, from video phones to virtual agents in games and web services, has brought a new generation of multimedia systems that we refer to as face-centric. Such systems are mainly concerned with the multimedia representation of facial activities.
Actual screen shot from our Virtual Beluga Interactive Prototype, showing realistically swimming belugas in a wild grouping (pod) created via 3D real-time graphics and artificial intelligence systems.