Here's what we found for the FaceResearch tag:
Our lab has extensive experience with a range of sensing technologies, including eye tracking and facial emotion recognition (DiPaola et al., 2013), as well as gesture tracking and biosensing of heart rate and EDA (Song & DiPaola, 2015). These signals both drive the generative system and help us understand viewers' reception of the generated graphics (still, video, VR).
http://ivizlab.sfu.ca/research/biosense
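To give a rough sense of how such bio-signals might feed a generative system, here is a minimal Python sketch. The parameter names, sensor ranges and smoothing constant are illustrative assumptions, not the lab's actual pipeline.

```python
# Minimal sketch: mapping heart rate and EDA readings onto parameters of a
# hypothetical generative visual system. Names, ranges and the smoothing
# constant are illustrative assumptions only.

def normalize(value, lo, hi):
    """Clamp and scale a raw sensor reading into [0, 1]."""
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

def biosignals_to_params(heart_rate_bpm, eda_microsiemens, prev=None, alpha=0.2):
    """Turn one sensor sample into generative parameters, with simple smoothing."""
    arousal = 0.5 * normalize(heart_rate_bpm, 50, 120) + \
              0.5 * normalize(eda_microsiemens, 0.5, 20.0)
    params = {
        "color_saturation": 0.3 + 0.7 * arousal,   # calmer input -> muted palette
        "stroke_speed":     0.1 + 0.9 * arousal,   # higher arousal -> faster marks
        "detail_level":     1.0 - 0.5 * arousal,   # high arousal -> looser detail
    }
    if prev:  # exponential smoothing so the visuals don't jitter with every beat
        params = {k: alpha * v + (1 - alpha) * prev[k] for k, v in params.items()}
    return params

print(biosignals_to_params(85, 6.0))
```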
A system for creating, animating and communicating with 3D face types. This dev toolkit is based on a hierarchical parametric approach, allowing for an additive real-time language of expressions, emotions and lip-sync sequences.
http://ivizlab.sfu.ca/research/facetoolkit
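As a rough illustration of what an additive parametric expression layer can look like, here is a minimal Python sketch. The parameter names, channel weights and clamping range are assumptions for illustration, not the toolkit's actual API.

```python
# Minimal sketch of an additive parametric expression layer, in the spirit of a
# hierarchical parametric face toolkit. Parameter and channel names are
# hypothetical.

NEUTRAL = {"brow_raise": 0.0, "jaw_open": 0.0, "lip_corner_pull": 0.0}

# Each expression or viseme is a small offset over the neutral parameters.
SMILE     = {"lip_corner_pull": 0.8, "brow_raise": 0.1}
SURPRISE  = {"brow_raise": 0.9, "jaw_open": 0.5}
VISEME_AA = {"jaw_open": 0.6}

def blend(*weighted_channels):
    """Additively combine (weight, channel) pairs on top of the neutral face."""
    face = dict(NEUTRAL)
    for weight, channel in weighted_channels:
        for param, value in channel.items():
            face[param] += weight * value
    # Clamp so stacked channels (emotion + lip-sync) stay in a valid range.
    return {p: max(0.0, min(1.0, v)) for p, v in face.items()}

# Emotion and lip-sync tracks can be layered independently and mixed per frame.
frame = blend((0.6, SMILE), (0.3, SURPRISE), (1.0, VISEME_AA))
print(frame)
```

Because channels only add offsets, an emotion track and a lip-sync track can run in parallel and be weighted per frame, which is what makes the language additive.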
genFace uses a genetic algorithm to search through a large multi-dimensional space of correlated faces, making it possible to traverse a path from any face to any other face, morphing through locally similar faces along that path.
http://ivizlab.sfu.ca/research/genface
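A minimal sketch of a genetic-algorithm walk through a face parameter space, in the spirit described above. The 8-dimensional encoding, fitness function and GA settings are illustrative assumptions, not genFace itself.

```python
# Evolve from one face toward another, recording the best face per generation;
# because neighbouring generations differ only slightly, the recorded path
# morphs smoothly through locally similar faces.
import random

DIM = 8  # e.g. brow height, jaw width, nose length, ... (hypothetical encoding)

def fitness(face, target):
    """Closer to the target face (negative squared distance) is fitter."""
    return -sum((a - b) ** 2 for a, b in zip(face, target))

def crossover(a, b):
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(face, rate=0.2, step=0.05):
    return [g + random.uniform(-step, step) if random.random() < rate else g
            for g in face]

def evolve_path(start, target, pop_size=30, generations=60):
    population = [mutate(start, rate=1.0) for _ in range(pop_size)]
    path = [start]
    for _ in range(generations):
        population.sort(key=lambda f: fitness(f, target), reverse=True)
        path.append(population[0])
        parents = population[: pop_size // 2]
        population = [mutate(crossover(random.choice(parents),
                                       random.choice(parents)))
                      for _ in range(pop_size)]
    return path

path = evolve_path([0.2] * DIM, [0.8] * DIM)
print(len(path), path[-1])
```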
Introducing the concept of the Face Multimedia Object (FMO), iFACE provides an integrated framework for working with FMOs by defining and implementing the necessary functionality (3D and 2D) and exposing proper interfaces to a variety of application types (web clients, GUI programs, and interactive users).
http://ivizlab.sfu.ca/research/iface
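As a loose illustration of the idea of one face object exposed through a uniform interface to different client types, here is a minimal Python sketch. Class and method names are hypothetical, not the iFACE API.

```python
# Minimal sketch of a "Face Multimedia Object" style interface: one face object,
# driven by high-level calls, usable from different kinds of clients.
from dataclasses import dataclass, field

@dataclass
class FaceMultimediaObject:
    geometry: str = "head_2d_or_3d"                  # underlying 2D sprite or 3D mesh
    knowledge: dict = field(default_factory=dict)    # personality, mood, etc.
    timeline: list = field(default_factory=list)     # queued behaviours

    def speak(self, text: str):
        self.timeline.append(("speech", text))       # would drive lip-sync + audio

    def express(self, emotion: str, intensity: float):
        self.timeline.append(("emotion", emotion, intensity))

    def render_frame(self, t: float):
        # A GUI program, web client or interactive session would call this each
        # tick; here we simply report what would be rendered.
        return {"time": t, "pending": list(self.timeline)}

face = FaceMultimediaObject()
face.express("happy", 0.7)
face.speak("Hello!")
print(face.render_frame(0.0))
```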
Music-driven Emotionally Expressive Face (MusicFace) creates 'facial choreography' driven by musical input. Structural and emotion data are extracted from the music via a fuzzy rule-based system and remapped to equivalents in face space.
http://ivizlab.sfu.ca/research/musicface
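To make the fuzzy-rule mapping concrete, here is a minimal Python sketch from coarse musical features to an emotional face pose. The features, rules and face parameters are illustrative assumptions, not MusicFace's actual rule base.

```python
# Minimal sketch: fuzzy rules map music features to emotion degrees, which are
# then remapped to (hypothetical) face-space parameters.

def fuzzy_high(x, lo, hi):
    """Degree (0..1) to which x counts as 'high' between lo and hi."""
    return max(0.0, min(1.0, (x - lo) / (hi - lo)))

def music_to_emotion(tempo_bpm, major_mode, loudness_db):
    """Blend rule activations into happiness/sadness/tension degrees."""
    fast = fuzzy_high(tempo_bpm, 60, 160)
    loud = fuzzy_high(loudness_db, -40, -5)
    return {
        "happiness": min(fast, major_mode),          # fast AND major -> happy
        "sadness":   min(1 - fast, 1 - major_mode),  # slow AND minor -> sad
        "tension":   min(loud, 1 - major_mode),      # loud AND minor -> tense
    }

def emotion_to_face(emotion):
    """Remap emotion degrees to face-space parameters."""
    return {
        "lip_corner_pull": emotion["happiness"] - 0.8 * emotion["sadness"],
        "brow_lower":      emotion["tension"],
        "eye_openness":    0.5 + 0.5 * emotion["tension"],
    }

print(emotion_to_face(music_to_emotion(tempo_bpm=140, major_mode=0.9, loudness_db=-10)))
```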
Using new computer modelling techniques, we show that artists, including Rembrandt, use vision-based techniques (lost-and-found edges, center-of-focus techniques) to guide the eye path of the viewer through their paintings in significant ways...
http://ivizlab.sfu.ca/research/rembrandt
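One simple way to model a center-of-focus effect computationally is to keep detail near a chosen focal point and progressively soften edges elsewhere; the Python sketch below does exactly that. The image, kernel and falloff are illustrative, not the modelling used in the actual study.

```python
# Minimal sketch of a "center of focus" model: blur strength grows with distance
# from the focal point, so edges stay crisp near it and are lost further away.
import math

def soften_away_from_focus(image, focus, max_radius):
    """image: 2D list of gray values; focus: (x, y) focal point."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            d = math.hypot(x - focus[0], y - focus[1]) / max_radius
            blur = min(1.0, d)               # 0 at the focal point, 1 far away
            # Average over a 3x3 neighbourhood, mixed in proportion to `blur`.
            neigh = [image[j][i]
                     for j in range(max(0, y - 1), min(h, y + 2))
                     for i in range(max(0, x - 1), min(w, x + 2))]
            local_avg = sum(neigh) / len(neigh)
            out[y][x] = (1 - blur) * image[y][x] + blur * local_avg
    return out

# A tiny test image with a hard vertical edge; the edge stays crisp near the
# focal point (left side) and is softened toward the right.
img = [[0.0] * 4 + [1.0] * 4 for _ in range(8)]
print(soften_away_from_focus(img, focus=(1, 4), max_radius=5)[4])
```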
Our open-source toolkit / cognitive research in AI 3D Virtual Humans (embodied IVAs: Intelligent Virtual Agents): a real-time system that can respond emotionally (voice, facial animation, gesture, etc.) to a user in front of it via a host of gestural, motion and bio-sensor systems. The system uses SmartBody (USC) and MATLAB Simulink as its control and AI system.
http://ivizlab.sfu.ca/research/virthuman
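The kind of real-time loop such a system implies is sketched below in Python: read sensor input, appraise it into an emotional state, and issue behaviour commands to the character layer. The appraisal rules and the command stub are assumptions for illustration; the actual system drives SmartBody through its own interfaces and uses MATLAB Simulink for control, neither of which is shown here.

```python
# Minimal sketch of a sense -> appraise -> act loop for an embodied agent.
import time

def appraise(sensors):
    """Very rough appraisal: map sensed user state to valence/arousal."""
    arousal = min(1.0, sensors.get("motion_energy", 0.0) + sensors.get("eda", 0.0))
    valence = 1.0 if sensors.get("smile_detected") else -0.2
    return {"valence": valence, "arousal": arousal}

def send_behavior(command):
    """Stand-in for the character-animation back end (hypothetical stub)."""
    print("-> behavior:", command)

def control_loop(read_sensors, ticks=3, hz=10):
    for _ in range(ticks):
        state = appraise(read_sensors())
        if state["valence"] > 0.5:
            send_behavior({"face": "smile", "gaze": "user", "gesture": "nod"})
        elif state["arousal"] > 0.7:
            send_behavior({"face": "concern", "gaze": "user"})
        else:
            send_behavior({"face": "neutral", "gaze": "idle"})
        time.sleep(1.0 / hz)

control_loop(lambda: {"motion_energy": 0.3, "eda": 0.1, "smile_detected": True})
```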
Related publications:
Etemad A, Arya A, Parush A, DiPaola S. Computer Animation and Virtual Worlds Journal, in press, doi: 10.1002/cav.1631, 2015. [journal, PDF]
Arya A, Enns J, Jefferies L, DiPaola S. Computer Animation and Virtual Worlds (CAVW) Journal, Vol 17, No 3-4, pp 371-382, 2006. [journal, PDF]
Zammitto V, DiPaola S, Arya A. Intl Conference on Games Research and Development (CyberGames), 8 pages, Beijing, China, 2008. [proceeding]
DiPaola S, Arya A, Chan J. In Proc: E-Learn 2005, 6 pages, Vancouver, 2005. [proceeding]
DiPaola S, Arya A. In Proc: Electronic Visualisation and the Arts, 8 pages, London, England, July 26-31, 2004. [proceeding]
Dalvandi A, Amini P, DiPaola S. In Proc: Electronic Visualisation and the Arts, pp 121-128, British Computer Society, London, 2010. [proceeding, PDF]
DiPaola S, Arya A. In Proc: Conference on Future Play (Future Play '07), Toronto, pp 129-136, ACM, New York, NY, 2007. [proceeding]
DiPaola S. In Proc: IEEE Information Visualization, London, pp 105-109, 2002. [proceeding]
Arya A, DiPaola S. IEEE Transactions on Multimedia, Vol 9, No 6, pp 1137-1146, 2007. [journal, PDF]
Arya A, DiPaola S, Parush A. Intl Journal of Computer Games Technology, Vol 2009, Article ID 462315, pp 1-13, 2009. [journal, PDF]
Seifi H, DiPaola S, Arya A. International Journal of Computer Games Technology, Vol 2011, Article ID 164949, 7 pages, 2011. [journal, PDF]
DiPaola S. In Proc: ACM Intelligent Virtual Agents, Springer, Amsterdam, keynote short paper, 4 pages, September 2009. [proceeding]
Arya A, DiPaola S. Journal of Image and Video Processing, Special Issue on Facial Image Processing, Vol 2007, Article ID 48757, pp 1-12, 2007. [journal, PDF]
DiPaola S. IEEE Journal of Visualization and Computer Animation, Vol 2, No 4, pp 129-131, 1991. [journal, PDF]
DiPaola S, Riebe C, Enns J. Leonardo, Vol 43, No 3, pp 145-151, 2010. [journal, PDF]
Arya A, DiPaola S. 5th Intl Workshop on Image Analysis for Multimedia Interactive Services, 5 pages, Lisbon, Portugal, April 21-23, 2004. [proceeding]
DiPaola S. Computer Facial Animation, by Parke F, Waters K, AK Peters, pp 101-104, 214-219, cover artwork, 1996. [book]
DiPaola S. Computer Facial Animation, by Parke F, Waters K, 2nd Edition, AK Peters, pp 133-136, 225-251, 368-369, 2008. [book]
DiPaola S. Book chapter in Educational Gameplay and Simulation Environments, Editors: Kaufman D, Sauvé L, pp 213-230, 2010. [book]
DiPaola S. In ACM SIGGRAPH 2002 Conference Abstracts and Applications (San Antonio, Texas, July 21-26, 2002), SIGGRAPH '02, ACM, New York, NY, p 207, 2002. [proceeding]
DiPaola S. ACM SIGGRAPH Facial Animation Tutorial, SIGGRAPH '89, 1989. [proceeding]
DiPaola S, Arya A. In Proc: 2006 ACM SIGGRAPH Symposium on Videogames (Sandbox '06), Boston, pp 143-149, ACM, New York, NY, 2006. [proceeding, PDF]
Riebe C, DiPaola S, Enns J. Journal of Vision, Vol 9, No 8, p 368 (abstract), 2009. [journal, PDF]
DiPaola S, Arya A. Intl Conf in Central Europe on Computer Graphics, Visualization and Computer Vision, 8 pages, 2006. [proceeding]
Arya A, DiPaola S, Jefferies L, Enns J. Intl Conf on Computer Graphics, Visualization and Computer Vision (WSCG 2006), 8 pages, Czech Republic, January 30 - February 3, 2006. [proceeding]