Nilay Ozge Yalcin

Building on a background in Cognitive Science, Nilay's PhD research focuses on the role of emotions, and especially empathy, in multi-modal human-machine communication. She takes an interdisciplinary approach that combines computer science methods with theories from psychology, linguistics, and sociology to understand and explore the mechanisms of human communication and dialog. Nilay is developing an Affective Intelligent Agent system that acts as an interactive assistant for language-based communication. She investigates social, empathetic, and affective behavior, as well as the notion of personality, in artificial agents and their effects on human-agent interaction. She also works on computational abstraction techniques that anonymize data without losing its emotional content.

She is involved in two projects at iVizLab and is a teaching assistant for the COGS 100 course.

Contact: oyalcin@sfu.ca

Position: PhD Researcher

Research

Deep Learning AI Creativity For Visuals / Words

Using Cognitive Science as a basis for our work, we attempt to model aspects of human creativity in AI. Specifically, we use neural networks (and evolutionary systems) in the form of deep learning, CNNs, RNNs, and other modern techniques to model aspects of human expression and creativity. We are known for modelling expression semantics and generating visual art (stills, videos, VR), but have extended our work into expressive forms of linguistic (word-based) narrative; a toy sketch of that word-based direction follows.
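As a rough illustration of the word-based side of this work, the sketch below shows a minimal character-level RNN in PyTorch. It is an assumed, simplified stand-in for the kinds of recurrent generative models mentioned above, not the lab's actual architecture; the model size, sampling scheme, and names are ours.

import torch
import torch.nn as nn

class CharRNN(nn.Module):
    """Character-level recurrent language model (illustrative only)."""
    def __init__(self, vocab_size, hidden_size=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.rnn = nn.GRU(hidden_size, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, vocab_size)

    def forward(self, x, h=None):
        e = self.embed(x)          # (batch, seq, hidden)
        y, h = self.rnn(e, h)      # recurrent pass over the sequence
        return self.out(y), h      # logits over the next character

def sample(model, start_idx, length, temperature=0.8):
    """Draw characters one at a time; temperature shapes expressiveness."""
    model.eval()
    x, h, out = torch.tensor([[start_idx]]), None, [start_idx]
    with torch.no_grad():
        for _ in range(length):
            logits, h = model(x, h)
            probs = torch.softmax(logits[0, -1] / temperature, dim=-1)
            idx = torch.multinomial(probs, 1).item()
            out.append(idx)
            x = torch.tensor([[idx]])
    return out

model = CharRNN(vocab_size=64)     # untrained; for illustration only
print(sample(model, start_idx=0, length=20))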

BioSensing 2D / 3D / VR Systems

Our lab has extensive experience in using different sensing technologies, including eye tracking and facial emotion recognition (DiPaola et al., 2013), as well as gesture tracking and heart-rate and EDA biosensing (Song & DiPaola, 2015), to affect generative computer graphics systems. These biofeedback systems can be used to further understand the body's response to generated stimuli (photos, video, VR), as the sketch after this paragraph suggests. They can also be used in conjunction with other systems, such as physical testing and psychological evaluation, to help visualize the body's systems and responses.
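As an assumed, simplified sketch of how biosignals might drive generative visuals: the sensor ranges, arousal estimate, and parameter names below are illustrative inventions, not the pipeline from the cited papers.

from dataclasses import dataclass

@dataclass
class BioSample:
    heart_rate: float  # beats per minute
    eda: float         # electrodermal activity, in microsiemens

def arousal_estimate(s: BioSample) -> float:
    """Crude 0-1 arousal proxy from heart rate and EDA (assumed ranges)."""
    hr = min(max((s.heart_rate - 60.0) / 60.0, 0.0), 1.0)   # 60-120 bpm
    eda = min(max(s.eda / 20.0, 0.0), 1.0)                  # 0-20 uS
    return 0.5 * hr + 0.5 * eda

def visual_params(s: BioSample) -> dict:
    """Map arousal to parameters a generative renderer might consume."""
    a = arousal_estimate(s)
    return {
        "color_saturation": 0.3 + 0.7 * a,  # calmer viewer -> muted palette
        "motion_speed": 0.5 + 1.5 * a,      # higher arousal -> faster motion
        "particle_count": int(200 + 800 * a),
    }

print(visual_params(BioSample(heart_rate=95.0, eda=8.0)))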

AI Affective Virtual Human

Our affective, real-time 3D AI virtual human project combines facial emotion recognition and movement recognition with full AI-driven speech, gesture, and reasoning.
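A toy perceive-appraise-express loop, sketched below, suggests the general shape of such a system; all module names, thresholds, and response tables are hypothetical and are not the project's actual components.

from enum import Enum

class Emotion(Enum):
    NEUTRAL = "neutral"
    JOY = "joy"
    SADNESS = "sadness"

def perceive(face_valence: float) -> Emotion:
    """Stand-in for facial emotion recognition output (valence in [-1, 1])."""
    if face_valence > 0.3:
        return Emotion.JOY
    if face_valence < -0.3:
        return Emotion.SADNESS
    return Emotion.NEUTRAL

def respond(user_emotion: Emotion) -> tuple[str, str]:
    """Pick an empathetic utterance and gesture matching the user's state."""
    table = {
        Emotion.JOY: ("That's wonderful to hear!", "smile"),
        Emotion.SADNESS: ("I'm sorry, that sounds hard.", "lean_in"),
        Emotion.NEUTRAL: ("Tell me more.", "nod"),
    }
    return table[user_emotion]

utterance, gesture = respond(perceive(face_valence=-0.6))
print(utterance, "|", gesture)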

Publications

A computational model of empathy for interactive agents
Journal Article: Biologically Inspired Cognitive Architectures, 2018

N. Yalcin, S. DiPaola

Elsevier

Embodied Interactions with a Sufi Dhikr Ritual: Negotiating Privacy and Transmission of Intangible Cultural Heritage in “Virtual Sama”
Conference Proceedings: Electronic Visualisation and the Arts, British Computer Society, July 2017
pp. 365-372, DOI: http://dx.doi.org/10.14236/ewic/EVA2017.73

A. Kadir, K. Hennessy, N. Yalcin, S. DiPaola

London, UK

British Computer Society

Engagement with Artificial Intelligence through Natural Interaction Models
Conference Proceedings: Electronic Visualisation and the Arts, British Computer Society, July 2017
pp. 296-303, DOI: http://dx.doi.org/10.14236/ewic/EVA2017.60

S. Salevati, N. Yalcin, S. DiPaola

London, UK

BCS Learning and Development Ltd.