Facial Expression Recognition


Why should we care about recognizing human expressions? While communicating, humans convey only part of their message through words; non-verbal cues, such as facial expressions and pose, carry the rest. Our goal is to create algorithms that understand human expressions, so they can be used in applications and devices to improve their interactions with humans. Imagine how much more fluid your interactions with smart devices would be if they understood your current facial expression: a care robot that acts differently depending on your expression, a TV that recommends movies according to your mood, or an automated system that adjusts its behavior because you are frustrated. We are trying to make that a reality.

However, understanding the subtle changes in expressions, or even more, in emotions, is a daunting task. It requires discriminative descriptors that generalize well (not only across changes in environment but also across races and cultures), as well as classification algorithms that do not incorporate biases. To this end, we need methodologies that advance both the technology to learn from data and the means to create more data to train those algorithms. These two subjects are the main drivers of my research.

I explore the facial expression recognition problem: the identification of human expressions by a machine, commonly labeled according to a taxonomy of discrete expressions. The ultimate goal is to understand the internal states of humans through their external cues (to infer emotions, if you will).
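To make the task concrete, here is a minimal sketch of the final classification step described above: mapping a feature vector extracted from a face image to one label in a discrete expression taxonomy. The label set, feature dimensionality, and the linear softmax classifier are all illustrative assumptions, not the specific models used in my research.

```python
import numpy as np

# Hypothetical taxonomy: Ekman's six basic expressions plus neutral,
# a common label set in facial expression recognition benchmarks.
EXPRESSIONS = ["anger", "disgust", "fear", "happiness",
               "sadness", "surprise", "neutral"]

def classify_expression(features: np.ndarray,
                        weights: np.ndarray,
                        bias: np.ndarray) -> str:
    """Map a face feature vector to a discrete expression label
    using a linear softmax classifier (illustrative only)."""
    logits = features @ weights + bias
    probs = np.exp(logits - logits.max())   # stable softmax
    probs /= probs.sum()
    return EXPRESSIONS[int(np.argmax(probs))]

# Toy usage with random parameters; a real system would learn
# `weights` and `bias` from labeled face images.
rng = np.random.default_rng(0)
feats = rng.normal(size=16)
W = rng.normal(size=(16, len(EXPRESSIONS)))
b = np.zeros(len(EXPRESSIONS))
print(classify_expression(feats, W, b))
```

In practice, the feature extractor (e.g., a convolutional network) and the classifier are trained jointly, but the decision rule at the end is the same: pick the expression with the highest predicted probability.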



  • Development of Recurrent Convolutional Neural Network Architectures for Facial Expression Recognition. São Paulo Research Foundation (FAPESP). 2017.
  • Design and Implementation of Spatiotemporal Local Directional Patterns for Facial Expression Recognition. FONDECYT de Iniciación en Investigación. 2013.