This paper details a procedure for generating a mapping function that transforms an image of a neutral face into one depicting a smile. This is achieved by computing the Facial Expression Shape Model (FESM) and the Facial Expression Texture Model (FETM). These are statistical models of facial expression grounded in an anatomical analysis of facial expressions, the Facial Action Coding System (FACS).
The FESM and the FETM allow for the generation of a subject-independent mapping function. These models provide a robust means of upholding the rules of the FACS and are flexible enough to describe subjects that are not present during the training phase. We use these models in conjunction with several Artificial Neural Networks (ANNs) to generate photo-realistic images of facial expressions.
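The overall pipeline can be sketched in code. The following is an illustrative toy in Python/NumPy, not the authors' implementation: the synthetic data, the model dimensions, and the simple PCA-plus-MLP mapping are all assumptions standing in for the FESM/FETM statistical models and the ANNs described above. It builds a linear statistical model of neutral and smiling shapes, then trains a small network to map neutral-model parameters to smile-model parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: 200 faces, each a flattened vector of
# 30 landmark coordinates lying in a 10-dimensional shape subspace.
z = rng.normal(size=(200, 10))
A = rng.normal(size=(10, 30)) / np.sqrt(10)
neutral = z @ A
# Hypothetical "smile" deformation: a fixed linear warp of the neutral shapes.
D = np.eye(30) + 0.3 * rng.normal(size=(30, 30)) / np.sqrt(30)
smile = neutral @ D

def fit_pca(X, k):
    """Return (mean, top-k modes) of a linear statistical model X ~ mean + b @ P."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

mu_n, P_n = fit_pca(neutral, 10)   # neutral-shape model (FESM-like role)
mu_s, P_s = fit_pca(smile, 10)     # smile-shape model

# Project each face into its model's parameter space.
b_n = (neutral - mu_n) @ P_n.T
b_s = (smile - mu_s) @ P_s.T

# Small one-hidden-layer MLP mapping neutral parameters to smile parameters,
# trained by full-batch gradient descent on squared error.
W1 = rng.normal(scale=0.1, size=(10, 32)); b1 = np.zeros(32)
W2 = rng.normal(scale=0.1, size=(32, 10)); b2 = np.zeros(10)
lr = 0.01
for _ in range(2000):
    h = np.tanh(b_n @ W1 + b1)                 # forward pass
    err = (h @ W2 + b2) - b_s                  # prediction error
    gW2 = h.T @ err / len(b_n); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h**2)             # backprop through tanh
    gW1 = b_n.T @ dh / len(b_n); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# Reconstruct predicted smile shapes from the mapped parameters.
pred_smile = (np.tanh(b_n @ W1 + b1) @ W2 + b2) @ P_s + mu_s
mse = np.mean((pred_smile - smile) ** 2)
```

In the paper's setting the texture model (FETM) would be handled analogously, and the subject-independence claim corresponds to evaluating the learned mapping on parameter vectors of faces held out of the PCA and network training.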