I designed a web-based expressive face to experiment with emotions and human reactions to computers that display emotion. I implemented the face using a high-fidelity spring model of the anatomical musculature. Each muscle group contains several springs, each connected from the muscle group's anchor to one of the control points on the feature that muscle group controls. As a muscle group engages, each spring's stiffness (k) and free length vary linearly with the engagement level. Additionally, each feature point is anchored to its default location by a small spring. A simulation of the connection points and behaviors of the major facial muscle groups drives the locations of control points on the drawing primitives, producing low-level, muscle-based movement of the displayed face. Because the system is simulated in real time, I include some viscous damping on each point to reduce oscillations.
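The per-spring behavior described above can be sketched roughly as follows. This is an illustrative reconstruction, not the project's actual code: the names (`springParams`, `step`) and the explicit-Euler integration scheme are my assumptions; the project only specifies that stiffness and free length vary linearly with engagement and that damping is applied.

```typescript
interface Point { x: number; y: number; vx: number; vy: number; }

interface Spring {
  k0: number;     // stiffness at zero engagement (assumed parameterization)
  k1: number;     // stiffness at full engagement
  rest0: number;  // free length at zero engagement
  rest1: number;  // free length at full engagement
}

// Stiffness and free length vary linearly with engagement in [0, 1].
function springParams(s: Spring, engagement: number): { k: number; rest: number } {
  return {
    k: s.k0 + (s.k1 - s.k0) * engagement,
    rest: s.rest0 + (s.rest1 - s.rest0) * engagement,
  };
}

// One explicit-Euler step for a control point pulled by a muscle spring
// toward its anchor, with viscous damping to suppress oscillation.
function step(p: Point, anchor: { x: number; y: number }, s: Spring,
              engagement: number, damping: number, dt: number): void {
  const { k, rest } = springParams(s, engagement);
  const dx = anchor.x - p.x;
  const dy = anchor.y - p.y;
  const dist = Math.hypot(dx, dy) || 1e-9;  // avoid division by zero
  const f = k * (dist - rest);              // Hooke's law along the spring axis
  p.vx += (f * dx / dist - damping * p.vx) * dt;
  p.vy += (f * dy / dist - damping * p.vy) * dt;
  p.x += p.vx * dt;
  p.y += p.vy * dt;
}
```

In a real-time loop, each feature point would receive one such update per muscle spring acting on it (plus the small spring tying it to its default location) each frame.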
In this project, I take an animator's approach to expressing emotion on a face. An animator can convey high-fidelity emotion with just a few lines; fully exploiting that emotional bandwidth programmatically is much harder. This project attempts to understand how to increase bandwidth utilization by pairing a very low-fidelity front end (just a few lines) with a very high-fidelity back end. I studied Scott McCloud's adaptation of Paul Ekman's description of the fundamental human emotions (joy, sadness, anger, disgust, fear, and surprise). McCloud describes in detail how the face uses the anatomical muscle groups mentioned above to create each of the expressions Ekman described. Additionally, McCloud gives a description of how the various fundamental emotions mix together to make more complex emotions (e.g. joy + sadness = nostalgia).
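One way the emotion-mixing idea could feed the muscle simulation is sketched below. This is a hypothetical illustration, assuming each fundamental emotion maps to a set of muscle-group engagement levels and that mixing is a weighted, clamped sum; the specific muscle names and engagement values are invented for the example, not taken from McCloud or the project.

```typescript
type Engagements = Record<string, number>;

// Assumed mapping from fundamental emotions to muscle-group engagements
// in [0, 1]; the values here are placeholders for illustration.
const fundamentals: Record<string, Engagements> = {
  joy:     { zygomaticMajor: 0.9, orbicularisOculi: 0.6 },
  sadness: { frontalisInner: 0.7, depressorAnguli: 0.8 },
};

// Blend weighted emotions; overlapping muscle groups accumulate,
// clamped to full engagement.
function mix(weights: Record<string, number>): Engagements {
  const out: Engagements = {};
  for (const [emotion, w] of Object.entries(weights)) {
    for (const [muscle, e] of Object.entries(fundamentals[emotion] ?? {})) {
      out[muscle] = Math.min(1, (out[muscle] ?? 0) + w * e);
    }
  }
  return out;
}

// e.g. an equal blend of joy and sadness approximating nostalgia
const nostalgia = mix({ joy: 0.5, sadness: 0.5 });
```

The resulting engagement map would then drive the spring-model muscle groups directly, so a complex emotion is just a different pattern of engagements rather than a separately authored expression.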