While previous MPEG standards focus primarily on video coding and transmission issues, MPEG-4 concentrates on the hybrid coding of natural and synthetic data streams. In this framework, possible applications include teleconferencing and entertainment, where an adaptable synthetic agent stands in for the actual user. Such agents can interact with one another, receive input from multi-sensor data, and utilize high-level information such as detected emotions and expressions. This greatly enhances human-computer interaction by replacing single-media representations with dynamic renderings, while providing feedback on the user's emotional state and reactions. Educational environments, virtual collaboration environments, and online shopping and entertainment applications are expected to benefit from this concept.

Facial expression synthesis and animation, in particular, receive particular attention within the MPEG-4 framework, where higher-level, explicit Facial Animation Parameters (FAPs) are dedicated to this purpose. In this work, we employ general-purpose FAPs to simplify the definition of facial expressions for synthesis, estimating the actual expression as a combination of the universal ones. In addition, we provide explicit features, as well as possible value ranges for the FAP implementation, and relate FAPs to the activation parameter proposed in classical psychological studies.
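To make the combination step concrete, the following is a minimal sketch (in Python, not part of the MPEG-4 specification or the method described here) of how a FAP vector for an intermediate expression could be formed as a weighted blend of universal-expression profiles and then scaled by an overall activation level; the profile dictionaries, FAP indices, and displacement values are hypothetical placeholders rather than normative MPEG-4 values.

```python
# Sketch: compose a FAP vector for an intermediate expression as a weighted
# combination of "universal" (archetypal) expression profiles, scaled by an
# activation parameter. All FAP indices and displacements are placeholders.

# Each profile maps a FAP index to a nominal displacement (in FAP units)
# for the fully developed archetypal expression.
ANGER = {3: -120, 31: 60, 32: 60, 33: -40, 34: -40}    # hypothetical values
SADNESS = {19: -80, 20: -80, 31: -50, 32: -50}          # hypothetical values

def blend_profiles(weighted_profiles, activation=1.0):
    """Combine archetypal FAP profiles with the given weights and scale
    the result by an overall activation parameter in [0, 1]."""
    combined = {}
    for profile, weight in weighted_profiles:
        for fap, value in profile.items():
            combined[fap] = combined.get(fap, 0.0) + weight * value
    return {fap: activation * value for fap, value in combined.items()}

# Example: an expression lying "between" anger and sadness at moderate activation.
faps = blend_profiles([(ANGER, 0.6), (SADNESS, 0.4)], activation=0.5)
print(faps)
```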