
WEB-BASED DATABASE FOR FACIAL EXPRESSION ANALYSIS

Maja Pantic, Michel Valstar, Ron Rademaker and Ludo Maat*

Delft University of Technology

EEMCS / Man-Machine Interaction Group

Delft, the Netherlands


ABSTRACT

In the last decade, automatic analysis of facial expressions has become a central topic in machine vision research. Nonetheless, there is a glaring lack of a comprehensive, readily accessible reference set of face images that could be used as a basis for benchmarks for efforts in the field. The lack of an easily accessible, suitable, common testing resource is the major impediment to comparing and extending the work on automatic facial expression analysis. In this paper, we discuss a number of issues that make the problem of creating a benchmark facial expression database difficult. We then present the MMI Facial Expression Database, which includes more than 1500 samples of both static images and image sequences of faces in frontal and in profile view, displaying various expressions of emotion and single and multiple facial muscle activations. It has been built as a web-based direct-manipulation application, allowing easy access and easy search of the available images. This database represents the most comprehensive reference set of images for studies on facial expression analysis to date.

1. INTRODUCTION

Facial expression is one of the most cogent, naturally preeminent means for human beings to communicate emotions, to clarify and stress what is said, to signal comprehension, disagreement, and intentions, in brief, to regulate interactions with the environment and other persons in the vicinity [1, 2]. Automatic analysis of facial expression has attracted the interest of many AI researchers, since such systems will have numerous applications in behavioral science, medicine, security, and human-computer interaction.

To develop and evaluate such applications, large collections of training and test data are needed [3, 4]. While motion records are necessary for studying the temporal dynamics of facial expressions, static images are important for obtaining information on the configuration of facial expressions, which is essential, in turn, for inferring the related meaning (e.g., in terms of emotions). Therefore, both static face images and face videos are needed.

While the researchers of machine analysis of facial affect are interested in facial expressions of emotions such as the prototypic expressions of happiness, sadness, anger, disgust, surprise, and fear, the researchers of machine analysis of atomic facial signals are interested in facial expressions produced by activating a single facial muscle or by activating a combination of facial muscles. Therefore both kinds of training and test material are needed. For general relevance, the reference images should be scored in terms of AUs defined in the Facial Action Coding System (FACS) [5, 6]. FACS is a system designed for human observers to describe changes in facial expression in terms of observable facial muscle actions (i.e., facial action units, AUs). FACS provides the rules for visual detection of 44 different AUs and their temporal segments (onset, apex, offset) in a video of an observed face. Using these rules, a human coder decomposes a shown facial expression into the specific AUs that produced the expression.

* The work of M. Pantic is supported by the Netherlands Organization for Scientific Research Grant EW-639.021.202. The work of M.F. Valstar is supported by the Netherlands BSIK-MultimediaN-N2 Interaction project.
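To make the FACS scoring described above concrete, the following minimal sketch shows one possible in-memory representation of an AU-scored video sample together with the temporal segments of each activated AU. The class and field names (TemporalSegment, ActionUnitScore, CodedSample) are illustrative assumptions and are not part of FACS or of the MMI database interface.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of a FACS-scored sample. AU numbers and the
# onset/apex/offset phases follow the FACS description in the text;
# the class and field names are invented for illustration only.

@dataclass
class TemporalSegment:
    phase: str          # "onset", "apex", or "offset"
    start_frame: int
    end_frame: int

@dataclass
class ActionUnitScore:
    au: int                                   # FACS AU number, e.g. 12
    segments: List[TemporalSegment] = field(default_factory=list)

@dataclass
class CodedSample:
    sample_id: str
    view: str                                 # "frontal" or "profile"
    aus: List[ActionUnitScore] = field(default_factory=list)

# Example: a smile (AU12) whose onset, apex, and offset were coded
# by a human observer over a 56-frame sequence.
smile = CodedSample(
    sample_id="S001",
    view="frontal",
    aus=[ActionUnitScore(au=12, segments=[
        TemporalSegment("onset", 0, 14),
        TemporalSegment("apex", 15, 40),
        TemporalSegment("offset", 41, 55),
    ])],
)
```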

In a frontal-view face image, facial actions such as pushing the tongue under the upper lip (AU36t) or pushing the jaw forwards (AU29) represent out-of-image-plane, non-rigid facial movements that are difficult to detect. Such facial actions are clearly observable in a profile view of the face. Because the use of the face-profile view promises a qualitative enhancement of the AU detection performed, efforts have been made to automate facial expression analysis from face-profile images [7, 8]. Hence, both frontal and profile facial views are of interest for research in the field.

In spite of repeated calls for a comprehensive, readily accessible reference set of face images that could provide a basis for benchmarks for all the different efforts in research on machine analysis of facial expressions, no such database shared by the diverse facial-expression-research communities has yet been created [4]. In general, only isolated pieces of such a facial database exist. An example is the unpublished database of Ekman-Hager Facial Action Exemplars [9]. It has been used by several research groups (e.g., [10, 11]) to train and test their methods for AU detection from frontal-view facial expression sequences. An overview of the databases of face images that have been made publicly available is given in Table 1.

Table 1: Overview of the existing Face Databases

[Table rows not recoverable from the source. The surviving column headings indicate that each database is characterized by: number of static images, expressions of emotion, smiles, single-AU expressions, multiple-AU expressions, subjects' gender, presence of facial hair and glasses, availability of a profile view, and whether the database is downloadable and searchable.]
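As a rough illustration of the attribute-based search that such a reference database should support (cf. the Table 1 attributes such as view, AU content, downloadability, and searchability), the sketch below filters a toy metadata catalogue. All records, field names, and the search function are invented for illustration and do not reflect the actual MMI database schema.

```python
# Illustrative only: a tiny metadata catalogue and a filter over the kinds of
# attributes listed in Table 1. The records below are invented examples.

samples = [
    {"id": "S001", "view": "frontal", "aus": {12, 25}, "emotion": "happiness"},
    {"id": "S002", "view": "profile", "aus": {29},     "emotion": None},
]

def search(catalogue, view=None, au=None):
    """Return all records matching an optional view and an optional AU number."""
    hits = []
    for record in catalogue:
        if view is not None and record["view"] != view:
            continue
        if au is not None and au not in record["aus"]:
            continue
        hits.append(record)
    return hits

# e.g., all profile-view samples showing AU29 (jaw thrust):
print(search(samples, view="profile", au=29))
```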
