
Gender Classification

In this task the goal is to determine the gender of the persons depicted in the individual images.

Evaluation Metrics

We propose to use the following evaluation metrics:

Data Format

The label/fold files are structured such that there is one line per image. Each line starts with the filename followed by the fold ID and the gender. Each of the values is separated by a tab. The gender can be either M for male or F for female. An example line looks like the following:

224896_00M25.JPG        0        M

where 224896_00M25.JPG is the filename of the image, 0 is the fold ID, and M indicates that the person depicted in the image is male.
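The format above can be read with a few lines of code. The following is a minimal sketch of a parser for such label/fold lines; the `Entry` type and function name are illustrative, not part of the benchmark distribution:

```python
from collections import namedtuple

# One record per image: <filename> TAB <fold ID> TAB <gender (M/F)>
Entry = namedtuple("Entry", ["filename", "fold_id", "gender"])

def parse_label_file(lines):
    """Parse tab-separated label/fold lines into Entry records."""
    entries = []
    for line in lines:
        line = line.strip()
        if not line:
            continue  # skip blank lines
        filename, fold_id, gender = line.split("\t")
        if gender not in ("M", "F"):
            raise ValueError("gender must be M or F, got %r" % gender)
        entries.append(Entry(filename, int(fold_id), gender))
    return entries

# Example with the line shown above:
entries = parse_label_file(["224896_00M25.JPG\t0\tM"])
```

The same parser works for a whole file by passing an open file object, since iterating over it yields one line per image.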

Benchmarking Protocol

For evaluating gender classification approaches, 5-fold cross-validation shall be used for both conditions. To prevent algorithms from learning the identities of the persons in the training set rather than their gender, all images of a given subject are assigned to exactly one fold. Additionally, the folds are chosen such that the distribution of age, gender, and ethnicity within each fold is similar to the distribution over the whole database. The file lists for these folds can be found in the Downloads section below.
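The official fold files should be used for the benchmark; the sketch below only illustrates the subject-disjoint constraint (it does not balance age, gender, or ethnicity across folds). The `subject_of` callback, which maps a filename to a subject ID, is an assumed helper:

```python
import itertools

def subject_disjoint_folds(filenames, subject_of, n_folds=5):
    """Assign a fold ID to every image such that all images of one
    subject land in the same fold (round-robin over subject IDs).

    subject_of: callable mapping a filename to its subject ID
                (hypothetical helper; depends on the naming scheme).
    """
    subjects = sorted({subject_of(f) for f in filenames})
    # Cycle fold IDs 0..n_folds-1 over the subjects, not over the images,
    # so every image of a subject inherits the same fold.
    fold_of_subject = {
        s: fold for fold, s in zip(itertools.cycle(range(n_folds)), subjects)
    }
    return {f: fold_of_subject[subject_of(f)] for f in filenames}
```

For instance, with filenames that start with the subject ID, `subject_of` could be `lambda f: f.split("_")[0]`; two images of the same subject then always receive the same fold ID.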

Controlled Condition

We propose to use the MORPH-II database for the controlled laboratory condition for gender classification.

Uncontrolled Condition

For the uncontrolled condition we decided to use the Labeled Faces in the Wild (LFW) dataset.



  • Tobias Gehrig, Karlsruhe Institute of Technology, Germany
References

G. B. Huang, M. Ramesh, T. Berg, and E. Learned-Miller, "Labeled Faces in the Wild: A Database for Studying Face Recognition in Unconstrained Environments," University of Massachusetts, Amherst, Tech. Rep., Oct. 2007.

K. Ricanek Jr. and T. Tesafaye, "MORPH: A Longitudinal Image Database of Normal Adult Age-Progression," in IEEE 7th International Conference on Automatic Face and Gesture Recognition (FGR'06), Southampton, UK, Apr. 2006, pp. 341–345.

Single- and Cross-Database Benchmarks for Gender Classification Under Unconstrained Settings

Evaluation Metrics

Benchmarking Protocol

See above.



  • Pablo Dago-Casas, GRADIANT
  • Daniel González-Jiménez, GRADIANT
  • José Luis Alba-Castro, Universidade de Vigo
  • Long Long Yu, GRADIANT
References

P. Dago-Casas, D. González-Jiménez, L. Long-Yu, and J. L. Alba-Castro, "Single- and Cross-Database Benchmarks for Gender Classification Under Unconstrained Settings," in Proc. First IEEE International Workshop on Benchmarking Facial Image Analysis Technologies, in conjunction with ICCV 2011, Barcelona, Spain, 13 Nov. 2011.

A. Gallagher and T. Chen, "Understanding Images of Groups of People," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 256–263, 2009.
The dataset consists of Flickr face images labeled with 7 age categories.