Benchmark of Large-scale Unconstrained Face Recognition


Many efforts have been made in recent years to tackle the unconstrained face recognition challenge. The Labeled Faces in the Wild (LFW) [2] database has been widely used as the benchmark for this challenge. However, the standard LFW protocol is quite limited:

  1. Only 3,000 genuine and 3,000 impostor matches are available for evaluation.
  2. Accuracies of about 97% have already been reported on this benchmark, leaving very limited room for further algorithm development. We also argue that such accuracy may be too optimistic, because the underlying false accept rate (FAR) may still be high (e.g., 3%).
  3. Performance evaluation at low FARs is not statistically sound under the standard protocol, due to the limited number of impostor matches.
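To make point 3 concrete, the smallest nonzero FAR a benchmark can resolve is one false accept out of all impostor matches. A back-of-the-envelope sketch (using the pair counts from this page):

```python
# Smallest measurable nonzero FAR = 1 / (number of impostor matches).
n_impostor_lfw = 3_000           # standard LFW protocol
n_impostor_blufr = 46_960_863    # BLUFR, average impostor matches per trial

print(1 / n_impostor_lfw)               # ~0.00033: FARs below 0.033% cannot be measured
print(round(0.001 * n_impostor_lfw))    # FAR=0.1% rests on only 3 false accepts
print(round(0.001 * n_impostor_blufr))  # vs. ~47,000 false accepts under BLUFR
```

With only three false accepts determining the operating point, an estimate of VR at FAR=0.1% is dominated by chance; tens of thousands of false accepts make it statistically meaningful.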

We therefore developed a new benchmark protocol that fully exploits all 13,233 LFW face images for large-scale unconstrained face recognition evaluation. The new protocol, called BLUFR, covers both verification and open-set identification scenarios, with a focus on performance at low FARs. It consists of 10 experimental trials, each containing, on average, 156,915 genuine matching scores and 46,960,863 impostor matching scores for performance evaluation.
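The two reported metrics, verification rate (VR) at a fixed FAR and open-set rank-1 detection and identification rate (DIR), can be sketched in a few lines. This is an illustrative NumPy implementation, not the official toolkit; it assumes similarity scores, and the exact thresholding convention in the toolkit may differ:

```python
import numpy as np

def vr_at_far(genuine, impostor, far=1e-3):
    """Verification rate at a given FAR: pick the threshold as the
    (1 - far) quantile of the impostor scores, then count the fraction
    of genuine scores at or above it."""
    thr = np.quantile(impostor, 1.0 - far)
    return np.mean(genuine >= thr)

def dir_at_far(scores, probe_labels, gallery_labels, far=1e-2):
    """Open-set rank-1 detection and identification rate (DIR).
    `scores` is an (n_probe, n_gallery) similarity matrix; probes whose
    label is absent from the gallery serve as impostor probes."""
    best = scores.argmax(axis=1)          # rank-1 gallery match per probe
    best_score = scores.max(axis=1)
    in_gallery = np.isin(probe_labels, gallery_labels)
    # Threshold set so that `far` of the impostor probes are accepted.
    thr = np.quantile(best_score[~in_gallery], 1.0 - far)
    correct = gallery_labels[best] == probe_labels
    return np.mean(correct[in_gallery] & (best_score[in_gallery] >= thr))
```

Summarizing these metrics over the 10 trials gives numbers comparable to those in the results table below.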

We provide a benchmark toolkit here to further advance research in this field. For more information, please read our IJCB paper [1] and the README files in the benchmark toolkit.



Shengcai Liao,

National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences.


[Figures: Verification ROC curves | Open-set identification ROC curves]

Rank | Method                      | VR (%) @ FAR=0.1% | DIR (%) @ FAR=1%, Rank=1
1    | HighDimLBP + JointBayes [1] | 41.66             | 18.07
2    | HighDimLBP + LDA [1]        | 36.12             | 14.94
3    | HighDimLBP + KISSME [1]     | 25.35             | 11.34
4    | LE + JointBayes [1]         | 23.31             | 11.26
5    | HighDimLBP + LMNN [1]       | 22.68             |  9.53
6    | LE + LDA [1]                | 18.12             |  9.38
7    | HighDimLBP + ITML [1]       | 17.32             |  8.59
8    | LE + KISSME [1]             | 16.12             |  6.83
9    | LBP + JointBayes [1]        | 14.18             |  8.82
10   | LE + LMNN [1]               | 13.57             |  4.66


(1) Algorithms are ranked by VR @FAR=0.1%.

(2) Performance is reported as (μ - σ), the mean minus the standard deviation, over the 10 trials.

(3) The citations indicate the sources of the results.
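The (μ - σ) summary in note (2) can be computed as follows. The trial values below are made up for illustration, and whether σ is the population or sample standard deviation is an assumption here (population shown):

```python
import statistics

# Hypothetical VR (%) values of one algorithm over the 10 trials.
trial_vr = [42.1, 41.0, 41.8, 40.9, 42.5, 41.2, 41.9, 41.5, 42.0, 41.4]

mu = statistics.mean(trial_vr)
sigma = statistics.pstdev(trial_vr)  # population std; sample std would use stdev()
reported = mu - sigma                # μ - σ: a pessimistic one-number summary
print(f"{reported:.2f}")             # → 41.14
```

Reporting μ - σ rather than the plain mean penalizes algorithms whose performance varies strongly across trials.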

Download the result files and demo code for plotting the performance curves:

Please contribute your algorithm's performance so that we can keep track of the state of the art in large-scale unconstrained face recognition.


[1] Shengcai Liao, Zhen Lei, Dong Yi, Stan Z. Li, "A Benchmark Study of Large-scale Unconstrained Face Recognition." In IAPR/IEEE International Joint Conference on Biometrics, Sep. 29 - Oct. 2, Clearwater, Florida, USA, 2014. [pdf] [slides]

[2] G. B. Huang, M. Ramesh, T. Berg, and E. Learned-Miller. Labeled faces in the wild: A database for studying face recognition in unconstrained environments. Technical Report 07-49, University of Massachusetts, Amherst, October 2007.

Last updated: Jul. 31, 2014