APiS 1.0 Database
1. Introduction

Pedestrian attributes are helpful for inferring high-level semantic knowledge, improving the performance of pedestrian tracking, retrieval, re-identification, etc. However, current pedestrian databases are built mainly for pedestrian detection or tracking, and semantic attribute annotations of pedestrians are rarely provided. To address this issue, we construct the Attributed Pedestrians in Surveillance (APiS) 1.0 database, covering various scenes. Moreover, we develop an evaluation protocol for researchers to evaluate pedestrian attribute classification algorithms.

Figure 1 Some pedestrians with semantic annotations in the APiS 1.0 database: (a) binary attribute annotations; (b) upper-body clothing color annotations; (c) lower-body clothing color annotations.

2. Database Composition

The APiS 1.0 [1] database includes 3661 pedestrian images with 11 binary and 2 multi-class (color) attribute annotations. Figure 1 shows some pedestrians with semantic attribute annotations from the APiS 1.0 database. The cropped pedestrian images come from four sources: the KITTI [2] database, the CBCL Street Scenes [3] database, the INRIA [4] database, and the SVS database (Surveillance Video Sequences recorded by ourselves at a train station). The APiS 1.0 database includes the following contents:

  1. Pedestrian bounding boxes. We use a pedestrian detector [5] to locate pedestrians with bounding boxes. Because of copyright issues, we are unable to provide all cropped images directly. You can download the raw data of KITTI [2] and CBCL Street Scenes [3] from their websites and then use our bounding box information to crop the pedestrian images (a minimal cropping sketch is given after Table 1).
  2. Attribute annotations. Each cropped pedestrian image is first resized to 128x48 and then annotated with 11 binary attributes and 2 multi-class attributes. The attribute statistics of the APiS 1.0 database are listed in Table 1.
  3. Evaluation protocols. The protocols cover two aspects: one for the evaluation of binary attribute classification and the other for multi-class attribute classification.

    Table 1 The attribute statistics of the APiS 1.0 database.
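For illustration, the following minimal Python sketch performs the cropping and resizing step described in item 1. The CSV column names and the function name are assumptions made here for concreteness, not the official APiS tools; consult the annotation files shipped with the release for the actual format.

    import csv
    from PIL import Image

    def crop_pedestrians(bbox_csv, out_dir):
        # Each CSV row is assumed to hold: source_image, x, y, width,
        # height, patch_name (hypothetical column names; the real
        # annotation format may differ).
        with open(bbox_csv, newline="") as f:
            for row in csv.DictReader(f):
                x, y = int(row["x"]), int(row["y"])
                w, h = int(row["width"]), int(row["height"])
                img = Image.open(row["source_image"])
                patch = img.crop((x, y, x + w, y + h))
                # APiS patches are normalized to 48x128 pixels (width x height).
                patch = patch.resize((48, 128), Image.BILINEAR)
                patch.save(f"{out_dir}/{row['patch_name']}")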

3. Evaluation Protocols

We evaluate the performance of each attribute classifier with 5-fold cross-validation. That is, we provide a sample index that separates the APiS 1.0 database into 5 equal-sized subsets, and each attribute classifier is evaluated with the same sample index. The 5 results from the 5 folds are then averaged to produce a single performance report.
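A minimal sketch of this protocol in Python is given below; the array name fold_index stands in for the provided sample index, and train_and_eval is a hypothetical user-supplied routine, not part of the APiS 1.0 release.

    import numpy as np

    def five_fold_average(features, labels, fold_index, train_and_eval):
        # fold_index assigns each sample to one of the five folds (1..5),
        # mirroring the sample index provided with the database.
        scores = []
        for k in range(1, 6):
            test = (fold_index == k)
            # train_and_eval trains a classifier on the training split and
            # returns one scalar performance number on the held-out split.
            scores.append(train_and_eval(features[~test], labels[~test],
                                         features[test], labels[test]))
        # The five per-fold results are averaged into a single report.
        return np.mean(scores)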

In the evaluation of the binary attribute classification, samples with ambiguous annotations are excluded. Two performance measures, the recall rate and the false positive rate, are applied. The recall rate is the fraction of correctly detected positives among all positive samples, and the false positive rate is the fraction of mis-classified negatives among all negative samples. The Receiver Operating Characteristic (ROC) curve is also adopted to compare different algorithms: an ROC curve is drawn by plotting the recall rate vs. the false positive rate at various threshold settings. Since our evaluation is based on cross-validation, we report performance with the average ROC curve. To make the performance report more intuitive, the Area Under the average ROC Curve (AUC) is also used; the larger the AUC, the better the classification performance.
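For concreteness, here is a minimal single-fold ROC/AUC sketch in Python; the function name is our own, and averaging the ROC across folds would additionally require interpolating the per-fold curves onto a common false-positive-rate grid.

    import numpy as np

    def roc_and_auc(scores, labels):
        # scores: classifier outputs; labels: 1 = positive, 0 = negative
        # (samples with ambiguous annotations are assumed already removed).
        order = np.argsort(-scores)       # sort by descending score
        labels = labels[order]
        tp = np.cumsum(labels == 1)       # positives accepted at each cut
        fp = np.cumsum(labels == 0)       # negatives accepted at each cut
        recall = tp / max(tp[-1], 1)      # fraction of all positives found
        fpr = fp / max(fp[-1], 1)         # fraction of negatives mis-classified
        auc = np.trapz(recall, fpr)       # area under the ROC curve
        return fpr, recall, auc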

In the evaluation of the multi-class attribute classification, samples with ambiguous or occluded annotations are excluded. In order to handle unseen colors beyond the training data, we design an open-set identification [6] experiment to evaluate the performance of the multi-class attribute classification. In this experiment, the defined colors serve as the gallery classes, while samples of undefined colors serve as negative samples beyond the gallery. In the testing phase, the undefined samples should therefore be rejected as not belonging to the gallery, indicating that their colors fall outside the defined set. To evaluate the open-set identification performance, we adopt the detection & identification rate P_DI and the false positive rate P_FP defined in [6]. Assume that G is a gallery set and that Q_G and Q_N are two probe sets: Q_G consists of images (different from those in G) of the classes in G, while Q_N contains classes that are not present in G. Then P_DI and P_FP are formulated as:

P_{DI}(t) = \frac{\left|\{\, q \in Q_G : \mathrm{score}(q) \ge t,\ \mathrm{cid}(q) = 1 \,\}\right|}{\left| Q_G \right|}

P_{FP}(t) = \frac{\left|\{\, q \in Q_N : \mathrm{score}(q) \ge t \,\}\right|}{\left| Q_N \right|}

where score(q) is the decision score function that decides whether q is a defined sample, t is the decision threshold, and cid(q) is the classification indicator, which equals 1 if and only if q is correctly classified. We also report the open-set identification performance with the average ROC curve, as in [7], where the ROC curve plots the detection & identification rate vs. the false positive rate at various threshold settings. In order to show the performance difference for each defined color, we further separate the query images into several subsets, each containing a single defined color together with the undefined samples. The above evaluation protocol is then applied to each subset to draw a performance curve for the defined color of that subset.
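As an illustration, the following minimal Python sketch computes the two rates at a single threshold; the array names are assumptions made here for concreteness, not part of the APiS 1.0 release.

    import numpy as np

    def open_set_rates(score, cid, in_gallery, t):
        # score: decision score per probe image; cid: 1 if the probe was
        # classified into its correct color class, else 0; in_gallery:
        # True for probes from Q_G (defined colors), False for probes
        # from Q_N (undefined colors); t: decision threshold.
        qg, qn = in_gallery, ~in_gallery
        p_di = np.sum((score >= t) & (cid == 1) & qg) / max(np.sum(qg), 1)
        p_fp = np.sum((score >= t) & qn) / max(np.sum(qn), 1)
        return p_di, p_fp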

4. Baseline Performance Report

The baseline performance report is available on the results page.

5. Download Instructions
To download the database, please follow the steps below:

  1. Download and print the document "Agreement for using APiS 1.0 database".
  2. Sign the agreement and send it to jqzhu@cbsr.ia.ac.cn or jianqingzhu@foxmail.com.
  3. If your application is approved, you will receive a login account and password for our website by email within one day.
  4. Download the APiS 1.0 database from our website with the authorized account within 48 hours.

Copyright Note
The database is released for research and educational purposes only. We hold no liability for any undesirable consequences of using the database. All rights to the APiS 1.0 database are reserved. No person or organization may distribute, publish, copy, or disseminate this database.

References
[1] J. Zhu, S. Liao, Z. Lei, D. Yi, and S. Z. Li. "Pedestrian Attribute Classification in Surveillance: Database and Evaluation". In ICCV Workshop on Large-Scale Video Search and Mining (LSVSM'13), Sydney, December 2013.
[2] A. Geiger, P. Lenz, and R. Urtasun. "Are we ready for autonomous driving? The KITTI vision benchmark suite". In CVPR, 2012. http://www.cvlibs.net/datasets/kitti
[3] S. M. Bileschi and L. Wolf. "CBCL Street Scenes". 2006. http://cbcl.mit.edu/software-datasets/streetscenes
[4] N. Dalal and B. Triggs. "Histograms of oriented gradients for human detection". In CVPR, 2005.
[5] J. Yan, Z. Lei, D. Yi, and S. Z. Li. "Multi-pedestrian detection in crowded scenes: A global view". In CVPR, 2012.
[6] P. J. Phillips, P. Grother, and R. Micheals. "Evaluation methods in face recognition". In Handbook of Face Recognition, pages 551-574. Springer, 2011.
[7] S. Liao, A. K. Jain, and S. Z. Li. "Partial face recognition: Alignment-free approach". IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 1193-1205, 2013.
