The mechanism to explore these questions will be the CMU database.
The experiments we did on the CMU data are based upon the original CMU experiment specification. To see our version of that set and the proposal generated in response to it, go here.
The CMU database consists of several views of 25 subjects on a treadmill. In particular, view 03 is from the side (person walking right to left), view 05 is from about 45 degrees, and view 07 is from straight ahead. The conditions (or views) we use in our experiments are views 03, 05, and 07.
We use 24 of the 25 subjects in their database because subject 04089 does not have data for the ball conditions.
We generate tables of similarity comparisons (computed with the L2 norm) between gallery and probe views for the following experiments. The similarity matrices are presented as an Excel spreadsheet and as a plain-text file containing only the numbers. The column headings refer to the gallery, and the row headings are the probe view(s). For our technique there is no training set.
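The table computation described above can be sketched as follows. This is an illustrative reconstruction, not the original code: each entry is the L2 norm (Euclidean distance) between a gallery subject's feature vector and a probe subject's feature vector, so smaller entries indicate greater similarity. The function name and the convention that subjects are rows of an array are assumptions.

```python
import numpy as np

def similarity_table(gallery, probe):
    """gallery: (G, d) array, probe: (P, d) array -> (P, G) L2-distance table.

    Rows of the result correspond to probe subjects (row headings),
    columns to gallery subjects (column headings), as in the text.
    """
    diff = probe[:, None, :] - gallery[None, :, :]  # broadcast pairwise differences
    return np.linalg.norm(diff, axis=2)             # L2 norm over feature axis

# Toy example with 2-dimensional feature vectors.
gallery = np.array([[1.0, 2.0], [3.0, 4.0]])
probe = np.array([[1.0, 2.0]])
print(similarity_table(gallery, probe))  # one row (probe) by two columns (gallery)
```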
Our method of performing gait recognition uses static body measurements that are sensitive to stride length. We report results using the walk vector w. The walk vector was extracted from the first 120 frames of each sequence, sub-sampled every other frame for a total of 60 frames per sequence. Since the subjects did not change depth in the images, we did not apply our depth compensation method; thus, the walk vector has units of pixels.
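The sub-sampling step above amounts to taking the first 120 frames and keeping every other one, yielding 60 frames per sequence. A minimal sketch (the function name and defaults are illustrative, not from the original implementation):

```python
def subsample(frames, limit=120, step=2):
    """Keep every `step`-th frame from the first `limit` frames."""
    return frames[:limit:step]

frames = list(range(300))  # stand-in for a 300-frame sequence
kept = subsample(frames)
print(len(kept))  # 60 frames remain, as described in the text
```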
Our paradigm of measuring static body parameters does not require a specific training set to define a model that is then applied to the gallery and the probes. When matching across views, all subjects are used to compute "cross-condition mapping functions". That is, a single linear regression was done for each measured feature, mapping its values from one condition to the other. All subjects were used because of the limited number of subjects.
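A cross-condition mapping function of the kind described above can be sketched as a one-dimensional least-squares fit per feature. This is a hedged reconstruction under assumptions: the original text specifies only "a single linear regression" per measured feature, so the use of `numpy.polyfit` and the function names here are illustrative.

```python
import numpy as np

def fit_mapping(source_vals, target_vals):
    """Fit target ~ a * source + b for one measured feature across all subjects."""
    a, b = np.polyfit(source_vals, target_vals, deg=1)
    return a, b

def apply_mapping(a, b, vals):
    """Map a feature measured in one condition into the other condition."""
    return a * np.asarray(vals) + b

# Toy example: a feature whose measurements in condition B happen to be
# twice those in condition A plus one (synthetic data, not CMU results).
cond_a = np.array([10.0, 12.0, 14.0, 16.0])
cond_b = 2.0 * cond_a + 1.0
a, b = fit_mapping(cond_a, cond_b)
print(apply_mapping(a, b, [11.0]))  # maps a new condition-A value into condition B
```

Fitting one regression per feature keeps the mapping simple, which matters when, as here, only about two dozen subjects are available to estimate it.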
Copyright © 1997-2001