Embedded Ferns
Discriminative Feature-to-Point Matching

Method

In this work we approach the problem of image-based localization, i.e. how to infer an accurate camera pose from a given image within a known 3D world. While the prevalent approach to image-based localization matches interest points detected in the query image to a sparse 3D point cloud representing the world using nearest-neighbour search, we define this correspondence-finding problem as a classification task. We propose an extension of the random fern principle, denoted as the embedded random fern, which projects features to fern-specific embedding spaces and yields improved matching rates at short runtimes.
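To make the overall pipeline concrete, the following Matlab sketch shows how classifier-based 2D-3D correspondences could feed into pose estimation. It is only an illustration under stated assumptions, not the paper's implementation: matchToPointCloud, fernModel, pointCloudXYZ and cameraParams are hypothetical placeholders, and the Computer Vision Toolbox functions detectSURFFeatures, extractFeatures and estimateWorldCameraPose merely stand in for the actual feature and pose-estimation components.

  % Minimal sketch of the image-based localization pipeline (illustrative only).
  I = rgb2gray(imread('query.jpg'));                 % query image
  keypoints   = detectSURFFeatures(I);               % interest points
  [descr, kp] = extractFeatures(I, keypoints);       % feature descriptors

  % 2D-3D correspondences: each descriptor is assigned the ID of a 3D point
  % by a classifier, instead of a nearest-neighbour search in descriptor space.
  pointIds = matchToPointCloud(descr, fernModel);    % hypothetical classifier call
  imagePts = kp.Location;                            % M-by-2 image points
  worldPts = pointCloudXYZ(pointIds, :);             % M-by-3 matched 3D points

  % Robust 6 DoF camera pose from the 2D-3D correspondences.
  [orientation, location] = estimateWorldCameraPose(imagePts, worldPts, cameraParams);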

Concept

Figure 1: Illustration of the image-based localization problem. Given a single query image, we want to accurately estimate the 6 DoF camera pose by matching the image to a 3D point cloud representing our known 3D world.


Figure 2: Augmented reality application based on the image-based localization approach. The logo of the CultAR project is augmented onto the model.

Embedded Ferns

For efficient and effective classification we introduce a novel classifier that is based on the random fern principle, a popular random ensemble method. We keep the general definition that each fern splits the feature space into numerous bins, and that class-specific probabilities for classification are defined simply by counting the labelled training samples that reach the individual bins. Results are then combined over all ferns in a semi-naive Bayesian manner. The main difference to standard ferns is that we identify discriminative embedding spaces per fern using a supervised machine learning method (Canonical Correlation Analysis). In this way the splits per fern are obtained in closed form, and randomization is injected by selecting different feature dimensions per fern.
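As a rough illustration of how such a fern could be trained, the Matlab sketch below selects a random feature-dimension set, computes the CCA embedding with canoncorr (Statistics and Machine Learning Toolbox), and assigns each projected sample to one of 2^S bins by thresholding every embedding dimension at its median. The binning scheme, the median thresholds and the Laplace smoothing are assumptions of this sketch, not taken from the paper or the released code.

  function fern = trainEmbeddedFern(X, y, S, numDims)
  % Hedged sketch of training a single embedded fern.
  % X: N-by-D feature matrix, y: N-by-1 class labels in 1..C,
  % S: number of embedding dimensions (=> up to 2^S bins),
  % numDims: size of the randomly selected feature-dimension set.
  N = numel(y);  C = max(y);
  fern.dims = randperm(size(X, 2), numDims);          % random feature-dimension set
  Xf = X(:, fern.dims);                               % fern-specific data matrix

  Y = full(sparse(1:N, y, 1, N, C));                  % label indicator matrix
  [A, ~] = canoncorr(Xf, Y);                          % CCA embedding, closed form
  k = min(S, size(A, 2));                             % at most C-1 directions exist
  fern.A   = A(:, 1:k);
  fern.mu  = mean(Xf, 1);                             % centering used by canoncorr
  U = (Xf - fern.mu) * fern.A;                        % project training samples
  fern.thr = median(U, 1);                            % thresholds defining the splits

  bins = 1 + (U > fern.thr) * (2.^(0:k-1))';          % bin index per training sample
  counts = accumarray([bins, y(:)], 1, [2^k, C]) + 1; % label counts with smoothing
  fern.prob = counts ./ sum(counts, 2);               % class-conditional probabilities
  end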

Concept

Figure 3: Illustration of the proposed embedded fern classifier. The input data matrix is reduced to a fern-specific matrix by randomly selecting a set of feature dimensions. This matrix is then used together with the provided label matrix in Canonical Correlation Analysis (CCA), which yields a new embedding space. This projection enables the assignment of each training sample to a bin, as well as the calculation of the class-conditional probabilities. During testing the same feature dimensions are selected and the learned projection is applied to assign the sample to a bin. Finally, all base-classifier probabilities are combined in a semi-naive Bayesian manner.
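At test time the per-fern class-conditional probabilities are combined over all ferns; a semi-naive Bayesian combination amounts to summing their logarithms, as in this continuation of the sketch above (again an illustration under the same assumptions, not the released code).

  function [label, post] = predictEmbeddedFerns(ferns, x)
  % Hedged sketch of classifying one sample x (1-by-D feature vector) with an
  % array of ferns produced by trainEmbeddedFern above.
  logPost = 0;
  for f = 1:numel(ferns)
      fn  = ferns(f);
      u   = (x(fn.dims) - fn.mu) * fn.A;              % apply learned projection
      k   = size(fn.A, 2);
      bin = 1 + (u > fn.thr) * (2.^(0:k-1))';         % same splits as in training
      logPost = logPost + log(fn.prob(bin, :));       % accumulate per-class evidence
  end
  [~, label] = max(logPost);                          % semi-naive Bayesian decision
  post = exp(logPost - max(logPost));                 % normalized posterior scores
  post = post / sum(post);
  end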


Code

The package contains a Matlab implementation of the embedded fern classifier that should run on all platforms.
Download CODE Ver. 1 (Matlab + data sets, ~275 MB).

The package also allows testing the classifier on three different vision data sets: Multipie, Caltech 101 and MNIST; see the evaluation sketch after the results below.
  1. Multipie: Random Fern: 42.72% - Embedded Ferns: 58.27%
  2. Caltech 101: Random Fern: 21.38% - Embedded Ferns: 39.80%
  3. MNIST: Random Fern: 89.97% - Embedded Ferns: 92.93%
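For orientation, accuracies like those above could be produced with an evaluation loop along the following lines, assuming a labelled train/test split (Xtrain, ytrain, Xtest, ytest) and the hypothetical sketch functions from the Concept section; the released package has its own scripts and settings.

  % Train an ensemble of embedded ferns and measure classification accuracy.
  numFerns = 50;  S = 8;  numDims = 60;               % hypothetical settings
  for f = numFerns:-1:1                               % reverse loop preallocates
      ferns(f) = trainEmbeddedFern(Xtrain, ytrain, S, numDims);
  end
  pred = zeros(size(ytest));
  for i = 1:numel(ytest)
      pred(i) = predictEmbeddedFerns(ferns, Xtest(i, :));
  end
  accuracy = 100 * mean(pred(:) == ytest(:));         % percent correct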


Publications

  1. Discriminative Feature-to-Point Matching in Image-Based Localization (PDF)
    Michael Donoser and Dieter Schmalstieg
    Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014