Towards Optimal Naive Bayes Nearest Neighbor
Régis Behmo (LIAMA/École Centrale)
Paul Marcombes (LIAMA/IMAGINE, LIGM, Université Paris-Est)
Arnak Dalalyan (IMAGINE, LIGM, Université Paris-Est)
Véronique Prinet (CASIA/LIAMA)
Abstract
Naive Bayes Nearest Neighbor (NBNN) is a feature-based image classifier that achieves an impressive degree of accuracy by exploiting 'Image-to-Class' distances and by avoiding quantization of local image descriptors [1]. It is based on the hypothesis that each local descriptor is drawn from a class-dependent probability measure. The density of the latter is estimated by a non-parametric kernel estimator, which is further simplified under the assumption that the normalization factor is class-independent. While it leads to a significant simplification, this assumption underlying the original NBNN is too restrictive and considerably degrades its generalization ability.
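To make the role of this assumption concrete, the following is a brief sketch, in our own notation rather than the paper's, of how the NBNN decision rule follows from the stated hypotheses: under the Naive Bayes hypothesis and a kernel density estimate reduced to its nearest-neighbor term, a class-independent normalization factor turns posterior maximization into minimizing a sum of descriptor-to-class nearest-neighbor distances.

% Sketch (our notation): d_1, ..., d_n are the local descriptors of a query
% image, NN_c(d_i) is the nearest neighbor of d_i among the descriptors of
% class c, and Z is the kernel normalization factor.
\begin{align*}
\hat{c} &= \arg\max_{c} \prod_{i=1}^{n} \hat{p}(d_i \mid c)
        && \text{(Naive Bayes over descriptors)} \\
        &\approx \arg\max_{c} \prod_{i=1}^{n}
           \frac{1}{Z}\exp\!\left(-\frac{\lVert d_i - \mathrm{NN}_c(d_i)\rVert^2}{2\sigma^2}\right)
        && \text{(kernel estimate, dominant term)} \\
        &= \arg\min_{c} \sum_{i=1}^{n} \lVert d_i - \mathrm{NN}_c(d_i)\rVert^2
        && \text{(if $Z$ is class-independent).}
\end{align*}

The last step is exactly where the assumption criticized above enters: if the normalization factor $Z$ varies with the class, the reduction to a sum of nearest-neighbor distances no longer holds.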
Related publications
[1] Boiman, O., Shechtman, E., Irani, M.: In defense of nearest-neighbor based image classification. In: CVPR (2008)
[2] Lampert, C., Blaschko, M., Hofmann, T.: Beyond sliding windows: Object localization by efficient subwindow search. In: CVPR (2008)