In the nearest neighbor problem, a set P of data points in d-dimensional space is given. These points are preprocessed into a data structure, so that given any query point q, the nearest point of P to q (or, more generally, the k nearest points) can be reported efficiently. The distance between two points can be defined in many ways. ANN assumes that distances are measured using any of a class of distance functions called Minkowski metrics. These include the well-known Euclidean distance, Manhattan distance, and max distance.
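The Minkowski metrics form a single family: the L_p distance raises coordinate differences to the power p. Taking p = 1 gives the Manhattan distance, p = 2 the Euclidean distance, and the max distance is the limit as p grows. A minimal sketch (an illustration only, not part of the ANN API):

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Minkowski L_p distance between two d-dimensional points:
// (sum_i |a_i - b_i|^p)^(1/p).  p = 1 is Manhattan, p = 2 Euclidean.
double minkowski(const std::vector<double>& a,
                 const std::vector<double>& b, double p) {
    double sum = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i)
        sum += std::pow(std::fabs(a[i] - b[i]), p);
    return std::pow(sum, 1.0 / p);
}

// Max (L_infinity) distance: the largest coordinate difference,
// obtained from L_p in the limit p -> infinity.
double maxDist(const std::vector<double>& a,
               const std::vector<double>& b) {
    double m = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i)
        m = std::max(m, std::fabs(a[i] - b[i]));
    return m;
}
```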
Based on our own experience, ANN performs quite efficiently for point sets ranging in size from thousands to hundreds of thousands, and in dimensions as high as 20. (For applications in significantly higher dimensions, the results are rather spotty, but you might try it anyway.)
The library implements a number of different data structures, based on kd-trees and box-decomposition trees, and employs a couple of different search strategies.
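To give a flavor of how a kd-tree answers such queries, here is a minimal 1-nearest-neighbor sketch: the tree splits points at the median along cycling axes, and the search descends toward the query, backtracking into the far subtree only when the splitting plane could hide a closer point. This is a simplified illustration, not ANN's actual implementation, which uses more sophisticated splitting rules and search strategies.

```cpp
#include <algorithm>
#include <limits>
#include <vector>

using Point = std::vector<double>;

struct Node {
    Point pt;
    int axis;                          // splitting coordinate at this node
    Node* left = nullptr;
    Node* right = nullptr;
};

double sqDist(const Point& a, const Point& b) {
    double s = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) {
        double d = a[i] - b[i];
        s += d * d;
    }
    return s;
}

// Build on pts[lo..hi): median split along axes cycling with depth.
Node* build(std::vector<Point>& pts, int lo, int hi, int depth, int dim) {
    if (lo >= hi) return nullptr;
    int axis = depth % dim;
    int mid = (lo + hi) / 2;
    std::nth_element(pts.begin() + lo, pts.begin() + mid, pts.begin() + hi,
        [axis](const Point& a, const Point& b) { return a[axis] < b[axis]; });
    Node* n = new Node{pts[mid], axis};
    n->left = build(pts, lo, mid, depth + 1, dim);
    n->right = build(pts, mid + 1, hi, depth + 1, dim);
    return n;
}

// Descend toward q first; visit the far subtree only if the splitting
// plane is closer to q than the best point found so far.
void nearest(Node* n, const Point& q, const Point*& best, double& bestSq) {
    if (!n) return;
    double d = sqDist(n->pt, q);
    if (d < bestSq) { bestSq = d; best = &n->pt; }
    double diff = q[n->axis] - n->pt[n->axis];
    Node* nearSide = diff < 0 ? n->left : n->right;
    Node* farSide  = diff < 0 ? n->right : n->left;
    nearest(nearSide, q, best, bestSq);
    if (diff * diff < bestSq)          // other side may hold a closer point
        nearest(farSide, q, best, bestSq);
}
```

The pruning test (`diff * diff < bestSq`) is what makes the search fast in low dimensions; as the dimension grows, fewer subtrees can be pruned, which is why approximate search becomes attractive.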
The library also comes with test programs for measuring ANN's performance on particular data sets, as well as programs for visualizing the structure of the geometric data structures.
Computing exact nearest neighbors in dimensions much higher than 8 seems to be a very difficult task. Few methods seem to be significantly better than a brute-force computation of all distances. However, it has been shown that by computing nearest neighbors approximately, it is possible to achieve significantly faster running times (speedups on the order of 10s to 100s) often with relatively small actual errors. ANN allows the user to specify a maximum error factor, thus providing a tradeoff between accuracy and running time.
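Concretely, with error factor eps the reported point need only be within a factor of (1 + eps) of the true nearest distance; eps = 0 demands the exact answer. The sketch below (an illustration, not ANN's API) shows the brute-force baseline and the acceptance criterion:

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <limits>
#include <vector>

using Point = std::vector<double>;

double dist(const Point& a, const Point& b) {
    double s = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) {
        double d = a[i] - b[i];
        s += d * d;
    }
    return std::sqrt(s);
}

// Brute force: one distance computation per data point, the baseline
// that exact methods struggle to beat in high dimensions.
double bruteForceNearestDist(const std::vector<Point>& pts, const Point& q) {
    double best = std::numeric_limits<double>::infinity();
    for (const Point& p : pts)
        best = std::min(best, dist(p, q));
    return best;
}

// An approximate answer at distance `reported` is acceptable under
// error factor eps if it is within (1 + eps) of the true nearest
// distance.  eps = 0 requires the exact nearest neighbor.
bool acceptable(double reported, double exact, double eps) {
    return reported <= (1.0 + eps) * exact;
}
```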
ANN is implemented in C++. It requires an ANSI C++ compiler (e.g., GNU g++ version 2.7.2 or higher). It has been successfully compiled and run on Sun workstations running SunOS 4.x and SunOS 5.x (Solaris).
If you have questions or comments, please email them to Dave Mount: mount@cs.umd.edu.
Last updated on June 24, 1998.