File Clustering.h
-
namespace faiss
Copyright (c) Facebook, Inc. and its affiliates.
This source code is licensed under the MIT license found in the LICENSE file in the root directory of this source tree.
Implementation of k-means clustering with many variants.
IDSelector is intended to define a subset of vectors to handle (for removal or as a subset to search).
PQ4 SIMD packing and accumulation functions. The basic kernel accumulates nq query vectors with bbs = nb * 2 * 16 vectors and produces an output matrix for that. It is interesting for nq * nb <= 4, otherwise register spilling becomes too large. The implementation of these functions is spread over 3 cpp files to reduce parallel compile times. Templates are instantiated explicitly.
This file contains callbacks for kernels that compute distances. Throughout the library, vectors are provided as float * pointers. Most algorithms can be optimized when several vectors are processed (added/searched) together in a batch. In this case, they are passed in as a matrix. When n vectors of size d are provided as float * x, component j of vector i is x[i * d + j], where 0 <= i < n and 0 <= j < d. In other words, matrices are always compact. When specifying the size of the matrix, we call it an n*d matrix, which implies row-major storage.
I/O functions can read/write to a filename, a file handle or to an object that abstracts the medium. The read functions return objects that should be deallocated with delete. All references within these objects are owned by the object.
Definition of inverted lists + a few common classes that implement the interface.
Since IVF (inverted file) indexes are of so much use for large-scale use cases, we group a few functions related to them in this small library. Most functions work both on IndexIVFs and IndexIVFs embedded within an IndexPreTransform.
In this file are the implementations of extra metrics beyond L2 and inner product.
Implements a few neural net layers, mainly to support QINCo.
Defines a few objects that apply transformations to a set of vectors. Often these are pre-processing steps.
Functions
-
float kmeans_clustering(size_t d, size_t n, size_t k, const float *x, float *centroids)
simplified interface
- Parameters:
d – dimension of the data
n – nb of training vectors
k – nb of output centroids
x – training set (size n * d)
centroids – output centroids (size k * d)
- Returns:
final quantization error
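For example, a minimal sketch of calling this simplified interface (the buffer sizes follow the parameter list above; the wrapper function is only for illustration):

    #include <faiss/Clustering.h>
    #include <vector>

    // Sketch: cluster n vectors of dimension d into k centroids.
    float run_kmeans(size_t d, size_t n, size_t k, const float* x) {
        std::vector<float> centroids(k * d);  // output buffer, size k * d
        // returns the final quantization error
        float err = faiss::kmeans_clustering(d, n, k, x, centroids.data());
        return err;
    }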
-
struct ClusteringParameters
- #include <Clustering.h>
Class for the clustering parameters. Can be passed to the constructor of the Clustering object.
Subclassed by faiss::Clustering, faiss::ProgressiveDimClusteringParameters
Public Members
-
int niter = 25
number of clustering iterations
-
int nredo = 1
redo clustering this many times and keep the clusters with the best objective
-
bool verbose = false
-
bool spherical = false
whether to normalize centroids after each iteration (useful for inner product clustering)
-
bool int_centroids = false
round centroids coordinates to integer after each iteration?
-
bool update_index = false
re-train index after each iteration?
-
bool frozen_centroids = false
Use the subset of centroids provided as input and do not change them during iterations
-
int min_points_per_centroid = 39
If fewer than this number of training vectors per centroid are provided, a warning is written. Note that fewer than 1 point per centroid raises an exception.
-
int max_points_per_centroid = 256
upper bound used to limit the dataset size; if there are more training points, the training set is subsampled
-
int seed = 1234
seed for the random number generator. Negative values lead to seeding an internal rng with std::high_resolution_clock.
-
size_t decode_block_size = 32768
when the training set is encoded, batch size of the codec decoder
-
bool check_input_data_for_NaNs = true
whether to check for NaNs in the input data
-
bool use_faster_subsampling = false
Whether to use splitmix64-based random number generator for subsampling, which is faster, but may pick duplicate points.
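As a hedged sketch, the defaults listed above can be overridden and passed to the Clustering constructor; the field values below are illustrative only:

    #include <faiss/Clustering.h>

    faiss::Clustering make_clustering(int d, int k) {
        faiss::ClusteringParameters cp;
        cp.niter = 50;                     // more k-means iterations than the default 25
        cp.spherical = true;               // normalize centroids, e.g. for inner-product clustering
        cp.max_points_per_centroid = 512;  // allow a larger training set before subsampling
        cp.seed = 42;                      // fixed seed for reproducible subsampling
        return faiss::Clustering(d, k, cp);
    }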
-
struct ClusteringIterationStats
-
struct Clustering : public faiss::ClusteringParameters
- #include <Clustering.h>
K-means clustering based on assignment - centroid update iterations
The clustering is based on an Index object that assigns training points to the centroids. Therefore, at each iteration the centroids are added to the index.
On output, the centroids table is set to the latest version of the centroids and they are also added to the index. If the centroids table is not empty on input, it is also used for initialization.
Subclassed by faiss::Clustering1D
Public Functions
-
Clustering(int d, int k)
-
Clustering(int d, int k, const ClusteringParameters &cp)
-
virtual void train(idx_t n, const float *x, faiss::Index &index, const float *x_weights = nullptr)
run k-means training
- Parameters:
x – training vectors, size n * d
index – index used for assignment
x_weights – weight associated to each vector: NULL or size n
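A minimal sketch of the assignment-based training loop described above, using an IndexFlatL2 as the assignment index (the dimension, number of centroids and data are assumed to come from the caller):

    #include <faiss/Clustering.h>
    #include <faiss/IndexFlat.h>

    void run_clustering(int d, int k, faiss::idx_t n, const float* x) {
        faiss::Clustering clus(d, k);
        faiss::IndexFlatL2 assign_index(d);  // used to assign points to centroids
        clus.train(n, x, assign_index);
        // clus.centroids now holds k * d floats;
        // clus.iteration_stats records the objective at each iteration
    }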
-
void train_encoded(idx_t nx, const uint8_t *x_in, const Index *codec, Index &index, const float *weights = nullptr)
run with encoded vectors
In addition to train()’s parameters, takes a codec as a parameter to decode the input vectors.
- Parameters:
codec – codec used to decode the vectors (nullptr = vectors are in fact floats)
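A hedged sketch of train_encoded(), assuming the codec is an IndexScalarQuantizer (QT_8bit) that has already been trained so that it can decode the codes; any other Index usable as a codec would work the same way:

    #include <cstdint>
    #include <faiss/Clustering.h>
    #include <faiss/IndexFlat.h>
    #include <faiss/IndexScalarQuantizer.h>

    void cluster_codes(int d, int k, faiss::idx_t n, const uint8_t* codes,
                       const faiss::IndexScalarQuantizer& codec) {
        faiss::Clustering clus(d, k);
        faiss::IndexFlatL2 assign_index(d);
        // the codec decodes the input in batches of decode_block_size vectors
        clus.train_encoded(n, codes, &codec, assign_index);
    }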
-
void post_process_centroids()
Post-process the centroids after each centroid update; includes optional L2 normalization and nearest-integer rounding.
-
inline virtual ~Clustering()
Public Members
-
size_t d
dimension of the vectors
-
size_t k
nb of centroids
-
std::vector<float> centroids
centroids (k * d). If centroids are set on input to train, they will be used as initialization.
-
std::vector<ClusteringIterationStats> iteration_stats
stats at every iteration of clustering
-
int niter = 25
number of clustering iterations
-
int nredo = 1
redo clustering this many times and keep the clusters with the best objective
-
bool verbose = false
-
bool spherical = false
whether to normalize centroids after each iteration (useful for inner product clustering)
-
bool int_centroids = false
round centroids coordinates to integer after each iteration?
-
bool update_index = false
re-train index after each iteration?
-
bool frozen_centroids = false
Use the subset of centroids provided as input and do not change them during iterations
-
int min_points_per_centroid = 39
If fewer than this number of training vectors per centroid are provided, a warning is written. Note that fewer than 1 point per centroid raises an exception.
-
int max_points_per_centroid = 256
upper bound used to limit the dataset size; if there are more training points, the training set is subsampled
-
int seed = 1234
seed for the random number generator. Negative values lead to seeding an internal rng with std::high_resolution_clock.
-
size_t decode_block_size = 32768
when the training set is encoded, batch size of the codec decoder
-
bool check_input_data_for_NaNs = true
whether to check for NaNs in the input data
-
bool use_faster_subsampling = false
Whether to use splitmix64-based random number generator for subsampling, which is faster, but may pick duplicate points.
-
struct Clustering1D : public faiss::Clustering
- #include <Clustering.h>
Exact 1D clustering algorithm
Since it does not use an index, it does not overload the train() function
Public Functions
-
explicit Clustering1D(int k)
-
Clustering1D(int k, const ClusteringParameters &cp)
-
inline virtual ~Clustering1D()
-
virtual void train(idx_t n, const float *x, faiss::Index &index, const float *x_weights = nullptr)
run k-means training
- Parameters:
x – training vectors, size n * d
index – index used for assignment
x_weights – weight associated to each vector: NULL or size n
-
void train_encoded(idx_t nx, const uint8_t *x_in, const Index *codec, Index &index, const float *weights = nullptr)
run with encoded vectors
In addition to train()’s parameters, takes a codec as a parameter to decode the input vectors.
- Parameters:
codec – codec used to decode the vectors (nullptr = vectors are in fact floats)
-
void post_process_centroids()
Post-process the centroids after each centroid update; includes optional L2 normalization and nearest-integer rounding.
Public Members
-
size_t d
dimension of the vectors
-
size_t k
nb of centroids
-
std::vector<float> centroids
centroids (k * d). If centroids are set on input to train, they will be used as initialization.
-
std::vector<ClusteringIterationStats> iteration_stats
stats at every iteration of clustering
-
int niter = 25
number of clustering iterations
-
int nredo = 1
redo clustering this many times and keep the clusters with the best objective
-
bool verbose = false
-
bool spherical = false
whether to normalize centroids after each iteration (useful for inner product clustering)
-
bool int_centroids = false
round centroids coordinates to integer after each iteration?
-
bool update_index = false
re-train index after each iteration?
-
bool frozen_centroids = false
Use the subset of centroids provided as input and do not change them during iterations
-
int min_points_per_centroid = 39
If fewer than this number of training vectors per centroid are provided, a warning is written. Note that fewer than 1 point per centroid raises an exception.
-
int max_points_per_centroid = 256
upper bound used to limit the dataset size; if there are more training points, the training set is subsampled
-
int seed = 1234
seed for the random number generator. Negative values lead to seeding an internal rng with std::high_resolution_clock.
-
size_t decode_block_size = 32768
when the training set is encoded, batch size of the codec decoder
-
bool check_input_data_for_NaNs = true
whether to check for NaNs in the input data
-
bool use_faster_subsampling = false
Whether to use splitmix64-based random number generator for subsampling, which is faster, but may pick duplicate points.
-
struct ProgressiveDimClusteringParameters : public faiss::ClusteringParameters
Subclassed by faiss::ProgressiveDimClustering
Public Functions
-
ProgressiveDimClusteringParameters()
Public Members
-
int progressive_dim_steps
number of incremental steps
-
bool apply_pca
apply PCA on input
-
int niter = 25
number of clustering iterations
-
int nredo = 1
redo clustering this many times and keep the clusters with the best objective
-
bool verbose = false
-
bool spherical = false
whether to normalize centroids after each iteration (useful for inner product clustering)
-
bool int_centroids = false
round centroids coordinates to integer after each iteration?
-
bool update_index = false
re-train index after each iteration?
-
bool frozen_centroids = false
Use the subset of centroids provided as input and do not change them during iterations
-
int min_points_per_centroid = 39
If fewer than this number of training vectors per centroid are provided, a warning is written. Note that fewer than 1 point per centroid raises an exception.
-
int max_points_per_centroid = 256
upper bound used to limit the dataset size; if there are more training points, the training set is subsampled
-
int seed = 1234
seed for the random number generator. Negative values lead to seeding an internal rng with std::high_resolution_clock.
-
size_t decode_block_size = 32768
when the training set is encoded, batch size of the codec decoder
-
bool check_input_data_for_NaNs = true
whether to check for NaNs in the input data
-
bool use_faster_subsampling = false
Whether to use splitmix64-based random number generator for subsampling, which is faster, but may pick duplicate points.
-
struct ProgressiveDimIndexFactory
- #include <Clustering.h>
generates an index suitable for clustering when called
Subclassed by faiss::gpu::GpuProgressiveDimIndexFactory
-
struct ProgressiveDimClustering : public faiss::ProgressiveDimClusteringParameters
- #include <Clustering.h>
K-means clustering with progressive dimensions used
The clustering first happens in dim 1, then with exponentially increasing dimension until d (progressive_dim_steps steps). This is typically applied after an optional PCA transformation.
Reference: “Improved Residual Vector Quantization for High-dimensional Approximate Nearest Neighbor Search”, Shicong Liu, Hongtao Lu, Junru Shao, AAAI’15, https://arxiv.org/abs/1509.05195
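A minimal sketch of running progressive-dimension clustering with a default-constructed ProgressiveDimIndexFactory (assumed here to build a flat CPU assignment index per step; faiss::gpu::GpuProgressiveDimIndexFactory could be swapped in for GPU assignment). The parameter values are illustrative:

    #include <faiss/Clustering.h>

    void cluster_progressive(int d, int k, faiss::idx_t n, const float* x) {
        faiss::ProgressiveDimClusteringParameters cp;
        cp.progressive_dim_steps = 6;  // number of incremental dimension steps
        cp.apply_pca = true;           // rotate the data before the progressive steps

        faiss::ProgressiveDimClustering clus(d, k, cp);
        faiss::ProgressiveDimIndexFactory factory;  // builds the per-step assignment indexes
        clus.train(n, x, factory);
        // clus.iteration_stats records the objective over the steps and iterations
    }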
Public Functions
-
ProgressiveDimClustering(int d, int k)
-
ProgressiveDimClustering(int d, int k, const ProgressiveDimClusteringParameters &cp)
-
void train(idx_t n, const float *x, ProgressiveDimIndexFactory &factory)
-
inline virtual ~ProgressiveDimClustering()
Public Members
-
size_t d
dimension of the vectors
-
size_t k
nb of centroids
-
std::vector<ClusteringIterationStats> iteration_stats
stats at every iteration of clustering
-
int progressive_dim_steps
number of incremental steps
-
bool apply_pca
apply PCA on input
-
int niter = 25
number of clustering iterations
-
int nredo = 1
redo clustering this many times and keep the clusters with the best objective
-
bool verbose = false
-
bool spherical = false
whether to normalize centroids after each iteration (useful for inner product clustering)
-
bool int_centroids = false
round centroids coordinates to integer after each iteration?
-
bool update_index = false
re-train index after each iteration?
-
bool frozen_centroids = false
Use the subset of centroids provided as input and do not change them during iterations
-
int min_points_per_centroid = 39
If fewer than this number of training vectors per centroid are provided, a warning is written. Note that fewer than 1 point per centroid raises an exception.
-
int max_points_per_centroid = 256
upper bound used to limit the dataset size; if there are more training points, the training set is subsampled
-
int seed = 1234
seed for the random number generator. Negative values lead to seeding an internal rng with std::high_resolution_clock.
-
size_t decode_block_size = 32768
when the training set is encoded, batch size of the codec decoder
-
bool check_input_data_for_NaNs = true
whether to check for NaNs in the input data
-
bool use_faster_subsampling = false
Whether to use splitmix64-based random number generator for subsampling, which is faster, but may pick duplicate points.