File ResidualQuantizer.h

namespace faiss

Copyright (c) Facebook, Inc. and its affiliates.

This source code is licensed under the MIT license found in the LICENSE file in the root directory of this source tree.


struct ResidualQuantizer : public faiss::AdditiveQuantizer
#include <ResidualQuantizer.h>

Residual quantizer with variable number of bits per sub-quantizer

The residual centroids are stored in a big cumulative centroid table. The codes are represented either as a non-compact table of size (n, M) or as the compact output (n, code_size).

Public Types

using train_type_t = int

Type of the train_type field: a bit-wise OR of the Train_* flags below, controlling how training is initialized.

Public Functions

ResidualQuantizer(size_t d, const std::vector<size_t> &nbits, Search_type_t search_type = ST_decompress)
ResidualQuantizer(size_t d, size_t M, size_t nbits, Search_type_t search_type = ST_decompress)
ResidualQuantizer()
virtual void train(size_t n, const float *x) override

Train the residual quantizer.

void initialize_from(const ResidualQuantizer &other, int skip_M = 0)

Copy the M codebook levels from other, starting from skip_M.

float retrain_AQ_codebook(size_t n, const float *x)

Encode the vectors, then recompute a codebook that minimizes the quantization error for those codes.

Parameters:
  • x – training vectors, size n * d

  • n – number of training vectors, n >= total_codebook_size

Returns:

quantization error of the new codebook with the old codes

virtual void compute_codes_add_centroids(const float *x, uint8_t *codes, size_t n, const float *centroids = nullptr) const override

Encode a set of vectors

Parameters:
  • x – vectors to encode, size n * d

  • codes – output codes, size n * code_size

  • centroids – centroids to be added to x, size n * d

void refine_beam(size_t n, size_t beam_size, const float *residuals, int new_beam_size, int32_t *new_codes, float *new_residuals = nullptr, float *new_distances = nullptr) const

lower-level encode function

Parameters:
  • n – number of vectors to handle

  • residuals – vectors to encode, size (n, beam_size, d)

  • beam_size – input beam size

  • new_beam_size – output beam size (should be <= K * beam_size)

  • new_codes – output codes, size (n, new_beam_size, m + 1)

  • new_residuals – output residuals, size (n, new_beam_size, d)

  • new_distances – output distances, size (n, new_beam_size)

void refine_beam_LUT(size_t n, const float *query_norms, const float *query_cp, int new_beam_size, int32_t *new_codes, float *new_distances = nullptr) const
size_t memory_per_point(int beam_size = -1) const

Beam search can consume a lot of memory. This function estimates the amount of memory used by refine_beam, so that the batch size can be adjusted accordingly.

Parameters:

beam_size – if != -1, override the beam size

void compute_codebook_tables()

Precompute the codebook cross-product tables used when use_beam_LUT == 1.

Public Members

train_type_t train_type = Train_progressive_dim

Bit-wise OR of the Train_* flags below.

int niter_codebook_refine = 5

number of iterations for codebook refinement.

int max_beam_size = 5

beam size used for training and for encoding

int use_beam_LUT = 0

use LUT for beam search

ApproxTopK_mode_t approx_topk_mode = ApproxTopK_mode_t::EXACT_TOPK

Currently used mode of approximate min-k computations. Default value is EXACT_TOPK.

ProgressiveDimClusteringParameters cp

clustering parameters

ProgressiveDimIndexFactory *assign_index_factory = nullptr

if non-NULL, use this index for assignment

std::vector<float> codebook_cross_products

dot products of all codebook entries with the previous codebooks, size sum(codebook_offsets[m] * 2^nbits[m], m = 0..M-1)

std::vector<float> cent_norms

norms of all codebook entries (size total_codebook_size)

Public Static Attributes

static const int Train_default = 0

regular k-means (minimal amount of computation)

static const int Train_progressive_dim = 1

progressive dim clustering (set by default)

static const int Train_refine_codebook = 2

do a few iterations of codebook refinement after first level estimation

static const int Train_top_beam = 1024

set this bit on train_type if beam is to be trained only on the first element of the beam (faster but less accurate)

static const int Skip_codebook_tables = 2048

set this bit to skip the automatic computation of the codebook tables after training