File IndexIVFAdditiveQuantizerFastScan.h

namespace faiss

Copyright (c) Facebook, Inc. and its affiliates.

This source code is licensed under the MIT license found in the LICENSE file in the root directory of this source tree.

Implementation of k-means clustering with many variants.

IDSelector is intended to define a subset of vectors to handle (for removal, or as a subset to search).

PQ4 SIMD packing and accumulation functions.

The basic kernel accumulates nq query vectors against bbs = nb * 2 * 16 database vectors and produces an output matrix for that block. It is only worthwhile for nq * nb <= 4; beyond that, register spilling becomes too large.

The implementation of these functions is spread over 3 cpp files to reduce parallel compile times. Templates are instantiated explicitly.
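As a quick illustration of the block-size arithmetic described above (a standalone sketch of the constants only, not tied to any particular kernel API in the library):

    #include <cstdio>

    int main() {
        // Each accumulation block covers bbs = nb * 2 * 16 packed database
        // vectors; the kernel is only worthwhile while nq * nb <= 4.
        for (int nq = 1; nq <= 4; nq++) {
            for (int nb = 1; nq * nb <= 4; nb++) {
                int bbs = nb * 2 * 16;
                printf("nq=%d nb=%d -> bbs=%d (output block: %d x %d)\n",
                       nq, nb, bbs, nq, bbs);
            }
        }
        return 0;
    }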

This file contains callbacks for kernels that compute distances.

Throughout the library, vectors are provided as float * pointers. Most algorithms can be optimized when several vectors are processed (added/searched) together in a batch. In this case, they are passed in as a matrix. When n vectors of size d are provided as float * x, component j of vector i is

x[ i * d + j ]

where 0 <= i < n and 0 <= j < d. In other words, matrices are always compact. When specifying the size of the matrix, we call it an n*d matrix, which implies a row-major storage.
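A minimal illustration of this row-major layout (the helper function is hypothetical, not part of the library):

    #include <vector>

    // Access component j of vector i in a compact row-major n*d matrix.
    inline float get_component(const float* x, size_t d, size_t i, size_t j) {
        return x[i * d + j];
    }

    int main() {
        size_t n = 3, d = 4;
        std::vector<float> x(n * d);
        for (size_t i = 0; i < n; i++)
            for (size_t j = 0; j < d; j++)
                x[i * d + j] = float(i) + 0.1f * float(j);
        // row i starts at x.data() + i * d; the matrix is fully compact
        float v = get_component(x.data(), d, 2, 3); // component 3 of vector 2
        (void)v;
        return 0;
    }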

I/O functions can read/write to a filename, a file handle, or to an object that abstracts the medium.

The read functions return objects that should be deallocated with delete. All references within these objects are owned by the object.
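A short sketch of the round trip through a filename, assuming faiss::write_index / faiss::read_index from <faiss/index_io.h> (the path is illustrative):

    #include <faiss/IndexFlat.h>
    #include <faiss/index_io.h>

    int main() {
        faiss::IndexFlatL2 index(64);
        // write to a filename ...
        faiss::write_index(&index, "/tmp/flat.index");
        // ... and read it back; the returned object must be freed with delete,
        // and everything it references is owned by the object itself.
        faiss::Index* loaded = faiss::read_index("/tmp/flat.index");
        delete loaded;
        return 0;
    }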

Definition of inverted lists + a few common classes that implement the interface.

Since IVF (inverted file) indexes are widely used for large-scale use cases, we group a few functions related to them in this small library. Most functions work both on IndexIVFs and on IndexIVFs embedded within an IndexPreTransform.
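A hedged sketch of the embedded-IVF case, assuming faiss::ivflib::extract_index_ivf from <faiss/IVFlib.h> and the index factory (the factory string and parameter values are illustrative):

    #include <faiss/IVFlib.h>
    #include <faiss/IndexIVF.h>
    #include <faiss/index_factory.h>

    int main() {
        // Build an IVF index wrapped in a pre-transform via the index factory.
        faiss::Index* index = faiss::index_factory(64, "PCA32,IVF100,Flat");
        // extract_index_ivf looks through the IndexPreTransform wrapper and
        // returns the underlying IndexIVF, so IVF-specific fields can be tuned.
        faiss::IndexIVF* index_ivf = faiss::ivflib::extract_index_ivf(index);
        index_ivf->nprobe = 8;
        delete index;
        return 0;
    }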

This file contains the implementations of extra metrics beyond L2 and inner product.

Defines a few objects that apply transformations to a set of vectors. Often these are pre-processing steps.

struct IndexIVFAdditiveQuantizerFastScan : public faiss::IndexIVFFastScan
#include <IndexIVFAdditiveQuantizerFastScan.h>

Fast scan version of IVFAQ. Works for 4-bit AQ for now.

The codes in the inverted lists are not stored sequentially but grouped in blocks of size bbs. This makes it possible to very quickly compute distances with SIMD instructions.

Implementations (implem):
  • 0: auto-select implementation (default)
  • 1: orig’s search, re-implemented
  • 2: orig’s search, re-ordered by invlist
  • 10: optimized int16 search, collect results in a heap, no qbs
  • 11: idem, collect results in a reservoir
  • 12: optimized int16 search, collect results in a heap, uses qbs
  • 13: idem, collect results in a reservoir

Subclassed by faiss::IndexIVFLocalSearchQuantizerFastScan, faiss::IndexIVFProductLocalSearchQuantizerFastScan, faiss::IndexIVFProductResidualQuantizerFastScan, faiss::IndexIVFResidualQuantizerFastScan
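A hedged usage sketch with the residual-quantizer subclass listed further down this page (the constructor signature is taken from that listing; data and parameter values are illustrative, and faiss::idx_t is assumed to be the library-wide index type):

    #include <faiss/IndexFlat.h>
    #include <faiss/IndexIVFAdditiveQuantizerFastScan.h>

    #include <vector>

    int main() {
        const size_t d = 64, nlist = 64, M = 8, nbits = 4; // fast scan needs 4-bit codes
        faiss::IndexFlatL2 coarse_quantizer(d);
        faiss::IndexIVFResidualQuantizerFastScan index(
                &coarse_quantizer, d, nlist, M, nbits);

        // toy training/database data, deterministic but varied
        const size_t nb = 10000;
        std::vector<float> xb(nb * d);
        for (size_t i = 0; i < xb.size(); i++) {
            xb[i] = float((i * 37) % 1000) / 1000.0f;
        }
        index.train(nb, xb.data());
        index.add(nb, xb.data());

        // search the first vector for its 5 nearest neighbors
        const faiss::idx_t k = 5;
        std::vector<float> distances(k);
        std::vector<faiss::idx_t> labels(k);
        index.nprobe = 8;
        index.search(1, xb.data(), k, distances.data(), labels.data());
        return 0;
    }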

Public Types

using Search_type_t = AdditiveQuantizer::Search_type_t

Public Functions

IndexIVFAdditiveQuantizerFastScan(Index *quantizer, AdditiveQuantizer *aq, size_t d, size_t nlist, MetricType metric = METRIC_L2, int bbs = 32)
void init(AdditiveQuantizer *aq, size_t nlist, MetricType metric, int bbs)
IndexIVFAdditiveQuantizerFastScan()
~IndexIVFAdditiveQuantizerFastScan() override
explicit IndexIVFAdditiveQuantizerFastScan(const IndexIVFAdditiveQuantizer &orig, int bbs = 32)
virtual void train_encoder(idx_t n, const float *x, const idx_t *assign) override

Train the encoder for the vectors.

If by_residual is set, it is called with residuals and the corresponding assign array; otherwise x contains the raw training vectors and assign=nullptr.

virtual idx_t train_encoder_num_vectors() const override

can be redefined by subclasses to indicate how many training vectors they need

void estimate_norm_scale(idx_t n, const float *x)
virtual void encode_vectors(idx_t n, const float *x, const idx_t *list_nos, uint8_t *codes, bool include_listno = false) const override

Same as the regular IVFAQ encoder. The codes are not reorganized by blocks at that point.

virtual void search(idx_t n, const float *x, idx_t k, float *distances, idx_t *labels, const SearchParameters *params = nullptr) const override

assign the vectors, then call search_preassigned

virtual bool lookup_table_is_3d() const override
virtual void compute_LUT(size_t n, const float *x, const CoarseQuantized &cq, AlignedTable<float> &dis_tables, AlignedTable<float> &biases) const override
virtual void sa_decode(idx_t n, const uint8_t *bytes, float *x) const override

decode a set of vectors

Parameters:
  • n – number of vectors

  • bytes – input encoded vectors, size n * sa_code_size()

  • x – output vectors, size n * d
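A short sketch of a decode round trip. sa_encode and sa_code_size are not listed on this page; they are assumed here to be inherited from the faiss::Index standalone codec interface:

    #include <faiss/Index.h>

    #include <cstdint>
    #include <vector>

    // Encode n vectors with the index's standalone codec and decode them back.
    // `index` must already be trained (for instance as in the sketch above).
    std::vector<float> encode_decode(const faiss::Index& index, faiss::idx_t n,
                                     const float* x) {
        std::vector<uint8_t> codes(n * index.sa_code_size());
        index.sa_encode(n, x, codes.data()); // bytes, size n * sa_code_size()
        std::vector<float> decoded(n * index.d);
        index.sa_decode(n, codes.data(), decoded.data()); // floats, size n * d
        return decoded;
    }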

Public Members

AdditiveQuantizer *aq
bool rescale_norm = false
int norm_scale = 1
size_t max_train_points
struct IndexIVFLocalSearchQuantizerFastScan : public faiss::IndexIVFAdditiveQuantizerFastScan

Public Types

using Search_type_t = AdditiveQuantizer::Search_type_t

Public Functions

IndexIVFLocalSearchQuantizerFastScan(Index *quantizer, size_t d, size_t nlist, size_t M, size_t nbits, MetricType metric = METRIC_L2, Search_type_t search_type = AdditiveQuantizer::ST_norm_lsq2x4, int bbs = 32)
IndexIVFLocalSearchQuantizerFastScan()
void init(AdditiveQuantizer *aq, size_t nlist, MetricType metric, int bbs)
virtual void train_encoder(idx_t n, const float *x, const idx_t *assign) override

Train the encoder for the vectors.

If by_residual is set, it is called with residuals and the corresponding assign array; otherwise x contains the raw training vectors and assign=nullptr.

virtual idx_t train_encoder_num_vectors() const override

can be redefined by subclasses to indicate how many training vectors they need

void estimate_norm_scale(idx_t n, const float *x)
virtual void encode_vectors(idx_t n, const float *x, const idx_t *list_nos, uint8_t *codes, bool include_listno = false) const override

Same as the regular IVFAQ encoder. The codes are not reorganized by blocks at that point.

virtual void search(idx_t n, const float *x, idx_t k, float *distances, idx_t *labels, const SearchParameters *params = nullptr) const override

assign the vectors, then call search_preassigned

virtual bool lookup_table_is_3d() const override
virtual void compute_LUT(size_t n, const float *x, const CoarseQuantized &cq, AlignedTable<float> &dis_tables, AlignedTable<float> &biases) const override
virtual void sa_decode(idx_t n, const uint8_t *bytes, float *x) const override

decode a set of vectors

Parameters:
  • n – number of vectors

  • bytes – input encoded vectors, size n * sa_code_size()

  • x – output vectors, size n * d

Public Members

LocalSearchQuantizer lsq
AdditiveQuantizer *aq
bool rescale_norm = false
int norm_scale = 1
size_t max_train_points
struct IndexIVFResidualQuantizerFastScan : public faiss::IndexIVFAdditiveQuantizerFastScan

Public Types

using Search_type_t = AdditiveQuantizer::Search_type_t

Public Functions

IndexIVFResidualQuantizerFastScan(Index *quantizer, size_t d, size_t nlist, size_t M, size_t nbits, MetricType metric = METRIC_L2, Search_type_t search_type = AdditiveQuantizer::ST_norm_lsq2x4, int bbs = 32)
IndexIVFResidualQuantizerFastScan()
void init(AdditiveQuantizer *aq, size_t nlist, MetricType metric, int bbs)
virtual void train_encoder(idx_t n, const float *x, const idx_t *assign) override

Train the encoder for the vectors.

If by_residual is set, it is called with residuals and the corresponding assign array; otherwise x contains the raw training vectors and assign=nullptr.

virtual idx_t train_encoder_num_vectors() const override

can be redefined by subclasses to indicate how many training vectors they need

void estimate_norm_scale(idx_t n, const float *x)
virtual void encode_vectors(idx_t n, const float *x, const idx_t *list_nos, uint8_t *codes, bool include_listno = false) const override

Same as the regular IVFAQ encoder. The codes are not reorganized by blocks at that point.

virtual void search(idx_t n, const float *x, idx_t k, float *distances, idx_t *labels, const SearchParameters *params = nullptr) const override

assign the vectors, then call search_preassigned

virtual bool lookup_table_is_3d() const override
virtual void compute_LUT(size_t n, const float *x, const CoarseQuantized &cq, AlignedTable<float> &dis_tables, AlignedTable<float> &biases) const override
virtual void sa_decode(idx_t n, const uint8_t *bytes, float *x) const override

decode a set of vectors

Parameters:
  • n – number of vectors

  • bytes – input encoded vectors, size n * sa_code_size()

  • x – output vectors, size n * d

Public Members

ResidualQuantizer rq
AdditiveQuantizer *aq
bool rescale_norm = false
int norm_scale = 1
size_t max_train_points
struct IndexIVFProductLocalSearchQuantizerFastScan : public faiss::IndexIVFAdditiveQuantizerFastScan

Public Types

using Search_type_t = AdditiveQuantizer::Search_type_t

Public Functions

IndexIVFProductLocalSearchQuantizerFastScan(Index *quantizer, size_t d, size_t nlist, size_t nsplits, size_t Msub, size_t nbits, MetricType metric = METRIC_L2, Search_type_t search_type = AdditiveQuantizer::ST_norm_lsq2x4, int bbs = 32)
IndexIVFProductLocalSearchQuantizerFastScan()
void init(AdditiveQuantizer *aq, size_t nlist, MetricType metric, int bbs)
virtual void train_encoder(idx_t n, const float *x, const idx_t *assign) override

Train the encoder for the vectors.

If by_residual is set, it is called with residuals and the corresponding assign array; otherwise x contains the raw training vectors and assign=nullptr.

virtual idx_t train_encoder_num_vectors() const override

can be redefined by subclasses to indicate how many training vectors they need

void estimate_norm_scale(idx_t n, const float *x)
virtual void encode_vectors(idx_t n, const float *x, const idx_t *list_nos, uint8_t *codes, bool include_listno = false) const override

Same as the regular IVFAQ encoder. The codes are not reorganized by blocks at that point.

virtual void search(idx_t n, const float *x, idx_t k, float *distances, idx_t *labels, const SearchParameters *params = nullptr) const override

assign the vectors, then call search_preassigned

virtual bool lookup_table_is_3d() const override
virtual void compute_LUT(size_t n, const float *x, const CoarseQuantized &cq, AlignedTable<float> &dis_tables, AlignedTable<float> &biases) const override
virtual void sa_decode(idx_t n, const uint8_t *bytes, float *x) const override

decode a set of vectors

Parameters:
  • n – number of vectors

  • bytes – input encoded vectors, size n * sa_code_size()

  • x – output vectors, size n * d

Public Members

ProductLocalSearchQuantizer plsq
AdditiveQuantizer *aq
bool rescale_norm = false
int norm_scale = 1
size_t max_train_points
struct IndexIVFProductResidualQuantizerFastScan : public faiss::IndexIVFAdditiveQuantizerFastScan

Public Types

using Search_type_t = AdditiveQuantizer::Search_type_t

Public Functions

IndexIVFProductResidualQuantizerFastScan(Index *quantizer, size_t d, size_t nlist, size_t nsplits, size_t Msub, size_t nbits, MetricType metric = METRIC_L2, Search_type_t search_type = AdditiveQuantizer::ST_norm_lsq2x4, int bbs = 32)
IndexIVFProductResidualQuantizerFastScan()
void init(AdditiveQuantizer *aq, size_t nlist, MetricType metric, int bbs)
virtual void train_encoder(idx_t n, const float *x, const idx_t *assign) override

Train the encoder for the vectors.

If by_residual is set, it is called with residuals and the corresponding assign array; otherwise x contains the raw training vectors and assign=nullptr.

virtual idx_t train_encoder_num_vectors() const override

can be redefined by subclasses to indicate how many training vectors they need

void estimate_norm_scale(idx_t n, const float *x)
virtual void encode_vectors(idx_t n, const float *x, const idx_t *list_nos, uint8_t *codes, bool include_listno = false) const override

Same as the regular IVFAQ encoder. The codes are not reorganized by blocks at that point.

virtual void search(idx_t n, const float *x, idx_t k, float *distances, idx_t *labels, const SearchParameters *params = nullptr) const override

assign the vectors, then call search_preassigned

virtual bool lookup_table_is_3d() const override
virtual void compute_LUT(size_t n, const float *x, const CoarseQuantized &cq, AlignedTable<float> &dis_tables, AlignedTable<float> &biases) const override
virtual void sa_decode(idx_t n, const uint8_t *bytes, float *x) const override

decode a set of vectors

Parameters:
  • n – number of vectors

  • bytes – input encoded vectors, size n * sa_code_size()

  • x – output vectors, size n * d

Public Members

ProductResidualQuantizer prq
AdditiveQuantizer *aq
bool rescale_norm = false
int norm_scale = 1
size_t max_train_points