File AdditiveQuantizer.h

namespace faiss

Copyright (c) Facebook, Inc. and its affiliates.

This source code is licensed under the MIT license found in the LICENSE file in the root directory of this source tree.

struct AdditiveQuantizer : public faiss::Quantizer
#include <AdditiveQuantizer.h>

Abstract structure for additive quantizers

Unlike the product quantizer, where the decoded vector is the concatenation of M sub-vectors, additive quantizers sum M sub-vectors to obtain the decoded vector.

Subclassed by faiss::LocalSearchQuantizer, faiss::ProductAdditiveQuantizer, faiss::ResidualQuantizer
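
To make the contrast with product quantization concrete, here is a minimal sketch of the additive decoding rule. It is illustrative, not the FAISS implementation; decode_one and its argument layout are hypothetical, chosen to mirror the codebooks / codebook_offsets members documented below.

#include <cstddef>
#include <cstdint>
#include <vector>

// Reconstruct one vector as the sum of one entry per codebook.
// codebooks: total_codebook_size rows of dimension d; codebook m
// starts at row offsets[m].
std::vector<float> decode_one(
        const std::vector<float>& codebooks,
        const std::vector<size_t>& offsets,
        const std::vector<int32_t>& code, // one index per codebook, size M
        size_t d) {
    std::vector<float> x(d, 0.0f);
    for (size_t m = 0; m < code.size(); m++) {
        const float* entry = codebooks.data() + (offsets[m] + code[m]) * d;
        for (size_t j = 0; j < d; j++) {
            x[j] += entry[j]; // sum of full-dimension entries, not concatenation
        }
    }
    return x;
}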

Public Types

enum Search_type_t

Encodes how search is performed and how vectors are encoded.

Values:

enumerator ST_decompress

decompress database vector

enumerator ST_LUT_nonorm

use a LUT, don’t include norms (OK for IP or normalized vectors)

enumerator ST_norm_from_LUT

compute the norms from the look-up tables (cost is in O(M^2))

enumerator ST_norm_float

use a LUT, and store float32 norm with the vectors

enumerator ST_norm_qint8

use a LUT, and store 8bit-quantized norm

enumerator ST_norm_qint4

use a LUT, and store 4bit-quantized norm

enumerator ST_norm_cqint8

use a LUT, and store non-uniform quantized norm

enumerator ST_norm_cqint4

use a LUT, and store non-uniform 4bit-quantized norm
enumerator ST_norm_lsq2x4

use a 2x4 bits lsq as norm quantizer (for fast scan)

enumerator ST_norm_rq2x4

use a 2x4 bits rq as norm quantizer (for fast scan)
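
As a usage sketch: the search type is chosen at construction time of a concrete subclass. The example below uses the ResidualQuantizer constructor taking (d, M, nbits, search_type); the chosen sizes are arbitrary.

#include <faiss/impl/ResidualQuantizer.h>

// 4 codebooks of 2^8 entries each over 64-d vectors; store a float32
// norm with each code so that L2 search can use the LUT path.
faiss::ResidualQuantizer rq(
        64, 4, 8, faiss::AdditiveQuantizer::ST_norm_float);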

Public Functions

void compute_codebook_tables()

precompute centroid_norms and codebook_cross_products for LUT-based search (see the corresponding members below)

uint64_t encode_norm(float norm) const

encode a norm into norm_bits bits

uint32_t encode_qcint(float x) const

encode norm by non-uniform scalar quantization

float decode_qcint(uint32_t c) const

decode norm by non-uniform scalar quantization
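
As an illustration of what encode_qcint does conceptually: snap the scalar to the nearest entry of a small 1-D codebook (FAISS searches the qnorm index for this; the linear scan below is a hypothetical stand-in).

#include <cmath>
#include <cstddef>
#include <cstdint>

// Return the index of the center nearest to x among k centers.
uint32_t encode_nonuniform(float x, const float* centers, size_t k) {
    uint32_t best = 0;
    float best_dist = std::fabs(x - centers[0]);
    for (uint32_t i = 1; i < k; i++) {
        float dist = std::fabs(x - centers[i]);
        if (dist < best_dist) {
            best_dist = dist;
            best = i;
        }
    }
    return best; // decoding is simply centers[code]
}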

AdditiveQuantizer(size_t d, const std::vector<size_t> &nbits, Search_type_t search_type = ST_decompress)
AdditiveQuantizer()

void set_derived_values()

compute derived values when d, M and nbits have been set

void train_norm(size_t n, const float *norms)

Train the norm quantizer.

inline virtual void compute_codes(const float *x, uint8_t *codes, size_t n) const override

Quantize a set of vectors

Parameters:
  • x – input vectors, size n * d

  • codes – output codes, size n * code_size

virtual void compute_codes_add_centroids(const float *x, uint8_t *codes, size_t n, const float *centroids = nullptr) const = 0

Encode a set of vectors

Parameters:
  • x – vectors to encode, size n * d

  • codes – output codes, size n * code_size

  • centroids – centroids to be added to x, size n * d

void pack_codes(size_t n, const int32_t *codes, uint8_t *packed_codes, int64_t ld_codes = -1, const float *norms = nullptr, const float *centroids = nullptr) const

pack a series of codes into bit-compact format

Parameters:
  • codes – codes to be packed, size n * code_size

  • packed_codes – output bit-compact codes

  • ld_codes – leading dimension of codes

  • norms – norms of the vectors (size n). Will be computed if needed and not provided

  • centroids – centroids to be added to x, size n * d
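
A conceptual sketch of the bit-compact layout (illustrative only: pack_one is hypothetical, and the real packed format also carries the encoded norm, depending on search_type).

#include <cstddef>
#include <cstdint>
#include <vector>

// Append M variable-width indices to a little-endian bit stream.
void pack_one(
        const int32_t* code,              // M indices for one vector
        const std::vector<size_t>& nbits, // width of each index
        uint8_t* out) {                   // zero-initialized, code_size bytes
    size_t bit = 0;
    for (size_t m = 0; m < nbits.size(); m++) {
        for (size_t b = 0; b < nbits[m]; b++, bit++) {
            if ((code[m] >> b) & 1) {
                out[bit / 8] |= uint8_t(1) << (bit % 8);
            }
        }
    }
}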

virtual void decode(const uint8_t *codes, float *x, size_t n) const override

Decode a set of vectors

Parameters:
  • codes – codes to decode, size n * code_size

  • x – output vectors, size n * d
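
A round-trip usage sketch following the two signatures above; round_trip is a hypothetical helper, and aq must be a trained concrete subclass.

#include <cstddef>
#include <cstdint>
#include <vector>
#include <faiss/impl/AdditiveQuantizer.h>

// Encode n vectors, then reconstruct their approximations.
std::vector<float> round_trip(
        const faiss::AdditiveQuantizer& aq, const float* x, size_t n) {
    std::vector<uint8_t> codes(n * aq.code_size);
    aq.compute_codes(x, codes.data(), n);     // n * d floats -> packed codes

    std::vector<float> x_rec(n * aq.d);
    aq.decode(codes.data(), x_rec.data(), n); // approximate reconstruction
    return x_rec;
}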

virtual void decode_unpacked(const int32_t *codes, float *x, size_t n, int64_t ld_codes = -1) const

Decode a set of vectors in non-packed format

Parameters:
  • codes – codes to decode, size n * ld_codes

  • x – output vectors, size n * d

template<bool is_IP, Search_type_t effective_search_type>
float compute_1_distance_LUT(const uint8_t *codes, const float *LUT) const
void decode_64bit(idx_t n, float *x) const

decoding function for a code in a 64-bit word

virtual void compute_LUT(size_t n, const float *xq, float *LUT, float alpha = 1.0f, long ld_lut = -1) const

Compute inner-product look-up tables. Used in the centroid search functions.

Parameters:
  • xq – query vectors, size (n, d)

  • LUT – look-up table, size (n, total_codebook_size)

  • alpha – compute alpha * inner-product

  • ld_lut – leading dimension of LUT
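
The reason the LUT suffices: for an additive code, <q, decode(code)> decomposes into a sum of per-codebook table entries. A sketch with a hypothetical helper (LUT layout as documented above, one row per query):

#include <cstddef>
#include <cstdint>
#include <vector>

// <q, sum_m c_m> = sum_m <q, c_m>: one lookup per codebook.
float ip_from_LUT(
        const float* LUT,                   // total_codebook_size entries
        const std::vector<size_t>& offsets, // mirrors codebook_offsets
        const int32_t* code,                // M unpacked indices
        size_t M) {
    float ip = 0.0f;
    for (size_t m = 0; m < M; m++) {
        ip += LUT[offsets[m] + code[m]];
    }
    return ip;
}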

void knn_centroids_inner_product(idx_t n, const float *xq, idx_t k, float *distances, idx_t *labels) const

exact IP search

void compute_centroid_norms(float *norms) const

For L2 search we need the L2 norms of the centroids

Parameters:
  • norms – output norms table, size total_codebook_size

void knn_centroids_L2(idx_t n, const float *xq, idx_t k, float *distances, idx_t *labels, const float *centroid_norms) const

Exact L2 search, with precomputed norms
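
Usage sketch for the centroid-search pair, following the signatures above (search_centroids_l2 is a hypothetical wrapper; aq must be trained):

#include <vector>
#include <faiss/impl/AdditiveQuantizer.h>

void search_centroids_l2(
        const faiss::AdditiveQuantizer& aq,
        const float* xq, faiss::idx_t n, faiss::idx_t k,
        float* distances, faiss::idx_t* labels) {
    // Precompute the L2 norms of all codebook entries once...
    std::vector<float> cnorms(aq.total_codebook_size);
    aq.compute_centroid_norms(cnorms.data());
    // ...then run exact L2 search against them.
    aq.knn_centroids_L2(n, xq, k, distances, labels, cnorms.data());
}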

virtual ~AdditiveQuantizer()

Public Members

size_t M

number of codebooks

std::vector<size_t> nbits

bits for each step

std::vector<float> codebooks

codebooks

std::vector<uint64_t> codebook_offsets

codebook #i is stored in rows codebook_offsets[i]:codebook_offsets[i+1] of the codebooks table, which has size total_codebook_size by d

size_t tot_bits = 0

total number of bits (indexes + norms)
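
A worked example of the bit budget (illustrative; assumes ST_norm_float stores a 32-bit norm): with nbits = {8, 8, 8, 8}, the indexes take 4 * 8 = 32 bits and the norm adds 32 more, so tot_bits = 64 and each packed code fits in 8 bytes.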

size_t norm_bits = 0

bits allocated for the norms

size_t total_codebook_size = 0

size of the codebook in vectors

bool only_8bit = false

are all nbits = 8 (use faster decoder)

bool verbose = false

verbose during training?

bool is_trained = false

is trained or not

std::vector<float> norm_tabs

auxiliary data for ST_norm_lsq2x4 and ST_norm_rq2x4: stores the norms of codebook entries for 4-bit fastscan

IndexFlat1D qnorm

store and search norms

std::vector<float> centroid_norms

norms of all codebook entries (size total_codebook_size)

std::vector<float> codebook_cross_products

dot products of all codebook entries with the previous codebooks; size sum(codebook_offsets[m] * 2^nbits[m], m=0..M-1)
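
These cross products let the norm of a decoded vector be derived from its code alone, by expanding ||sum_m c_m||^2; this is the O(M^2) computation mentioned for ST_norm_from_LUT. The sketch below recomputes the dot products from the codebooks to show the identity (FAISS reads them from the precomputed tables instead); norm2_from_code is hypothetical.

#include <cstddef>
#include <cstdint>
#include <vector>

// ||sum_m c_m||^2 = sum_m ||c_m||^2 + 2 * sum_{m2<m} <c_m, c_m2>
float norm2_from_code(
        const std::vector<float>& codebooks, // total_codebook_size * d
        const std::vector<size_t>& offsets,  // mirrors codebook_offsets
        const int32_t* code, size_t M, size_t d) {
    float n2 = 0.0f;
    for (size_t m = 0; m < M; m++) {
        const float* cm = codebooks.data() + (offsets[m] + code[m]) * d;
        for (size_t j = 0; j < d; j++) n2 += cm[j] * cm[j];
        for (size_t m2 = 0; m2 < m; m2++) {
            const float* c2 = codebooks.data() + (offsets[m2] + code[m2]) * d;
            float ip = 0.0f;
            for (size_t j = 0; j < d; j++) ip += cm[j] * c2[j];
            n2 += 2.0f * ip;
        }
    }
    return n2;
}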

size_t max_mem_distances = 5 * (size_t(1) << 30)

norms and distance matrices used during beam search can get large; this controls the amount of memory that can be allocated

Search_type_t search_type

Also determines what’s in the codes.

float norm_min = NAN

min/max for quantization of norms

float norm_max = NAN