Struct faiss::LocalSearchQuantizer

struct LocalSearchQuantizer : public faiss::AdditiveQuantizer

Implementation of LSQ/LSQ++ described in the following two papers:

Revisiting additive quantization, Julieta Martinez, et al., ECCV 2016

LSQ++: Lower running time and higher recall in multi-codebook quantization, Julieta Martinez, et al., ECCV 2018

This implementation is mostly translated from the Julia implementations by Julieta Martinez: (https://github.com/una-dinosauria/local-search-quantization, https://github.com/una-dinosauria/Rayuela.jl)

The trained codewords are stored in codebooks, which play the same role as the centroids in PQ and RQ.
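
For orientation, a minimal end-to-end sketch (the dimensions and random data are purely illustrative):

    #include <cstdint>
    #include <random>
    #include <vector>

    #include <faiss/impl/LocalSearchQuantizer.h>

    int main() {
        size_t d = 32, M = 4, nbits = 8; // 4 codebooks of 2^8 codewords each
        size_t n = 10000;                // number of training vectors

        // random training data, for illustration only
        std::vector<float> x(n * d);
        std::mt19937 rng(123);
        std::normal_distribution<float> distrib;
        for (float& v : x) {
            v = distrib(rng);
        }

        faiss::LocalSearchQuantizer lsq(d, M, nbits);
        lsq.train(n, x.data());

        // each vector is encoded into code_size bytes
        std::vector<uint8_t> codes(n * lsq.code_size);
        lsq.compute_codes(x.data(), codes.data(), n);

        // approximate reconstruction of the inputs
        std::vector<float> recons(n * d);
        lsq.decode(codes.data(), recons.data(), n);
        return 0;
    }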

Public Types

enum Search_type_t

Encodes how search is performed and how vectors are encoded.

Values:

enumerator ST_decompress

decompress database vector

enumerator ST_LUT_nonorm

use a LUT, don’t include norms (OK for IP or normalized vectors)

enumerator ST_norm_from_LUT

compute the norms from the look-up tables (cost is in O(M^2))

enumerator ST_norm_float

use a LUT, and store float32 norm with the vectors

enumerator ST_norm_qint8

use a LUT, and store 8bit-quantized norm

enumerator ST_norm_qint4

use a LUT, and store 4bit-quantized norm

enumerator ST_norm_cqint8

use a LUT, and store non-uniform quantized norm

enumerator ST_norm_cqint4

use a LUT, and store non-uniform 4bit-quantized norm

enumerator ST_norm_lsq2x4

use a 2x4 bits lsq as norm quantizer (for fast scan)

enumerator ST_norm_rq2x4

use a 2x4 bits rq as norm quantizer (for fast scan)
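
For illustration, the search type is fixed at construction time; a sketch assuming the faiss::IndexLocalSearchQuantizer wrapper declared in faiss/IndexAdditiveQuantizer.h:

    #include <faiss/IndexAdditiveQuantizer.h>

    // L2 search with a float32 norm stored alongside each code (ST_norm_float);
    // ST_decompress would instead reconstruct the vectors at search time
    faiss::IndexLocalSearchQuantizer index(
            /*d=*/64, /*M=*/4, /*nbits=*/8,
            faiss::METRIC_L2,
            faiss::AdditiveQuantizer::ST_norm_float);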

Public Functions

LocalSearchQuantizer(size_t d, size_t M, size_t nbits, Search_type_t search_type = ST_decompress)

Construct an LSQ quantizer.

Parameters:
  • d – dimensionality of the input vectors

  • M – number of codebooks

  • nbits – number of bits per codebook index

  • search_type – determines how search is performed and what is stored in the codes

LocalSearchQuantizer()
~LocalSearchQuantizer() override
virtual void train(size_t n, const float *x) override

Train the quantizer

Parameters:
  • n – number of training vectors

  • x – training vectors, size n * d

virtual void compute_codes_add_centroids(const float *x, uint8_t *codes, size_t n, const float *centroids = nullptr) const override

Encode a set of vectors

Parameters:
  • x – vectors to encode, size n * d

  • codes – output codes, size n * code_size

  • n – number of vectors

  • centroids – centroids to be added to x, size n * d

void update_codebooks(const float *x, const int32_t *codes, size_t n)

Update codebooks given encodings

Parameters:
  • x – training vectors, size n * d

  • codes – encoded training vectors, size n * M

  • n – number of vectors
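
With the codes held fixed, the codebook update reduces to a ridge-regularized least-squares problem. A sketch, assuming B denotes the n-by-MK one-hot indicator matrix built from the codes, X the n-by-d training matrix, and lambda the lambd member:

    C^\ast = \arg\min_{C}\;\|X - BC\|_F^2 + \lambda\,\|C\|_F^2
    \quad\Longrightarrow\quad
    C^\ast = (B^\top B + \lambda I)^{-1} B^\top X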

void icm_encode(int32_t *codes, const float *x, size_t n, size_t ils_iters, std::mt19937 &gen) const

Encode vectors given codebooks using iterated conditional modes (ICM).

Parameters:
  • codes – output codes, size n * M

  • x – vectors to encode, size n * d

  • n – number of vectors

  • ils_iters – number of iterations of iterative local search

void icm_encode_impl(int32_t *codes, const float *x, const float *unaries, std::mt19937 &gen, size_t n, size_t ils_iters, bool verbose) const
void icm_encode_step(int32_t *codes, const float *unaries, const float *binaries, size_t n, size_t n_iters) const

Perform n_iters rounds of ICM updates on the codes, given precomputed unary and binary terms.

void perturb_codes(int32_t *codes, size_t n, std::mt19937 &gen) const

Add some perturbation to codes

Parameters:
  • codes – codes to be perturbed, size n * M

  • n – number of vectors

void perturb_codebooks(float T, const std::vector<float> &stddev, std::mt19937 &gen)

Add some perturbation to codebooks

Parameters:
  • T – temperature of simulated annealing

  • stddev – standard deviations of each dimension in the training data

void compute_binary_terms(float *binaries) const

Compute binary terms

Parameters:

binaries – binary terms, size M * M * K * K

void compute_unary_terms(const float *x, float *unaries, size_t n) const

Compute unary terms

Parameters:
  • n – number of vectors

  • x – vectors to encode, size n * d

  • unaries – unary terms, size n * M * K
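
The unary and binary terms come from expanding the squared reconstruction error that ICM minimizes one code at a time. A sketch, writing c_m for the codeword selected in codebook m:

    \Big\|x - \sum_{m=1}^{M} c_m\Big\|^2
      = \|x\|^2
      + \sum_{m}\big(\|c_m\|^2 - 2\langle x, c_m\rangle\big)
      + \sum_{m \neq m'} \langle c_m, c_{m'} \rangle

Tabulated over all codewords, the first sum gives the unary terms (size n * M * K, one per vector); the pairwise inner products give the binary terms (size M * M * K * K), which do not depend on x and are computed once.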

float evaluate(const int32_t *codes, const float *x, size_t n, float *objs = nullptr) const

Helper function to compute reconstruction error

Parameters:
  • codes – encoded codes, size n * M

  • x – vectors to encode, size n * d

  • n – number of vectors

  • objs – if non-null, the reconstruction error of each vector is stored here, size n

void compute_codebook_tables()

Precompute centroid_norms and codebook_cross_products from the current codebooks.

uint64_t encode_norm(float norm) const

encode a norm into norm_bits bits

uint32_t encode_qcint(float x) const

encode norm by non-uniform scalar quantization

float decode_qcint(uint32_t c) const

decode norm by non-uniform scalar quantization

void set_derived_values()

Compute derived values when d, M and nbits have been set.

void train_norm(size_t n, const float *norms)

Train the norm quantizer.

inline virtual void compute_codes(const float *x, uint8_t *codes, size_t n) const override

Quantize a set of vectors

Parameters:
  • x – input vectors, size n * d

  • codes – output codes, size n * code_size

  • n – number of vectors

void pack_codes(size_t n, const int32_t *codes, uint8_t *packed_codes, int64_t ld_codes = -1, const float *norms = nullptr, const float *centroids = nullptr) const

Pack a series of codes into a bit-compact format

Parameters:
  • n – number of vectors

  • codes – codes to be packed, size n * code_size

  • packed_codes – output bit-compact codes

  • ld_codes – leading dimension of codes

  • norms – norms of the vectors (size n). Will be computed if needed but not provided

  • centroids – centroids to be added to x, size n * d

virtual void decode(const uint8_t *codes, float *x, size_t n) const override

Decode a set of vectors

Parameters:
  • codes – codes to decode, size n * code_size

  • x – output vectors, size n * d

virtual void decode_unpacked(const int32_t *codes, float *x, size_t n, int64_t ld_codes = -1) const

Decode a set of vectors in non-packed format

Parameters:
  • codes – codes to decode, size n * ld_codes

  • x – output vectors, size n * d

template<bool is_IP, Search_type_t effective_search_type>
float compute_1_distance_LUT(const uint8_t *codes, const float *LUT) const
void decode_64bit(idx_t n, float *x) const

decoding function for a code in a 64-bit word

virtual void compute_LUT(size_t n, const float *xq, float *LUT, float alpha = 1.0f, long ld_lut = -1) const

Compute inner-product look-up tables. Used in the centroid search functions.

Parameters:
  • xq – query vectors, size (n, d)

  • LUT – look-up table, size (n, total_codebook_size)

  • alpha – compute alpha * inner-product

  • ld_lut – leading dimension of LUT

void knn_centroids_inner_product(idx_t n, const float *xq, idx_t k, float *distances, idx_t *labels) const

exact IP search

void compute_centroid_norms(float *norms) const

For L2 search we need the L2 norms of the centroids

Parameters:

norms – output norms table, size total_codebook_size

void knn_centroids_L2(idx_t n, const float *xq, idx_t k, float *distances, idx_t *labels, const float *centroid_norms) const

Exact L2 search, with precomputed norms

Public Members

size_t K

number of codewords per codebook (K = 2^nbits)

size_t train_iters = 25

number of iterations in training

size_t encode_ils_iters = 16

iterations of local search in encoding

size_t train_ils_iters = 8

iterations of local search in training

size_t icm_iters = 4

number of iterations in icm

float p = 0.5f

temperature factor

float lambd = 1e-2f

regularization factor

size_t chunk_size = 10000

number of vectors to encode at a time

int random_seed = 0x12345

seed for random generator

size_t nperts = 4

number of perturbations in each code
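
These training/encoding knobs can be set on the quantizer before train is called; the values below are illustrative only, not tuned recommendations:

    faiss::LocalSearchQuantizer lsq(64, 8, 8);

    lsq.train_iters = 50;      // more codebook-update / re-encoding rounds
    lsq.train_ils_iters = 16;  // more local-search iterations during training
    lsq.encode_ils_iters = 32; // more local-search iterations when encoding
    lsq.icm_iters = 8;         // more ICM sweeps per local-search iteration
    lsq.nperts = 2;            // perturb fewer codes between ICM restarts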

lsq::IcmEncoderFactory *icm_encoder_factory = nullptr

if non-NULL, use this encoder factory to encode (owned by the object)

bool update_codebooks_with_double = true

use double precision during the codebook-update least squares

size_t M

number of codebooks

std::vector<size_t> nbits

bits for each step

std::vector<float> codebooks

codebooks

std::vector<uint64_t> codebook_offsets

codebook #i is stored in rows codebook_offsets[i]:codebook_offsets[i+1] of the codebooks table, which has size total_codebook_size by d

size_t tot_bits = 0

total number of bits (indexes + norms)

size_t norm_bits = 0

bits allocated for the norms

size_t total_codebook_size = 0

size of the codebook in vectors

bool only_8bit = false

are all nbits = 8 (use faster decoder)

bool verbose = false

verbose during training?

bool is_trained = false

is trained or not

std::vector<float> norm_tabs

auxiliary data for ST_norm_lsq2x4 and ST_norm_rq2x4: stores the norms of codebook entries for 4-bit fastscan

IndexFlat1D qnorm

store and search norms

std::vector<float> centroid_norms

norms of all codebook entries (size total_codebook_size)

std::vector<float> codebook_cross_products

dot products of all codebook entries with the previous codebooks, size sum(codebook_offsets[m] * 2^nbits[m], m=0..M-1)

size_t max_mem_distances = 5 * (size_t(1) << 30)

norms and distance matrices with beam search can get large, so use this to control the amount of memory that can be allocated

Search_type_t search_type

Also determines what’s in the codes.

float norm_min = NAN

min/max for quantization of norms

float norm_max = NAN
size_t d

size of the input vectors

size_t code_size

bytes per indexed vector
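
As a worked example of how these fields combine: with M = 8 codebooks of nbits = 8 and search_type = ST_norm_float, tot_bits = 8 * 8 + 32 = 96, so code_size = 96 / 8 = 12 bytes per vector; with ST_decompress no norm is stored, giving tot_bits = 64 and code_size = 8.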