File IndexIVFPQ.h

namespace faiss

Copyright (c) Facebook, Inc. and its affiliates.

This source code is licensed under the MIT license found in the LICENSE file in the root directory of this source tree.

Implementation of k-means clustering with many variants.

IDSelector is intended to define a subset of vectors to handle (for removal or as a subset to search).

PQ4 SIMD packing and accumulation functions.

The basic kernel accumulates nq query vectors with bbs = nb * 2 * 16 vectors and produces an output matrix for that. It is interesting for nq * nb <= 4, otherwise register spilling becomes too large.

The implementation of these functions is spread over 3 cpp files to reduce parallel compile times. Templates are instantiated explicitly.

This file contains callbacks for kernels that compute distances.

Throughout the library, vectors are provided as float * pointers. Most algorithms can be optimized when several vectors are processed (added/searched) together in a batch. In this case, they are passed in as a matrix. When n vectors of size d are provided as float * x, component j of vector i is

x[ i * d + j ]

where 0 <= i < n and 0 <= j < d. In other words, matrices are always compact. When specifying the size of the matrix, we call it an n*d matrix, which implies row-major storage.
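
For illustration, a short sketch of this layout and of passing such a batch to an index (sizes and the flat index used here are placeholders):

#include <vector>
#include <faiss/IndexFlat.h>

int main() {
    size_t n = 1000, d = 64;
    std::vector<float> x(n * d);   // n vectors of dimension d, row-major and compact
    x[5 * d + 3] = 1.0f;           // component j=3 of vector i=5 lives at x[i * d + j]
    faiss::IndexFlatL2 index(d);
    index.add(n, x.data());        // the whole batch is passed as one float*
}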

I/O functions can read/write to a filename, a file handle, or to an object that abstracts the medium.

The read functions return objects that should be deallocated with delete. All references within these objects are owned by the object.
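
A hedged sketch of round-tripping an index through the filename-based functions (the path is illustrative):

#include <faiss/Index.h>
#include <faiss/index_io.h>

void save_and_reload(const faiss::Index& trained_index) {
    faiss::write_index(&trained_index, "/tmp/example.index");      // illustrative path
    faiss::Index* loaded = faiss::read_index("/tmp/example.index");
    // ... use loaded ...
    delete loaded;  // the caller owns the returned object and everything it references
}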

Definition of inverted lists, plus a few common classes that implement the interface.

Since IVF (inverted file) indexes are so useful for large-scale use cases, we group a few functions related to them in this small library. Most functions work both on IndexIVFs and on IndexIVFs embedded within an IndexPreTransform.

In this file are the implementations of extra metrics beyond L2 and inner product.

Defines a few objects that apply transformations to a set of vectors. Often these are pre-processing steps.

Functions

void initialize_IVFPQ_precomputed_table(int &use_precomputed_table, const Index *quantizer, const ProductQuantizer &pq, AlignedTable<float> &precomputed_table, bool by_residual, bool verbose)

Pre-compute distance tables for IVFPQ with by-residual and METRIC_L2

Parameters:
  • use_precomputed_table – (I/O)
    =-1: force disable
    =0: decide heuristically (default: use tables only if they are < precomputed_table_max_bytes), and set use_precomputed_table on output
    =1: tables that work for all quantizers (size 256 * nlist * M)
    =2: specific version for MultiIndexQuantizer (much more compact)

  • precomputed_table – precomputed table to initialize
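
In user code this is normally reached through IndexIVFPQ::precompute_table(); a hedged sketch of requesting the generic tables explicitly:

#include <faiss/IndexIVFPQ.h>

void build_tables(faiss::IndexIVFPQ& index) {
    index.use_precomputed_table = 1;  // request the generic tables that work for all quantizers
    index.precompute_table();         // fills index.precomputed_table accordingly
}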

Variables

FAISS_API size_t precomputed_table_max_bytes
FAISS_API int index_ivfpq_add_core_o_bs
FAISS_API IndexIVFPQStats indexIVFPQ_stats
struct IVFPQSearchParameters : public faiss::SearchParametersIVF

Public Functions

inline IVFPQSearchParameters()
inline ~IVFPQSearchParameters()

Public Members

size_t scan_table_threshold

use table computation or on-the-fly?

int polysemous_ht

Hamming threshold for polysemous filtering.
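
A sketch of passing these parameters to a single query, assuming the SearchParameters-based search overload (values are illustrative):

#include <faiss/IndexIVFPQ.h>

void search_with_params(const faiss::IndexIVFPQ& index,
                        size_t nq, const float* xq, size_t k,
                        float* distances, faiss::idx_t* labels) {
    faiss::IVFPQSearchParameters params;
    params.nprobe = 16;               // inherited from SearchParametersIVF
    params.scan_table_threshold = 0;  // illustrative value
    params.polysemous_ht = 0;         // 0 leaves polysemous filtering off
    index.search(nq, xq, k, distances, labels, &params);
}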

struct IndexIVFPQ : public faiss::IndexIVF
#include <IndexIVFPQ.h>

Inverted file with Product Quantizer encoding. Each residual vector is encoded as a product quantizer code.

Subclassed by faiss::IndexIVFPQR
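
A minimal usage sketch (dimensions, list count, and data are placeholders):

#include <vector>
#include <faiss/IndexFlat.h>
#include <faiss/IndexIVFPQ.h>

int main() {
    size_t d = 64, nlist = 256, M = 8, nbits = 8, nb = 100000;
    faiss::IndexFlatL2 quantizer(d);                      // coarse quantizer
    faiss::IndexIVFPQ index(&quantizer, d, nlist, M, nbits);
    std::vector<float> xb(nb * d);                        // fill with real data before train/add
    index.train(nb, xb.data());                           // trains coarse quantizer and PQ
    index.add(nb, xb.data());
    index.nprobe = 16;                                    // inverted lists visited per query
    std::vector<float> D(5);
    std::vector<faiss::idx_t> I(5);
    index.search(1, xb.data(), 5, D.data(), I.data());    // 5-NN of the first database vector
}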

Public Functions

IndexIVFPQ(Index *quantizer, size_t d, size_t nlist, size_t M, size_t nbits_per_idx, MetricType metric = METRIC_L2)
virtual void encode_vectors(idx_t n, const float *x, const idx_t *list_nos, uint8_t *codes, bool include_listnos = false) const override

Encodes a set of vectors as they would appear in the inverted lists

Parameters:
  • list_nos – inverted list ids as returned by the quantizer (size n). -1s are ignored.

  • codes – output codes, size n * code_size

  • include_listnos – include the list ids in the code (in this case add ceil(log8(nlist)) to the code size)
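
A hedged sketch of calling encode_vectors with assignments obtained from the coarse quantizer (buffer sizes follow the parameters above):

#include <vector>
#include <faiss/IndexIVFPQ.h>

void encode_batch(const faiss::IndexIVFPQ& index, size_t n, const float* x) {
    std::vector<faiss::idx_t> list_nos(n);
    index.quantizer->assign(n, x, list_nos.data());   // coarse assignment (size n)
    std::vector<uint8_t> codes(n * index.code_size);  // include_listnos = false: code_size bytes each
    index.encode_vectors(n, x, list_nos.data(), codes.data());
}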

virtual void sa_decode(idx_t n, const uint8_t *bytes, float *x) const override

decode a set of vectors

Parameters:
  • n – number of vectors

  • bytes – input encoded vectors, size n * sa_code_size()

  • x – output vectors, size n * d

virtual void add_core(idx_t n, const float *x, const idx_t *xids, const idx_t *precomputed_idx, void *inverted_list_context = nullptr) override

Implementation of vector addition where the vector assignments are predefined. The default implementation hands over the code extraction to encode_vectors.

Parameters:

precomputed_idx – quantization indices for the input vectors (size n)
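
A hedged sketch of supplying the assignments yourself before calling add_core (sequential ids are generated when xids is null):

#include <vector>
#include <faiss/IndexIVFPQ.h>

void add_with_assignments(faiss::IndexIVFPQ& index, size_t n, const float* x) {
    std::vector<faiss::idx_t> assignments(n);
    index.quantizer->assign(n, x, assignments.data());           // or reuse assignments computed elsewhere
    index.add_core(n, x, /*xids=*/nullptr, assignments.data());
}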

void add_core_o(idx_t n, const float *x, const idx_t *xids, float *residuals_2, const idx_t *precomputed_idx = nullptr, void *inverted_list_context = nullptr)

same as add_core, also:

  • output 2nd level residuals if residuals_2 != NULL

  • accepts precomputed_idx = nullptr

virtual void train_encoder(idx_t n, const float *x, const idx_t *assign) override

trains the product quantizer

virtual idx_t train_encoder_num_vectors() const override

can be redefined by subclasses to indicate how many training vectors they need

virtual void reconstruct_from_offset(int64_t list_no, int64_t offset, float *recons) const override

Reconstruct a vector given the location in terms of (inv list index + inv list offset) instead of the id.

Useful for reconstructing when the direct_map is not maintained and the inv list offset is computed by search_preassigned() with store_pairs set.

size_t find_duplicates(idx_t *ids, size_t *lims) const

Find exact duplicates in the dataset.

The duplicates are returned in pre-allocated arrays (see the maximum sizes below).

Parameters:
  • lims – limits between groups of duplicates (max size ntotal / 2 + 1)

  • ids – ids[lims[i]] : ids[lims[i+1]-1] is a group of duplicates (max size ntotal)

Returns:

n, the number of groups found
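
A hedged sketch of calling find_duplicates with the maximum buffer sizes stated above and walking the groups:

#include <cstdio>
#include <vector>
#include <faiss/IndexIVFPQ.h>

void report_duplicates(const faiss::IndexIVFPQ& index) {
    std::vector<faiss::idx_t> ids(index.ntotal);
    std::vector<size_t> lims(index.ntotal / 2 + 1);
    size_t ngroups = index.find_duplicates(ids.data(), lims.data());
    for (size_t g = 0; g < ngroups; g++) {
        printf("group %zu:", g);
        for (size_t j = lims[g]; j < lims[g + 1]; j++) {
            printf(" %lld", (long long)ids[j]);
        }
        printf("\n");
    }
}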

void encode(idx_t key, const float *x, uint8_t *code) const
void encode_multiple(size_t n, idx_t *keys, const float *x, uint8_t *codes, bool compute_keys = false) const

Encode multiple vectors

Parameters:
  • n – nb vectors to encode

  • keys – posting list ids for those vectors (size n)

  • x – vectors (size n * d)

  • codes – output codes (size n * code_size)

  • compute_keys – if false, assume keys are precomputed, otherwise compute them

void decode_multiple(size_t n, const idx_t *keys, const uint8_t *xcodes, float *x) const

inverse of encode_multiple
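
A hedged encode/decode round-trip sketch (keys are computed inside the call because compute_keys is true):

#include <vector>
#include <faiss/IndexIVFPQ.h>

void roundtrip(const faiss::IndexIVFPQ& index, size_t n, const float* x) {
    std::vector<faiss::idx_t> keys(n);
    std::vector<uint8_t> codes(n * index.code_size);
    index.encode_multiple(n, keys.data(), x, codes.data(), /*compute_keys=*/true);
    std::vector<float> decoded(n * index.d);   // lossy reconstruction of x
    index.decode_multiple(n, keys.data(), codes.data(), decoded.data());
}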

virtual InvertedListScanner *get_InvertedListScanner(bool store_pairs, const IDSelector *sel) const override

Get a scanner for this index (store_pairs means ignore labels)

The default search implementation uses this to compute the distances

void precompute_table()

build precomputed table

IndexIVFPQ()

Public Members

ProductQuantizer pq

produces the codes

bool do_polysemous_training

reorder PQ centroids after training?

PolysemousTraining *polysemous_training

if NULL, use default

size_t scan_table_threshold

use table computation or on-the-fly?

int polysemous_ht

Hamming threshold for polysemous filtering.

int use_precomputed_table

Precomputed table that speeds up query preprocessing at some memory cost (used only for by_residual with the L2 metric).

AlignedTable<float> precomputed_table

if use_precomputed_table is set, size is nlist * pq.M * pq.ksub

struct IndexIVFPQStats
#include <IndexIVFPQ.h>

statistics are robust to internal threading, but not if IndexIVFPQ::search_preassigned is called by multiple threads

Public Functions

inline IndexIVFPQStats()
void reset()

Public Members

size_t nrefine

nb of refines (IVFPQR)

size_t n_hamming_pass

nb of passed Hamming distance tests (for polysemous)

size_t search_cycles
size_t refine_cycles

only for IVFPQR