File GpuIndexIVFPQ.h

namespace faiss

Copyright (c) Facebook, Inc. and its affiliates.

This source code is licensed under the MIT license found in the LICENSE file in the root directory of this source tree.


namespace gpu
struct GpuIndexIVFPQConfig : public faiss::gpu::GpuIndexIVFConfig

Public Members

bool useFloat16LookupTables = false

Whether or not float16 residual distance tables are used in the list scanning kernels. When subQuantizers * 2^bitsPerCode > 16384, this is required.
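
As a rough guide, the float32 lookup tables stop fitting once subQuantizers * 2^bitsPerCode exceeds 16384 entries (for example, 96 sub-quantizers at 8 bits per code give 96 * 256 = 24576 entries). A minimal sketch of that check, using a hypothetical makeConfig helper:

#include <faiss/gpu/GpuIndexIVFPQ.h>

// Hypothetical helper: pick float16 lookup tables when the float32
// tables would exceed the 16384-entry limit mentioned above.
faiss::gpu::GpuIndexIVFPQConfig makeConfig(int subQuantizers, int bitsPerCode) {
    faiss::gpu::GpuIndexIVFPQConfig config;
    size_t tableEntries = size_t(subQuantizers) << bitsPerCode;
    config.useFloat16LookupTables = (tableEntries > 16384);
    return config;
}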

bool usePrecomputedTables = false

Whether or not we enable the precomputed table option for search, which can substantially increase the memory requirement.

bool interleavedLayout = false

Use the alternative memory layout for the IVF lists. WARNING: this is a feature under development, and is only supported with RAFT enabled for the index. Do not use if RAFT is not enabled.

bool useMMCodeDistance = false

Use GEMM-backed computation of PQ code distances for the no-precomputed-table version of IVFPQ. This is for debugging purposes; it should not substantially affect the results one way or another.

Note that MM code distance is enabled automatically if one uses a number of dimensions per sub-quantizer that is not natively specialized (an odd number like 7 or so).
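
A minimal configuration sketch (the dimensions, list count and PQ parameters below are illustrative; the device field comes from the GpuIndexConfig base):

#include <faiss/gpu/GpuIndexIVFPQ.h>
#include <faiss/gpu/StandardGpuResources.h>

int main() {
    faiss::gpu::StandardGpuResources res;

    faiss::gpu::GpuIndexIVFPQConfig config;
    config.device = 0;                    // from the GpuIndexConfig base
    config.useFloat16LookupTables = true; // required for large M * 2^nbits
    config.usePrecomputedTables = true;   // trades GPU memory for speed
    // interleavedLayout and useMMCodeDistance are left at their defaults:
    // the former requires RAFT, the latter is a debugging option.

    // 128-d vectors, 1024 IVF lists, 16 sub-quantizers, 8 bits per code.
    faiss::gpu::GpuIndexIVFPQ index(
            &res, 128, 1024, 16, 8, faiss::METRIC_L2, config);
    return 0;
}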

class GpuIndexIVFPQ : public faiss::gpu::GpuIndexIVF
#include <GpuIndexIVFPQ.h>

IVFPQ index for the GPU.

Public Functions

GpuIndexIVFPQ(GpuResourcesProvider *provider, const faiss::IndexIVFPQ *index, GpuIndexIVFPQConfig config = GpuIndexIVFPQConfig())

Construct from a pre-existing faiss::IndexIVFPQ instance, copying data over to the given GPU, if the input index is trained.
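
A hedged sketch of this constructor, assuming a trained CPU faiss::IndexIVFPQ (the sizes are placeholders and the training vectors are assumed to be filled elsewhere):

#include <faiss/IndexFlat.h>
#include <faiss/IndexIVFPQ.h>
#include <faiss/gpu/GpuIndexIVFPQ.h>
#include <faiss/gpu/StandardGpuResources.h>
#include <vector>

int main() {
    int d = 64, nlist = 256, m = 8, nbits = 8;
    faiss::IndexFlatL2 quantizer(d);
    faiss::IndexIVFPQ cpuIndex(&quantizer, d, nlist, m, nbits);

    std::vector<float> xt(10000 * d); // training vectors, filled elsewhere
    cpuIndex.train(10000, xt.data());

    faiss::gpu::StandardGpuResources res;
    // Copies the coarse centroids, PQ codebooks and any stored codes to
    // the GPU selected in the (default) config.
    faiss::gpu::GpuIndexIVFPQ gpuIndex(&res, &cpuIndex);
    return 0;
}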

GpuIndexIVFPQ(GpuResourcesProvider *provider, int dims, idx_t nlist, idx_t subQuantizers, idx_t bitsPerCode, faiss::MetricType metric = faiss::METRIC_L2, GpuIndexIVFPQConfig config = GpuIndexIVFPQConfig())

Constructs a new instance with an empty flat quantizer; the user provides the number of IVF lists desired.
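
A sketch of the usual train / add / search flow with this constructor (all sizes are illustrative; the vectors are assumed to be filled with real data):

#include <faiss/gpu/GpuIndexIVFPQ.h>
#include <faiss/gpu/StandardGpuResources.h>
#include <vector>

int main() {
    int d = 128;
    faiss::gpu::StandardGpuResources res;

    // 1024 IVF lists, 16 sub-quantizers, 8 bits per code.
    faiss::gpu::GpuIndexIVFPQ index(
            &res, d, 1024, 16, 8, faiss::METRIC_L2);

    std::vector<float> xt(50000 * d);  // training vectors
    std::vector<float> xb(100000 * d); // database vectors
    std::vector<float> xq(10 * d);     // query vectors

    index.train(50000, xt.data());     // trains coarse + product quantizers
    index.add(100000, xb.data());

    int k = 5;
    std::vector<float> distances(10 * k);
    std::vector<faiss::idx_t> labels(10 * k);
    index.search(10, xq.data(), k, distances.data(), labels.data());
    return 0;
}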

GpuIndexIVFPQ(GpuResourcesProvider *provider, Index *coarseQuantizer, int dims, idx_t nlist, idx_t subQuantizers, idx_t bitsPerCode, faiss::MetricType metric = faiss::METRIC_L2, GpuIndexIVFPQConfig config = GpuIndexIVFPQConfig())

Constructs a new instance with a provided CPU or GPU coarse quantizer; the user provides the number of IVF lists desired.
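
A sketch using a GPU flat index as the coarse quantizer (a CPU faiss::IndexFlatL2 would also be accepted); the parameters are illustrative:

#include <faiss/gpu/GpuIndexFlat.h>
#include <faiss/gpu/GpuIndexIVFPQ.h>
#include <faiss/gpu/StandardGpuResources.h>

int main() {
    int d = 96;
    faiss::gpu::StandardGpuResources res;

    // GPU flat index used as the coarse quantizer.
    faiss::gpu::GpuIndexFlatL2 coarse(&res, d);

    faiss::gpu::GpuIndexIVFPQ index(
            &res, &coarse, d, /*nlist=*/512, /*subQuantizers=*/12,
            /*bitsPerCode=*/8, faiss::METRIC_L2);

    // Keep `coarse` alive for as long as `index` uses it.
    return 0;
}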

~GpuIndexIVFPQ() override
void copyFrom(const faiss::IndexIVFPQ *index)

Initialize ourselves from the given CPU index; will overwrite all data in ourselves

void copyTo(faiss::IndexIVFPQ *index) const

Copy ourselves to the given CPU index; will overwrite all data in the index instance
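
A sketch of copyTo used to move a GPU index back to the CPU, e.g. for serialization with faiss::write_index; the helper name and the initial parameters of the CPU index are placeholders, since copyTo overwrites them anyway:

#include <faiss/IndexFlat.h>
#include <faiss/IndexIVFPQ.h>
#include <faiss/gpu/GpuIndexIVFPQ.h>
#include <faiss/index_io.h>

// Hypothetical helper: persist a GPU IVFPQ index via a CPU copy.
void saveGpuIndex(
        const faiss::gpu::GpuIndexIVFPQ& gpuIndex,
        int d, int nlist, int m, int nbits,
        const char* path) {
    faiss::IndexFlatL2 quantizer(d);
    faiss::IndexIVFPQ cpuIndex(&quantizer, d, nlist, m, nbits);

    // Overwrites the coarse quantizer, PQ codebooks and inverted lists of
    // cpuIndex with the GPU index contents.
    gpuIndex.copyTo(&cpuIndex);
    faiss::write_index(&cpuIndex, path);
}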

void reserveMemory(size_t numVecs)

Reserve GPU memory in our inverted lists for this number of vectors.

void setPrecomputedCodes(bool enable)

Enable or disable pre-computed codes.

bool getPrecomputedCodes() const

Are pre-computed codes enabled?
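
A sketch of toggling the precomputed-table path at runtime; the memory figure in the comment follows the usual IVFPQ precomputed-table size and is an estimate, not a value taken from this header:

#include <faiss/gpu/GpuIndexIVFPQ.h>

// Hypothetical helper: turn on precomputed codes if not already enabled.
void enablePrecomputedCodes(faiss::gpu::GpuIndexIVFPQ& index) {
    if (!index.getPrecomputedCodes()) {
        // Costs roughly nlist * subQuantizers * 2^bitsPerCode extra table
        // entries of GPU memory, in exchange for faster L2 list scanning.
        index.setPrecomputedCodes(true);
    }
}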

int getNumSubQuantizers() const

Return the number of sub-quantizers we are using.

int getBitsPerCode() const

Return the number of bits per PQ code.

int getCentroidsPerSubQuantizer() const

Return the number of centroids per PQ sub-quantizer (2^bits per code)
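
The three getters are related by centroidsPerSubQuantizer = 2^bitsPerCode; a small sanity-check sketch:

#include <faiss/gpu/GpuIndexIVFPQ.h>
#include <cassert>

void checkPQShape(const faiss::gpu::GpuIndexIVFPQ& index) {
    int m = index.getNumSubQuantizers();  // sub-quantizers per vector
    int nbits = index.getBitsPerCode();   // bits per sub-quantizer code
    int ksub = index.getCentroidsPerSubQuantizer();

    assert(ksub == (1 << nbits)); // 2^bits centroids per sub-quantizer
    assert(index.d % m == 0);     // each sub-quantizer sees d / m dimensions
}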

size_t reclaimMemory()

After adding vectors, one can call this to reclaim device memory to exactly the amount needed. Returns space reclaimed in bytes
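
reserveMemory and reclaimMemory typically bracket a large add; a sketch with illustrative sizes (the data vectors are assumed to be filled elsewhere):

#include <faiss/gpu/GpuIndexIVFPQ.h>
#include <faiss/gpu/StandardGpuResources.h>
#include <vector>

int main() {
    int d = 64;
    faiss::gpu::StandardGpuResources res;
    faiss::gpu::GpuIndexIVFPQ index(&res, d, 256, 8, 8);

    std::vector<float> xt(20000 * d), xb(1000000 * d); // filled elsewhere
    index.train(20000, xt.data());

    // Pre-allocate inverted-list storage for the vectors about to be added,
    // avoiding repeated reallocation during add().
    index.reserveMemory(1000000);
    index.add(1000000, xb.data());

    // Trim any over-allocation back to exactly what the lists need.
    size_t bytesFreed = index.reclaimMemory();
    (void)bytesFreed;
    return 0;
}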

virtual void reset() override

Clears out all inverted lists, but retains the coarse and product centroid information

virtual void updateQuantizer() override

Should be called if the user ever changes the state of the IVF coarse quantizer manually (e.g., substitutes a new instance or changes vectors in the coarse quantizer outside the scope of training)
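
A hedged sketch of when updateQuantizer is needed: the coarse quantizer was provided by the caller and its centroids are replaced outside of train(), so the GPU-side copy has to be resynchronized. Whether replacing centroids under existing inverted lists makes sense is up to the application.

#include <faiss/IndexFlat.h>
#include <faiss/gpu/GpuIndexIVFPQ.h>
#include <faiss/gpu/StandardGpuResources.h>
#include <vector>

int main() {
    int d = 32, nlist = 128;
    faiss::gpu::StandardGpuResources res;

    faiss::IndexFlatL2 coarse(d);
    faiss::gpu::GpuIndexIVFPQ index(&res, &coarse, d, nlist, 4, 8);

    // ... train and use the index ...

    // Later, swap in externally computed centroids behind the index's back.
    std::vector<float> newCentroids(nlist * d); // assumed computed elsewhere
    coarse.reset();
    coarse.add(nlist, newCentroids.data());

    // The GPU-side copy of the coarse quantizer is now stale; resynchronize.
    index.updateQuantizer();
    return 0;
}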

virtual void train(idx_t n, const float *x) override

Trains the coarse and product quantizer based on the given vector data.

Public Members

ProductQuantizer pq

Like the CPU version, we expose a publicly visible ProductQuantizer for manipulation
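
A sketch that inspects the exposed ProductQuantizer after training (the field names are those of faiss::ProductQuantizer):

#include <faiss/gpu/GpuIndexIVFPQ.h>
#include <cstdio>

void inspectPQ(const faiss::gpu::GpuIndexIVFPQ& index) {
    const faiss::ProductQuantizer& pq = index.pq;

    // M sub-quantizers, ksub = 2^nbits centroids each, dsub dims per centroid;
    // centroids are stored as an M * ksub * dsub float array.
    std::printf("M=%zu ksub=%zu dsub=%zu centroids=%zu floats\n",
                pq.M, pq.ksub, pq.dsub, pq.centroids.size());
}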

Protected Functions

void setIndex_(GpuResources *resources, int dim, idx_t nlist, faiss::MetricType metric, float metricArg, int numSubQuantizers, int bitsPerSubQuantizer, bool useFloat16LookupTables, bool useMMCodeDistance, bool interleavedLayout, float *pqCentroidData, IndicesOptions indicesOptions, MemorySpace space)

Initialize appropriate index.

void verifyPQSettings_() const

Throws errors if configuration settings are improper.

void trainResidualQuantizer_(idx_t n, const float *x)

Trains the PQ quantizer based on the given vector data.

Protected Attributes

const GpuIndexIVFPQConfig ivfpqConfig_

Our configuration options that we were initialized with.

bool usePrecomputedTables_

Runtime override: whether or not we use precomputed tables.

int subQuantizers_

Number of sub-quantizers per encoded vector.

int bitsPerCode_

Bits per sub-quantizer code.

size_t reserveMemoryVecs_

Desired inverted list memory reservation.

std::shared_ptr<IVFPQ> index_

The product quantizer instance that we own; contains the inverted lists