Class faiss::gpu::GpuIndexIVFPQ

class GpuIndexIVFPQ : public faiss::gpu::GpuIndexIVF

IVFPQ index for the GPU.

Public Types

using component_t = float
using distance_t = float

Public Functions

GpuIndexIVFPQ(GpuResourcesProvider *provider, const faiss::IndexIVFPQ *index, GpuIndexIVFPQConfig config = GpuIndexIVFPQConfig())

Construct from a pre-existing faiss::IndexIVFPQ instance, copying data over to the given GPU, if the input index is trained.

GpuIndexIVFPQ(GpuResourcesProvider *provider, int dims, idx_t nlist, idx_t subQuantizers, idx_t bitsPerCode, faiss::MetricType metric = faiss::METRIC_L2, GpuIndexIVFPQConfig config = GpuIndexIVFPQConfig())

Constructs a new instance with an empty flat quantizer; the user provides the number of IVF lists desired.

GpuIndexIVFPQ(GpuResourcesProvider *provider, Index *coarseQuantizer, int dims, idx_t nlist, idx_t subQuantizers, idx_t bitsPerCode, faiss::MetricType metric = faiss::METRIC_L2, GpuIndexIVFPQConfig config = GpuIndexIVFPQConfig())

Constructs a new instance with a provided CPU or GPU coarse quantizer; the user provides the number of IVF lists desired.
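
A minimal construction sketch for the constructor that takes explicit PQ parameters; the device number, all sizes, and the variable names below are illustrative assumptions, not fixed by the API:

    #include <faiss/gpu/GpuIndexIVFPQ.h>
    #include <faiss/gpu/StandardGpuResources.h>

    int main() {
        faiss::gpu::StandardGpuResources res; // a GpuResourcesProvider

        faiss::gpu::GpuIndexIVFPQConfig config;
        config.device = 0; // place the index on GPU 0 (illustrative)

        int d = 64;                     // vector dimension
        faiss::idx_t nlist = 1024;      // number of IVF lists
        faiss::idx_t subQuantizers = 8; // PQ segments per vector
        faiss::idx_t bitsPerCode = 8;   // 2^8 = 256 centroids per segment

        faiss::gpu::GpuIndexIVFPQ index(
                &res, d, nlist, subQuantizers, bitsPerCode,
                faiss::METRIC_L2, config);

        // The index is untrained at this point; call train() before add().
        return 0;
    }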

~GpuIndexIVFPQ() override
void copyFrom(const faiss::IndexIVFPQ *index)

Initialize ourselves from the given CPU index; will overwrite all data in ourselves.

void copyTo(faiss::IndexIVFPQ *index) const

Copy ourselves to the given CPU index; will overwrite all data in the index instance
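
A round-trip sketch between CPU and GPU using copyFrom/copyTo. It assumes cpuIndex is an already trained faiss::IndexIVFPQ; the target CPU index is built with placeholder parameters that copyTo then overwrites:

    #include <faiss/IndexFlat.h>
    #include <faiss/IndexIVFPQ.h>
    #include <faiss/gpu/GpuIndexIVFPQ.h>
    #include <faiss/gpu/StandardGpuResources.h>

    void roundTrip(const faiss::IndexIVFPQ& cpuIndex) {
        faiss::gpu::StandardGpuResources res;

        // Copy the trained CPU index to the GPU (equivalent to copyFrom).
        faiss::gpu::GpuIndexIVFPQ gpuIndex(&res, &cpuIndex);

        // ... add / search on the GPU ...

        // Copy back: copyTo overwrites all data in the target CPU index.
        faiss::IndexFlatL2 cpuQuantizer(cpuIndex.d);
        faiss::IndexIVFPQ cpuCopy(
                &cpuQuantizer, cpuIndex.d, cpuIndex.nlist,
                cpuIndex.pq.M, cpuIndex.pq.nbits);
        gpuIndex.copyTo(&cpuCopy);
    }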

void reserveMemory(size_t numVecs)

Reserve GPU memory in our inverted lists for this number of vectors.

void setPrecomputedCodes(bool enable)

Enable or disable pre-computed codes.

bool getPrecomputedCodes() const

Are pre-computed codes enabled?

int getNumSubQuantizers() const

Return the number of sub-quantizers we are using.

int getBitsPerCode() const

Return the number of bits per PQ code.

int getCentroidsPerSubQuantizer() const

Return the number of centroids per PQ sub-quantizer (2^bits per code)
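
A small sketch tying together the precomputed-code switch above and these PQ layout getters; the function name and the printing are illustrative:

    #include <cstdio>
    #include <faiss/gpu/GpuIndexIVFPQ.h>

    void inspectPQSettings(faiss::gpu::GpuIndexIVFPQ& index) {
        // Precomputed codes trade additional GPU memory for faster
        // L2 distance evaluation.
        index.setPrecomputedCodes(true);
        std::printf("precomputed codes: %d\n",
                int(index.getPrecomputedCodes()));

        std::printf("sub-quantizers: %d, bits per code: %d, "
                    "centroids per sub-quantizer: %d\n",
                index.getNumSubQuantizers(),
                index.getBitsPerCode(),
                index.getCentroidsPerSubQuantizer());
    }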

size_t reclaimMemory()

After adding vectors, one can call this to reclaim device memory to exactly the amount needed. Returns space reclaimed in bytes
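
The reserve/add/reclaim pattern as a sketch; the names nb and xb are assumptions describing the batch of vectors being added:

    #include <faiss/gpu/GpuIndexIVFPQ.h>

    void addWithReservation(
            faiss::gpu::GpuIndexIVFPQ& index, // trained index
            faiss::idx_t nb,
            const float* xb) {
        // Pre-allocate inverted-list space for nb vectors, avoiding
        // repeated reallocation while adding.
        index.reserveMemory(nb);

        index.add(nb, xb);

        // Trim the over-allocation back to what is actually needed;
        // the return value is the number of bytes released.
        size_t bytesFreed = index.reclaimMemory();
        (void)bytesFreed;
    }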

virtual void reset() override

Clears out all inverted lists, but retains the coarse and product centroid information

virtual void updateQuantizer() override

Should be called if the user ever changes the state of the IVF coarse quantizer manually (e.g., substitutes a new instance or changes vectors in the coarse quantizer outside the scope of training)
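
A sketch of the situation updateQuantizer() covers: the coarse quantizer handed to the third constructor is modified directly, outside of train(), so the GPU index must be told to re-read it. The centroid replacement below is illustrative only, and coarseQuantizer is assumed to be the same object that was passed at construction:

    #include <faiss/IndexFlat.h>
    #include <faiss/gpu/GpuIndexIVFPQ.h>

    void replaceCoarseCentroids(
            faiss::gpu::GpuIndexIVFPQ& index,
            faiss::IndexFlatL2& coarseQuantizer, // quantizer given at construction
            faiss::idx_t nlist,
            const float* newCentroids) {         // nlist * d floats
        // Substitute a new set of coarse centroids by hand.
        coarseQuantizer.reset();
        coarseQuantizer.add(nlist, newCentroids);

        // Refresh the GPU-side view of the coarse quantizer.
        index.updateQuantizer();
    }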

virtual void train(idx_t n, const float *x) override

Trains the coarse and product quantizer based on the given vector data.
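
A minimal training sketch with random data; the sizes, the RNG, and the variable names are illustrative assumptions:

    #include <random>
    #include <vector>
    #include <faiss/gpu/GpuIndexIVFPQ.h>
    #include <faiss/gpu/StandardGpuResources.h>

    int main() {
        int d = 64;
        faiss::idx_t nlist = 256, M = 8, nbits = 8;
        faiss::idx_t ntrain = 65536;

        faiss::gpu::StandardGpuResources res;
        faiss::gpu::GpuIndexIVFPQ index(&res, d, nlist, M, nbits);

        std::vector<float> xt(ntrain * d);
        std::mt19937 rng(123);
        std::uniform_real_distribution<float> dist(0.f, 1.f);
        for (auto& v : xt) {
            v = dist(rng);
        }

        // Trains both the coarse (IVF) quantizer and the PQ codebooks.
        index.train(ntrain, xt.data());
        return 0;
    }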

void copyFrom(const faiss::IndexIVF *index)

Copy what we need from the CPU equivalent.

void copyTo(faiss::IndexIVF *index) const

Copy what we have to the CPU equivalent.

virtual idx_t getNumLists() const

Returns the number of inverted lists we’re managing.

virtual idx_t getListLength(idx_t listId) const

Returns the number of vectors present in a particular inverted list.

virtual std::vector<uint8_t> getListVectorData(idx_t listId, bool gpuFormat = false) const

Return the encoded vector data contained in a particular inverted list, for debugging purposes. If gpuFormat is true, the data is returned as it is encoded in the GPU-side representation. Otherwise, it is converted to the CPU-compliant format; the native GPU format may differ.

virtual std::vector<idx_t> getListIndices(idx_t listId) const

Return the vector indices contained in a particular inverted list, for debugging purposes.
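
A debugging sketch that walks the inverted lists of a populated index and prints their sizes; the function name and output format are illustrative:

    #include <cstdio>
    #include <faiss/gpu/GpuIndexIVFPQ.h>

    void dumpListSizes(const faiss::gpu::GpuIndexIVFPQ& index) {
        faiss::idx_t nlist = index.getNumLists();
        for (faiss::idx_t i = 0; i < nlist; ++i) {
            if (index.getListLength(i) == 0) {
                continue;
            }
            // Codes in CPU format (gpuFormat defaults to false) plus the ids.
            auto codes = index.getListVectorData(i);
            auto ids = index.getListIndices(i);
            std::printf("list %lld: %zu vectors, %zu code bytes\n",
                    (long long)i, ids.size(), codes.size());
        }
    }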

virtual void search_preassigned(idx_t n, const float *x, idx_t k, const idx_t *assign, const float *centroid_dis, float *distances, idx_t *labels, bool store_pairs, const SearchParametersIVF *params = nullptr, IndexIVFStats *stats = nullptr) const override

Search a set of vectors that have been pre-quantized by the IVF quantizer; fill in the corresponding heaps with the query results. The default implementation uses InvertedListScanners to do the search. A usage sketch follows the parameter list.

Parameters:
  • n – number of vectors to query

  • x – query vectors, size n * d

  • k – number of nearest neighbours to return

  • assign – coarse quantization indices, size n * nprobe

  • centroid_dis – distances to coarse centroids, size n * nprobe

  • distances – output distances, size n * k

  • labels – output labels, size n * k

  • store_pairs – store the inverted list index + inverted list offset in the upper/lower 32 bits of the result, instead of ids (used for reranking)

  • params – used to override the object’s search parameters

  • stats – search stats to be updated (can be null)
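
As referenced above, a sketch of driving search_preassigned by hand: the coarse assignments are produced with the index's own quantizer member, then handed back in. The buffer names are illustrative, and GPU-side support and restrictions should be checked against your Faiss version:

    #include <vector>
    #include <faiss/gpu/GpuIndexIVFPQ.h>

    void searchPreassigned(
            const faiss::gpu::GpuIndexIVFPQ& index,
            faiss::idx_t nq,
            const float* xq,
            faiss::idx_t k,
            faiss::idx_t nprobe) {
        // Step 1: coarse quantization of the queries.
        std::vector<faiss::idx_t> assign(nq * nprobe);
        std::vector<float> centroid_dis(nq * nprobe);
        index.quantizer->search(
                nq, xq, nprobe, centroid_dis.data(), assign.data());

        // Step 2: scan only the preassigned lists.
        std::vector<float> distances(nq * k);
        std::vector<faiss::idx_t> labels(nq * k);
        index.search_preassigned(
                nq, xq, k,
                assign.data(), centroid_dis.data(),
                distances.data(), labels.data(),
                /*store_pairs=*/false);
    }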

virtual void range_search_preassigned(idx_t nx, const float *x, float radius, const idx_t *keys, const float *coarse_dis, RangeSearchResult *result, bool store_pairs = false, const IVFSearchParameters *params = nullptr, IndexIVFStats *stats = nullptr) const override

Range search a set of vectors that have been pre-quantized by the IVF quantizer; fill in the RangeSearchResult structure with the results. The default implementation uses InvertedListScanners to do the search.

Parameters:
  • nx – number of vectors to query

  • x – query vectors, size nx * d

  • radius – search radius

  • keys – coarse quantization indices, size nx * nprobe

  • coarse_dis – distances to coarse centroids, size nx * nprobe

  • result – output results

  • store_pairs – store the inverted list index + inverted list offset in the upper/lower 32 bits of the result, instead of ids (used for reranking)

  • params – used to override the object’s search parameters

  • stats – search stats to be updated (can be null)

int getDevice() const

Returns the device that this index is resident on.

std::shared_ptr<GpuResources> getResources()

Returns a reference to our GpuResources object that manages memory, stream and handle resources on the GPU

void setMinPagingSize(size_t size)

Set the minimum data size for searches (in MiB) for which we use CPU -> GPU paging

size_t getMinPagingSize() const

Returns the current minimum data size for paged searches.

virtual void add(idx_t, const float *x) override

x can be resident on the CPU or any GPU; copies are performed as needed. Handles paged adds if the add set is too large; calls addInternal_

virtual void add_with_ids(idx_t n, const float *x, const idx_t *ids) override

x and ids can be resident on the CPU or any GPU; copies are performed as needed. Handles paged adds if the add set is too large; calls addInternal_
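
A sketch of adding vectors with caller-chosen ids; because paging across the CPU/GPU boundary is handled internally, xb can simply be a host-side buffer (the id scheme below is illustrative):

    #include <numeric>
    #include <vector>
    #include <faiss/gpu/GpuIndexIVFPQ.h>

    void addBatch(
            faiss::gpu::GpuIndexIVFPQ& index,
            faiss::idx_t nb,
            const float* xb) {
        // Assign ids 1000000, 1000001, ... to the new vectors.
        std::vector<faiss::idx_t> ids(nb);
        std::iota(ids.begin(), ids.end(), faiss::idx_t(1000000));

        index.add_with_ids(nb, xb, ids.data());
    }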

virtual void assign(idx_t n, const float *x, idx_t *labels, idx_t k = 1) const override

x and labels can be resident on the CPU or any GPU; copies are performed as needed

virtual void search(idx_t n, const float *x, idx_t k, float *distances, idx_t *labels, const SearchParameters *params = nullptr) const override

x, distances and labels can be resident on the CPU or any GPU; copies are performed as needed
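
A search sketch; nprobe is set through the inherited public member documented below, and host-side result buffers are used (copies across the CPU/GPU boundary are performed as needed):

    #include <vector>
    #include <faiss/gpu/GpuIndexIVFPQ.h>

    void runSearch(
            faiss::gpu::GpuIndexIVFPQ& index,
            faiss::idx_t nq,
            const float* xq,
            faiss::idx_t k) {
        index.nprobe = 16; // number of IVF lists scanned per query

        std::vector<float> distances(nq * k);
        std::vector<faiss::idx_t> labels(nq * k);
        index.search(nq, xq, k, distances.data(), labels.data());
        // labels[i * k + j] is the id of the j-th neighbor of query i;
        // a label of -1 means fewer than k results were found.
    }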

virtual void search_and_reconstruct(idx_t n, const float *x, idx_t k, float *distances, idx_t *labels, float *recons, const SearchParameters *params = nullptr) const override

x, distances, labels, and recons can be resident on the CPU or any GPU; copies are performed as needed

virtual void compute_residual(const float *x, float *residual, idx_t key) const override

Overridden to force GPU indices to provide their own GPU-friendly implementation

virtual void compute_residual_n(idx_t n, const float *xs, float *residuals, const idx_t *keys) const override

Overridden to force GPU indices to provide their own GPU-friendly implementation

virtual void range_search(idx_t n, const float *x, float radius, RangeSearchResult *result, const SearchParameters *params = nullptr) const

Query n vectors of dimension d to the index.

Return all vectors with distance < radius. Note that many indexes do not implement range_search (only the k-NN search is mandatory).

Parameters:
  • n – number of vectors

  • x – input vectors to search, size n * d

  • radius – search radius

  • result – result table

virtual size_t remove_ids(const IDSelector &sel)

Removes IDs from the index. Not supported by all indexes. Returns the number of elements removed.

virtual void reconstruct(idx_t key, float *recons) const

Reconstruct a stored vector (or an approximation if lossy coding)

this function may not be defined for some indexes

Parameters:
  • key – id of the vector to reconstruct

  • recons – reconstructed vector (size d)

virtual void reconstruct_batch(idx_t n, const idx_t *keys, float *recons) const

Reconstruct several stored vectors (or an approximation if lossy coding)

this function may not be defined for some indexes

Parameters:
  • n – number of vectors to reconstruct

  • keys – ids of the vectors to reconstruct (size n)

  • recons – reconstructed vectors (size n * d)

virtual void reconstruct_n(idx_t i0, idx_t ni, float *recons) const

Reconstruct vectors i0 to i0 + ni - 1

this function may not be defined for some indexes

Parameters:
  • i0 – index of the first vector in the sequence

  • ni – number of vectors in the sequence

  • recons – reconstructed vectors (size ni * d)

virtual DistanceComputer *get_distance_computer() const

Get a DistanceComputer (defined in AuxIndexStructures) object for this kind of index.

DistanceComputer is implemented for indexes that support random access of their vectors.

virtual size_t sa_code_size() const

size of the produced codes in bytes

virtual void sa_encode(idx_t n, const float *x, uint8_t *bytes) const

encode a set of vectors

Parameters:
  • n – number of vectors

  • x – input vectors, size n * d

  • bytes – output encoded vectors, size n * sa_code_size()

virtual void sa_decode(idx_t n, const uint8_t *bytes, float *x) const

decode a set of vectors

Parameters:
  • n – number of vectors

  • bytes – input encoded vectors, size n * sa_code_size()

  • x – output vectors, size n * d

virtual void merge_from(Index &otherIndex, idx_t add_id = 0)

Moves the entries from another index into this one. On output, the other index is empty. add_id is added to all moved ids (for sequential ids, this would be this->ntotal)

virtual void check_compatible_for_merge(const Index &otherIndex) const

Check that the two indexes are compatible (i.e., they are trained in the same way and have the same parameters); otherwise throw.

virtual void add_sa_codes(idx_t n, const uint8_t *codes, const idx_t *xids)

Add vectors that are computed with the standalone codec

Parameters:
  • codes – codes to add, size n * sa_code_size()

  • xids – corresponding ids, size n

void train_q1(size_t n, const float *x, bool verbose, MetricType metric_type)

Trains the quantizer and calls train_residual to train sub-quantizers.

size_t coarse_code_size() const

compute the number of bytes required to store list ids

void encode_listno(idx_t list_no, uint8_t *code) const
idx_t decode_listno(const uint8_t *code) const

Public Members

ProductQuantizer pq

Like the CPU version, we expose a publicly visible ProductQuantizer for manipulation
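
After training (or copyFrom), the exposed pq member mirrors the trained product quantizer. A sketch of reading its layout on the host side; the function name is illustrative:

    #include <cstdio>
    #include <faiss/gpu/GpuIndexIVFPQ.h>

    void inspectProductQuantizer(const faiss::gpu::GpuIndexIVFPQ& index) {
        const faiss::ProductQuantizer& pq = index.pq;
        std::printf("M=%zu nbits=%zu ksub=%zu code_size=%zu\n",
                pq.M, pq.nbits, pq.ksub, pq.code_size);
        // pq.centroids holds M * ksub * dsub floats with the PQ codebooks.
    }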

int d

vector dimension

idx_t ntotal

total nb of indexed vectors

bool verbose

verbosity level

bool is_trained

set if the Index does not require training, or if training is done already

MetricType metric_type

type of metric this index uses for search

float metric_arg

argument of the metric type

size_t nprobe = 1

number of probes at query time

size_t max_codes = 0

max nb of codes to visit to do a query

Index *quantizer = nullptr

quantizer that maps vectors to inverted lists

size_t nlist = 0

number of inverted lists

char quantizer_trains_alone = 0

= 0: use the quantizer as index in a kmeans training
= 1: just pass on the training set to the train() of the quantizer
= 2: kmeans training on a flat index + add the centroids to the quantizer

bool own_fields = false

whether object owns the quantizer

ClusteringParameters cp

to override default clustering params

Index *clustering_index = nullptr

to override index used during clustering

Protected Functions

void setIndex_(GpuResources *resources, int dim, idx_t nlist, faiss::MetricType metric, float metricArg, int numSubQuantizers, int bitsPerSubQuantizer, bool useFloat16LookupTables, bool useMMCodeDistance, bool interleavedLayout, float *pqCentroidData, IndicesOptions indicesOptions, MemorySpace space)

Initialize appropriate index.

void verifyPQSettings_() const

Throws errors if configuration settings are improper.

void trainResidualQuantizer_(idx_t n, const float *x)

Trains the PQ quantizer based on the given vector data.

void copyFrom(const faiss::Index *index)

Copy what we need from the CPU equivalent.

void copyTo(faiss::Index *index) const

Copy what we have to the CPU equivalent.

int getCurrentNProbe_(const SearchParameters *params) const

Returns the nprobe to use for the current search, taken from the given SearchParameters if available, otherwise from the currently set nprobe

void verifyIVFSettings_() const
virtual bool addImplRequiresIDs_() const override

Does addImpl_ require IDs? If so, and no IDs are provided, we will generate them sequentially based on the order in which the vectors are added

virtual void trainQuantizer_(idx_t n, const float *x)
virtual void addImpl_(idx_t n, const float *x, const idx_t *ids) override

Called from GpuIndex for add/add_with_ids.

virtual void searchImpl_(idx_t n, const float *x, int k, float *distances, idx_t *labels, const SearchParameters *params) const override

Called from GpuIndex for search.

Protected Attributes

const GpuIndexIVFPQConfig ivfpqConfig_

Our configuration options that we were initialized with.

bool usePrecomputedTables_

Runtime override: whether or not we use precomputed tables.

int subQuantizers_

Number of sub-quantizers per encoded vector.

int bitsPerCode_

Bits per sub-quantizer code.

size_t reserveMemoryVecs_

Desired inverted list memory reservation.

std::shared_ptr<IVFPQ> index_

The product quantizer instance that we own; contains the inverted lists

const GpuIndexIVFConfig ivfConfig_

Our configuration options.

std::shared_ptr<IVFBase> baseIndex_

For a trained/initialized index, this is a reference to the base class.

std::shared_ptr<GpuResources> resources_

Manages streams, cuBLAS handles and scratch memory for devices.

const GpuIndexConfig config_

Our configuration options.

size_t minPagedSize_

Size above which we page copies from the CPU to GPU.