File GpuIndexIVF.h

namespace faiss

Copyright (c) Facebook, Inc. and its affiliates.

This source code is licensed under the MIT license found in the LICENSE file in the root directory of this source tree.

namespace gpu
struct GpuIndexIVFConfig : public faiss::gpu::GpuIndexConfig

Subclassed by faiss::gpu::GpuIndexIVFFlatConfig, faiss::gpu::GpuIndexIVFPQConfig, faiss::gpu::GpuIndexIVFScalarQuantizerConfig

Public Members

IndicesOptions indicesOptions = INDICES_64_BIT

Index storage options for the GPU.

GpuIndexFlatConfig flatConfig

Configuration for the coarse quantizer object.

bool allowCpuCoarseQuantizer = false

This flag controls the CPU fallback logic for the coarse quantizer component of the index. When set to false (the default), the cloner throws an exception for indices not implemented on the GPU. When set to true, it falls back to a CPU implementation.
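
As a minimal configuration sketch (assuming the concrete subclass GpuIndexIVFFlat and its GpuIndexIVFFlatConfig, which inherits the members above; the dimensions and list counts are arbitrary):

    #include <faiss/gpu/GpuIndexIVFFlat.h>
    #include <faiss/gpu/StandardGpuResources.h>

    int main() {
        faiss::gpu::StandardGpuResources res;

        faiss::gpu::GpuIndexIVFFlatConfig config;
        config.device = 0;                                  // GPU ordinal (from GpuIndexConfig)
        config.indicesOptions = faiss::gpu::INDICES_64_BIT; // how user ids are stored on the GPU
        config.flatConfig.useFloat16 = true;                // float16 storage for the coarse quantizer
        config.allowCpuCoarseQuantizer = false;             // throw instead of falling back to CPU

        // 128-dimensional vectors, 1024 inverted lists, L2 metric
        faiss::gpu::GpuIndexIVFFlat index(&res, 128, 1024, faiss::METRIC_L2, config);
        return 0;
    }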

class GpuIndexIVF : public faiss::gpu::GpuIndex, public faiss::IndexIVFInterface
#include <GpuIndexIVF.h>

Base class of all GPU IVF index types. This (for now) deliberately does not inherit from IndexIVF, as many of the public data members and much of the functionality of IndexIVF are not supported in the same manner on the GPU.

Subclassed by faiss::gpu::GpuIndexIVFFlat, faiss::gpu::GpuIndexIVFPQ, faiss::gpu::GpuIndexIVFScalarQuantizer

Public Functions

GpuIndexIVF(GpuResourcesProvider *provider, int dims, faiss::MetricType metric, float metricArg, idx_t nlist, GpuIndexIVFConfig config = GpuIndexIVFConfig())

Version that auto-constructs a flat coarse quantizer based on the desired metric

GpuIndexIVF(GpuResourcesProvider *provider, Index *coarseQuantizer, int dims, faiss::MetricType metric, float metricArg, idx_t nlist, GpuIndexIVFConfig config = GpuIndexIVFConfig())

Version that takes a coarse quantizer instance. The GpuIndexIVF does not own the coarseQuantizer instance by default (it behaves like IndexIVF in this respect).
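
A hedged construction sketch with an externally owned coarse quantizer, assuming the analogous constructor of the concrete subclass GpuIndexIVFFlat:

    #include <faiss/gpu/GpuIndexFlat.h>
    #include <faiss/gpu/GpuIndexIVFFlat.h>
    #include <faiss/gpu/StandardGpuResources.h>

    int main() {
        faiss::gpu::StandardGpuResources res;
        int d = 128;

        // The caller owns the coarse quantizer; the IVF index only references it.
        faiss::gpu::GpuIndexFlatL2 coarseQuantizer(&res, d);
        faiss::gpu::GpuIndexIVFFlat index(&res, &coarseQuantizer, d, /*nlist=*/1024);

        // ... train / add / search ...
        return 0;
    }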

~GpuIndexIVF() override
void copyFrom(const faiss::IndexIVF *index)

Copy what we need from the CPU equivalent.

void copyTo(faiss::IndexIVF *index) const

Copy what we have to the CPU equivalent.
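
A round-trip sketch using the concrete GpuIndexIVFFlat / IndexIVFFlat pair (ntrain and xtrain are placeholders for the caller's training data):

    #include <faiss/IndexFlat.h>
    #include <faiss/IndexIVFFlat.h>
    #include <faiss/gpu/GpuIndexIVFFlat.h>
    #include <faiss/gpu/StandardGpuResources.h>

    void roundTrip(faiss::idx_t ntrain, const float* xtrain) {
        int d = 128, nlist = 1024;

        faiss::IndexFlatL2 cpuQuantizer(d);
        faiss::IndexIVFFlat cpuIndex(&cpuQuantizer, d, nlist);
        cpuIndex.train(ntrain, xtrain);

        faiss::gpu::StandardGpuResources res;
        faiss::gpu::GpuIndexIVFFlat gpuIndex(&res, d, nlist, faiss::METRIC_L2);
        gpuIndex.copyFrom(&cpuIndex); // pull centroids and inverted lists from the CPU index

        // ... add / search on the GPU ...

        gpuIndex.copyTo(&cpuIndex);   // push the current GPU state back to the CPU index
    }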

virtual void updateQuantizer() = 0

Should be called if the user ever changes the state of the IVF coarse quantizer manually (e.g., substitutes a new instance or changes vectors in the coarse quantizer outside the scope of training)
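
For example (a sketch using the quantizer and nlist members inherited from IndexIVFInterface; newCentroids is a placeholder for nlist * d replacement centroids):

    #include <faiss/gpu/GpuIndexIVF.h>

    void replaceCentroids(faiss::gpu::GpuIndexIVF& index, const float* newCentroids) {
        index.quantizer->reset();                        // drop the old centroids
        index.quantizer->add(index.nlist, newCentroids); // install the new ones
        index.updateQuantizer();                         // re-sync the GPU-side coarse quantizer
    }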

virtual idx_t getNumLists() const

Returns the number of inverted lists we’re managing.

virtual idx_t getListLength(idx_t listId) const

Returns the number of vectors present in a particular inverted list.

virtual std::vector<uint8_t> getListVectorData(idx_t listId, bool gpuFormat = false) const

Return the encoded vector data contained in a particular inverted list, for debugging purposes. If gpuFormat is true, the data is returned as it is encoded in the GPU-side representation. Otherwise, it is converted to the CPU-compliant format; the native GPU format may differ.

virtual std::vector<idx_t> getListIndices(idx_t listId) const

Return the vector indices contained in a particular inverted list, for debugging purposes.
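
A short debugging sketch that walks the inverted lists with the accessors above:

    #include <cstdint>
    #include <cstdio>
    #include <vector>
    #include <faiss/gpu/GpuIndexIVF.h>

    void dumpLists(const faiss::gpu::GpuIndexIVF& index) {
        for (faiss::idx_t listId = 0; listId < index.getNumLists(); ++listId) {
            std::vector<faiss::idx_t> ids = index.getListIndices(listId);
            std::vector<uint8_t> codes =
                    index.getListVectorData(listId, /*gpuFormat=*/false);
            std::printf("list %ld: %ld vectors, %zu code bytes\n",
                        (long)listId, (long)index.getListLength(listId), codes.size());
        }
    }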

virtual void search_preassigned(idx_t n, const float *x, idx_t k, const idx_t *assign, const float *centroid_dis, float *distances, idx_t *labels, bool store_pairs, const SearchParametersIVF *params = nullptr, IndexIVFStats *stats = nullptr) const override

Search a set of vectors that are pre-quantized by the IVF quantizer. Fill in the corresponding heaps with the query results. The default implementation uses InvertedListScanners to do the search. A calling sketch follows the parameter list.

Parameters:
  • n – number of query vectors

  • x – query vectors, size n * d

  • assign – coarse quantization indices, size n * nprobe

  • centroid_dis – distances to the coarse centroids, size n * nprobe

  • distances – output distances, size n * k

  • labels – output labels, size n * k

  • store_pairs – store the inverted list index + inverted list offset in the upper/lower 32 bits of the result, instead of ids (used for reranking)

  • params – used to override the object’s search parameters

  • stats – search stats to be updated (can be null)
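
A hedged calling sketch: quantize the queries against the coarse quantizer yourself, then scan only the preassigned lists (xq is a placeholder for n * d query floats):

    #include <vector>
    #include <faiss/gpu/GpuIndexIVF.h>

    void preassignedSearch(faiss::gpu::GpuIndexIVF& index, faiss::idx_t n, const float* xq) {
        const faiss::idx_t k = 10;
        const size_t nprobe = index.nprobe;

        // Coarse quantization of the queries.
        std::vector<faiss::idx_t> assign(n * nprobe);
        std::vector<float> centroid_dis(n * nprobe);
        index.quantizer->search(n, xq, nprobe, centroid_dis.data(), assign.data());

        // Scan only the preassigned inverted lists.
        std::vector<float> distances(n * k);
        std::vector<faiss::idx_t> labels(n * k);
        index.search_preassigned(
                n, xq, k, assign.data(), centroid_dis.data(),
                distances.data(), labels.data(), /*store_pairs=*/false);
    }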

virtual void range_search_preassigned(idx_t nx, const float *x, float radius, const idx_t *keys, const float *coarse_dis, RangeSearchResult *result, bool store_pairs = false, const IVFSearchParameters *params = nullptr, IndexIVFStats *stats = nullptr) const override

Range search a set of vectors that are pre-quantized by the IVF quantizer. Fill in the RangeSearchResult results. The default implementation uses InvertedListScanners to do the search.

Parameters:
  • nx – number of query vectors

  • x – query vectors, size nx * d

  • radius – search radius

  • keys – coarse quantization indices, size nx * nprobe

  • coarse_dis – distances to the coarse centroids, size nx * nprobe

  • result – output results

  • store_pairs – store the inverted list index + inverted list offset in the upper/lower 32 bits of the result, instead of ids (used for reranking)

  • params – used to override the object’s search parameters

  • stats – search stats to be updated (can be null)

Protected Functions

int getCurrentNProbe_(const SearchParameters *params) const

From either the currently set nprobe or the SearchParameters, if available, return the nprobe that should be used for the current search.
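
In user code this means that nprobe can be overridden per call through SearchParametersIVF from the CPU IVF interface (a sketch; the query and output buffers are supplied by the caller):

    #include <faiss/IndexIVF.h>
    #include <faiss/gpu/GpuIndexIVF.h>

    void searchWithOverride(
            faiss::gpu::GpuIndexIVF& index,
            faiss::idx_t nq, const float* xq, faiss::idx_t k,
            float* distances, faiss::idx_t* labels) {
        index.nprobe = 16;              // default, used when no parameters are passed

        faiss::SearchParametersIVF params;
        params.nprobe = 64;             // overrides index.nprobe for this call only
        index.search(nq, xq, k, distances, labels, &params);
    }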

void verifyIVFSettings_() const
virtual bool addImplRequiresIDs_() const override

Does addImpl_ require IDs? If so, and no IDs are provided, we will generate them sequentially, based on the order in which the vectors are added.

virtual void trainQuantizer_(idx_t n, const float *x)
virtual void addImpl_(idx_t n, const float *x, const idx_t *ids) override

Called from GpuIndex for add/add_with_ids.

virtual void searchImpl_(idx_t n, const float *x, int k, float *distances, idx_t *labels, const SearchParameters *params) const override

Called from GpuIndex for search.

Protected Attributes

const GpuIndexIVFConfig ivfConfig_

Our configuration options.

std::shared_ptr<IVFBase> baseIndex_

For a trained/initialized index, this is a reference to the base class.

Private Functions

void init_()

Shared initialization function.