Class faiss::gpu::GpuIndexIVF

class faiss::gpu::GpuIndexIVF : public faiss::gpu::GpuIndex

Subclassed by faiss::gpu::GpuIndexIVFFlat, faiss::gpu::GpuIndexIVFPQ, faiss::gpu::GpuIndexIVFScalarQuantizer

Public Types

using idx_t = int64_t

all indices are this type

using component_t = float
using distance_t = float

Public Functions

GpuIndexIVF(GpuResourcesProvider *provider, int dims, faiss::MetricType metric, float metricArg, int nlist, GpuIndexIVFConfig config = GpuIndexIVFConfig())
~GpuIndexIVF() override
void copyFrom(const faiss::IndexIVF *index)

Copy what we need from the CPU equivalent.

void copyTo(faiss::IndexIVF *index) const

Copy what we have to the CPU equivalent.
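
A minimal round-trip sketch (illustrative sizes and device; GpuIndexIVF itself is abstract, so GpuIndexIVFFlat stands in for any subclass, and the helper function name is made up for the example):

    #include <faiss/IndexFlat.h>
    #include <faiss/IndexIVFFlat.h>
    #include <faiss/gpu/GpuIndexIVFFlat.h>
    #include <faiss/gpu/StandardGpuResources.h>

    void cpu_gpu_roundtrip() {
        int d = 64;       // illustrative dimension
        int nlist = 256;  // illustrative number of inverted lists

        // CPU-side IVF index, assumed trained/populated elsewhere.
        faiss::IndexFlatL2 coarse(d);
        faiss::IndexIVFFlat cpu_index(&coarse, d, nlist, faiss::METRIC_L2);

        // StandardGpuResources implements GpuResourcesProvider.
        faiss::gpu::StandardGpuResources res;

        faiss::gpu::GpuIndexIVFFlatConfig config;
        config.device = 0;  // which GPU the index lives on

        faiss::gpu::GpuIndexIVFFlat gpu_index(&res, d, nlist, faiss::METRIC_L2, config);

        gpu_index.copyFrom(&cpu_index);  // pull centroids, codes and ids onto the GPU
        // ... add / search on the GPU ...
        gpu_index.copyTo(&cpu_index);    // push state back, e.g. before serializing on the CPU
    }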

int getNumLists() const

Returns the number of inverted lists we’re managing.

virtual int getListLength(int listId) const = 0

Returns the number of vectors present in a particular inverted list.

virtual std::vector<uint8_t> getListVectorData(int listId, bool gpuFormat = false) const = 0

Return the encoded vector data contained in a particular inverted list, for debugging purposes. If gpuFormat is true, the data is returned as it is encoded in the GPU-side representation. Otherwise, it is converted to the CPU-side format expected by the CPU IndexIVF, while the native GPU format may differ.

virtual std::vector<Index::idx_t> getListIndices(int listId) const = 0

Return the vector indices contained in a particular inverted list, for debugging purposes.
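
A hedged sketch of inspecting inverted lists with these accessors (the helper name and printf reporting are illustrative; `index` is assumed to be any populated GpuIndexIVF subclass):

    #include <cstdio>
    #include <vector>
    #include <faiss/gpu/GpuIndexIVF.h>

    void dump_list_stats(const faiss::gpu::GpuIndexIVF& index) {
        for (int listId = 0; listId < index.getNumLists(); ++listId) {
            int len = index.getListLength(listId);
            if (len == 0) {
                continue;  // skip empty lists
            }
            // User-visible ids stored in this inverted list.
            std::vector<faiss::Index::idx_t> ids = index.getListIndices(listId);
            // Encoded vectors, converted to the CPU-side layout (gpuFormat = false).
            std::vector<uint8_t> codes = index.getListVectorData(listId, false);
            std::printf("list %d: %d vectors, %zu id entries, %zu code bytes\n",
                        listId, len, ids.size(), codes.size());
        }
    }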

GpuIndexFlat *getQuantizer()

Return the quantizer we’re using.
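
For example (a sketch; assumes `index` is a trained GpuIndexIVF subclass and <cassert> is included), the returned GpuIndexFlat holds the coarse centroids, so after training its ntotal should match getNumLists():

    faiss::gpu::GpuIndexFlat* quantizer = index.getQuantizer();
    assert(quantizer->ntotal == index.getNumLists());  // one centroid per inverted list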

void setNumProbes(int nprobe)

Sets the number of list probes per query.

int getNumProbes() const

Returns our current number of list probes per query.
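
A short sketch (illustrative value; assumes `index` is a GpuIndexIVF subclass with nlist >= 32, and note that the GPU backend caps how large nprobe may be):

    // Trade speed for recall at query time: scan more inverted lists per query.
    index.setNumProbes(32);
    assert(index.getNumProbes() == 32);  // also mirrored by the public nprobe member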

int getDevice() const

Returns the device that this index is resident on.

std::shared_ptr<GpuResources> getResources()

Returns a reference to our GpuResources object that manages memory, stream and handle resources on the GPU

void setMinPagingSize(size_t size)

Set the minimum data size for searches (in MiB) for which we use CPU -> GPU paging

size_t getMinPagingSize() const

Returns the current minimum data size for paged searches.
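
A sketch of adjusting the paging threshold (illustrative only; halving the current value is an example, not a recommendation):

    size_t minPaged = index.getMinPagingSize();
    index.setMinPagingSize(minPaged / 2);  // page host -> device copies for smaller batches too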

virtual void add(Index::idx_t n, const float *x) override

x can be resident on the CPU or any GPU; copies are performed as needed. Handles paged adds if the add set is too large; calls addInternal_.

virtual void add_with_ids(Index::idx_t n, const float *x, const Index::idx_t *ids) override

x and ids can be resident on the CPU or any GPU; copies are performed as needed. Handles paged adds if the add set is too large; calls addInternal_.

virtual void assign(Index::idx_t n, const float *x, Index::idx_t *labels, Index::idx_t k = 1) const override

x and labels can be resident on the CPU or any GPU; copies are performed as needed

virtual void search(Index::idx_t n, const float *x, Index::idx_t k, float *distances, Index::idx_t *labels) const override

x, distances and labels can be resident on the CPU or any GPU; copies are performed as needed
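
A sketch of add/search with host-resident buffers (illustrative sizes; assumes `index` is a trained GpuIndexIVF subclass of dimension d, and the helper name is made up):

    #include <vector>
    #include <faiss/gpu/GpuIndexIVF.h>

    void add_and_search(faiss::gpu::GpuIndexIVF& index, int d) {
        int nb = 10000, nq = 100, k = 5;  // illustrative sizes
        std::vector<float> xb(size_t(nb) * d), xq(size_t(nq) * d);
        // ... fill xb with database vectors and xq with queries ...

        // Host pointers are fine; copies (and paging, for large batches) happen internally.
        index.add(nb, xb.data());

        std::vector<float> distances(size_t(nq) * k);
        std::vector<faiss::Index::idx_t> labels(size_t(nq) * k);
        index.search(nq, xq.data(), k, distances.data(), labels.data());
    }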

virtual void compute_residual(const float *x, float *residual, Index::idx_t key) const override

Overridden to force GPU indices to provide their own GPU-friendly implementation

virtual void compute_residual_n(Index::idx_t n, const float *xs, float *residuals, const Index::idx_t *keys) const override

Overridden to force GPU indices to provide their own GPU-friendly implementation

virtual void train(idx_t n, const float *x)

Perform training on a representative set of vectors

Parameters
  • n – nb of training vectors

  • x – training vectors, size n * d
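
A training sketch (the 50-vectors-per-list sample size is only a common rule of thumb, not a requirement; the helper name is made up):

    #include <cassert>
    #include <vector>
    #include <faiss/gpu/GpuIndexIVF.h>

    void train_index(faiss::gpu::GpuIndexIVF& index, int d) {
        // Train the coarse quantizer (k-means over nlist centroids) on a
        // representative sample of the data distribution.
        size_t ntrain = size_t(50) * index.getNumLists();
        std::vector<float> xt(ntrain * d);
        // ... fill xt with training vectors ...
        index.train(ntrain, xt.data());
        assert(index.is_trained);
    }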

virtual void range_search(idx_t n, const float *x, float radius, RangeSearchResult *result) const

Query n vectors of dimension d against the index.

Return all vectors whose distance to the query is < radius. Note that many indexes do not implement range_search (only the k-NN search is mandatory).

Parameters
  • x – input vectors to search, size n * d

  • radius – search radius

  • result – result table

virtual void reset() = 0

removes all elements from the database.

virtual size_t remove_ids(const IDSelector &sel)

removes IDs from the index. Not supported by all indexes. Returns the number of elements removed.

virtual void reconstruct(idx_t key, float *recons) const

Reconstruct a stored vector (or an approximation if lossy coding)

this function may not be defined for some indexes

Parameters
  • key – id of the vector to reconstruct

  • recons – reconstructed vector (size d)

virtual void reconstruct_n(idx_t i0, idx_t ni, float *recons) const

Reconstruct vectors i0 to i0 + ni - 1

this function may not be defined for some indexes

Parameters
  • recons – reconstructed vectors (size ni * d)

virtual void search_and_reconstruct(idx_t n, const float *x, idx_t k, float *distances, idx_t *labels, float *recons) const

Similar to search, but also reconstructs the stored vectors (or an approximation in the case of lossy coding) for the search results.

If there are not enough results for a query, the resulting arrays are padded with -1s.

Parameters
  • recons – reconstructed vectors, size (n, k, d)

virtual DistanceComputer *get_distance_computer() const

Get a DistanceComputer (defined in AuxIndexStructures) object for this kind of index.

DistanceComputer is implemented for indexes that support random access of their vectors.

virtual size_t sa_code_size() const

size of the produced codes in bytes

virtual void sa_encode(idx_t n, const float *x, uint8_t *bytes) const

encode a set of vectors

Parameters
  • n – number of vectors

  • x – input vectors, size n * d

  • bytes – output encoded vectors, size n * sa_code_size()

virtual void sa_decode(idx_t n, const uint8_t *bytes, float *x) const

decode a set of vectors

Parameters
  • n – number of vectors

  • bytes – input encoded vectors, size n * sa_code_size()

  • x – output vectors, size n * d

Public Members

ClusteringParameters cp

Exposing this like the CPU version for manipulation.

int nlist

Exposing this like the CPU version for query.

int nprobe

Exposing this like the CPU version for manipulation.

GpuIndexFlat *quantizer

Exposing this like the CPU version for query.

int d

vector dimension

idx_t ntotal

total nb of indexed vectors

bool verbose

verbosity level

bool is_trained

set if the Index does not require training, or if training is done already

MetricType metric_type

type of metric this index uses for search

float metric_arg

argument of the metric type

Protected Functions

virtual bool addImplRequiresIDs_() const override

Does addImpl_ require IDs? If so, and no IDs are provided, we will generate them sequentially based on the order in which the vectors are added.

void trainQuantizer_(Index::idx_t n, const float *x)
void copyFrom(const faiss::Index *index)

Copy what we need from the CPU equivalent.

void copyTo(faiss::Index *index) const

Copy what we have to the CPU equivalent.

virtual void addImpl_(int n, const float *x, const Index::idx_t *ids) = 0

Overridden to actually perform the add. All data is guaranteed to be resident on our device.

virtual void searchImpl_(int n, const float *x, int k, float *distances, Index::idx_t *labels) const = 0

Overridden to actually perform the search. All data is guaranteed to be resident on our device.

Protected Attributes

const GpuIndexIVFConfig ivfConfig_

Our configuration options.

std::shared_ptr<GpuResources> resources_

Manages streams, cuBLAS handles and scratch memory for devices.

const GpuIndexConfig config_

Our configuration options.

size_t minPagedSize_

Size above which we page copies from the CPU to GPU.

Private Functions

void init_()

Shared initialization functions.