File GpuIndex.h

namespace faiss

Copyright (c) Facebook, Inc. and its affiliates.

This source code is licensed under the MIT license found in the LICENSE file in the root directory of this source tree.


namespace gpu

Functions

bool should_use_cuvs(GpuIndexConfig config_)

A centralized function that determines whether cuVS should be used, based on various conditions (such as an unsupported architecture).

GpuIndex *tryCastGpuIndex(faiss::Index *index)

If the given index is a GPU index, this returns the index instance; otherwise, it returns nullptr.

bool isGpuIndex(faiss::Index *index)

Is the given index instance a GPU index?

bool isGpuIndexImplemented(faiss::Index *index)

Does the given CPU index instance have a corresponding GPU implementation?
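
As a hedged sketch of how these helpers combine (the describeIndex wrapper and its messages are assumptions; isGpuIndex, tryCastGpuIndex, isGpuIndexImplemented, and getDevice are from this header):

#include <cstdio>
#include <faiss/gpu/GpuIndex.h>

// Hypothetical helper: report where an index lives.
void describeIndex(faiss::Index* index) {
    if (faiss::gpu::isGpuIndex(index)) {
        faiss::gpu::GpuIndex* gpuIndex = faiss::gpu::tryCastGpuIndex(index);
        std::printf("GPU index on device %d\n", gpuIndex->getDevice());
    } else if (faiss::gpu::isGpuIndexImplemented(index)) {
        std::printf("CPU index with a corresponding GPU implementation\n");
    } else {
        std::printf("CPU index without a GPU counterpart\n");
    }
}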

struct GpuIndexConfig

Subclassed by faiss::gpu::GpuIndexBinaryFlatConfig, faiss::gpu::GpuIndexCagraConfig, faiss::gpu::GpuIndexFlatConfig, faiss::gpu::GpuIndexIVFConfig

Public Members

int device = 0

GPU device on which the index is resident.

MemorySpace memorySpace = MemorySpace::Device

What memory space to use for primary storage. On Pascal and above (CC 6+) architectures, allows GPUs to use more memory than is available on the GPU.

bool use_cuvs = false

Should the index dispatch down to cuVS?
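
As an illustration, filling in these fields on one of the listed subclasses might look as follows (GpuIndexFlatConfig comes from the subclass list above; the values are arbitrary, and the Unified enumerator is assumed from the memorySpace note):

#include <faiss/gpu/GpuIndexFlat.h>

faiss::gpu::GpuIndexFlatConfig config;
config.device = 1;                                     // resident on GPU 1
config.memorySpace = faiss::gpu::MemorySpace::Unified; // unified memory (CC 6+)
config.use_cuvs = false;                               // do not dispatch to cuVS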

class GpuIndex : public faiss::Index

Subclassed by faiss::gpu::GpuIndexCagra, faiss::gpu::GpuIndexFlat, faiss::gpu::GpuIndexIVF

Public Functions

GpuIndex(std::shared_ptr<GpuResources> resources, int dims, faiss::MetricType metric, float metricArg, GpuIndexConfig config)
int getDevice() const

Returns the device that this index is resident on.

std::shared_ptr<GpuResources> getResources()

Returns a reference to our GpuResources object that manages memory, stream, and handle resources on the GPU.

void setMinPagingSize(size_t size)

Set the minimum data size for searches (in MiB) for which we use CPU -> GPU paging.

size_t getMinPagingSize() const

Returns the current minimum data size for paged searches.
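
A small usage sketch (gpuIndex is assumed to point at any GpuIndex subclass; halving the threshold is an arbitrary choice):

// Halve the paging threshold; units are as documented above.
size_t current = gpuIndex->getMinPagingSize();
gpuIndex->setMinPagingSize(current / 2);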

virtual void add(idx_t n, const float *x) override

x can be resident on the CPU or any GPU; copies are performed as needed. Handles paged adds if the add set is too large; calls addPaged_.

virtual void add_with_ids(idx_t n, const float *x, const idx_t *ids) override

x and ids can be resident on the CPU or any GPU; copies are performed as needed. Handles paged adds if the add set is too large; calls addPaged_.

virtual void assign(idx_t n, const float *x, idx_t *labels, idx_t k = 1) const override

x and labels can be resident on the CPU or any GPU; copies are performed as needed.

virtual void search(idx_t n, const float *x, idx_t k, float *distances, idx_t *labels, const SearchParameters *params = nullptr) const override

x, distances, and labels can be resident on the CPU or any GPU; copies are performed as needed.
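
Putting add and search together, a minimal end-to-end sketch with GpuIndexFlat (one of the listed subclasses) might look like this; the dimensionality, sizes, and zero-filled data are placeholder assumptions:

#include <faiss/gpu/GpuIndexFlat.h>
#include <faiss/gpu/StandardGpuResources.h>
#include <vector>

int main() {
    int d = 64;                          // vector dimensionality (assumed)
    faiss::idx_t n = 10000, k = 5;

    std::vector<float> xb(n * d, 0.0f);  // database vectors, CPU-resident
    std::vector<float> xq(d, 0.0f);      // a single query vector

    faiss::gpu::StandardGpuResources res;  // streams, handles, scratch memory
    faiss::gpu::GpuIndexFlatConfig config;
    config.device = 0;

    faiss::gpu::GpuIndexFlat index(&res, d, faiss::METRIC_L2, config);
    index.add(n, xb.data());             // CPU pointer; copied/paged as needed

    std::vector<float> distances(k);
    std::vector<faiss::idx_t> labels(k);
    index.search(1, xq.data(), k, distances.data(), labels.data());
    return 0;
}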

virtual void search_and_reconstruct(idx_t n, const float *x, idx_t k, float *distances, idx_t *labels, float *recons, const SearchParameters *params = nullptr) const override

x, distances, labels, and recons can be resident on the CPU or any GPU; copies are performed as needed.

virtual void compute_residual(const float *x, float *residual, idx_t key) const override

Overridden to force GPU indices to provide their own GPU-friendly implementation.

virtual void compute_residual_n(idx_t n, const float *xs, float *residuals, const idx_t *keys) const override

Overridden to force GPU indices to provide their own GPU-friendly implementation.

Protected Functions

void copyFrom(const faiss::Index *index)

Copy what we need from the CPU equivalent.

void copyTo(faiss::Index *index) const

Copy what we have to the CPU equivalent.
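
These protected helpers back the public, typed copyFrom/copyTo overloads that concrete subclasses expose; as a hedged sketch with GpuIndexFlat (reusing res, d, n, and xb from the sketch above):

#include <faiss/IndexFlat.h>
#include <faiss/gpu/GpuIndexFlat.h>

faiss::IndexFlatL2 cpuIndex(d);     // the CPU equivalent
cpuIndex.add(n, xb.data());

faiss::gpu::GpuIndexFlat gpuIndex(&res, d, faiss::METRIC_L2);
gpuIndex.copyFrom(&cpuIndex);       // copy what we need from the CPU side
gpuIndex.copyTo(&cpuIndex);         // copy what we have back to the CPU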

virtual bool addImplRequiresIDs_() const = 0

Does addImpl_ require IDs? If so, and no IDs are provided, we will generate them sequentially, based on the order in which the vectors are added.

virtual void addImpl_(idx_t n, const float *x, const idx_t *ids) = 0

Overridden to actually perform the add. All data is guaranteed to be resident on our device.

virtual void searchImpl_(idx_t n, const float *x, int k, float *distances, idx_t *labels, const SearchParameters *params) const = 0

Overridden to actually perform the search. All data is guaranteed to be resident on our device.
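
Taken together, these three hooks form the contract a subclass fulfills. A hypothetical subclass might look like this (the class name and bodies are assumptions; reset() is included because it is pure virtual in the faiss::Index base):

#include <faiss/gpu/GpuIndex.h>

class MyGpuIndex : public faiss::gpu::GpuIndex {
   public:
    using faiss::gpu::GpuIndex::GpuIndex;  // reuse the base constructor
    void reset() override {}               // pure virtual in faiss::Index

   protected:
    bool addImplRequiresIDs_() const override {
        return true;  // IDs are generated sequentially if none are provided
    }

    void addImpl_(faiss::idx_t n, const float* x, const faiss::idx_t* ids)
            override {
        // x and ids are guaranteed to be resident on our device here
    }

    void searchImpl_(
            faiss::idx_t n,
            const float* x,
            int k,
            float* distances,
            faiss::idx_t* labels,
            const faiss::SearchParameters* params) const override {
        // perform the actual device-side search over device-resident data
    }
};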

Protected Attributes

std::shared_ptr<GpuResources> resources_

Manages streams, cuBLAS handles and scratch memory for devices.

const GpuIndexConfig config_

Our configuration options.

size_t minPagedSize_

Size above which we page copies from the CPU to GPU.

Private Functions

void addPaged_(idx_t n, const float *x, const idx_t *ids)

Handles paged adds if the add set is too large; passes each page to addImpl_ to actually perform the add.

void addPage_(idx_t n, const float *x, const idx_t *ids)

Calls addImpl_ for a single page of GPU-resident data.

void searchNonPaged_(idx_t n, const float *x, int k, float *outDistancesData, idx_t *outIndicesData, const SearchParameters *params) const

Calls searchImpl_ for a single page of GPU-resident data.

void searchFromCpuPaged_(idx_t n, const float *x, int k, float *outDistancesData, idx_t *outIndicesData, const SearchParameters *params) const

Calls searchImpl_ for a single page of GPU-resident data, handling paging of the data and copies from the CPU.