Class faiss::gpu::GpuIndexIVFScalarQuantizer

class faiss::gpu::GpuIndexIVFScalarQuantizer : public faiss::gpu::GpuIndexIVF

Wrapper around the GPU implementation that looks like faiss::IndexIVFScalarQuantizer

Public Types

using idx_t = int64_t

all indices are this type

using component_t = float
using distance_t = float

Public Functions

GpuIndexIVFScalarQuantizer(GpuResourcesProvider *provider, const faiss::IndexIVFScalarQuantizer *index, GpuIndexIVFScalarQuantizerConfig config = GpuIndexIVFScalarQuantizerConfig())

Construct from a pre-existing faiss::IndexIVFScalarQuantizer instance, copying data over to the given GPU, if the input index is trained.

GpuIndexIVFScalarQuantizer(GpuResourcesProvider *provider, int dims, int nlist, faiss::ScalarQuantizer::QuantizerType qtype, faiss::MetricType metric = MetricType::METRIC_L2, bool encodeResidual = true, GpuIndexIVFScalarQuantizerConfig config = GpuIndexIVFScalarQuantizerConfig())

Constructs a new instance with an empty flat quantizer; the user provides the number of lists desired.
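
A minimal construction sketch, assuming a single GPU at device 0 and 8-bit scalar quantization; the dimension and list count are illustrative:

    #include <faiss/gpu/StandardGpuResources.h>
    #include <faiss/gpu/GpuIndexIVFScalarQuantizer.h>

    int d = 64;      // vector dimension (illustrative)
    int nlist = 256; // number of inverted lists (illustrative)

    faiss::gpu::StandardGpuResources res; // acts as the GpuResourcesProvider
    faiss::gpu::GpuIndexIVFScalarQuantizerConfig config;
    config.device = 0; // target GPU

    faiss::gpu::GpuIndexIVFScalarQuantizer index(
            &res,
            d,
            nlist,
            faiss::ScalarQuantizer::QT_8bit,
            faiss::METRIC_L2,
            /*encodeResidual=*/true,
            config);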

~GpuIndexIVFScalarQuantizer() override
void reserveMemory(size_t numVecs)

Reserve GPU memory in our inverted lists for this number of vectors.

void copyFrom(const faiss::IndexIVFScalarQuantizer *index)

Initialize ourselves from the given CPU index; will overwrite all data in ourselves

void copyTo(faiss::IndexIVFScalarQuantizer *index) const

Copy ourselves to the given CPU index; will overwrite all data in the index instance
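
The two copy directions pair naturally with a CPU-side faiss::IndexIVFScalarQuantizer. A hedged sketch, continuing the construction example above (the coarse quantizer here is illustrative, and the CPU header location may vary slightly across FAISS versions):

    #include <faiss/IndexFlat.h>
    #include <faiss/IndexScalarQuantizer.h>

    faiss::IndexFlatL2 coarse(d);
    faiss::IndexIVFScalarQuantizer cpuIndex(
            &coarse, d, nlist, faiss::ScalarQuantizer::QT_8bit, faiss::METRIC_L2);

    index.copyTo(&cpuIndex);   // GPU -> CPU; overwrites all data in cpuIndex
    index.copyFrom(&cpuIndex); // CPU -> GPU; overwrites all data in index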

size_t reclaimMemory()

After adding vectors, one can call this to reclaim device memory to exactly the amount needed. Returns space reclaimed in bytes
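
A small sketch of the reserve/reclaim pattern around a bulk add, continuing the example above (the vector count is illustrative):

    index.reserveMemory(1000000); // reserve list space before a large add

    // ... train the index and add roughly that many vectors ...

    size_t bytesReclaimed = index.reclaimMemory(); // trim back to actual usage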

virtual void reset() override

Clears out all inverted lists, but retains the coarse and scalar quantizer information

virtual void train(Index::idx_t n, const float *x) override

Trains the coarse and scalar quantizer based on the given vector data.
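
A hedged training sketch, continuing the example above; the random training data is only a stand-in for real vectors:

    #include <random>
    #include <vector>

    size_t nt = 10000; // number of training vectors (illustrative)
    std::vector<float> xt(nt * d);
    std::mt19937 rng(123);
    std::uniform_real_distribution<float> dist(0.0f, 1.0f);
    for (auto& v : xt) {
        v = dist(rng); // fill with synthetic data
    }

    index.train(nt, xt.data()); // trains coarse centroids and the SQ ranges
    // index.is_trained is now true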

virtual int getListLength(int listId) const override

Returns the number of vectors present in a particular inverted list.

virtual std::vector<uint8_t> getListVectorData(int listId, bool gpuFormat = false) const override

Return the encoded vector data contained in a particular inverted list, for debugging purposes. If gpuFormat is true, the data is returned as it is encoded in the GPU-side representation. Otherwise, it is converted to the CPU-side (faiss::InvertedLists-compliant) format, while the native GPU format may differ.

virtual std::vector<Index::idx_t> getListIndices(int listId) const override

Return the vector indices contained in a particular inverted list, for debugging purposes.
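
A debugging sketch that inspects one inverted list after vectors have been added; list 0 is arbitrary:

    int listId = 0;
    int count = index.getListLength(listId); // number of vectors in the list

    // Encoded vector data, converted to the CPU format by default
    std::vector<uint8_t> codes = index.getListVectorData(listId);

    // User-visible IDs stored in the same list
    std::vector<faiss::Index::idx_t> ids = index.getListIndices(listId);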

void copyFrom(const faiss::IndexIVF *index)

Copy what we need from the CPU equivalent.

void copyTo(faiss::IndexIVF *index) const

Copy what we have to the CPU equivalent.

int getNumLists() const

Returns the number of inverted lists we’re managing.

GpuIndexFlat *getQuantizer()

Return the quantizer we’re using.

void setNumProbes(int nprobe)

Sets the number of list probes per query.

int getNumProbes() const

Returns our current number of list probes per query.
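
Raising the probe count trades speed for recall; a small sketch (the value is illustrative):

    index.setNumProbes(32);            // probe 32 inverted lists per query
    int nprobe = index.getNumProbes(); // now 32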

int getDevice() const

Returns the device that this index is resident on.

std::shared_ptr<GpuResources> getResources()

Returns a reference to our GpuResources object that manages memory, stream and handle resources on the GPU

void setMinPagingSize(size_t size)

Set the minimum data size for searches (in MiB) for which we use CPU -> GPU paging

size_t getMinPagingSize() const

Returns the current minimum data size for paged searches.
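
A small sketch that reads the current threshold and raises it; the units follow the documentation of setMinPagingSize above:

    size_t current = index.getMinPagingSize();
    index.setMinPagingSize(current * 2); // page only for larger batches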

virtual void add(Index::idx_t, const float *x) override

x can be resident on the CPU or any GPU; copies are performed as needed. Handles paged adds if the add set is too large; calls addInternal_.

virtual void add_with_ids(Index::idx_t n, const float *x, const Index::idx_t *ids) override

x and ids can be resident on the CPU or any GPU; copies are performed as needed. Handles paged adds if the add set is too large; calls addInternal_.
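
An add sketch with explicit IDs, continuing the example above; xb stands in for nb row-major database vectors of dimension d:

    size_t nb = 100000;            // illustrative
    std::vector<float> xb(nb * d); // fill with your data
    std::vector<faiss::Index::idx_t> ids(nb);
    for (size_t i = 0; i < nb; ++i) {
        ids[i] = static_cast<faiss::Index::idx_t>(i);
    }

    index.add_with_ids(nb, xb.data(), ids.data());
    // or, to let the index assign sequential IDs:
    // index.add(nb, xb.data());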

virtual void assign(Index::idx_t n, const float *x, Index::idx_t *labels, Index::idx_t k = 1) const override

x and labels can be resident on the CPU or any GPU; copies are performed as needed

virtual void search(Index::idx_t n, const float *x, Index::idx_t k, float *distances, Index::idx_t *labels) const override

x, distances and labels can be resident on the CPU or any GPU; copies are performed as needed
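
A search sketch for nq queries and the top k neighbors; the output buffers are preallocated on the host here, but could also live on a GPU:

    size_t nq = 16; // illustrative
    int k = 10;
    std::vector<float> xq(nq * d); // query vectors
    std::vector<float> distances(nq * k);
    std::vector<faiss::Index::idx_t> labels(nq * k);

    index.search(nq, xq.data(), k, distances.data(), labels.data());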

virtual void compute_residual(const float *x, float *residual, Index::idx_t key) const override

Overridden to force GPU indices to provide their own GPU-friendly implementation

virtual void compute_residual_n(Index::idx_t n, const float *xs, float *residuals, const Index::idx_t *keys) const override

Overridden to force GPU indices to provide their own GPU-friendly implementation
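
A sketch of computing the residual of a single vector (reusing xq from the search sketch above) against its nearest coarse centroid; using the quantizer's assign to obtain the key is one possible approach:

    const float* x = xq.data(); // one vector of dimension d (assumed)
    faiss::Index::idx_t key = 0;
    index.getQuantizer()->assign(1, x, &key); // nearest coarse centroid

    std::vector<float> residual(d);
    index.compute_residual(x, residual.data(), key); // x minus centroid[key]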

virtual void range_search(idx_t n, const float *x, float radius, RangeSearchResult *result) const

Query n vectors of dimension d against the index.

Return all vectors with distance < radius. Note that many indexes do not implement range_search (only the k-NN search is mandatory).

Parameters
  • x – input vectors to search, size n * d

  • radius – search radius

  • result – result table

virtual size_t remove_ids(const IDSelector &sel)

removes IDs from the index. Not supported by all indexes. Returns the number of elements removed.

virtual void reconstruct(idx_t key, float *recons) const

Reconstruct a stored vector (or an approximation if lossy coding)

this function may not be defined for some indexes

Parameters
  • key – id of the vector to reconstruct

  • recons – reconstructed vector (size d)

virtual void reconstruct_n(idx_t i0, idx_t ni, float *recons) const

Reconstruct vectors i0 to i0 + ni - 1

this function may not be defined for some indexes

Parameters

recons – reconstructed vectors (size ni * d)

virtual void search_and_reconstruct(idx_t n, const float *x, idx_t k, float *distances, idx_t *labels, float *recons) const

Similar to search, but also reconstructs the stored vectors (or an approximation in the case of lossy coding) for the search results.

If there are not enough results for a query, the resulting arrays are padded with -1s.

Parameters

recons – reconstructed vectors size (n, k, d)

virtual DistanceComputer *get_distance_computer() const

Get a DistanceComputer (defined in AuxIndexStructures) object for this kind of index.

DistanceComputer is implemented for indexes that support random access of their vectors.

virtual size_t sa_code_size() const

size of the produced codes in bytes

virtual void sa_encode(idx_t n, const float *x, uint8_t *bytes) const

encode a set of vectors

Parameters
  • n – number of vectors

  • x – input vectors, size n * d

  • bytes – output encoded vectors, size n * sa_code_size()

virtual void sa_decode(idx_t n, const uint8_t *bytes, float *x) const

decode a set of vectors

Parameters
  • n – number of vectors

  • bytes – input encoded vectors, size n * sa_code_size()

  • x – output vectors, size n * d
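
These standalone codec methods are inherited from faiss::Index; whether the GPU class overrides them can depend on the FAISS version, so this hedged sketch uses cpuIndex from the copy sketch and xb from the add sketch above:

    size_t n = 4;                               // illustrative
    size_t codeSize = cpuIndex.sa_code_size();  // bytes per encoded vector
    std::vector<uint8_t> codes(n * codeSize);
    cpuIndex.sa_encode(n, xb.data(), codes.data()); // encode n * d floats

    std::vector<float> decoded(n * d);
    cpuIndex.sa_decode(n, codes.data(), decoded.data()); // lossy reconstruction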

Public Members

faiss::ScalarQuantizer sq

Exposed like the CPU version.

bool by_residual

Exposed like the CPU version.

ClusteringParameters cp

Exposing this like the CPU version for manipulation.

int nlist

Exposing this like the CPU version for query.

int nprobe

Exposing this like the CPU version for manipulation.

GpuIndexFlat *quantizer

Exposing this like the CPU version for query.
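
These members mirror the CPU index; a small sketch of typical inspection and manipulation, continuing the example above (values illustrative; ClusteringParameters are consulted at train time):

    index.cp.niter = 20;                   // k-means iterations for training
    int lists = index.nlist;               // number of inverted lists
    faiss::ScalarQuantizer& sq = index.sq; // trained scalar quantizer state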

int d

vector dimension

idx_t ntotal

total number of indexed vectors

bool verbose

verbosity level

bool is_trained

set if the Index does not require training, or if training is done already

MetricType metric_type

type of metric this index uses for search

float metric_arg

argument of the metric type

Protected Functions

virtual void addImpl_(int n, const float *x, const Index::idx_t *ids) override

Called from GpuIndex for add/add_with_ids.

virtual void searchImpl_(int n, const float *x, int k, float *distances, Index::idx_t *labels) const override

Called from GpuIndex for search.

void trainResiduals_(Index::idx_t n, const float *x)

Called from train to handle SQ residual training.

void copyFrom(const faiss::Index *index)

Copy what we need from the CPU equivalent.

void copyTo(faiss::Index *index) const

Copy what we have to the CPU equivalent.

virtual bool addImplRequiresIDs_() const override

Does addImpl_ require IDs? If so, and no IDs are provided, we will generate them sequentially based on the order in which the vectors are added.

void trainQuantizer_(Index::idx_t n, const float *x)

Protected Attributes

const GpuIndexIVFScalarQuantizerConfig ivfSQConfig_

Our configuration options.

size_t reserveMemoryVecs_

Desired inverted list memory reservation.

std::unique_ptr<IVFFlat> index_

Instance that we own; contains the inverted lists.

const GpuIndexIVFConfig ivfConfig_

Our configuration options.

std::shared_ptr<GpuResources> resources_

Manages streams, cuBLAS handles and scratch memory for devices.

const GpuIndexConfig config_

Our configuration options.

size_t minPagedSize_

Size above which we page copies from the CPU to GPU.