File GpuIndexFlat.h

namespace faiss

Copyright (c) Facebook, Inc. and its affiliates.

This source code is licensed under the MIT license found in the LICENSE file in the root directory of this source tree.

Throughout the library, vectors are provided as float * pointers. Most algorithms can be optimized when several vectors are processed (added/searched) together in a batch. In this case, they are passed in as a matrix. When n vectors of size d are provided as float * x, component j of vector i is

x[ i * d + j ]

where 0 <= i < n and 0 <= j < d. In other words, matrices are always compact. When specifying the size of the matrix, we call it an n*d matrix, which implies a row-major storage.

namespace gpu
class GpuIndexFlat : public faiss::gpu::GpuIndex
#include <GpuIndexFlat.h>

Wrapper around the GPU implementation that looks like faiss::IndexFlat; copies over centroid data from a given faiss::IndexFlat

Subclassed by faiss::gpu::GpuIndexFlatIP, faiss::gpu::GpuIndexFlatL2

Public Functions

GpuIndexFlat(GpuResourcesProvider *provider, const faiss::IndexFlat *index, GpuIndexFlatConfig config = GpuIndexFlatConfig())

Construct from a pre-existing faiss::IndexFlat instance, copying data over to the given GPU

GpuIndexFlat(std::shared_ptr<GpuResources> resources, const faiss::IndexFlat *index, GpuIndexFlatConfig config = GpuIndexFlatConfig())
GpuIndexFlat(GpuResourcesProvider *provider, int dims, faiss::MetricType metric, GpuIndexFlatConfig config = GpuIndexFlatConfig())

Construct an empty instance that can be added to.

GpuIndexFlat(std::shared_ptr<GpuResources> resources, int dims, faiss::MetricType metric, GpuIndexFlatConfig config = GpuIndexFlatConfig())
~GpuIndexFlat() override
void copyFrom(const faiss::IndexFlat *index)

Initialize ourselves from the given CPU index; will overwrite all data in ourselves

void copyTo(faiss::IndexFlat *index) const

Copy ourselves to the given CPU index; will overwrite all data in the index instance
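A sketch of the construct-from-CPU / copyTo round trip described above, assuming a faiss build with GPU support (the variable names and data values are illustrative):

```cpp
#include <faiss/IndexFlat.h>
#include <faiss/gpu/GpuIndexFlat.h>
#include <faiss/gpu/StandardGpuResources.h>

#include <vector>

int main() {
    int d = 64;
    faiss::IndexFlat cpu_index(d, faiss::METRIC_L2);

    // Add some vectors on the CPU side (arbitrary values here).
    std::vector<float> xb(1000 * d, 0.5f);
    cpu_index.add(1000, xb.data());

    // StandardGpuResources acts as a GpuResourcesProvider.
    faiss::gpu::StandardGpuResources res;
    faiss::gpu::GpuIndexFlatConfig config;
    config.device = 0;

    // Construct from the CPU index: all 1000 vectors are copied to the GPU.
    faiss::gpu::GpuIndexFlat gpu_index(&res, &cpu_index, config);

    // copyTo overwrites the target CPU index with the GPU contents.
    faiss::IndexFlat round_trip(d, faiss::METRIC_L2);
    gpu_index.copyTo(&round_trip);
    return 0;
}
```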

size_t getNumVecs() const

Returns the number of vectors we contain.

virtual void reset() override

Clears all vectors from this index.

virtual void train(Index::idx_t n, const float *x) override

This index is not trained, so this does nothing.

virtual void add(Index::idx_t, const float *x) override

Overrides to avoid excessive copies.

virtual void reconstruct(Index::idx_t key, float *out) const override

Reconstruction methods; prefer the batch reconstruct as it will be more efficient

virtual void reconstruct_n(Index::idx_t i0, Index::idx_t num, float *out) const override

Batch reconstruction method.
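The preference for batch reconstruction can be sketched as follows (this fragment assumes a populated `gpu_index` of dimension `d`, as in a GPU faiss build):

```cpp
// Reconstruct vectors [10, 10 + 32) in one call rather than issuing 32
// separate reconstruct() calls; the batch form amortizes the
// device-to-host transfer over all 32 vectors.
int d = 64;  // illustrative dimension
std::vector<float> out(32 * d);
gpu_index.reconstruct_n(/*i0=*/10, /*num=*/32, out.data());
```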

virtual void compute_residual(const float *x, float *residual, Index::idx_t key) const override

Compute residual.

virtual void compute_residual_n(Index::idx_t n, const float *xs, float *residuals, const Index::idx_t *keys) const override

Compute residual (batch mode)

inline FlatIndex *getGpuData()

For internal access.

Protected Functions

virtual bool addImplRequiresIDs_() const override

Flat index does not require IDs as there is no storage available for them

virtual void addImpl_(int n, const float *x, const Index::idx_t *ids) override

Called from GpuIndex for add.

virtual void searchImpl_(int n, const float *x, int k, float *distances, Index::idx_t *labels) const override

Called from GpuIndex for search.

Protected Attributes

const GpuIndexFlatConfig flatConfig_

Our configuration options.

std::unique_ptr<FlatIndex> data_

Holds our GPU data containing the list of vectors.

struct GpuIndexFlatConfig : public faiss::gpu::GpuIndexConfig

Public Functions

inline GpuIndexFlatConfig()

Public Members

bool useFloat16

Whether or not data is stored as float16.

bool storeTransposed

Whether or not data is stored (transparently) in a transposed layout, enabling use of the NN GEMM call, which is ~10% faster. This will improve the speed of the flat index, but will substantially slow down any add() calls made, as all data must be transposed, and will increase storage requirements (we store data in both transposed and non-transposed layouts).
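A hedged configuration sketch with the trade-offs noted above; the field names match this struct and `device` comes from the `GpuIndexConfig` base, while the chosen values are purely illustrative:

```cpp
faiss::gpu::GpuIndexFlatConfig config;
config.device = 0;              // GPU ordinal (inherited from GpuIndexConfig)
config.useFloat16 = true;       // halve storage at a small precision cost
config.storeTransposed = true;  // ~10% faster search; slower add(), ~2x storage
```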

class GpuIndexFlatIP : public faiss::gpu::GpuIndexFlat
#include <GpuIndexFlat.h>

Wrapper around the GPU implementation that looks like faiss::IndexFlatIP; copies over centroid data from a given faiss::IndexFlat

Public Functions

GpuIndexFlatIP(GpuResourcesProvider *provider, faiss::IndexFlatIP *index, GpuIndexFlatConfig config = GpuIndexFlatConfig())

Construct from a pre-existing faiss::IndexFlatIP instance, copying data over to the given GPU

GpuIndexFlatIP(std::shared_ptr<GpuResources> resources, faiss::IndexFlatIP *index, GpuIndexFlatConfig config = GpuIndexFlatConfig())
GpuIndexFlatIP(GpuResourcesProvider *provider, int dims, GpuIndexFlatConfig config = GpuIndexFlatConfig())

Construct an empty instance that can be added to.

GpuIndexFlatIP(std::shared_ptr<GpuResources> resources, int dims, GpuIndexFlatConfig config = GpuIndexFlatConfig())
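Because GpuIndexFlatIP ranks results by raw inner product, using it for cosine similarity requires L2-normalizing vectors yourself; a sketch assuming a GPU faiss build (names illustrative):

```cpp
faiss::gpu::StandardGpuResources res;
faiss::gpu::GpuIndexFlatIP ip_index(&res, /*dims=*/128);

// For cosine similarity, normalize vectors before add() and before
// search(); GpuIndexFlatIP itself does not normalize. faiss provides
// a CPU-side helper for this in faiss/utils/distances.h:
//   faiss::fvec_renorm_L2(d, n, x);
```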
void copyFrom(faiss::IndexFlat *index)

Initialize ourselves from the given CPU index; will overwrite all data in ourselves

void copyTo(faiss::IndexFlat *index)

Copy ourselves to the given CPU index; will overwrite all data in the index instance

void copyFrom(const faiss::IndexFlat *index)

Initialize ourselves from the given CPU index; will overwrite all data in ourselves

void copyTo(faiss::IndexFlat *index) const

Copy ourselves to the given CPU index; will overwrite all data in the index instance

size_t getNumVecs() const

Returns the number of vectors we contain.

virtual void reset() override

Clears all vectors from this index.

virtual void train(Index::idx_t n, const float *x) override

This index is not trained, so this does nothing.

virtual void add(Index::idx_t, const float *x) override

Overrides to avoid excessive copies.

virtual void reconstruct(Index::idx_t key, float *out) const override

Reconstruction methods; prefer the batch reconstruct as it will be more efficient

virtual void reconstruct_n(Index::idx_t i0, Index::idx_t num, float *out) const override

Batch reconstruction method.

virtual void compute_residual(const float *x, float *residual, Index::idx_t key) const override

Compute residual.

virtual void compute_residual_n(Index::idx_t n, const float *xs, float *residuals, const Index::idx_t *keys) const override

Compute residual (batch mode)

inline FlatIndex *getGpuData()

For internal access.

Protected Functions

virtual bool addImplRequiresIDs_() const override

Flat index does not require IDs as there is no storage available for them

virtual void addImpl_(int n, const float *x, const Index::idx_t *ids) override

Called from GpuIndex for add.

virtual void searchImpl_(int n, const float *x, int k, float *distances, Index::idx_t *labels) const override

Called from GpuIndex for search.

Protected Attributes

const GpuIndexFlatConfig flatConfig_

Our configuration options.

std::unique_ptr<FlatIndex> data_

Holds our GPU data containing the list of vectors.

class GpuIndexFlatL2 : public faiss::gpu::GpuIndexFlat
#include <GpuIndexFlat.h>

Wrapper around the GPU implementation that looks like faiss::IndexFlatL2; copies over centroid data from a given faiss::IndexFlat

Public Functions

GpuIndexFlatL2(GpuResourcesProvider *provider, faiss::IndexFlatL2 *index, GpuIndexFlatConfig config = GpuIndexFlatConfig())

Construct from a pre-existing faiss::IndexFlatL2 instance, copying data over to the given GPU

GpuIndexFlatL2(std::shared_ptr<GpuResources> resources, faiss::IndexFlatL2 *index, GpuIndexFlatConfig config = GpuIndexFlatConfig())
GpuIndexFlatL2(GpuResourcesProvider *provider, int dims, GpuIndexFlatConfig config = GpuIndexFlatConfig())

Construct an empty instance that can be added to.

GpuIndexFlatL2(std::shared_ptr<GpuResources> resources, int dims, GpuIndexFlatConfig config = GpuIndexFlatConfig())
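The empty-construction path for GpuIndexFlatL2 can be sketched as follows (requires a GPU build of faiss; sizes and values are illustrative):

```cpp
faiss::gpu::StandardGpuResources res;
faiss::gpu::GpuIndexFlatL2 index(&res, /*dims=*/64);

// train() is a no-op for flat indexes; add() and search() work directly.
std::vector<float> xb(100 * 64, 0.1f);
index.add(100, xb.data());

// Search for the 5 nearest neighbors of one query vector.
std::vector<float> distances(5);
std::vector<faiss::Index::idx_t> labels(5);
std::vector<float> query(64, 0.1f);
index.search(1, query.data(), 5, distances.data(), labels.data());
```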
void copyFrom(faiss::IndexFlat *index)

Initialize ourselves from the given CPU index; will overwrite all data in ourselves

void copyTo(faiss::IndexFlat *index)

Copy ourselves to the given CPU index; will overwrite all data in the index instance

void copyFrom(const faiss::IndexFlat *index)

Initialize ourselves from the given CPU index; will overwrite all data in ourselves

void copyTo(faiss::IndexFlat *index) const

Copy ourselves to the given CPU index; will overwrite all data in the index instance

size_t getNumVecs() const

Returns the number of vectors we contain.

virtual void reset() override

Clears all vectors from this index.

virtual void train(Index::idx_t n, const float *x) override

This index is not trained, so this does nothing.

virtual void add(Index::idx_t, const float *x) override

Overrides to avoid excessive copies.

virtual void reconstruct(Index::idx_t key, float *out) const override

Reconstruction methods; prefer the batch reconstruct as it will be more efficient

virtual void reconstruct_n(Index::idx_t i0, Index::idx_t num, float *out) const override

Batch reconstruction method.

virtual void compute_residual(const float *x, float *residual, Index::idx_t key) const override

Compute residual.

virtual void compute_residual_n(Index::idx_t n, const float *xs, float *residuals, const Index::idx_t *keys) const override

Compute residual (batch mode)

inline FlatIndex *getGpuData()

For internal access.

Protected Functions

virtual bool addImplRequiresIDs_() const override

Flat index does not require IDs as there is no storage available for them

virtual void addImpl_(int n, const float *x, const Index::idx_t *ids) override

Called from GpuIndex for add.

virtual void searchImpl_(int n, const float *x, int k, float *distances, Index::idx_t *labels) const override

Called from GpuIndex for search.

Protected Attributes

const GpuIndexFlatConfig flatConfig_

Our configuration options.

std::unique_ptr<FlatIndex> data_

Holds our GPU data containing the list of vectors.