File GpuClonerOptions.h

namespace faiss

Implementation of k-means clustering with many variants.

IDSelector is intended to define a subset of vectors to handle (for removal or as subset to search)

PQ4 SIMD packing and accumulation functions

The basic kernel accumulates nq query vectors with bbs = nb * 2 * 16 vectors and produces an output matrix for that. It is interesting for nq * nb <= 4, otherwise register spilling becomes too large.

The implementation of these functions is spread over 3 cpp files to reduce parallel compile times. Templates are instantiated explicitly.

This file contains callbacks for kernels that compute distances.

Throughout the library, vectors are provided as float * pointers. Most algorithms can be optimized when several vectors are processed (added/searched) together in a batch. In this case, they are passed in as a matrix. When n vectors of size d are provided as float * x, component j of vector i is

x[ i * d + j ]

where 0 <= i < n and 0 <= j < d. In other words, matrices are always compact. When specifying the size of the matrix, we call it an n*d matrix, which implies a row-major storage.
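A minimal sketch of this layout (the sizes and values are hypothetical, not from the library):

#include <vector>

int main() {
    size_t n = 3, d = 4;             // hypothetical: 3 vectors of dimension 4
    std::vector<float> x(n * d);     // one compact, row-major n*d block

    // component j of vector i is stored at x[i * d + j]
    for (size_t i = 0; i < n; i++)
        for (size_t j = 0; j < d; j++)
            x[i * d + j] = static_cast<float>(i * 10 + j);

    float v2_c3 = x[2 * d + 3];      // vector 2, component 3
    (void)v2_c3;
    return 0;
}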

I/O functions can read/write to a filename, a file handle or to an object that abstracts the medium.

The read functions return objects that should be deallocated with delete. All references within these objects are owned by the object.
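A hedged sketch of this ownership convention, assuming the faiss::write_index / faiss::read_index functions from index_io.h and a hypothetical filename:

#include <faiss/IndexFlat.h>
#include <faiss/index_io.h>

void io_example() {
    faiss::IndexFlatL2 index(64);                   // 64-dimensional flat index
    faiss::write_index(&index, "example.index");    // write to a filename

    // read_index returns a heap-allocated index owned by the caller
    faiss::Index* loaded = faiss::read_index("example.index");
    // ... use loaded ...
    delete loaded;                                  // deallocate with delete
}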

Definition of inverted lists + a few common classes that implement the interface.

Since IVF (inverted file) indexes are of so much use for large-scale use cases, we group a few functions related to them in this small library. Most functions work both on IndexIVFs and IndexIVFs embedded within an IndexPreTransform.

In this file are the implementations of extra metrics beyond L2 and inner product

Implements a few neural net layers, mainly to support QINCo

Defines a few objects that apply transformations to a set of vectors. Often these are pre-processing steps.

namespace gpu
struct GpuClonerOptions
#include <GpuClonerOptions.h>

set some options on how to copy to GPU (see the usage sketch after the member list)

Subclassed by faiss::gpu::GpuMultipleClonerOptions, faiss::gpu::ToGpuCloner

Public Members

IndicesOptions indicesOptions = INDICES_64_BIT

how should indices be stored on index types that support indices (anything but GpuIndexFlat*)?

bool useFloat16CoarseQuantizer = false

is the coarse quantizer in float16?

bool useFloat16 = false

for GpuIndexIVFFlat, is storage in float16? for GpuIndexIVFPQ, are intermediate calculations in float16?

bool usePrecomputed = false

use precomputed tables?

long reserveVecs = 0

reserve vectors in the inverted files?

bool storeTransposed = false

For GpuIndexFlat, store data in transposed layout?

bool verbose = false

Set verbose options on the index.

bool use_cuvs = false

use the cuVS implementation

bool allowCpuCoarseQuantizer = false

This flag controls the CPU fallback logic for the coarse quantizer component of the index. When set to false (default), the cloner will throw an exception for indices not implemented on GPU. When set to true, it will fall back to a CPU implementation.
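A minimal usage sketch of these options, assuming the faiss::gpu::index_cpu_to_gpu helper and StandardGpuResources (from GpuCloner.h and StandardGpuResources.h); the input index and the option values chosen here are hypothetical:

#include <faiss/IndexIVFPQ.h>
#include <faiss/gpu/GpuCloner.h>
#include <faiss/gpu/GpuClonerOptions.h>
#include <faiss/gpu/StandardGpuResources.h>

void clone_to_gpu_example(const faiss::IndexIVFPQ* cpu_index) {
    faiss::gpu::StandardGpuResources res;   // must outlive the GPU index

    faiss::gpu::GpuClonerOptions options;
    options.indicesOptions = faiss::gpu::INDICES_64_BIT; // default, shown explicitly
    options.useFloat16 = true;      // float16 intermediate calculations for IVFPQ
    options.usePrecomputed = false; // no precomputed tables
    options.reserveVecs = 1000000;  // pre-reserve room in the inverted files
    options.verbose = true;

    // copy the CPU index to GPU device 0 with the options above
    faiss::Index* gpu_index =
            faiss::gpu::index_cpu_to_gpu(&res, /*device=*/0, cpu_index, &options);
    // ... add / search with gpu_index ...
    delete gpu_index;               // freed before res goes out of scope
}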

struct GpuMultipleClonerOptions : public faiss::gpu::GpuClonerOptions

Subclassed by faiss::gpu::ToGpuClonerMultiple

Public Members

bool shard = false

Whether to shard the index across GPUs, rather than replicating it on each GPU (see the multi-GPU sketch at the end of this section)

int shard_type = 1

IndexIVF::copy_subset_to subset type.

bool common_ivf_quantizer = false

set to true if an IndexIVF is to be dispatched to multiple GPUs with a single common IVF quantizer, i.e. only the inverted lists are sharded on the sub-indexes (uses an IndexShardsIVF)

IndicesOptions indicesOptions = INDICES_64_BIT

how should indices be stored on index types that support indices (anything but GpuIndexFlat*)?

bool useFloat16CoarseQuantizer = false

is the coarse quantizer in float16?

bool useFloat16 = false

for GpuIndexIVFFlat, is storage in float16? for GpuIndexIVFPQ, are intermediate calculations in float16?

bool usePrecomputed = false

use precomputed tables?

long reserveVecs = 0

reserve vectors in the inverted files?

bool storeTransposed = false

For GpuIndexFlat, store data in transposed layout?

bool verbose = false

Set verbose options on the index.

bool use_cuvs = false

use the cuVS implementation

bool allowCpuCoarseQuantizer = false

This flag controls the CPU fallback logic for the coarse quantizer component of the index. When set to false (default), the cloner will throw an exception for indices not implemented on GPU. When set to true, it will fall back to a CPU implementation.
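A hedged multi-GPU sketch, assuming the faiss::gpu::index_cpu_to_gpu_multiple helper from GpuCloner.h and that cpu_index is an IVF index; the device count and option values are hypothetical:

#include <faiss/Index.h>
#include <faiss/gpu/GpuCloner.h>
#include <faiss/gpu/GpuClonerOptions.h>
#include <faiss/gpu/StandardGpuResources.h>
#include <memory>
#include <vector>

void shard_across_gpus_example(const faiss::Index* cpu_index, int ngpu) {
    std::vector<std::unique_ptr<faiss::gpu::StandardGpuResources>> res;
    std::vector<faiss::gpu::GpuResourcesProvider*> providers;
    std::vector<int> devices;
    for (int i = 0; i < ngpu; i++) {
        res.push_back(std::make_unique<faiss::gpu::StandardGpuResources>());
        providers.push_back(res.back().get());
        devices.push_back(i);
    }

    faiss::gpu::GpuMultipleClonerOptions options;
    options.shard = true;                // shard the inverted lists, don't replicate
    options.shard_type = 1;              // IndexIVF::copy_subset_to subset type
    options.common_ivf_quantizer = true; // single IVF quantizer shared by the shards

    faiss::Index* gpu_index = faiss::gpu::index_cpu_to_gpu_multiple(
            providers, devices, cpu_index, &options);
    // ... add / search with gpu_index ...
    delete gpu_index;                    // freed before the resources go out of scope
}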