File GpuDistance.h
-
namespace faiss
-
namespace gpu
Enums
Functions
-
bool should_use_cuvs(GpuDistanceParams args)
Determines whether cuVS should be used for the given search arguments, based on various conditions (such as an unsupported GPU architecture).
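A minimal usage sketch (names are illustrative): given a filled-in GpuDistanceParams, the predicate can be consulted to learn whether the call would be served by cuVS or by the classic Faiss GPU kernels; the outcome also depends on how Faiss was built.

#include <faiss/gpu/GpuDistance.h>

// Illustrative only: returns true if this brute-force search would be
// dispatched to cuVS (depends on build options and the GPU architecture).
bool willDispatchToCuvs(const faiss::gpu::GpuDistanceParams& base) {
    faiss::gpu::GpuDistanceParams args = base;
    args.use_cuvs = true; // request cuVS; should_use_cuvs may still decline
    return faiss::gpu::should_use_cuvs(args);
}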
-
void bfKnn(GpuResourcesProvider *resources, const GpuDistanceParams &args)
A wrapper for gpu/impl/Distance.cuh to expose direct brute-force k-nearest neighbor searches on an externally-provided region of memory (e.g., from a pytorch tensor). The data (vectors, queries, outDistances, outIndices) can be resident on the GPU or the CPU, but all calculations are performed on the GPU. If the result buffers are on the CPU, results will be copied back when done.
All GPU computation is performed on the current CUDA device, and ordered with respect to resources->getDefaultStreamCurrentDevice().
For each vector in queries, searches all of vectors to find its k nearest neighbors with respect to the given metric.
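A minimal usage sketch, assuming float32, row-major data resident in CPU memory (results are then copied back to the CPU). The numVectors and numQueries fields used below come from the full GpuDistanceParams definition and are not shown in the member list excerpted further down; all buffer names and sizes are illustrative.

#include <faiss/gpu/GpuDistance.h>
#include <faiss/gpu/StandardGpuResources.h>

// Brute-force k-NN of nq queries against nb database vectors, all float32 and
// row-major; outDistances and outIndices must hold nq x k entries each.
void exampleBfKnn(
        const float* xb, faiss::idx_t nb,   // database: nb x d
        const float* xq, faiss::idx_t nq,   // queries:  nq x d
        int d, int k,
        float* outDistances,                // nq x k
        faiss::idx_t* outIndices) {         // nq x k (int64, matching I64)
    faiss::gpu::StandardGpuResources res;   // a GpuResourcesProvider

    faiss::gpu::GpuDistanceParams args;
    args.metric = faiss::METRIC_L2;
    args.k = k;
    args.dims = d;
    args.vectors = xb;
    args.numVectors = nb;
    args.queries = xq;
    args.numQueries = nq;
    args.outDistances = outDistances;
    args.outIndices = outIndices;           // outIndicesType defaults to I64

    faiss::gpu::bfKnn(&res, args);
}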
-
void bfKnn_tiling(GpuResourcesProvider *resources, const GpuDistanceParams &args, size_t vectorsMemoryLimit, size_t queriesMemoryLimit)
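The header gives no further description for this variant. The sketch below is hedged: it assumes the two limits are byte budgets that bound the working set for the vectors and queries sides so the computation can be broken into tiles; verify against your Faiss version before relying on this.

#include <faiss/gpu/GpuDistance.h>

// Illustrative: same parameters as bfKnn, plus per-side memory budgets
// (assumed to be in bytes) that allow the computation to be tiled.
void exampleBfKnnTiling(
        faiss::gpu::GpuResourcesProvider* resources,
        const faiss::gpu::GpuDistanceParams& args) {
    size_t vectorsMemoryLimit = size_t(1) << 30;   // ~1 GiB for the vectors side
    size_t queriesMemoryLimit = size_t(256) << 20; // ~256 MiB for the queries side
    faiss::gpu::bfKnn_tiling(resources, args, vectorsMemoryLimit, queriesMemoryLimit);
}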
-
void bruteForceKnn(GpuResourcesProvider *resources, faiss::MetricType metric, const float *vectors, bool vectorsRowMajor, idx_t numVectors, const float *queries, bool queriesRowMajor, idx_t numQueries, int dims, int k, float *outDistances, idx_t *outIndices)
Deprecated legacy implementation.
-
struct GpuDistanceParams
- #include <GpuDistance.h>
Arguments to brute-force GPU k-nearest neighbor searching.
Public Members
-
faiss::MetricType metric = METRIC_L2
Search parameter: distance metric.
-
float metricArg = 0
Search parameter: distance metric argument (if applicable). For metric == METRIC_Lp, this is the p-value
-
int k = 0
Search parameter: return k nearest neighbors. If the value provided is -1, then we report all pairwise distances without top-k filtering (see the sketch after this member list)
-
int dims = 0
Vector dimensionality.
-
const void *vectors = nullptr
If vectorsRowMajor is true, this is numVectors x dims, with dims innermost; otherwise, dims x numVectors, with numVectors innermost
-
DistanceDataType vectorType = DistanceDataType::F32
-
bool vectorsRowMajor = true
-
const float *vectorNorms = nullptr
Precomputed L2 norms for each vector in vectors, which can be optionally provided in advance to speed computation for METRIC_L2
-
const void *queries = nullptr
If queriesRowMajor is true, this is numQueries x dims, with dims innermost; otherwise, dims x numQueries, with numQueries innermost
-
DistanceDataType queryType = DistanceDataType::F32
-
bool queriesRowMajor = true
-
float *outDistances = nullptr
A region of memory size numQueries x k, with k innermost (row major) if k > 0, or if k == -1, a region of memory of size numQueries x numVectors
-
bool ignoreOutDistances = false
Do we only care about the indices reported, rather than the output distances? Not used if k == -1 (all pairwise distances)
-
IndicesDataType outIndicesType = IndicesDataType::I64
-
void *outIndices = nullptr
A region of memory size numQueries x k, with k innermost (row major). Not used if k == -1 (all pairwise distances)
-
int device = -1
On which GPU device should the search run? -1 indicates that the current CUDA thread-local device (via cudaGetDevice/cudaSetDevice) is used. Otherwise, an integer 0 <= device < numDevices indicates the device for execution
-
bool use_cuvs = false
Should the index dispatch down to cuVS?
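As referenced under the k member above, a sketch of the all-pairwise-distances mode: with k == -1, outDistances must hold numQueries x numVectors values and outIndices is not used. As in the earlier bfKnn sketch, the numVectors and numQueries fields are taken from the full struct definition, and all names are illustrative.

#include <faiss/gpu/GpuDistance.h>
#include <faiss/gpu/StandardGpuResources.h>
#include <vector>

// Illustrative: report every pairwise L2 distance between nq queries and
// nb database vectors; no top-k selection is performed and no indices are written.
std::vector<float> allPairwiseDistances(
        const float* xb, faiss::idx_t nb,
        const float* xq, faiss::idx_t nq,
        int d) {
    faiss::gpu::StandardGpuResources res;
    std::vector<float> distances(size_t(nq) * size_t(nb)); // numQueries x numVectors

    faiss::gpu::GpuDistanceParams args;
    args.metric = faiss::METRIC_L2;
    args.k = -1;                       // all pairwise distances, no top-k filtering
    args.dims = d;
    args.vectors = xb;
    args.numVectors = nb;
    args.queries = xq;
    args.numQueries = nq;
    args.outDistances = distances.data();
    // outIndices / outIndicesType are not consulted when k == -1

    faiss::gpu::bfKnn(&res, args);
    return distances;
}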