File IndexIVFFastScan.h
-
namespace faiss
Copyright (c) Facebook, Inc. and its affiliates.
This source code is licensed under the MIT license found in the LICENSE file in the root directory of this source tree.
Implementation of k-means clustering with many variants.
IDSelector is intended to define a subset of vectors to handle (for removal, or as a subset to search).
PQ4 SIMD packing and accumulation functions.
The basic kernel accumulates nq query vectors with bbs = nb * 2 * 16 database vectors and produces an output matrix for that block. This is worthwhile only for nq * nb <= 4; beyond that, register spilling becomes too costly.
The implementation of these functions is spread over 3 cpp files to reduce parallel compile times. Templates are instantiated explicitly.
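The relation between nb, bbs and the constraint nq * nb <= 4 can be tabulated with a small standalone sketch (illustrative only, not library code):

```cpp
#include <cstdio>

// Enumerate the (nq, nb) combinations for which the PQ4 kernel is worthwhile
// and print the corresponding block size bbs = nb * 2 * 16, i.e. how many
// database vectors are accumulated per kernel call against nq query vectors.
int main() {
    for (int nq = 1; nq <= 4; nq++) {
        for (int nb = 1; nq * nb <= 4; nb++) {
            int bbs = nb * 2 * 16;
            std::printf("nq=%d nb=%d -> bbs=%d (output block %d x %d)\n",
                        nq, nb, bbs, nq, bbs);
        }
    }
    return 0;
}
```

For nb = 1 this gives bbs = 32, the default block size of the fast-scan indexes.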
This file contains callbacks for kernels that compute distances.
The SIMDResultHandler object is intended to be templated and inlined. Methods:
handle(): called when 32 distances are computed and provided in two simd16uint16. (q, b) indicate which entry it is in the block.
set_block_origin(): set the sub-matrix that is being computed.
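As a hypothetical, much-simplified illustration of this calling convention (not the library's actual class: the real handlers receive two simd16uint16 registers and are templated for inlining, whereas this toy version takes plain uint16_t arrays):

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>

// ToyResultHandler (hypothetical): tracks the smallest of the 32 distances
// delivered by each handle() call, following the protocol described above.
struct ToyResultHandler {
    size_t i0 = 0, j0 = 0;   // origin of the sub-matrix currently computed
    uint16_t best = 0xffff;  // best (smallest) distance seen so far

    // the kernel announces which sub-matrix of the block it is working on
    void set_block_origin(size_t sub_i0, size_t sub_j0) {
        i0 = sub_i0;
        j0 = sub_j0;
    }

    // called with 32 distances for entry (q, b) of the block, split into two
    // halves of 16 values each (standing in for the two simd16uint16)
    void handle(size_t q, size_t b, const uint16_t d0[16], const uint16_t d1[16]) {
        (void)q;
        (void)b;
        for (int k = 0; k < 16; k++) {
            best = std::min({best, d0[k], d1[k]});
        }
    }
};
```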
Throughout the library, vectors are provided as float * pointers. Most algorithms can be optimized when several vectors are processed (added/searched) together in a batch. In this case, they are passed in as a matrix. When n vectors of size d are provided as float * x, component j of vector i is
x[ i * d + j ]
where 0 <= i < n and 0 <= j < d. In other words, matrices are always compact. When specifying the size of the matrix, we call it an n*d matrix, which implies row-major storage.
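A minimal sketch of this layout convention in plain C++ (names are illustrative):

```cpp
#include <cstddef>
#include <vector>

// Component j of vector i in an n*d row-major matrix stored as a flat array.
float component(const float* x, size_t d, size_t i, size_t j) {
    return x[i * d + j];
}

int main() {
    size_t n = 3, d = 4;
    std::vector<float> x(n * d);
    for (size_t i = 0; i < n; i++)
        for (size_t j = 0; j < d; j++)
            x[i * d + j] = float(i * 10 + j); // fill vector i, component j
    // component 1 of vector 2 lives at x[2 * 4 + 1]
    return component(x.data(), d, 2, 1) == 21.0f ? 0 : 1;
}
```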
I/O functions can read/write to a filename, a file handle or to an object that abstracts the medium.
The read functions return objects that should be deallocated with delete. All references within these objects are owned by the object.
Definition of inverted lists + a few common classes that implement the interface.
Since IVF (inverted file) indexes are widely used for large-scale use cases, we group a few functions related to them in this small library. Most functions work both on IndexIVFs and on IndexIVFs embedded within an IndexPreTransform.
This file implements extra metrics beyond L2 and inner product.
Defines a few objects that apply transformations to a set of vectors. Often these are pre-processing steps.
Variables
- FAISS_API IVFFastScanStats IVFFastScan_stats
-
struct IndexIVFFastScan : public faiss::IndexIVF
- #include <IndexIVFFastScan.h>
Fast scan version of IVFPQ and IVFAQ. Works for 4-bit PQ/AQ for now.
The codes in the inverted lists are not stored sequentially but grouped in blocks of size bbs. This makes it possible to very quickly compute distances with SIMD instructions.
Implementations (implem):
- 0: auto-select implementation (default)
- 1: orig's search, re-implemented
- 2: orig's search, re-ordered by invlist
- 10: optimizer int16 search, collect results in heap, no qbs
- 11: idem, collect results in reservoir
- 12: optimizer int16 search, collect results in heap, uses qbs
- 13: idem, collect results in reservoir
Subclassed by faiss::IndexIVFAdditiveQuantizerFastScan, faiss::IndexIVFPQFastScan
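A hedged end-to-end sketch using the IndexIVFPQFastScan subclass with a flat coarse quantizer (all sizes are illustrative; fast scan requires 4-bit codes):

```cpp
#include <faiss/IndexFlat.h>
#include <faiss/IndexIVFPQFastScan.h>

#include <random>
#include <vector>

int main() {
    size_t d = 64, nlist = 100, M = 32, nbits = 4; // 4-bit PQ, as required
    faiss::IndexFlatL2 quantizer(d);               // coarse quantizer
    faiss::IndexIVFPQFastScan index(
            &quantizer, d, nlist, M, nbits, faiss::METRIC_L2, /*bbs=*/32);

    // illustrative random training / database data
    size_t nb = 10000;
    std::vector<float> xb(nb * d);
    std::mt19937 rng(123);
    std::uniform_real_distribution<float> dis;
    for (auto& v : xb) {
        v = dis(rng);
    }

    index.train(nb, xb.data());
    index.add(nb, xb.data()); // codes are packed into bbs-sized blocks

    // search the first 5 database vectors for their 10 nearest neighbors
    faiss::idx_t nq = 5, k = 10;
    std::vector<float> distances(nq * k);
    std::vector<faiss::idx_t> labels(nq * k);
    index.search(nq, xb.data(), k, distances.data(), labels.data());
    return 0;
}
```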
Public Functions
-
IndexIVFFastScan(Index *quantizer, size_t d, size_t nlist, size_t code_size, MetricType metric = METRIC_L2)
-
IndexIVFFastScan()
-
void init_fastscan(size_t M, size_t nbits, size_t nlist, MetricType metric, int bbs)
-
void init_code_packer()
-
~IndexIVFFastScan() override
-
virtual void add_with_ids(idx_t n, const float *x, const idx_t *xids) override
default implementation that calls encode_vectors
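A small hedged sketch of this entry point (the helper name add_batch is illustrative; the index is assumed to be already trained):

```cpp
#include <faiss/IndexIVFFastScan.h>

#include <vector>

// Add a batch of vectors with caller-chosen ids; add_with_ids encodes them
// and packs the resulting codes into bbs-sized blocks in the inverted lists.
void add_batch(
        faiss::IndexIVFFastScan& index,
        const std::vector<float>& xb,
        const std::vector<faiss::idx_t>& ids) {
    faiss::idx_t n = ids.size();
    index.add_with_ids(n, xb.data(), ids.data());
}
```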
-
virtual bool lookup_table_is_3d() const = 0
-
virtual void compute_LUT(size_t n, const float *x, const idx_t *coarse_ids, const float *coarse_dis, AlignedTable<float> &dis_tables, AlignedTable<float> &biases) const = 0
-
void compute_LUT_uint8(size_t n, const float *x, const idx_t *coarse_ids, const float *coarse_dis, AlignedTable<uint8_t> &dis_tables, AlignedTable<uint16_t> &biases, float *normalizers) const
-
virtual void search(idx_t n, const float *x, idx_t k, float *distances, idx_t *labels, const SearchParameters *params = nullptr) const override
assign the vectors, then call search_preassign
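A hedged sketch of overriding nprobe for one call through SearchParametersIVF (defined in faiss/IndexIVF.h); the helper name is illustrative and the index is assumed trained and populated:

```cpp
#include <faiss/IndexIVF.h>
#include <faiss/IndexIVFFastScan.h>

// Search nq query vectors for their k nearest neighbors, visiting 16
// inverted lists per query instead of the index's default nprobe.
void search_with_nprobe(
        const faiss::IndexIVFFastScan& index,
        const float* xq,
        faiss::idx_t nq,
        faiss::idx_t k,
        float* distances,
        faiss::idx_t* labels) {
    faiss::SearchParametersIVF params;
    params.nprobe = 16;
    index.search(nq, xq, k, distances, labels, &params);
}
```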
-
virtual void range_search(idx_t n, const float *x, float radius, RangeSearchResult *result, const SearchParameters *params = nullptr) const override
will just fail
-
template<bool is_max, class Scaler>
void search_dispatch_implem(idx_t n, const float *x, idx_t k, float *distances, idx_t *labels, const Scaler &scaler) const
-
template<class C, class Scaler>
void search_implem_1(idx_t n, const float *x, idx_t k, float *distances, idx_t *labels, const Scaler &scaler) const
-
template<class C, class Scaler>
void search_implem_2(idx_t n, const float *x, idx_t k, float *distances, idx_t *labels, const Scaler &scaler) const
-
template<class C, class Scaler>
void search_implem_10(idx_t n, const float *x, idx_t k, float *distances, idx_t *labels, int impl, size_t *ndis_out, size_t *nlist_out, const Scaler &scaler) const
-
template<class C, class Scaler>
void search_implem_12(idx_t n, const float *x, idx_t k, float *distances, idx_t *labels, int impl, size_t *ndis_out, size_t *nlist_out, const Scaler &scaler) const
-
template<class C, class Scaler>
void search_implem_14(idx_t n, const float *x, idx_t k, float *distances, idx_t *labels, int impl, const Scaler &scaler) const
-
virtual void reconstruct_from_offset(int64_t list_no, int64_t offset, float *recons) const override
Reconstruct a vector given the location in terms of (inv list index + inv list offset) instead of the id.
Useful for reconstructing when the direct_map is not maintained and the inv list offset is computed by search_preassigned() with store_pairs set.
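A hedged sketch of calling this method directly, assuming a (list_no, offset) pair obtained elsewhere, e.g. from search_preassigned() run with store_pairs (the helper name decode_at is illustrative):

```cpp
#include <faiss/IndexIVFFastScan.h>

#include <vector>

// Decode the approximate vector stored at a given inverted-list location,
// addressed by (list number, offset within the list) rather than by id.
std::vector<float> decode_at(
        const faiss::IndexIVFFastScan& index,
        int64_t list_no,
        int64_t offset) {
    std::vector<float> recons(index.d);
    index.reconstruct_from_offset(list_no, offset, recons.data());
    return recons;
}
```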
-
virtual CodePacker *get_CodePacker() const override
-
void reconstruct_orig_invlists()