File NeuralNet.h

namespace faiss

Copyright (c) Facebook, Inc. and its affiliates.

This source code is licensed under the MIT license found in the LICENSE file in the root directory of this source tree.

Implements a few neural net layers, mainly to support QINCo.

struct QINCoStep

Public Functions

QINCoStep(int d, int K, int L, int h)
inline nn::FFN &get_residual_block(int i)
nn::Int32Tensor2D encode(const nn::Tensor2D &xhat, const nn::Tensor2D &x, nn::Tensor2D *residuals = nullptr) const

Encode a set of vectors x with initial estimate xhat. Optionally returns, via residuals, the delta to be added to xhat to form the new xhat.

nn::Tensor2D decode(const nn::Tensor2D &xhat, const nn::Int32Tensor2D &codes) const

Public Members

int d

d: input dimension, K: codebook size, L: number of residual blocks, h: hidden dimension

int K
int L
int h
nn::Embedding codebook
nn::Linear MLPconcat
std::vector<nn::FFN> residual_blocks
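
A minimal usage sketch of one step, based only on the declarations above; the sizes (d=32, K=256, L=2, h=64) are illustrative assumptions, and the interpretation of decode's output as a delta follows the encode description:

    #include <faiss/utils/NeuralNet.h>  // header path assumed

    faiss::QINCoStep step(32, 256, 2, 64);  // d=32, K=256, L=2, h=64
    // ... load trained weights into step.codebook, step.MLPconcat and
    // step.residual_blocks before encoding ...

    size_t n = 100;
    faiss::nn::Tensor2D xhat(n, 32);        // estimate from previous steps
    faiss::nn::Tensor2D x(n, 32);           // vectors to encode
    faiss::nn::Tensor2D residuals(n, 32);   // optional output

    faiss::nn::Int32Tensor2D codes = step.encode(xhat, x, &residuals);
    // per the encode description, the decoded output is a delta that the
    // caller adds to xhat to obtain the refined estimate (assumption)
    faiss::nn::Tensor2D delta = step.decode(xhat, codes);
    delta += xhat;                          // refined estimate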
struct NeuralNetCodec

Subclassed by faiss::QINCo

Public Functions

inline NeuralNetCodec(int d, int M)
virtual nn::Tensor2D decode(const nn::Int32Tensor2D &codes) const = 0
virtual nn::Int32Tensor2D encode(const nn::Tensor2D &x) const = 0
inline virtual ~NeuralNetCodec()

Public Members

int d
int M
struct QINCo : public faiss::NeuralNetCodec

Public Functions

QINCo(int d, int K, int L, int M, int h)
inline QINCoStep &get_step(int i)
virtual nn::Tensor2D decode(const nn::Int32Tensor2D &codes) const override
virtual nn::Int32Tensor2D encode(const nn::Tensor2D &x) const override
inline virtual ~QINCo()

Public Members

int K
int L
int h
nn::Embedding codebook0
std::vector<QINCoStep> steps
int d
int M
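
A hypothetical end-to-end sketch of the full codec; all sizes are illustrative, and whether M counts the base codebook plus the refinement steps is an assumption:

    #include <faiss/utils/NeuralNet.h>  // header path assumed

    faiss::QINCo qinco(32, 256, 2, 4, 64);  // d=32, K=256, L=2, M=4, h=64
    // ... load trained parameters into qinco.codebook0 and qinco.steps ...

    size_t n = 1000;
    faiss::nn::Tensor2D x(n, 32);           // vectors to compress

    faiss::nn::Int32Tensor2D codes = qinco.encode(x);  // M codes per vector
    faiss::nn::Tensor2D x_rec = qinco.decode(codes);   // approximation of x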
namespace nn

Typedefs

using Tensor2D = Tensor2DTemplate<float>
using Int32Tensor2D = Tensor2DTemplate<int32_t>
template<typename T>
struct Tensor2DTemplate

Public Functions

Tensor2DTemplate(size_t n0, size_t n1, const T *data = nullptr)
Tensor2DTemplate &operator+=(const Tensor2DTemplate&)
Tensor2DTemplate column(size_t j) const

get column #j as a 1-column Tensor2D

inline size_t numel() const
inline T *data()
inline const T *data() const

Public Members

size_t shape[2]
std::vector<T> v
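
A small sketch of the tensor type as suggested by the members above: row-major storage in v, with the optional data pointer presumably copied at construction:

    #include <faiss/utils/NeuralNet.h>  // header path assumed

    float buf[6] = {1, 2, 3, 4, 5, 6};
    faiss::nn::Tensor2D t(2, 3, buf);     // 2x3; buf presumably copied into v
    faiss::nn::Tensor2D c = t.column(1);  // 2x1 tensor holding {2, 5}
    t += t;                               // element-wise; shapes must match
    size_t ne = t.numel();                // shape[0] * shape[1] = 6
    float* p = t.data();                  // contiguous row-major buffer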
struct Linear
#include <NeuralNet.h>

minimal translation of nn.Linear

Public Functions

Linear(size_t in_features, size_t out_features, bool bias = true)
Tensor2D operator()(const Tensor2D &x) const

Public Members

size_t in_features
size_t out_features
std::vector<float> weight
std::vector<float> bias
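
A usage sketch; since the struct is described as a minimal translation of nn.Linear, the weight layout (out_features x in_features) and the formula y = x W^T + b are assumed from the PyTorch convention:

    #include <faiss/utils/NeuralNet.h>  // header path assumed

    faiss::nn::Linear lin(32, 64);   // in_features=32, out_features=64
    // fill lin.weight (assumed 64 * 32 floats) and lin.bias (64 floats)
    faiss::nn::Tensor2D x(10, 32);
    faiss::nn::Tensor2D y = lin(x);  // (10, 64); presumably x * W^T + b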
struct Embedding
#include <NeuralNet.h>

minimal translation of nn.Embedding

Public Functions

Embedding(size_t num_embeddings, size_t embedding_dim)
Tensor2D operator()(const Int32Tensor2D&) const
inline float *data()
inline const float *data() const

Public Members

size_t num_embeddings
size_t embedding_dim
std::vector<float> weight
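
A sketch assuming the same lookup-table semantics as torch.nn.Embedding: each int32 id selects one row of weight (num_embeddings x embedding_dim, layout assumed row-major):

    #include <faiss/utils/NeuralNet.h>  // header path assumed

    faiss::nn::Embedding emb(256, 32);    // 256 entries of dimension 32
    // fill emb.weight (256 * 32 floats) with trained codebook entries
    faiss::nn::Int32Tensor2D ids(10, 1);  // one id per row (shape assumed)
    faiss::nn::Tensor2D rows = emb(ids);  // the selected 32-d rows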
struct FFN
#include <NeuralNet.h>

Feed-forward layer that expands to a hidden dimension, applies a ReLU non-linearity, and maps back to the original dimension.

Public Functions

inline FFN(int d, int h)
Tensor2D operator()(const Tensor2D &x) const

Public Members

Linear linear1
Linear linear2
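
Per the description, the layer presumably computes linear2(ReLU(linear1(x))) with linear1: d -> h and linear2: h -> d; a sketch with illustrative sizes:

    #include <faiss/utils/NeuralNet.h>  // header path assumed

    faiss::nn::FFN ffn(32, 64);      // d=32, h=64 (illustrative)
    faiss::nn::Tensor2D x(10, 32);
    faiss::nn::Tensor2D y = ffn(x);  // same shape as x: (10, 32)
    // conceptually: y = ffn.linear2(relu(ffn.linear1(x)))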