File PolysemousTraining.h

namespace faiss

Copyright (c) Facebook, Inc. and its affiliates.

This source code is licensed under the MIT license found in the LICENSE file in the root directory of this source tree.

Implementation of k-means clustering with many variants.

IDSelector is intended to define a subset of vectors to handle (for removal or as a subset to search).

PQ4 SIMD packing and accumulation functions.

The basic kernel accumulates nq query vectors against bbs = nb * 2 * 16 database vectors and produces an output matrix for that block. It is efficient only for nq * nb <= 4; beyond that, register spilling becomes too costly.

The implementation of these functions is spread over 3 cpp files to reduce parallel compile times. Templates are instantiated explicitly.

This file contains callbacks for kernels that compute distances.

Throughout the library, vectors are provided as float * pointers. Most algorithms can be optimized when several vectors are processed (added/searched) together in a batch. In this case, they are passed in as a matrix. When n vectors of size d are provided as float * x, component j of vector i is

x[ i * d + j ]

where 0 <= i < n and 0 <= j < d. In other words, matrices are always compact. When specifying the size of the matrix, we call it an n*d matrix, which implies a row-major storage.

I/O functions can read/write to a filename, a file handle, or to an object that abstracts the medium.

The read functions return objects that should be deallocated with delete. All references within these objects are owned by the object.

Definition of inverted lists + a few common classes that implement the interface.

Since IVF (inverted file) indexes are widely used for large-scale use cases, we group a few functions related to them in this small library. Most functions work both on IndexIVFs and on IndexIVFs embedded within an IndexPreTransform.

In this file are the implementations of extra metrics beyond L2 and inner product.

Defines a few objects that apply transformations to a set of vectors. Often these are pre-processing steps.

struct SimulatedAnnealingParameters
#include <PolysemousTraining.h>

parameters used for the simulated annealing method

Subclassed by faiss::PolysemousTraining, faiss::SimulatedAnnealingOptimizer

Public Functions

inline SimulatedAnnealingParameters()

Public Members

double init_temperature = 0.7
double temperature_decay = 0.9997893011688015
int n_iter = 500000
int n_redo = 2
int seed = 123
int verbose = 0
bool only_bit_flips = false
bool init_random = false
struct PermutationObjective
#include <PolysemousTraining.h>

abstract class for the loss function

Subclassed by faiss::ReproduceDistancesObjective

Public Functions

virtual double compute_cost(const int *perm) const = 0
virtual double cost_update(const int *perm, int iw, int jw) const
inline virtual ~PermutationObjective()

Public Members

int n
struct ReproduceDistancesObjective : public faiss::PermutationObjective

Public Functions

double dis_weight(double x) const
double get_source_dis(int i, int j) const
virtual double compute_cost(const int *perm) const override
virtual double cost_update(const int *perm, int iw, int jw) const override
ReproduceDistancesObjective(int n, const double *source_dis_in, const double *target_dis_in, double dis_weight_factor)
void set_affine_target_dis(const double *source_dis_in)
inline ~ReproduceDistancesObjective() override

Public Members

double dis_weight_factor
std::vector<double> source_dis

“real” corrected distances (size n^2)

const double *target_dis

wanted distances (size n^2)

std::vector<double> weights

weights for each distance (size n^2)

int n

Public Static Functions

static inline double sqr(double x)
static void compute_mean_stdev(const double *tab, size_t n2, double *mean_out, double *stddev_out)
struct SimulatedAnnealingOptimizer : public faiss::SimulatedAnnealingParameters
#include <PolysemousTraining.h>

Simulated annealing optimization algorithm for permutations.

Public Functions

SimulatedAnnealingOptimizer(PermutationObjective *obj, const SimulatedAnnealingParameters &p)

logs values of the cost function

double optimize(int *perm)
double run_optimization(int *best_perm)
virtual ~SimulatedAnnealingOptimizer()

Public Members

PermutationObjective *obj
int n

size of the permutation

FILE *logfile
RandomGenerator *rnd
double init_cost

remember initial cost of optimization

double init_temperature = 0.7
double temperature_decay = 0.9997893011688015
int n_iter = 500000
int n_redo = 2
int seed = 123
int verbose = 0
bool only_bit_flips = false
bool init_random = false
struct PolysemousTraining : public faiss::SimulatedAnnealingParameters
#include <PolysemousTraining.h>

optimizes the order of indices in a ProductQuantizer

Public Types

enum Optimization_type_t

Values:

enumerator OT_None
enumerator OT_ReproduceDistances_affine

default

enumerator OT_Ranking_weighted_diff

same as _2, but use rank of y+ - rank of y-

Public Functions

PolysemousTraining()
void optimize_pq_for_hamming(ProductQuantizer &pq, size_t n, const float *x) const

reorder the centroids so that the Hamming distance becomes a good approximation of the SDC distance (called by train)

void optimize_ranking(ProductQuantizer &pq, size_t n, const float *x) const

called by optimize_pq_for_hamming

void optimize_reproduce_distances(ProductQuantizer &pq) const

called by optimize_pq_for_hamming

size_t memory_usage_per_thread(const ProductQuantizer &pq) const

make sure we don’t blow up the memory

Public Members

Optimization_type_t optimization_type
int ntrain_permutation

use 1/4 of the training points for the optimization, with max. ntrain_permutation. If ntrain_permutation == 0: train on centroids

double dis_weight_factor

decay of exp that weights distance loss

size_t max_memory

refuse to train if it would require more than that amount of RAM

std::string log_pattern
double init_temperature = 0.7
double temperature_decay = 0.9997893011688015
int n_iter = 500000
int n_redo = 2
int seed = 123
int verbose = 0
bool only_bit_flips = false
bool init_random = false