File StackDeviceMemory.h

namespace faiss

Copyright (c) Facebook, Inc. and its affiliates.

This source code is licensed under the MIT license found in the LICENSE file in the root directory of this source tree.

namespace gpu
class StackDeviceMemory
#include <StackDeviceMemory.h>

Device memory manager that provides temporary memory allocations out of a region of memory on a single device.

Public Functions

StackDeviceMemory(GpuResources *res, int device, size_t allocPerDevice)

Allocate a new region of memory that we manage.

StackDeviceMemory(int device, void *p, size_t size, bool isOwner)

Manage a region of memory for a particular device, with or without ownership.

~StackDeviceMemory()
int getDevice() const
void *allocMemory(cudaStream_t stream, size_t size)

All allocations requested should be a multiple of 16 bytes.

void deallocMemory(int device, cudaStream_t, size_t size, void *p)
size_t getSizeAvailable() const
std::string toString() const
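
A minimal usage sketch (not part of the faiss documentation itself): it assumes the CUDA runtime is installed and that the header is available as faiss/gpu/utils/StackDeviceMemory.h (path assumed). It backs the stack with a caller-owned cudaMalloc region via the second constructor above and rounds the requested size up to a multiple of 16 bytes as required.

    #include <faiss/gpu/utils/StackDeviceMemory.h> // header path assumed
    #include <cuda_runtime.h>
    #include <cstdio>

    int main() {
        int device = 0;
        cudaSetDevice(device);

        // Back the stack with a region we allocate and keep ownership of.
        size_t regionSize = size_t(64) * 1024 * 1024; // 64 MiB of temporary space
        void* region = nullptr;
        cudaMalloc(&region, regionSize);

        faiss::gpu::StackDeviceMemory mem(device, region, regionSize, /*isOwner=*/false);

        cudaStream_t stream;
        cudaStreamCreate(&stream);

        // Requests should be a multiple of 16 bytes; round up if needed.
        size_t want = 1000;
        size_t size = (want + 15) & ~size_t(15); // -> 1008

        void* p = mem.allocMemory(stream, size);
        // ... enqueue kernels that use p on `stream` ...
        mem.deallocMemory(device, stream, size, p);

        std::printf("bytes still available: %zu\n", mem.getSizeAvailable());

        cudaStreamDestroy(stream);
        cudaFree(region); // we retained ownership (isOwner == false)
        return 0;
    }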

Protected Attributes

int device_

Our device.

Stack stack_

Memory stack.

struct Range
#include <StackDeviceMemory.h>

Previous allocation ranges and the streams for which synchronization is required.

Public Functions

inline Range(char *s, char *e, cudaStream_t str)

Public Members

char *start_
char *end_
cudaStream_t stream_
struct Stack

Public Functions

Stack(GpuResources *res, int device, size_t size)

Constructor that allocates memory via cudaMalloc.

~Stack()
size_t getSizeAvailable() const

Returns how much memory is available for an allocation without calling cudaMalloc.

char *getAlloc(size_t size, cudaStream_t stream)

Obtains an allocation; all allocations are guaranteed to be 16-byte aligned.

void returnAlloc(char *p, size_t size, cudaStream_t stream)

Returns (releases) an allocation back to the stack; see the LIFO sketch after this function list.

std::string toString() const

Returns the stack state.
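
To make the getAlloc/returnAlloc contract concrete, here is a simplified, host-only sketch of a LIFO pointer-bump allocator of the kind described above. It is illustrative only and not the faiss implementation: it omits the cudaMalloc fallback and the per-stream synchronization that Stack performs.

    #include <cassert>
    #include <cstddef>

    struct ToyStack {
        char* start_; // [start_, end_) is the managed region
        char* end_;
        char* head_;  // next free byte; grows upward on getAlloc

        ToyStack(char* buf, std::size_t size)
                : start_(buf), end_(buf + size), head_(buf) {}

        std::size_t getSizeAvailable() const {
            return std::size_t(end_ - head_);
        }

        char* getAlloc(std::size_t size) {
            assert(size % 16 == 0); // callers request multiples of 16 bytes
            if (size > getSizeAvailable()) {
                return nullptr;     // the real Stack can fall back to other allocation
            }
            char* p = head_;
            head_ += size;          // bump the stack head
            return p;
        }

        void returnAlloc(char* p, std::size_t size) {
            // LIFO discipline: only the most recently handed-out block can be
            // returned, which pops the stack head back down.
            assert(p + size == head_);
            head_ = p;
        }
    };

Under this discipline the bookkeeping for live allocations reduces to a single head pointer, which suits temporary, nested allocations made during a kernel launch sequence.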

Public Members

GpuResources *res_

Our GpuResources object.

int device_

Device this allocation is on.

char *alloc_

Where our temporary memory buffer is allocated; we start allocating 16 bytes into this buffer.

size_t allocSize_

Total size of our allocation.

char *start_

Our temporary memory region; [start_, end_) is valid.

char *end_
char *head_

Stack head within [start_, end_).

std::list<Range> lastUsers_

List of previous last users of allocations on our stack, for possible synchronization purposes (see the stream hand-off sketch below).

size_t highWaterMemoryUsed_

High water mark of memory used from the temporary buffer.
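
The bookkeeping above (Range and lastUsers_) addresses the case where a block of stack memory is reused on a different stream than the one that last touched it. The following is a hypothetical sketch of that hand-off using a CUDA event, so the new stream waits for the previous stream's pending work; it illustrates the idea and is not the code in StackDeviceMemory.h.

    #include <cuda_runtime.h>

    // Hypothetical helper: order `newUser` after `lastUser` before reusing memory.
    void handOffRegion(cudaStream_t lastUser, cudaStream_t newUser) {
        if (lastUser == newUser) {
            return; // same stream: work is already ordered
        }
        cudaEvent_t e;
        cudaEventCreateWithFlags(&e, cudaEventDisableTiming);
        cudaEventRecord(e, lastUser);       // capture the last user's pending work
        cudaStreamWaitEvent(newUser, e, 0); // new user waits for that work
        cudaEventDestroy(e);                // resources released once the event completes
    }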