File DeviceUtils.h

Defines

CUDA_VERIFY(X)

Wrapper to test return status of CUDA functions.

CUDA_TEST_ERROR()

Wrapper to synchronously probe for CUDA errors.

namespace faiss

Copyright (c) Facebook, Inc. and its affiliates.

This source code is licensed under the MIT license found in the LICENSE file in the root directory of this source tree.

Throughout the library, vectors are provided as float * pointers. Most algorithms can be optimized when several vectors are processed (added/searched) together in a batch. In this case, they are passed in as a matrix. When n vectors of size d are provided as float * x, component j of vector i is

x[ i * d + j ]

where 0 <= i < n and 0 <= j < d. In other words, matrices are always compact. When specifying the size of the matrix, we call it an n*d matrix, which implies a row-major storage.

I/O functions can read/write to a filename, a file handle or to an object that abstracts the medium.

The read functions return objects that should be deallocated with delete. All references within these objects are owned by the object.

Definition of inverted lists + a few common classes that implement the interface.

Since IVF (inverted file) indexes are of so much use for large-scale use cases, we group a few functions related to them in this small library. Most functions work both on IndexIVFs and IndexIVFs embedded within an IndexPreTransform.

In this file are the implementations of extra metrics beyond L2 and inner product.

Defines a few objects that apply transformations to a set of vectors. Often these are pre-processing steps.

namespace gpu

Functions

int getCurrentDevice()

Returns the current thread-local GPU device.

void setCurrentDevice(int device)

Sets the current thread-local GPU device.

int getNumDevices()

Returns the number of available GPU devices.

void profilerStart()

Starts the CUDA profiler (exposed via SWIG)

void profilerStop()

Stops the CUDA profiler (exposed via SWIG)

void synchronizeAllDevices()

Synchronizes the CPU against all devices (equivalent to cudaDeviceSynchronize for each device)

const cudaDeviceProp &getDeviceProperties(int device)

Returns a cached cudaDeviceProp for the given device.

const cudaDeviceProp &getCurrentDeviceProperties()

Returns the cached cudaDeviceProp for the current device.

int getMaxThreads(int device)

Returns the maximum number of threads available for the given GPU device

int getMaxThreadsCurrentDevice()

Equivalent to getMaxThreads(getCurrentDevice())

size_t getMaxSharedMemPerBlock(int device)

Returns the maximum smem available for the given GPU device.

size_t getMaxSharedMemPerBlockCurrentDevice()

Equivalent to getMaxSharedMemPerBlock(getCurrentDevice())

int getDeviceForAddress(const void *p)

For a given pointer, returns the device on which it is located (deviceId >= 0), or -1 if it is located on the host.

bool getFullUnifiedMemSupport(int device)

Does the given device support full unified memory, i.e., sharing host memory?

bool getFullUnifiedMemSupportCurrentDevice()

Equivalent to getFullUnifiedMemSupport(getCurrentDevice())

bool getTensorCoreSupport(int device)

Does the given device support tensor core operations?

bool getTensorCoreSupportCurrentDevice()

Equivalent to getTensorCoreSupport(getCurrentDevice())

int getMaxKSelection()

Returns the maximum k-selection value supported, based on the CUDA SDK with which we were compiled. .cu files can use DeviceDefs.cuh, but this is for non-CUDA files.

template<typename L1, typename L2>
void streamWaitBase(const L1 &listWaiting, const L2 &listWaitOn)

Makes all streams in listWaiting wait on all streams in listWaitOn.

template<typename L1>
void streamWait(const L1 &a, const std::initializer_list<cudaStream_t> &b)

These versions allow usage of initializer_list as arguments, since otherwise {…} doesn’t have a type

template<typename L2>
void streamWait(const std::initializer_list<cudaStream_t> &a, const L2 &b)
inline void streamWait(const std::initializer_list<cudaStream_t> &a, const std::initializer_list<cudaStream_t> &b)
class CublasHandleScope
#include <DeviceUtils.h>

RAII object to manage a cublasHandle_t.

Public Functions

CublasHandleScope()
~CublasHandleScope()
inline cublasHandle_t get()

Private Members

cublasHandle_t blasHandle_
class CudaEvent

Public Functions

explicit CudaEvent(cudaStream_t stream, bool timer = false)

Creates an event and records it in this stream.

CudaEvent(const CudaEvent &event) = delete
CudaEvent(CudaEvent &&event) noexcept
~CudaEvent()
inline cudaEvent_t get()
void streamWaitOnEvent(cudaStream_t stream)

Wait on this event in this stream.

void cpuWaitOnEvent()

Have the CPU wait for the completion of this event.

CudaEvent &operator=(CudaEvent &&event) noexcept
CudaEvent &operator=(CudaEvent &event) = delete

Private Members

cudaEvent_t event_
class DeviceScope
#include <DeviceUtils.h>

RAII object to set the current device, and restore the previous device upon destruction

Public Functions

explicit DeviceScope(int device)
~DeviceScope()

Private Members

int prevDevice_