Class faiss::gpu::StandardGpuResources

class StandardGpuResources : public faiss::gpu::GpuResourcesProvider

Default implementation of GpuResources that allocates a cuBLAS handle and 2 streams for use, as well as temporary memory. Internally, the Faiss GPU code uses the instance managed by getResources, but this is the user-facing object that is internally reference counted.
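
A minimal usage sketch (not part of the reference itself): a single resources object is constructed once and passed to any GPU index that should share its cuBLAS handle, streams, and temporary memory. The dimension, database size, and k below are illustrative values.

    #include <faiss/gpu/StandardGpuResources.h>
    #include <faiss/gpu/GpuIndexFlat.h>
    #include <cstdint>
    #include <vector>

    int main() {
        // One resources object, shared by any number of GPU indexes.
        faiss::gpu::StandardGpuResources res;

        int d = 64; // vector dimension (illustrative)
        faiss::gpu::GpuIndexFlatL2 index(&res, d);

        // Add 1000 database vectors, then query with the first of them.
        std::vector<float> xb(1000 * d, 0.5f);
        index.add(1000, xb.data());

        int k = 4;
        std::vector<float> distances(k);
        std::vector<int64_t> labels(k); // Faiss ids are 64-bit integers
        index.search(1, xb.data(), k, distances.data(), labels.data());
        return 0;
    }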

Public Functions

StandardGpuResources()
~StandardGpuResources() override
virtual std::shared_ptr<GpuResources> getResources() override

Returns the shared resources object.

void noTempMemory()

Disable allocation of temporary memory; all temporary memory requests will call cudaMalloc / cudaFree at the point of use.

void setTempMemory(size_t size)

Specify that we wish to use a certain fixed size of memory on all devices as temporary memory. This is the upper bound for the GPU memory that we will reserve. We will never go above 1.5 GiB on any GPU; smaller GPUs (with <= 4 GiB or <= 8 GiB) will use less memory than that. To avoid any temporary memory allocation, pass 0.
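
A short sketch of the two temporary-memory modes described above; the 256 MiB figure is only an illustrative value.

    #include <faiss/gpu/StandardGpuResources.h>

    int main() {
        faiss::gpu::StandardGpuResources res;

        // Option 1: no up-front reservation; temporary requests fall back to
        // cudaMalloc / cudaFree at the point of use.
        res.noTempMemory();

        // Option 2: cap the per-device temporary reservation (256 MiB here;
        // passing 0 is equivalent to noTempMemory()).
        res.setTempMemory(size_t(256) * 1024 * 1024);
        return 0;
    }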

void setPinnedMemory(size_t size)

Set the amount of pinned memory to allocate, for async GPU <-> CPU transfers.
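
A sketch of reserving pinned host memory for transfers; the 64 MiB size is an illustrative choice, not a recommended value.

    #include <faiss/gpu/StandardGpuResources.h>

    int main() {
        faiss::gpu::StandardGpuResources res;

        // Reserve 64 MiB of pinned (page-locked) host memory used to stage
        // asynchronous GPU <-> CPU copies.
        res.setPinnedMemory(size_t(64) * 1024 * 1024);
        return 0;
    }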

void setDefaultStream(int device, cudaStream_t stream)

Called to change the stream for work ordering. We do not own the stream; i.e., it will not be destroyed when the GpuResources object gets cleaned up. We are guaranteed that all Faiss GPU work is ordered with respect to this stream upon exit from an index or other Faiss GPU call.

void revertDefaultStream(int device)

Revert the default stream to the original stream managed by this resources object, in case someone called setDefaultStream.
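
A sketch showing a caller-owned CUDA stream installed for work ordering on device 0 (an illustrative choice) and later reverted; the stream stays owned, and is destroyed, by the caller.

    #include <faiss/gpu/StandardGpuResources.h>
    #include <cuda_runtime.h>

    int main() {
        faiss::gpu::StandardGpuResources res;

        int device = 0; // illustrative: the device the index runs on
        cudaStream_t stream;
        cudaStreamCreate(&stream);

        // Faiss GPU work on this device is now ordered with respect to
        // `stream`; the resources object does not take ownership of it.
        res.setDefaultStream(device, stream);

        // ... run Faiss GPU calls here; they are ordered on `stream` ...

        // Restore the stream originally managed by the resources object,
        // then clean up the stream we own.
        res.revertDefaultStream(device);
        cudaStreamDestroy(stream);
        return 0;
    }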

void setDefaultNullStreamAllDevices()

Called to change the work ordering streams to the null stream for all devices.

std::map<int, std::map<std::string, std::pair<int, size_t>>> getMemoryInfo() const

Export a description of the memory currently in use, intended for consumption from Python.
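
A sketch that prints the per-device allocation map; treating the inner pair as an allocation-type code plus a size in bytes is an assumption read off the signature above.

    #include <faiss/gpu/StandardGpuResources.h>
    #include <cstdio>

    int main() {
        faiss::gpu::StandardGpuResources res;

        // device id -> (allocation name -> (int, size_t)); the size_t is
        // assumed to be the allocation size in bytes.
        auto info = res.getMemoryInfo();
        for (const auto& dev : info) {
            for (const auto& alloc : dev.second) {
                std::printf("device %d: %s = %zu bytes\n",
                            dev.first,
                            alloc.first.c_str(),
                            alloc.second.second);
            }
        }
        return 0;
    }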

cudaStream_t getDefaultStream(int device)

Returns the current default stream.
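
A sketch of ordering the caller's own transfer on the stream Faiss uses, so it serializes after prior Faiss GPU work; the device id and buffer size are illustrative.

    #include <faiss/gpu/StandardGpuResources.h>
    #include <cuda_runtime.h>
    #include <vector>

    int main() {
        faiss::gpu::StandardGpuResources res;

        int device = 0; // illustrative device id
        cudaStream_t stream = res.getDefaultStream(device);

        // Enqueue a copy behind any Faiss work already on this stream.
        std::vector<float> host(1024, 1.0f);
        float* dev_ptr = nullptr;
        cudaMalloc(&dev_ptr, host.size() * sizeof(float));
        cudaMemcpyAsync(dev_ptr, host.data(), host.size() * sizeof(float),
                        cudaMemcpyHostToDevice, stream);
        cudaStreamSynchronize(stream);

        cudaFree(dev_ptr);
        return 0;
    }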

size_t getTempMemoryAvailable(int device) const

Returns the current amount of temp memory available.
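
A sketch of querying the remaining temporary-memory budget; device 0 is an illustrative choice.

    #include <faiss/gpu/StandardGpuResources.h>
    #include <cstdio>

    int main() {
        faiss::gpu::StandardGpuResources res;

        int device = 0; // illustrative device id
        std::printf("temp memory available on device %d: %zu bytes\n",
                    device, res.getTempMemoryAvailable(device));
        return 0;
    }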

void syncDefaultStreamCurrentDevice()

Synchronize our default stream with the CPU.
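
A sketch of blocking the CPU until the default stream of the current device has drained, e.g. before reading back results produced asynchronously.

    #include <faiss/gpu/StandardGpuResources.h>

    int main() {
        faiss::gpu::StandardGpuResources res;

        // ... enqueue Faiss GPU work on the current device ...

        // Wait on the CPU for that work to complete.
        res.syncDefaultStreamCurrentDevice();
        return 0;
    }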

void setLogMemoryAllocations(bool enable)

If enabled, every GPU memory allocation and deallocation will be printed to standard output.
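
A sketch of enabling allocation logging before creating an index, so that the index's GPU allocations are printed; the dimension is an illustrative value.

    #include <faiss/gpu/StandardGpuResources.h>
    #include <faiss/gpu/GpuIndexFlat.h>

    int main() {
        faiss::gpu::StandardGpuResources res;
        res.setLogMemoryAllocations(true); // log each GPU alloc/free to stdout

        // Subsequent GPU index activity now reports its allocations.
        faiss::gpu::GpuIndexFlatL2 index(&res, 128);
        return 0;
    }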

Private Members

std::shared_ptr<StandardGpuResourcesImpl> res_