=====================
DRM Memory Management
=====================

Modern Linux systems require large amounts of graphics memory to store
frame buffers, textures, vertices and other graphics-related data. Given
the very dynamic nature of much of that data, managing graphics memory
efficiently is crucial for the graphics stack and plays a central role
in the DRM infrastructure.

The DRM core includes two memory managers, namely Translation Table Maps
(TTM) and Graphics Execution Manager (GEM). TTM was the first DRM memory
manager to be developed and tried to be a one-size-fits-all solution. It
provides a single userspace API to accommodate the needs of all
hardware, supporting both Unified Memory Architecture (UMA) devices and
devices with dedicated video RAM (i.e. most discrete video cards). This
resulted in a large, complex piece of code that turned out to be hard to
use for driver development.

GEM started as an Intel-sponsored project in reaction to TTM's
complexity. Its design philosophy is completely different: instead of
providing a solution to every graphics memory-related problem, GEM
identified common code between drivers and created a support library to
share it. GEM has simpler initialization and execution requirements than
TTM, but has no video RAM management capabilities and is thus limited to
UMA devices.

The Translation Table Manager (TTM)
-----------------------------------

TTM design background and information belongs here.

TTM initialization
~~~~~~~~~~~~~~~~~~

    **Warning**

    This section is outdated.

Drivers wishing to support TTM must fill out a drm_bo_driver
structure. The structure contains several fields with function pointers
for initializing the TTM, allocating and freeing memory, waiting for
command completion and fence synchronization, and memory migration. See
the radeon_ttm.c file for an example of usage.

The ttm_global_reference structure is made up of several fields:

::

    struct ttm_global_reference {
            enum ttm_global_types global_type;
            size_t size;
            void *object;
            int (*init) (struct ttm_global_reference *);
            void (*release) (struct ttm_global_reference *);
    };

There should be one global reference structure for your memory manager
as a whole, and there will be others for each object created by the
memory manager at runtime. Your global TTM should have a type of
TTM_GLOBAL_TTM_MEM. The size field for the global object should be
sizeof(struct ttm_mem_global), and the init and release hooks should
point at your driver-specific init and release routines, which probably
eventually call ttm_mem_global_init and ttm_mem_global_release,
respectively.

Once your global TTM accounting structure is set up and initialized by
calling ttm_global_item_ref() on it, you need to create a buffer
object TTM to provide a pool for buffer object allocation by clients and
the kernel itself. The type of this object should be
TTM_GLOBAL_TTM_BO, and its size should be sizeof(struct
ttm_bo_global). Again, driver-specific init and release functions may
be provided, likely eventually calling ttm_bo_global_init() and
ttm_bo_global_release(), respectively. Also, like the previous
object, ttm_global_item_ref() is used to create an initial reference
count for the TTM, which will call your initialization function.
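Putting the above together, driver setup following this (dated)
interface might look roughly like the sketch below. The ``foo_*`` names
and the ``struct foo_device`` container are invented for illustration;
only the ``ttm_*`` structures, constants and functions come from the
interface described in this section, and error unwinding of the first
reference is elided.

::

    struct foo_device {
            struct ttm_global_reference mem_global_ref;
            struct ttm_global_reference bo_global_ref;
            /* ... remaining driver-private device state ... */
    };

    static int foo_ttm_mem_global_init(struct ttm_global_reference *ref)
    {
            /* Driver-specific setup could be added around the core call. */
            return ttm_mem_global_init(ref->object);
    }

    static void foo_ttm_mem_global_release(struct ttm_global_reference *ref)
    {
            ttm_mem_global_release(ref->object);
    }

    static int foo_ttm_global_init(struct foo_device *foo)
    {
            struct ttm_global_reference *ref;
            int ret;

            /* Global memory accounting object for the driver as a whole. */
            ref = &foo->mem_global_ref;
            ref->global_type = TTM_GLOBAL_TTM_MEM;
            ref->size = sizeof(struct ttm_mem_global);
            ref->init = &foo_ttm_mem_global_init;
            ref->release = &foo_ttm_mem_global_release;
            ret = ttm_global_item_ref(ref); /* calls the init hook above */
            if (ret)
                    return ret;

            /* Buffer object pool used by clients and the kernel itself. */
            ref = &foo->bo_global_ref;
            ref->global_type = TTM_GLOBAL_TTM_BO;
            ref->size = sizeof(struct ttm_bo_global);
            ref->init = &ttm_bo_global_init;       /* no extra driver work here */
            ref->release = &ttm_bo_global_release;
            ret = ttm_global_item_ref(ref);
            if (ret) {
                    /* drop the first reference again with the matching unref call */
                    return ret;
            }

            return 0;
    }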
The Graphics Execution Manager (GEM)
------------------------------------

The GEM design approach has resulted in a memory manager that doesn't
provide full coverage of all (or even all common) use cases in its
userspace or kernel API. GEM exposes a set of standard memory-related
operations to userspace and a set of helper functions to drivers, and
lets drivers implement hardware-specific operations with their own
private API.

The GEM userspace API is described in the `GEM - the Graphics Execution
Manager <http://lwn.net/Articles/283798/>`__ article on LWN. While
slightly outdated, the document provides a good overview of the GEM API
principles. Buffer allocation and read and write operations, described
as part of the common GEM API, are currently implemented using
driver-specific ioctls.

GEM is data-agnostic. It manages abstract buffer objects without knowing
what individual buffers contain. APIs that require knowledge of buffer
contents or purpose, such as buffer allocation or synchronization
primitives, are thus outside of the scope of GEM and must be implemented
using driver-specific ioctls.

On a fundamental level, GEM involves several operations:

- Memory allocation and freeing
- Command execution
- Aperture management at command execution time

Buffer object allocation is relatively straightforward and largely
provided by Linux's shmem layer, which provides memory to back each
object.

Device-specific operations, such as command execution, pinning, buffer
read & write, mapping, and domain ownership transfers, are left to
driver-specific ioctls.

GEM Initialization
~~~~~~~~~~~~~~~~~~

Drivers that use GEM must set the DRIVER_GEM bit in the
:c:type:`struct drm_driver <drm_driver>` driver_features
field. The DRM core will then automatically initialize the GEM core
before calling the load operation. Behind the scenes, this will create a
DRM Memory Manager object which provides an address space pool for
object allocation.

In a KMS configuration, drivers need to allocate and initialize a
command ring buffer following core GEM initialization if required by the
hardware. UMA devices usually have what is called a "stolen" memory
region, which provides space for the initial framebuffer and large,
contiguous memory regions required by the device. This space is
typically not managed by GEM, and must be initialized separately into
its own DRM MM object.

GEM Objects Creation
~~~~~~~~~~~~~~~~~~~~

GEM splits creation of GEM objects and allocation of the memory that
backs them into two distinct operations.

GEM objects are represented by an instance of
:c:type:`struct drm_gem_object <drm_gem_object>`. Drivers usually need
to extend GEM objects with private information and thus create a
driver-specific GEM object structure type that embeds an instance of
:c:type:`struct drm_gem_object <drm_gem_object>`.

To create a GEM object, a driver allocates memory for an instance of its
specific GEM object type and initializes the embedded
:c:type:`struct drm_gem_object <drm_gem_object>` with a call
to :c:func:`drm_gem_object_init()`. The function takes a pointer
to the DRM device, a pointer to the GEM object and the buffer object
size in bytes.
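As an illustration, a driver-specific GEM object and a creation helper
built on :c:func:`drm_gem_object_init()` might look like the following
sketch. The ``foo_*`` names and the ``pages`` member are hypothetical
and only serve the examples in this chapter.

::

    #include <drm/drmP.h>
    #include <drm/drm_gem.h>
    #include <linux/slab.h>

    struct foo_gem_object {
            struct drm_gem_object base;     /* embedded, not a pointer */
            struct page **pages;            /* backing pages, filled later */
    };

    static struct foo_gem_object *foo_gem_object_create(struct drm_device *dev,
                                                        size_t size)
    {
            struct foo_gem_object *obj;
            int ret;

            size = PAGE_ALIGN(size);        /* GEM object sizes are page-aligned */

            obj = kzalloc(sizeof(*obj), GFP_KERNEL);
            if (!obj)
                    return ERR_PTR(-ENOMEM);

            /* Creates the shmfs backing store of the requested size. */
            ret = drm_gem_object_init(dev, &obj->base, size);
            if (ret) {
                    kfree(obj);
                    return ERR_PTR(ret);
            }

            return obj;
    }

The size is rounded up to the page size because the shmfs backing store
described below is managed in whole pages.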
GEM uses shmem to allocate anonymous pageable memory.
:c:func:`drm_gem_object_init()` will create an shmfs file of the
requested size and store it into the
:c:type:`struct drm_gem_object <drm_gem_object>` filp field. The memory
is used either as main storage for the object, when the graphics
hardware uses system memory directly, or as backing store otherwise.

Drivers are responsible for the actual physical pages allocation by
calling :c:func:`shmem_read_mapping_page_gfp()` for each page.
Note that they can decide to allocate pages when initializing the GEM
object, or to delay allocation until the memory is needed (for instance
when a page fault occurs as a result of a userspace memory access or
when the driver needs to start a DMA transfer involving the memory).

Anonymous pageable memory allocation is not always desired, for instance
when the hardware requires physically contiguous system memory as is
often the case in embedded devices. Drivers can create GEM objects with
no shmfs backing (called private GEM objects) by initializing them with
a call to :c:func:`drm_gem_private_object_init()` instead of
:c:func:`drm_gem_object_init()`. Storage for private GEM objects
must be managed by drivers.

GEM Objects Lifetime
~~~~~~~~~~~~~~~~~~~~

All GEM objects are reference-counted by the GEM core. References can be
acquired and released by calling :c:func:`drm_gem_object_reference()`
and :c:func:`drm_gem_object_unreference()` respectively. The caller
must hold the :c:type:`struct drm_device <drm_device>`
struct_mutex lock when calling
:c:func:`drm_gem_object_reference()`. As a convenience, GEM
provides a :c:func:`drm_gem_object_unreference_unlocked()`
function that can be called without holding the lock.

When the last reference to a GEM object is released the GEM core calls
the :c:type:`struct drm_driver <drm_driver>` gem_free_object
operation. That operation is mandatory for GEM-enabled drivers and must
free the GEM object and all associated resources.

::

    void (*gem_free_object) (struct drm_gem_object *obj);

Drivers are responsible for freeing all GEM object resources. This
includes the resources created by the GEM core, which need to be
released with :c:func:`drm_gem_object_release()`.

GEM Objects Naming
~~~~~~~~~~~~~~~~~~

Communication between userspace and the kernel refers to GEM objects
using local handles, global names or, more recently, file descriptors.
All of those are 32-bit integer values; the usual Linux kernel limits on
file descriptors apply.

GEM handles are local to a DRM file. Applications get a handle to a GEM
object through a driver-specific ioctl, and can use that handle to refer
to the GEM object in other standard or driver-specific ioctls. Closing a
DRM file handle frees all its GEM handles and drops the corresponding
references to the associated GEM objects.

To create a handle for a GEM object drivers call
:c:func:`drm_gem_handle_create()`. The function takes a pointer
to the DRM file and the GEM object and returns a locally unique handle.
When the handle is no longer needed drivers delete it with a call to
:c:func:`drm_gem_handle_delete()`. Finally the GEM object
associated with a handle can be retrieved by a call to
:c:func:`drm_gem_object_lookup()`.

Handles don't take ownership of GEM objects, they only take a reference
to the object that will be dropped when the handle is destroyed. To
avoid leaking GEM objects, drivers must make sure they drop the
reference(s) they own (such as the initial reference taken at object
creation time) as appropriate, without any special consideration for the
handle. For example, in the particular case of combined GEM object and
handle creation in the implementation of the dumb_create operation,
drivers must drop the initial reference to the GEM object before
returning the handle.
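A minimal dumb_create implementation illustrating this reference
handling, reusing the hypothetical ``foo_gem_object_create()`` helper
sketched earlier, could look as follows; the pitch and size computations
are only an example.

::

    static int foo_dumb_create(struct drm_file *file_priv,
                               struct drm_device *dev,
                               struct drm_mode_create_dumb *args)
    {
            struct foo_gem_object *obj;
            u32 handle;
            int ret;

            args->pitch = DIV_ROUND_UP(args->width * args->bpp, 8);
            args->size = PAGE_ALIGN(args->pitch * args->height);

            /* Takes the initial reference on the new object. */
            obj = foo_gem_object_create(dev, args->size);
            if (IS_ERR(obj))
                    return PTR_ERR(obj);

            /* The handle takes its own reference on the object. */
            ret = drm_gem_handle_create(file_priv, &obj->base, &handle);

            /*
             * Drop the initial reference. If handle creation succeeded the
             * handle now keeps the object alive, otherwise this frees it.
             */
            drm_gem_object_unreference_unlocked(&obj->base);
            if (ret)
                    return ret;

            args->handle = handle;
            return 0;
    }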
GEM names are similar in purpose to handles but are not local to DRM
files. They can be passed between processes to reference a GEM object
globally. Names can't be used directly to refer to objects in the DRM
API, applications must convert handles to names and names to handles
using the DRM_IOCTL_GEM_FLINK and DRM_IOCTL_GEM_OPEN ioctls
respectively. The conversion is handled by the DRM core without any
driver-specific support.

GEM also supports buffer sharing with dma-buf file descriptors through
PRIME. GEM-based drivers must use the provided helper functions to
implement the exporting and importing correctly. See the PRIME Buffer
Sharing section below. Since sharing file descriptors is inherently more
secure than the easily guessable and global GEM names, it is the
preferred buffer sharing mechanism. Sharing buffers through GEM names is
only supported for legacy userspace. Furthermore, PRIME also allows
cross-device buffer sharing since it is based on dma-bufs.

GEM Objects Mapping
~~~~~~~~~~~~~~~~~~~

Because mapping operations are fairly heavyweight GEM favours
read/write-like access to buffers, implemented through driver-specific
ioctls, over mapping buffers to userspace. However, when random access
to the buffer is needed (to perform software rendering for instance),
direct access to the object can be more efficient.

The mmap system call can't be used directly to map GEM objects, as they
don't have their own file handle. Two alternative methods currently
co-exist to map GEM objects to userspace. The first method uses a
driver-specific ioctl to perform the mapping operation, calling
:c:func:`do_mmap()` under the hood. This is often considered
dubious, seems to be discouraged for new GEM-enabled drivers, and will
thus not be described here.

The second method uses the mmap system call on the DRM file handle.

::

    void *mmap(void *addr, size_t length, int prot, int flags, int fd,
               off_t offset);

DRM identifies the GEM object to be mapped by a fake offset passed
through the mmap offset argument. Prior to being mapped, a GEM object
must thus be associated with a fake offset. To do so, drivers must call
:c:func:`drm_gem_create_mmap_offset()` on the object.

Once allocated, the fake offset value must be passed to the application
in a driver-specific way and can then be used as the mmap offset
argument.
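By way of example, a driver could expose the fake offset through a
hypothetical ioctl built on a helper like the sketch below.
``foo_gem_mmap_offset()`` is an invented name; the offset is read back
through the VMA offset manager helper
:c:func:`drm_vma_node_offset_addr()` (see the VMA Offset Manager section
below).

::

    static int foo_gem_mmap_offset(struct drm_gem_object *obj, __u64 *offset)
    {
            int ret;

            /* Associates a fake offset range with the object. */
            ret = drm_gem_create_mmap_offset(obj);
            if (ret)
                    return ret;

            /* The resulting offset is stored in the object's vma_node. */
            *offset = drm_vma_node_offset_addr(&obj->vma_node);
            return 0;
    }

Userspace would then pass the returned value as the offset argument to
mmap() on the DRM file descriptor.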
The GEM core provides a helper method :c:func:`drm_gem_mmap()` to
handle object mapping. The method can be set directly as the mmap file
operation handler. It will look up the GEM object based on the offset
value and set the VMA operations to the :c:type:`struct drm_driver
<drm_driver>` gem_vm_ops field. Note that
:c:func:`drm_gem_mmap()` doesn't map memory to userspace, but
relies on the driver-provided fault handler to map pages individually.

To use :c:func:`drm_gem_mmap()`, drivers must fill the
:c:type:`struct drm_driver <drm_driver>` gem_vm_ops field with a pointer
to VM operations.

::

    struct vm_operations_struct *gem_vm_ops

    struct vm_operations_struct {
            void (*open)(struct vm_area_struct * area);
            void (*close)(struct vm_area_struct * area);
            int (*fault)(struct vm_area_struct *vma, struct vm_fault *vmf);
    };

The open and close operations must update the GEM object reference
count. Drivers can use the :c:func:`drm_gem_vm_open()` and
:c:func:`drm_gem_vm_close()` helper functions directly as open
and close handlers.

The fault operation handler is responsible for mapping individual pages
to userspace when a page fault occurs. Depending on the memory
allocation scheme, drivers can allocate pages at fault time, or can
decide to allocate memory for the GEM object at the time the object is
created.

Drivers that want to map the GEM object upfront instead of handling page
faults can implement their own mmap file operation handler.
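Putting the pieces together, a driver that allocates pages at object
creation time might wire things up as in the sketch below. The ``foo_*``
names and the ``pages`` array reuse the hypothetical structures from the
earlier examples, and the fault handler assumes the pre-4.10 struct
vm_fault layout matching the prototype shown above.

::

    #include <drm/drmP.h>
    #include <linux/module.h>

    static int foo_gem_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
    {
            struct drm_gem_object *obj = vma->vm_private_data; /* set by drm_gem_mmap() */
            struct foo_gem_object *foo_obj =
                    container_of(obj, struct foo_gem_object, base);
            unsigned long addr = (unsigned long)vmf->virtual_address;
            pgoff_t page_offset = (addr - vma->vm_start) >> PAGE_SHIFT;
            int ret;

            /* Map the single faulting page from the preallocated array. */
            ret = vm_insert_page(vma, addr, foo_obj->pages[page_offset]);
            switch (ret) {
            case 0:
            case -EBUSY:
                    return VM_FAULT_NOPAGE;
            case -ENOMEM:
                    return VM_FAULT_OOM;
            default:
                    return VM_FAULT_SIGBUS;
            }
    }

    static const struct vm_operations_struct foo_gem_vm_ops = {
            .fault = foo_gem_fault,
            .open = drm_gem_vm_open,        /* takes a GEM object reference */
            .close = drm_gem_vm_close,      /* drops it again */
    };

    static const struct file_operations foo_driver_fops = {
            .owner = THIS_MODULE,
            .open = drm_open,
            .release = drm_release,
            .unlocked_ioctl = drm_ioctl,
            .mmap = drm_gem_mmap,           /* resolves the fake offset */
            /* ... */
    };

    static struct drm_driver foo_driver = {
            /* ... */
            .gem_vm_ops = &foo_gem_vm_ops,
            .fops = &foo_driver_fops,
    };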
Memory Coherency
~~~~~~~~~~~~~~~~

When mapped to the device or used in a command buffer, backing pages for
an object are flushed to memory and marked write-combined so as to be
coherent with the GPU. Likewise, if the CPU accesses an object after the
GPU has finished rendering to the object, then the object must be made
coherent with the CPU's view of memory, usually involving GPU cache
flushing of various kinds. This core CPU<->GPU coherency management is
provided by a device-specific ioctl, which evaluates an object's current
domain and performs any necessary flushing or synchronization to put the
object into the desired coherency domain (note that the object may be
busy, i.e. an active render target; in that case, setting the domain
blocks the client and waits for rendering to complete before performing
any necessary flushing operations).

Command Execution
~~~~~~~~~~~~~~~~~

Perhaps the most important GEM function for GPU devices is providing a
command execution interface to clients. Client programs construct
command buffers containing references to previously allocated memory
objects, and then submit them to GEM. At that point, GEM takes care to
bind all the objects into the GTT, execute the buffer, and provide
necessary synchronization between clients accessing the same buffers.
This often involves evicting some objects from the GTT and re-binding
others (a fairly expensive operation), and providing relocation support
which hides fixed GTT offsets from clients. Clients must take care not
to submit command buffers that reference more objects than can fit in
the GTT; otherwise, GEM will reject them and no rendering will occur.
Similarly, if several objects in the buffer require fence registers to
be allocated for correct rendering (e.g. 2D blits on pre-965 chips),
care must be taken not to require more fence registers than are
available to the client. Such resource management should be abstracted
from the client in libdrm.

GEM Function Reference
----------------------

.. kernel-doc:: drivers/gpu/drm/drm_gem.c
   :export:

.. kernel-doc:: include/drm/drm_gem.h
   :internal:

VMA Offset Manager
------------------

.. kernel-doc:: drivers/gpu/drm/drm_vma_manager.c
   :doc: vma offset manager

.. kernel-doc:: drivers/gpu/drm/drm_vma_manager.c
   :export:

.. kernel-doc:: include/drm/drm_vma_manager.h
   :internal:

PRIME Buffer Sharing
--------------------

PRIME is the cross-device buffer sharing framework in DRM, originally
created for the OPTIMUS range of multi-GPU platforms. To userspace,
PRIME buffers are dma-buf based file descriptors.

Overview and Driver Interface
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Similar to GEM global names, PRIME file descriptors are also used to
share buffer objects across processes. They offer additional security:
as file descriptors must be explicitly sent over UNIX domain sockets to
be shared between applications, they can't be guessed like the globally
unique GEM names.

Drivers that support the PRIME API must set the DRIVER_PRIME bit in the
:c:type:`struct drm_driver <drm_driver>` driver_features field, and
implement the prime_handle_to_fd and prime_fd_to_handle operations.

::

    int (*prime_handle_to_fd)(struct drm_device *dev,
                              struct drm_file *file_priv, uint32_t handle,
                              uint32_t flags, int *prime_fd);
    int (*prime_fd_to_handle)(struct drm_device *dev,
                              struct drm_file *file_priv, int prime_fd,
                              uint32_t *handle);

Those two operations convert a handle to a PRIME file descriptor and
vice versa. Drivers must use the kernel dma-buf buffer sharing framework
to manage the PRIME file descriptors. Similar to the mode setting API,
PRIME is agnostic to the underlying buffer object manager, as long as
handles are 32-bit unsigned integers.

While non-GEM drivers must implement the operations themselves, GEM
drivers must use the :c:func:`drm_gem_prime_handle_to_fd()` and
:c:func:`drm_gem_prime_fd_to_handle()` helper functions. Those
helpers rely on the driver gem_prime_export and gem_prime_import
operations to create a dma-buf instance from a GEM object (dma-buf
exporter role) and to create a GEM object from a dma-buf instance
(dma-buf importer role).

::

    struct dma_buf * (*gem_prime_export)(struct drm_device *dev,
                                         struct drm_gem_object *obj,
                                         int flags);
    struct drm_gem_object * (*gem_prime_import)(struct drm_device *dev,
                                                struct dma_buf *dma_buf);

These two operations are mandatory for GEM drivers that support PRIME.

PRIME Helper Functions
~~~~~~~~~~~~~~~~~~~~~~

.. kernel-doc:: drivers/gpu/drm/drm_prime.c
   :doc: PRIME Helpers

PRIME Function References
-------------------------

.. kernel-doc:: drivers/gpu/drm/drm_prime.c
   :export:

DRM MM Range Allocator
----------------------

Overview
~~~~~~~~

.. kernel-doc:: drivers/gpu/drm/drm_mm.c
   :doc: Overview

LRU Scan/Eviction Support
~~~~~~~~~~~~~~~~~~~~~~~~~

.. kernel-doc:: drivers/gpu/drm/drm_mm.c
   :doc: lru scan roaster

DRM MM Range Allocator Function References
-------------------------------------------

.. kernel-doc:: drivers/gpu/drm/drm_mm.c
   :export:

.. kernel-doc:: include/drm/drm_mm.h
   :internal:

CMA Helper Functions Reference
------------------------------

.. kernel-doc:: drivers/gpu/drm/drm_gem_cma_helper.c
   :doc: cma helpers

.. kernel-doc:: drivers/gpu/drm/drm_gem_cma_helper.c
   :export:

.. kernel-doc:: include/drm/drm_gem_cma_helper.h
   :internal: