=====================
DRM Memory Management
=====================

Modern Linux systems require a large amount of graphics memory to store
frame buffers, textures, vertices and other graphics-related data. Given
the very dynamic nature of much of that data, managing graphics memory
efficiently is thus crucial for the graphics stack and plays a central
role in the DRM infrastructure.

The DRM core includes two memory managers, namely Translation Table Maps
(TTM) and Graphics Execution Manager (GEM). TTM was the first DRM memory
manager to be developed and tried to be a one-size-fits-all solution. It
provides a single userspace API to accommodate the needs of all
hardware, supporting both Unified Memory Architecture (UMA) devices and
devices with dedicated video RAM (i.e. most discrete video cards). This
resulted in a large, complex piece of code that turned out to be hard to
use for driver development.

GEM started as an Intel-sponsored project in reaction to TTM's
complexity. Its design philosophy is completely different: instead of
providing a solution to every graphics memory-related problem, GEM
identified common code between drivers and created a support library to
share it. GEM has simpler initialization and execution requirements than
TTM, but has no video RAM management capabilities and is thus limited to
UMA devices.

The Translation Table Manager (TTM)
===================================

TTM design background and information belongs here.

TTM initialization
------------------

    **Warning**

    This section is outdated.

Drivers wishing to support TTM must pass a filled :c:type:`ttm_bo_driver
<ttm_bo_driver>` structure to ttm_bo_device_init(), together with an
initialized global reference to the memory manager. The ttm_bo_driver
structure contains several fields with function pointers for
initializing the TTM, allocating and freeing memory, waiting for command
completion and fence synchronization, and memory migration.

The :c:type:`struct drm_global_reference <drm_global_reference>` is made
up of several fields:

.. code-block:: c

    struct drm_global_reference {
            enum drm_global_types global_type;
            size_t size;
            void *object;
            int (*init) (struct drm_global_reference *);
            void (*release) (struct drm_global_reference *);
    };

There should be one global reference structure for your memory manager
as a whole, and there will be others for each object created by the
memory manager at runtime. Your global TTM should have a type of
DRM_GLOBAL_TTM_MEM. The size field for the global object should be
sizeof(struct ttm_mem_global), and the init and release hooks should
point at your driver-specific init and release routines, which probably
eventually call ttm_mem_global_init and ttm_mem_global_release,
respectively.

Once your global TTM accounting structure is set up and initialized by
calling drm_global_item_ref() on it, you need to create a buffer
object TTM to provide a pool for buffer object allocation by clients and
the kernel itself. The type of this object should be
DRM_GLOBAL_TTM_BO, and its size should be sizeof(struct
ttm_bo_global). Again, driver-specific init and release functions may
be provided, likely eventually calling ttm_bo_global_init() and
ttm_bo_global_release(), respectively. Also, like the previous object,
drm_global_item_ref() is used to create an initial reference count for
the TTM, which will call your initialization function.
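
Since this section is flagged as outdated, the exact identifiers vary
between kernel versions; the following is only a minimal sketch of the
two global references, loosely following the pattern used in
radeon_ttm.c. The foo_* names and the fields of the hypothetical struct
foo_device are placeholders.

.. code-block:: c

    #include <drm/drm_global.h>
    #include <drm/ttm/ttm_bo_driver.h>
    #include <drm/ttm/ttm_memory.h>

    /* Driver-specific init/release hooks for the memory accounting object. */
    static int foo_ttm_mem_global_init(struct drm_global_reference *ref)
    {
            return ttm_mem_global_init(ref->object);
    }

    static void foo_ttm_mem_global_release(struct drm_global_reference *ref)
    {
            ttm_mem_global_release(ref->object);
    }

    static int foo_ttm_global_init(struct foo_device *fdev)
    {
            struct drm_global_reference *ref;
            int ret;

            /* Memory accounting object. */
            ref = &fdev->mem_global_ref;
            ref->global_type = DRM_GLOBAL_TTM_MEM;
            ref->size = sizeof(struct ttm_mem_global);
            ref->init = &foo_ttm_mem_global_init;
            ref->release = &foo_ttm_mem_global_release;
            ret = drm_global_item_ref(ref);
            if (ret)
                    return ret;

            /* Buffer object pool, backed by the accounting object above. */
            fdev->bo_global_ref.mem_glob = fdev->mem_global_ref.object;
            ref = &fdev->bo_global_ref.ref;
            ref->global_type = DRM_GLOBAL_TTM_BO;
            ref->size = sizeof(struct ttm_bo_global);
            ref->init = &ttm_bo_global_init;
            ref->release = &ttm_bo_global_release;
            ret = drm_global_item_ref(ref);
            if (ret)
                    drm_global_item_unref(&fdev->mem_global_ref);

            return ret;
    }
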
See the radeon_ttm.c file for an example of usage.

.. kernel-doc:: drivers/gpu/drm/drm_global.c
   :export:

The Graphics Execution Manager (GEM)
====================================

The GEM design approach has resulted in a memory manager that doesn't
provide full coverage of all (or even all common) use cases in its
userspace or kernel API. GEM exposes a set of standard memory-related
operations to userspace and a set of helper functions to drivers, and
lets drivers implement hardware-specific operations with their own
private API.

The GEM userspace API is described in the `GEM - the Graphics Execution
Manager <http://lwn.net/Articles/283798/>`__ article on LWN. While
slightly outdated, the document provides a good overview of the GEM API
principles. Buffer allocation and read and write operations, described
as part of the common GEM API, are currently implemented using
driver-specific ioctls.

GEM is data-agnostic. It manages abstract buffer objects without knowing
what individual buffers contain. APIs that require knowledge of buffer
contents or purpose, such as buffer allocation or synchronization
primitives, are thus outside of the scope of GEM and must be implemented
using driver-specific ioctls.

On a fundamental level, GEM involves several operations:

- Memory allocation and freeing
- Command execution
- Aperture management at command execution time

Buffer object allocation is relatively straightforward and largely
provided by Linux's shmem layer, which provides memory to back each
object.

Device-specific operations, such as command execution, pinning, buffer
read & write, mapping, and domain ownership transfers, are left to
driver-specific ioctls.

GEM Initialization
------------------

Drivers that use GEM must set the DRIVER_GEM bit in the
:c:type:`struct drm_driver <drm_driver>` driver_features
field. The DRM core will then automatically initialize the GEM core
before calling the load operation. Behind the scenes, this will create a
DRM Memory Manager object which provides an address space pool for
object allocation.

In a KMS configuration, drivers need to allocate and initialize a
command ring buffer following core GEM initialization if required by the
hardware. UMA devices usually have what is called a "stolen" memory
region, which provides space for the initial framebuffer and large,
contiguous memory regions required by the device. This space is
typically not managed by GEM, and must be initialized separately into
its own DRM MM object.

GEM Objects Creation
--------------------

GEM splits creation of GEM objects and allocation of the memory that
backs them into two distinct operations.

GEM objects are represented by an instance of
:c:type:`struct drm_gem_object <drm_gem_object>`. Drivers usually need to
extend GEM objects with private information and thus create a
driver-specific GEM object structure type that embeds an instance of
:c:type:`struct drm_gem_object <drm_gem_object>`.
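
For illustration, such a driver-specific GEM object type could look like
the following minimal sketch; the driver name foo and its private fields
are hypothetical.

.. code-block:: c

    #include <drm/drm_gem.h>

    struct foo_gem_object {
            struct drm_gem_object base;     /* embedded GEM object */
            struct page **pages;            /* driver-private backing pages */
            void *vaddr;                    /* optional kernel mapping */
    };

    static inline struct foo_gem_object *
    to_foo_gem_object(struct drm_gem_object *obj)
    {
            return container_of(obj, struct foo_gem_object, base);
    }
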
To create a GEM object, a driver allocates memory for an instance of its
specific GEM object type and initializes the embedded
:c:type:`struct drm_gem_object <drm_gem_object>` with a call
to :c:func:`drm_gem_object_init()`. The function takes a pointer
to the DRM device, a pointer to the GEM object and the buffer object
size in bytes.

GEM uses shmem to allocate anonymous pageable memory.
:c:func:`drm_gem_object_init()` will create a shmfs file of the
requested size and store it into the :c:type:`struct
drm_gem_object <drm_gem_object>` filp field. The memory is
used as either main storage for the object when the graphics hardware
uses system memory directly or as a backing store otherwise.

Drivers are responsible for the actual allocation of physical pages by
calling :c:func:`shmem_read_mapping_page_gfp()` for each page.
Note that they can decide to allocate pages when initializing the GEM
object, or to delay allocation until the memory is needed (for instance
when a page fault occurs as a result of a userspace memory access or
when the driver needs to start a DMA transfer involving the memory).

Anonymous pageable memory allocation is not always desired, for instance
when the hardware requires physically contiguous system memory as is
often the case in embedded devices. Drivers can create GEM objects with
no shmfs backing (called private GEM objects) by initializing them with
a call to :c:func:`drm_gem_private_object_init()` instead of
:c:func:`drm_gem_object_init()`. Storage for private GEM objects
must be managed by drivers.

GEM Objects Lifetime
--------------------

All GEM objects are reference-counted by the GEM core. References can be
acquired and released by calling :c:func:`drm_gem_object_reference()`
and :c:func:`drm_gem_object_unreference()` respectively. The caller
must hold the :c:type:`struct drm_device <drm_device>`
struct_mutex lock when calling
:c:func:`drm_gem_object_reference()`. As a convenience, GEM
provides a :c:func:`drm_gem_object_unreference_unlocked()`
function that can be called without holding the lock.

When the last reference to a GEM object is released, the GEM core calls
the :c:type:`struct drm_driver <drm_driver>` gem_free_object
operation, ``void (*gem_free_object)(struct drm_gem_object *obj)``. That
operation is mandatory for GEM-enabled drivers and must free the GEM
object and all associated resources. This includes the resources created
by the GEM core, which need to be released with
:c:func:`drm_gem_object_release()`.

GEM Objects Naming
------------------

Communication between userspace and the kernel refers to GEM objects
using local handles, global names or, more recently, file descriptors.
All of those are 32-bit integer values; the usual Linux kernel limits
apply to the file descriptors.

GEM handles are local to a DRM file. Applications get a handle to a GEM
object through a driver-specific ioctl, and can use that handle to refer
to the GEM object in other standard or driver-specific ioctls. Closing a
DRM file handle frees all its GEM handles and dereferences the
associated GEM objects.

To create a handle for a GEM object, drivers call
:c:func:`drm_gem_handle_create()`. The function takes a pointer
to the DRM file and the GEM object and returns a locally unique handle.
When the handle is no longer needed, drivers delete it with a call to
:c:func:`drm_gem_handle_delete()`. Finally, the GEM object
associated with a handle can be retrieved by a call to
:c:func:`drm_gem_object_lookup()`.

Handles don't take ownership of GEM objects; they only take a reference
to the object that will be dropped when the handle is destroyed. To
avoid leaking GEM objects, drivers must make sure they drop the
reference(s) they own (such as the initial reference taken at object
creation time) as appropriate, without any special consideration for the
handle. For example, in the particular case of combined GEM object and
handle creation in the implementation of the dumb_create operation,
drivers must drop the initial reference to the GEM object before
returning the handle.
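
As an illustration, a combined object and handle creation path in the
style of a dumb_create implementation could look like the following
sketch; it reuses the hypothetical struct foo_gem_object from the
earlier sketch and leaves the allocation of backing pages to other
driver code.

.. code-block:: c

    /* Assumes the usual DRM driver includes, e.g. <drm/drmP.h>. */
    static int foo_gem_dumb_create(struct drm_file *file_priv,
                                   struct drm_device *dev,
                                   struct drm_mode_create_dumb *args)
    {
            struct foo_gem_object *obj;
            int ret;

            args->pitch = args->width * DIV_ROUND_UP(args->bpp, 8);
            args->size = PAGE_ALIGN(args->pitch * args->height);

            obj = kzalloc(sizeof(*obj), GFP_KERNEL);
            if (!obj)
                    return -ENOMEM;

            ret = drm_gem_object_init(dev, &obj->base, args->size);
            if (ret) {
                    kfree(obj);
                    return ret;
            }

            ret = drm_gem_handle_create(file_priv, &obj->base, &args->handle);

            /*
             * Drop the initial reference: on success the handle now holds
             * the reference userspace needs, on failure this frees the
             * object through the driver's gem_free_object operation.
             */
            drm_gem_object_unreference_unlocked(&obj->base);

            return ret;
    }
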
GEM names are similar in purpose to handles but are not local to DRM
files. They can be passed between processes to reference a GEM object
globally. Names can't be used directly to refer to objects in the DRM
API; applications must convert handles to names and names to handles
using the DRM_IOCTL_GEM_FLINK and DRM_IOCTL_GEM_OPEN ioctls
respectively. The conversion is handled by the DRM core without any
driver-specific support.

GEM also supports buffer sharing with dma-buf file descriptors through
PRIME. GEM-based drivers must use the provided helper functions to
implement the exporting and importing correctly; see the PRIME Buffer
Sharing section below. Since sharing file descriptors is inherently more
secure than the easily guessable and global GEM names it is the
preferred buffer sharing mechanism. Sharing buffers through GEM names is
only supported for legacy userspace. Furthermore PRIME also allows
cross-device buffer sharing since it is based on dma-bufs.

GEM Objects Mapping
-------------------

Because mapping operations are fairly heavyweight, GEM favours
read/write-like access to buffers, implemented through driver-specific
ioctls, over mapping buffers to userspace. However, when random access
to the buffer is needed (to perform software rendering for instance),
direct access to the object can be more efficient.

The mmap system call can't be used directly to map GEM objects, as they
don't have their own file handle. Two alternative methods currently
co-exist to map GEM objects to userspace. The first method uses a
driver-specific ioctl to perform the mapping operation, calling
:c:func:`do_mmap()` under the hood. This is often considered
dubious, seems to be discouraged for new GEM-enabled drivers, and will
thus not be described here.

The second method uses the mmap system call on the DRM file handle,
``void *mmap(void *addr, size_t length, int prot, int flags, int fd, off_t offset);``.
DRM identifies the GEM object to be mapped by a fake offset passed
through the mmap offset argument. Prior to being mapped, a GEM object
must thus be associated with a fake offset. To do so, drivers must call
:c:func:`drm_gem_create_mmap_offset()` on the object.

Once allocated, the fake offset value must be passed to the application
in a driver-specific way and can then be used as the mmap offset
argument.
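
As an illustration, a driver-specific ioctl that hands the fake offset
back to userspace could look like the following sketch. The ioctl
argument structure and the foo_* names are hypothetical, and the
:c:func:`drm_gem_object_lookup()` signature shown is the two-argument
form (older kernels also took the DRM device as first argument).

.. code-block:: c

    #include <drm/drm_vma_manager.h>

    struct drm_foo_gem_mmap_offset {
            __u32 handle;
            __u32 pad;
            __u64 offset;
    };

    static int foo_gem_mmap_offset_ioctl(struct drm_device *dev, void *data,
                                         struct drm_file *file_priv)
    {
            struct drm_foo_gem_mmap_offset *args = data;
            struct drm_gem_object *obj;
            int ret;

            obj = drm_gem_object_lookup(file_priv, args->handle);
            if (!obj)
                    return -ENOENT;

            /* Associate the object with a fake offset if not done yet. */
            ret = drm_gem_create_mmap_offset(obj);
            if (!ret)
                    args->offset = drm_vma_node_offset_addr(&obj->vma_node);

            drm_gem_object_unreference_unlocked(obj);
            return ret;
    }
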
The GEM core provides a helper method :c:func:`drm_gem_mmap()` to
handle object mapping. The method can be set directly as the mmap file
operation handler. It will look up the GEM object based on the offset
value and set the VMA operations to the :c:type:`struct drm_driver
<drm_driver>` gem_vm_ops field. Note that
:c:func:`drm_gem_mmap()` doesn't map memory to userspace, but
relies on the driver-provided fault handler to map pages individually.

To use :c:func:`drm_gem_mmap()`, drivers must fill the
:c:type:`struct drm_driver <drm_driver>` gem_vm_ops field
with a pointer to VM operations.

The VM operations structure is a :c:type:`struct vm_operations_struct <vm_operations_struct>`
made up of several fields, the more interesting ones being:

.. code-block:: c

    struct vm_operations_struct {
            void (*open)(struct vm_area_struct * area);
            void (*close)(struct vm_area_struct * area);
            int (*fault)(struct vm_fault *vmf);
    };

The open and close operations must update the GEM object reference
count. Drivers can use the :c:func:`drm_gem_vm_open()` and
:c:func:`drm_gem_vm_close()` helper functions directly as open
and close handlers.

The fault operation handler is responsible for mapping individual pages
to userspace when a page fault occurs. Depending on the memory
allocation scheme, drivers can allocate pages at fault time, or can
decide to allocate memory for the GEM object at the time the object is
created.

Drivers that want to map the GEM object upfront instead of handling page
faults can implement their own mmap file operation handler.
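
Putting these pieces together, the mmap wiring for a hypothetical driver
foo could look like the sketch below. It assumes the driver keeps an
already-allocated pages array in its GEM object (as in the earlier
struct foo_gem_object sketch) and uses the single-argument fault handler
prototype shown above; older kernels pass the VMA as a separate
argument.

.. code-block:: c

    /* Assumes the usual DRM driver includes, e.g. <drm/drmP.h> and <linux/mm.h>. */
    static int foo_gem_fault(struct vm_fault *vmf)
    {
            struct vm_area_struct *vma = vmf->vma;
            struct foo_gem_object *obj = to_foo_gem_object(vma->vm_private_data);
            pgoff_t page_offset = (vmf->address - vma->vm_start) >> PAGE_SHIFT;
            int ret;

            /* The backing pages are expected to be allocated beforehand. */
            if (!obj->pages)
                    return VM_FAULT_SIGBUS;

            ret = vm_insert_page(vma, vmf->address, obj->pages[page_offset]);
            if (ret == 0 || ret == -EAGAIN || ret == -ERESTARTSYS)
                    return VM_FAULT_NOPAGE;

            return ret == -ENOMEM ? VM_FAULT_OOM : VM_FAULT_SIGBUS;
    }

    static const struct vm_operations_struct foo_gem_vm_ops = {
            .open   = drm_gem_vm_open,
            .close  = drm_gem_vm_close,
            .fault  = foo_gem_fault,
    };

    static const struct file_operations foo_driver_fops = {
            .owner          = THIS_MODULE,
            .open           = drm_open,
            .release        = drm_release,
            .unlocked_ioctl = drm_ioctl,
            .mmap           = drm_gem_mmap,
            .poll           = drm_poll,
            .read           = drm_read,
    };

    static struct drm_driver foo_driver = {
            /* ... */
            .gem_vm_ops     = &foo_gem_vm_ops,
            .fops           = &foo_driver_fops,
    };
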
For platforms without MMU, the GEM core provides a helper method
:c:func:`drm_gem_cma_get_unmapped_area`. The mmap() routines will call
this to get a proposed address for the mapping.

To use :c:func:`drm_gem_cma_get_unmapped_area`, drivers must fill the
:c:type:`struct file_operations <file_operations>` get_unmapped_area
field with a pointer to :c:func:`drm_gem_cma_get_unmapped_area`.

More detailed information about get_unmapped_area can be found in
Documentation/nommu-mmap.txt.

Memory Coherency
----------------

When mapped to the device or used in a command buffer, backing pages for
an object are flushed to memory and marked write combined so as to be
coherent with the GPU. Likewise, if the CPU accesses an object after the
GPU has finished rendering to the object, then the object must be made
coherent with the CPU's view of memory, usually involving GPU cache
flushing of various kinds. This core CPU<->GPU coherency management is
provided by a device-specific ioctl, which evaluates an object's current
domain and performs any necessary flushing or synchronization to put the
object into the desired coherency domain (note that the object may be
busy, i.e. an active render target; in that case, setting the domain
blocks the client and waits for rendering to complete before performing
any necessary flushing operations).

Command Execution
-----------------

Perhaps the most important GEM function for GPU devices is providing a
command execution interface to clients. Client programs construct
command buffers containing references to previously allocated memory
objects, and then submit them to GEM. At that point, GEM takes care to
bind all the objects into the GTT, execute the buffer, and provide
necessary synchronization between clients accessing the same buffers.
This often involves evicting some objects from the GTT and re-binding
others (a fairly expensive operation), and providing relocation support
which hides fixed GTT offsets from clients. Clients must take care not
to submit command buffers that reference more objects than can fit in
the GTT; otherwise, GEM will reject them and no rendering will occur.
Similarly, if several objects in the buffer require fence registers to
be allocated for correct rendering (e.g. 2D blits on pre-965 chips),
care must be taken not to require more fence registers than are
available to the client. Such resource management should be abstracted
from the client in libdrm.

GEM Function Reference
----------------------

.. kernel-doc:: drivers/gpu/drm/drm_gem.c
   :export:

.. kernel-doc:: include/drm/drm_gem.h
   :internal:

GEM CMA Helper Functions Reference
----------------------------------

.. kernel-doc:: drivers/gpu/drm/drm_gem_cma_helper.c
   :doc: cma helpers

.. kernel-doc:: drivers/gpu/drm/drm_gem_cma_helper.c
   :export:

.. kernel-doc:: include/drm/drm_gem_cma_helper.h
   :internal:

VMA Offset Manager
==================

.. kernel-doc:: drivers/gpu/drm/drm_vma_manager.c
   :doc: vma offset manager

.. kernel-doc:: drivers/gpu/drm/drm_vma_manager.c
   :export:

.. kernel-doc:: include/drm/drm_vma_manager.h
   :internal:

PRIME Buffer Sharing
====================

PRIME is the cross-device buffer sharing framework in DRM, originally
created for the OPTIMUS range of multi-GPU platforms. To userspace,
PRIME buffers are dma-buf-based file descriptors.

Overview and Driver Interface
-----------------------------

Similar to GEM global names, PRIME file descriptors are also used to
share buffer objects across processes. They offer additional security:
as file descriptors must be explicitly sent over UNIX domain sockets to
be shared between applications, they can't be guessed like the globally
unique GEM names.

Drivers that support the PRIME API must set the DRIVER_PRIME bit in the
:c:type:`struct drm_driver <drm_driver>` driver_features field, and
implement the prime_handle_to_fd and prime_fd_to_handle operations:

- ``int (*prime_handle_to_fd)(struct drm_device *dev, struct drm_file *file_priv, uint32_t handle, uint32_t flags, int *prime_fd);``
- ``int (*prime_fd_to_handle)(struct drm_device *dev, struct drm_file *file_priv, int prime_fd, uint32_t *handle);``

Those two operations convert a handle to a PRIME file descriptor and
vice versa. Drivers must use the kernel dma-buf buffer sharing framework
to manage the PRIME file descriptors. Similar to the mode setting API,
PRIME is agnostic to the underlying buffer object manager, as long as
handles are 32-bit unsigned integers.

While non-GEM drivers must implement the operations themselves, GEM
drivers must use the :c:func:`drm_gem_prime_handle_to_fd()` and
:c:func:`drm_gem_prime_fd_to_handle()` helper functions. Those
helpers rely on the driver gem_prime_export and gem_prime_import
operations to create a dma-buf instance from a GEM object (dma-buf
exporter role) and to create a GEM object from a dma-buf instance
(dma-buf importer role):

- ``struct dma_buf * (*gem_prime_export)(struct drm_device *dev, struct drm_gem_object *obj, int flags);``
- ``struct drm_gem_object * (*gem_prime_import)(struct drm_device *dev, struct dma_buf *dma_buf);``

These two operations are mandatory for GEM drivers that support PRIME.
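
For a GEM driver that is satisfied with the generic PRIME helpers, the
PRIME-related wiring in the hypothetical foo_driver from the earlier
sketches can be as small as the fragment below. Note that
:c:func:`drm_gem_prime_export()` and :c:func:`drm_gem_prime_import()`
in turn rely on further driver hooks (such as gem_prime_get_sg_table
and gem_prime_import_sg_table) that are not shown here.

.. code-block:: c

    static struct drm_driver foo_driver = {
            .driver_features        = DRIVER_GEM | DRIVER_PRIME | DRIVER_MODESET,

            /* Generic handle <-> fd conversion helpers. */
            .prime_handle_to_fd     = drm_gem_prime_handle_to_fd,
            .prime_fd_to_handle     = drm_gem_prime_fd_to_handle,

            /* Generic dma-buf exporter/importer built on gem_prime_* hooks. */
            .gem_prime_export       = drm_gem_prime_export,
            .gem_prime_import       = drm_gem_prime_import,

            /* ... remaining operations ... */
    };
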
PRIME Helper Functions
----------------------

.. kernel-doc:: drivers/gpu/drm/drm_prime.c
   :doc: PRIME Helpers

PRIME Function References
-------------------------

.. kernel-doc:: drivers/gpu/drm/drm_prime.c
   :export:

DRM MM Range Allocator
======================

Overview
--------

.. kernel-doc:: drivers/gpu/drm/drm_mm.c
   :doc: Overview

LRU Scan/Eviction Support
-------------------------

.. kernel-doc:: drivers/gpu/drm/drm_mm.c
   :doc: lru scan roster

DRM MM Range Allocator Function References
------------------------------------------

.. kernel-doc:: drivers/gpu/drm/drm_mm.c
   :export:

.. kernel-doc:: include/drm/drm_mm.h
   :internal:

DRM Cache Handling
==================

.. kernel-doc:: drivers/gpu/drm/drm_cache.c
   :export: