=====================
DRM Memory Management
=====================

Modern Linux systems require large amounts of graphics memory to store
frame buffers, textures, vertices and other graphics-related data. Given
the very dynamic nature of much of that data, managing graphics memory
efficiently is crucial for the graphics stack and plays a central
role in the DRM infrastructure.

The DRM core includes two memory managers, namely Translation Table Maps
(TTM) and Graphics Execution Manager (GEM). TTM was the first DRM memory
manager to be developed and tried to be a one-size-fits-them-all
solution. It provides a single userspace API to accommodate the needs of
all hardware, supporting both Unified Memory Architecture (UMA) devices
and devices with dedicated video RAM (i.e. most discrete video cards).
This resulted in a large, complex piece of code that turned out to be
hard to use for driver development.

GEM started as an Intel-sponsored project in reaction to TTM's
complexity. Its design philosophy is completely different: instead of
providing a solution to every graphics memory-related problem, GEM
identified common code between drivers and created a support library to
share it. GEM has simpler initialization and execution requirements than
TTM, but has no video RAM management capabilities and is thus limited to
UMA devices.

The Translation Table Manager (TTM)
===================================

TTM design background and information belongs here.

TTM initialization
------------------

    **Warning**
    This section is outdated.

Drivers wishing to support TTM must pass a filled :c:type:`ttm_bo_driver
<ttm_bo_driver>` structure to ttm_bo_device_init(), together with an
initialized global reference to the memory manager.  The ttm_bo_driver
structure contains several fields with function pointers for
initializing the TTM, allocating and freeing memory, waiting for command
completion and fence synchronization, and memory migration.

The :c:type:`struct drm_global_reference <drm_global_reference>` is made
up of several fields:

.. code-block:: c

	struct drm_global_reference {
		enum drm_global_types global_type;
		size_t size;
		void *object;
		int (*init) (struct drm_global_reference *);
		void (*release) (struct drm_global_reference *);
	};


There should be one global reference structure for your memory manager
as a whole, and there will be others for each object created by the
memory manager at runtime. Your global TTM should have a type of
DRM_GLOBAL_TTM_MEM. The size field for the global object should be
sizeof(struct ttm_mem_global), and the init and release hooks should
point at your driver-specific init and release routines, which probably
eventually call ttm_mem_global_init() and ttm_mem_global_release(),
respectively.

Once your global TTM accounting structure is set up and initialized by
calling drm_global_item_ref() on it, you need to create a buffer
object TTM to provide a pool for buffer object allocation by clients and
the kernel itself. The type of this object should be
DRM_GLOBAL_TTM_BO, and its size should be sizeof(struct
ttm_bo_global). Again, driver-specific init and release functions may
be provided, likely eventually calling ttm_bo_global_init() and
ttm_bo_global_release(), respectively. Also, like the previous
object, drm_global_item_ref() is used to create an initial reference
count for the TTM, which will call your initialization function.
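
As an illustration, the memory-accounting part of this setup might look
like the following sketch. The foo_* names and the foo_device structure
are hypothetical, and error handling is trimmed; see the radeon_ttm.c
reference below for a complete, real-world version.

.. code-block:: c

	static int foo_ttm_mem_global_init(struct drm_global_reference *ref)
	{
		return ttm_mem_global_init(ref->object);
	}

	static void foo_ttm_mem_global_release(struct drm_global_reference *ref)
	{
		ttm_mem_global_release(ref->object);
	}

	static int foo_ttm_global_init(struct foo_device *fdev)
	{
		struct drm_global_reference *global_ref = &fdev->mem_global_ref;

		/* Global TTM memory accounting object. */
		global_ref->global_type = DRM_GLOBAL_TTM_MEM;
		global_ref->size = sizeof(struct ttm_mem_global);
		global_ref->init = &foo_ttm_mem_global_init;
		global_ref->release = &foo_ttm_mem_global_release;

		/* Takes the initial reference and calls the init hook. */
		return drm_global_item_ref(global_ref);
	}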

See the radeon_ttm.c file for an example of usage.

.. kernel-doc:: drivers/gpu/drm/drm_global.c
   :export:


The Graphics Execution Manager (GEM)
====================================

The GEM design approach has resulted in a memory manager that doesn't
provide full coverage of all (or even all common) use cases in its
userspace or kernel API. GEM exposes a set of standard memory-related
operations to userspace and a set of helper functions to drivers, and
lets drivers implement hardware-specific operations with their own
private API.

The GEM userspace API is described in the `GEM - the Graphics Execution
Manager <http://lwn.net/Articles/283798/>`__ article on LWN. While
slightly outdated, the document provides a good overview of the GEM API
principles. Buffer allocation and read and write operations, described
as part of the common GEM API, are currently implemented using
driver-specific ioctls.

GEM is data-agnostic. It manages abstract buffer objects without knowing
what individual buffers contain. APIs that require knowledge of buffer
contents or purpose, such as buffer allocation or synchronization
primitives, are thus outside the scope of GEM and must be implemented
using driver-specific ioctls.

On a fundamental level, GEM involves several operations:

-  Memory allocation and freeing
-  Command execution
-  Aperture management at command execution time

Buffer object allocation is relatively straightforward and largely
provided by Linux's shmem layer, which provides memory to back each
object.

Device-specific operations, such as command execution, pinning, buffer
read & write, mapping, and domain ownership transfers, are left to
driver-specific ioctls.

GEM Initialization
------------------

Drivers that use GEM must set the DRIVER_GEM bit in the
:c:type:`struct drm_driver <drm_driver>` driver_features
field. The DRM core will then automatically initialize the GEM core
before calling the load operation. Behind the scenes, this will create a
DRM Memory Manager object which provides an address space pool for
object allocation.
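
A minimal sketch of the corresponding :c:type:`struct drm_driver
<drm_driver>` setup is shown below; the foo_* names are hypothetical and
most fields are omitted.

.. code-block:: c

	static struct drm_driver foo_driver = {
		/* DRIVER_GEM enables GEM core initialization. */
		.driver_features = DRIVER_GEM | DRIVER_MODESET,
		/* Mandatory for GEM-enabled drivers, see below. */
		.gem_free_object_unlocked = foo_gem_free_object_unlocked,
		.fops = &foo_fops,
		.name = "foo",
	};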

In a KMS configuration, drivers need to allocate and initialize a
command ring buffer following core GEM initialization if required by the
hardware. UMA devices usually have what is called a "stolen" memory
region, which provides space for the initial framebuffer and large,
contiguous memory regions required by the device. This space is
typically not managed by GEM, and must be initialized separately into
its own DRM MM object.

GEM Objects Creation
--------------------

GEM splits creation of GEM objects and allocation of the memory that
backs them into two distinct operations.

GEM objects are represented by an instance of :c:type:`struct
drm_gem_object <drm_gem_object>`. Drivers usually need to
extend GEM objects with private information and thus create a
driver-specific GEM object structure type that embeds an instance of
:c:type:`struct drm_gem_object <drm_gem_object>`.

To create a GEM object, a driver allocates memory for an instance of its
specific GEM object type and initializes the embedded
:c:type:`struct drm_gem_object <drm_gem_object>` with a call
to :c:func:`drm_gem_object_init()`. The function takes a pointer
to the DRM device, a pointer to the GEM object and the buffer object
size in bytes.
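
The following is a minimal creation sketch for a hypothetical foo driver
that embeds :c:type:`struct drm_gem_object <drm_gem_object>` in its own
object type.

.. code-block:: c

	struct foo_gem_object {
		struct drm_gem_object base;
		struct page **pages;	/* backing pages, filled later */
	};

	static struct foo_gem_object *foo_gem_create(struct drm_device *dev,
						     size_t size)
	{
		struct foo_gem_object *obj;
		int ret;

		obj = kzalloc(sizeof(*obj), GFP_KERNEL);
		if (!obj)
			return ERR_PTR(-ENOMEM);

		/* Creates the shmfs file backing the object. */
		ret = drm_gem_object_init(dev, &obj->base, size);
		if (ret) {
			kfree(obj);
			return ERR_PTR(ret);
		}

		return obj;
	}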

GEM uses shmem to allocate anonymous pageable memory.
:c:func:`drm_gem_object_init()` will create a shmfs file of the
requested size and store it into the :c:type:`struct
drm_gem_object <drm_gem_object>` filp field. The memory is
used as either main storage for the object when the graphics hardware
uses system memory directly or as a backing store otherwise.

Drivers are responsible for the actual physical page allocation,
performed by calling :c:func:`shmem_read_mapping_page_gfp()` for each
page. Note that they can decide to allocate pages when initializing the
GEM object, or to delay allocation until the memory is needed (for
instance when a page fault occurs as a result of a userspace memory
access or when the driver needs to start a DMA transfer involving the
memory).
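
A sketch of upfront allocation might look as follows, populating the
hypothetical pages array of the foo_gem_object shown above. In practice
drivers can use the :c:func:`drm_gem_get_pages()` helper, which
implements essentially this loop.

.. code-block:: c

	static int foo_gem_get_pages(struct foo_gem_object *obj)
	{
		struct address_space *mapping = obj->base.filp->f_mapping;
		unsigned long i, npages = obj->base.size >> PAGE_SHIFT;
		struct page *page;

		obj->pages = kvmalloc_array(npages, sizeof(*obj->pages),
					    GFP_KERNEL);
		if (!obj->pages)
			return -ENOMEM;

		for (i = 0; i < npages; i++) {
			page = shmem_read_mapping_page_gfp(mapping, i,
							   GFP_KERNEL);
			if (IS_ERR(page)) {
				/* Release the pages acquired so far. */
				while (i--)
					put_page(obj->pages[i]);
				kvfree(obj->pages);
				obj->pages = NULL;
				return PTR_ERR(page);
			}
			obj->pages[i] = page;
		}

		return 0;
	}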

Anonymous pageable memory allocation is not always desired, for instance
when the hardware requires physically contiguous system memory as is
often the case in embedded devices. Drivers can create GEM objects with
no shmfs backing (called private GEM objects) by initializing them with
a call to :c:func:`drm_gem_private_object_init()` instead of
:c:func:`drm_gem_object_init()`. Storage for private GEM objects
must be managed by drivers.
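
A sketch of a private object backed by driver-managed contiguous memory,
with hypothetical vaddr and dma_addr fields added to the foo_gem_object
structure, could look like this.

.. code-block:: c

	static struct foo_gem_object *
	foo_gem_create_private(struct drm_device *dev, size_t size)
	{
		struct foo_gem_object *obj;

		obj = kzalloc(sizeof(*obj), GFP_KERNEL);
		if (!obj)
			return ERR_PTR(-ENOMEM);

		/* Initializes the object without any shmfs backing. */
		drm_gem_private_object_init(dev, &obj->base, size);

		/* The driver provides the actual storage itself. */
		obj->vaddr = dma_alloc_wc(dev->dev, size, &obj->dma_addr,
					  GFP_KERNEL);
		if (!obj->vaddr) {
			drm_gem_object_release(&obj->base);
			kfree(obj);
			return ERR_PTR(-ENOMEM);
		}

		return obj;
	}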

GEM Objects Lifetime
--------------------

All GEM objects are reference-counted by the GEM core. References can be
acquired and released by calling :c:func:`drm_gem_object_get()` and
:c:func:`drm_gem_object_put()` respectively. The caller must hold the
:c:type:`struct drm_device <drm_device>` struct_mutex lock when calling
:c:func:`drm_gem_object_get()`. As a convenience, GEM provides the
:c:func:`drm_gem_object_put_unlocked()` function that can be called
without holding the lock.

When the last reference to a GEM object is released the GEM core calls
the :c:type:`struct drm_driver <drm_driver>` gem_free_object_unlocked
operation. That operation is mandatory for GEM-enabled drivers and must
free the GEM object and all associated resources.

.. code-block:: c

	void (*gem_free_object) (struct drm_gem_object *obj);

Drivers are responsible for freeing all GEM object resources. This
includes the resources created by the GEM core, which need to be
released with :c:func:`drm_gem_object_release()`.
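
For the hypothetical foo driver above, the free operation could be
sketched as follows.

.. code-block:: c

	static void foo_gem_free_object_unlocked(struct drm_gem_object *gem_obj)
	{
		struct foo_gem_object *obj =
			container_of(gem_obj, struct foo_gem_object, base);

		/* Release driver-private resources (pages, mappings, ...)
		 * here, then the resources created by the GEM core. */
		drm_gem_object_release(gem_obj);
		kfree(obj);
	}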

GEM Objects Naming
------------------

Communication between userspace and the kernel refers to GEM objects
using local handles, global names or, more recently, file descriptors.
All of those are 32-bit integer values; the usual Linux kernel limits
apply to the file descriptors.

GEM handles are local to a DRM file. Applications get a handle to a GEM
object through a driver-specific ioctl, and can use that handle to refer
to the GEM object in other standard or driver-specific ioctls. Closing a
DRM file handle frees all its GEM handles and dereferences the
associated GEM objects.

To create a handle for a GEM object, drivers call
:c:func:`drm_gem_handle_create()`. The function takes a pointer
to the DRM file and the GEM object and returns a locally unique handle.
When the handle is no longer needed drivers delete it with a call to
:c:func:`drm_gem_handle_delete()`. Finally the GEM object
associated with a handle can be retrieved by a call to
:c:func:`drm_gem_object_lookup()`.

Handles don't take ownership of GEM objects; they only take a reference
to the object that will be dropped when the handle is destroyed. To
avoid leaking GEM objects, drivers must make sure they drop the
reference(s) they own (such as the initial reference taken at object
creation time) as appropriate, without any special consideration for the
handle. For example, in the particular case of combined GEM object and
handle creation in the implementation of the dumb_create operation,
drivers must drop the initial reference to the GEM object before
returning the handle.
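
A sketch of this pattern in a hypothetical dumb_create implementation,
reusing foo_gem_create() from above:

.. code-block:: c

	static int foo_dumb_create(struct drm_file *file_priv,
				   struct drm_device *dev,
				   struct drm_mode_create_dumb *args)
	{
		struct foo_gem_object *obj;
		int ret;

		args->pitch = DIV_ROUND_UP(args->width * args->bpp, 8);
		args->size = args->pitch * args->height;

		obj = foo_gem_create(dev, args->size);
		if (IS_ERR(obj))
			return PTR_ERR(obj);

		ret = drm_gem_handle_create(file_priv, &obj->base,
					    &args->handle);
		/* Drop the initial reference; on success the handle now
		 * holds its own reference to the object. */
		drm_gem_object_put_unlocked(&obj->base);

		return ret;
	}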

GEM names are similar in purpose to handles but are not local to DRM
files. They can be passed between processes to reference a GEM object
globally. Names can't be used directly to refer to objects in the DRM
API; applications must convert handles to names and names to handles
using the DRM_IOCTL_GEM_FLINK and DRM_IOCTL_GEM_OPEN ioctls
respectively. The conversion is handled by the DRM core without any
driver-specific support.
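
From userspace, a round-trip through a global name looks roughly like
the following sketch; error handling is trimmed, and the two steps would
normally run in different processes.

.. code-block:: c

	#include <stdint.h>
	#include <sys/ioctl.h>
	#include <drm/drm.h>

	static int share_and_reopen(int fd, uint32_t handle)
	{
		struct drm_gem_flink flink = { .handle = handle };
		struct drm_gem_open open_arg = { 0 };

		/* Exporting side: local handle -> global name. */
		if (ioctl(fd, DRM_IOCTL_GEM_FLINK, &flink))
			return -1;

		/* Importing side: global name -> local handle. */
		open_arg.name = flink.name;
		if (ioctl(fd, DRM_IOCTL_GEM_OPEN, &open_arg))
			return -1;

		return open_arg.handle;
	}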

GEM also supports buffer sharing with dma-buf file descriptors through
PRIME. GEM-based drivers must use the provided helper functions to
implement the exporting and importing correctly; see the PRIME Buffer
Sharing section below. Since sharing file descriptors is inherently more
secure than the easily guessable and global GEM names, it is the
preferred buffer sharing mechanism. Sharing buffers through GEM names is
only supported for legacy userspace. Furthermore, PRIME also allows
cross-device buffer sharing since it is based on dma-bufs.

GEM Objects Mapping
-------------------

Because mapping operations are fairly heavyweight, GEM favours
read/write-like access to buffers, implemented through driver-specific
ioctls, over mapping buffers to userspace. However, when random access
to the buffer is needed (to perform software rendering for instance),
direct access to the object can be more efficient.

The mmap system call can't be used directly to map GEM objects, as they
don't have their own file handle. Two alternative methods currently
co-exist to map GEM objects to userspace. The first method uses a
driver-specific ioctl to perform the mapping operation, calling
:c:func:`do_mmap()` under the hood. This is often considered
dubious, seems to be discouraged for new GEM-enabled drivers, and will
thus not be described here.

The second method uses the mmap system call on the DRM file handle.

.. code-block:: c

	void *mmap(void *addr, size_t length, int prot, int flags, int fd,
		   off_t offset);

DRM identifies the GEM object to be mapped by a fake offset passed
through the mmap offset argument. Prior to being mapped, a GEM object
must thus be associated with a fake offset. To do so, drivers must call
:c:func:`drm_gem_create_mmap_offset()` on the object.

Once allocated, the fake offset value must be passed to the application
in a driver-specific way and can then be used as the mmap offset
argument.
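
For instance, a userspace sketch mapping an object of size obj_size,
with fake_offset obtained through a hypothetical driver-specific ioctl:

.. code-block:: c

	#include <sys/mman.h>

	static void *foo_map_object(int drm_fd, size_t obj_size,
				    off_t fake_offset)
	{
		return mmap(NULL, obj_size, PROT_READ | PROT_WRITE,
			    MAP_SHARED, drm_fd, fake_offset);
	}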

The GEM core provides a helper method :c:func:`drm_gem_mmap()` to
handle object mapping. The method can be set directly as the mmap file
operation handler. It will look up the GEM object based on the offset
value and set the VMA operations to the :c:type:`struct drm_driver
<drm_driver>` gem_vm_ops field. Note that
:c:func:`drm_gem_mmap()` doesn't map memory to userspace, but
relies on the driver-provided fault handler to map pages individually.

To use :c:func:`drm_gem_mmap()`, drivers must fill the
:c:type:`struct drm_driver <drm_driver>` gem_vm_ops field
with a pointer to VM operations.

The VM operations structure is a :c:type:`struct vm_operations_struct
<vm_operations_struct>` made up of several fields, the more interesting
ones being:

.. code-block:: c

	struct vm_operations_struct {
		void (*open)(struct vm_area_struct * area);
		void (*close)(struct vm_area_struct * area);
		int (*fault)(struct vm_fault *vmf);
	};


The open and close operations must update the GEM object reference
count. Drivers can use the :c:func:`drm_gem_vm_open()` and
:c:func:`drm_gem_vm_close()` helper functions directly as open
and close handlers.

The fault operation handler is responsible for mapping individual pages
to userspace when a page fault occurs. Depending on the memory
allocation scheme, drivers can allocate pages at fault time, or can
decide to allocate memory for the GEM object at the time the object is
created.
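
A fault handler sketch for the hypothetical foo driver, assuming pages
were allocated upfront into the pages array shown earlier
(:c:func:`drm_gem_mmap()` stores the GEM object in the VMA's
vm_private_data field):

.. code-block:: c

	static int foo_gem_fault(struct vm_fault *vmf)
	{
		struct vm_area_struct *vma = vmf->vma;
		struct drm_gem_object *gem_obj = vma->vm_private_data;
		struct foo_gem_object *obj =
			container_of(gem_obj, struct foo_gem_object, base);
		pgoff_t pgoff = (vmf->address - vma->vm_start) >> PAGE_SHIFT;
		int ret;

		/* Map a single pre-allocated page at the faulting address. */
		ret = vm_insert_page(vma, vmf->address, obj->pages[pgoff]);
		switch (ret) {
		case 0:
		case -EBUSY:
			return VM_FAULT_NOPAGE;
		case -ENOMEM:
			return VM_FAULT_OOM;
		default:
			return VM_FAULT_SIGBUS;
		}
	}

	static const struct vm_operations_struct foo_gem_vm_ops = {
		.fault = foo_gem_fault,
		.open = drm_gem_vm_open,
		.close = drm_gem_vm_close,
	};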

Drivers that want to map the GEM object upfront instead of handling page
faults can implement their own mmap file operation handler.

For platforms without MMU the GEM core provides a helper method
:c:func:`drm_gem_cma_get_unmapped_area`. The mmap() routines will call
this to get a proposed address for the mapping.

To use :c:func:`drm_gem_cma_get_unmapped_area`, drivers must fill the
:c:type:`struct file_operations <file_operations>` get_unmapped_area
field with a pointer to :c:func:`drm_gem_cma_get_unmapped_area`.
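
A file_operations sketch for a !MMU platform using the CMA helpers; the
DEFINE_DRM_GEM_CMA_FOPS() macro expands to an equivalent structure.

.. code-block:: c

	static const struct file_operations foo_fops = {
		.owner		= THIS_MODULE,
		.open		= drm_open,
		.release	= drm_release,
		.unlocked_ioctl	= drm_ioctl,
		.poll		= drm_poll,
		.read		= drm_read,
		.llseek		= noop_llseek,
		.mmap		= drm_gem_cma_mmap,
		.get_unmapped_area = drm_gem_cma_get_unmapped_area,
	};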

More detailed information about get_unmapped_area can be found in
Documentation/nommu-mmap.txt.

Memory Coherency
----------------

When mapped to the device or used in a command buffer, backing pages for
an object are flushed to memory and marked write combined so as to be
coherent with the GPU. Likewise, if the CPU accesses an object after the
GPU has finished rendering to the object, then the object must be made
coherent with the CPU's view of memory, usually involving GPU cache
flushing of various kinds. This core CPU<->GPU coherency management is
provided by a device-specific ioctl, which evaluates an object's current
domain and performs any necessary flushing or synchronization to put the
object into the desired coherency domain (note that the object may be
busy, i.e. an active render target; in that case, setting the domain
blocks the client and waits for rendering to complete before performing
any necessary flushing operations).
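
Taking i915 as a concrete example of such a device-specific ioctl,
userspace moves an object into the CPU read domain before reading it
with the CPU (sketch, error handling omitted):

.. code-block:: c

	#include <stdint.h>
	#include <sys/ioctl.h>
	#include <drm/i915_drm.h>

	static int i915_set_domain_cpu(int fd, uint32_t handle)
	{
		struct drm_i915_gem_set_domain sd = {
			.handle = handle,
			.read_domains = I915_GEM_DOMAIN_CPU,
			.write_domain = 0,
		};

		/* Blocks while the object is busy, then flushes caches. */
		return ioctl(fd, DRM_IOCTL_I915_GEM_SET_DOMAIN, &sd);
	}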

Command Execution
-----------------

Perhaps the most important GEM function for GPU devices is providing a
command execution interface to clients. Client programs construct
command buffers containing references to previously allocated memory
objects, and then submit them to GEM. At that point, GEM takes care to
bind all the objects into the GTT, execute the buffer, and provide
necessary synchronization between clients accessing the same buffers.
This often involves evicting some objects from the GTT and re-binding
others (a fairly expensive operation), and providing relocation support
which hides fixed GTT offsets from clients. Clients must take care not
to submit command buffers that reference more objects than can fit in
the GTT; otherwise, GEM will reject them and no rendering will occur.
Similarly, if several objects in the buffer require fence registers to
be allocated for correct rendering (e.g. 2D blits on pre-965 chips),
care must be taken not to require more fence registers than are
available to the client. Such resource management should be abstracted
from the client in libdrm.

GEM Function Reference
----------------------

.. kernel-doc:: include/drm/drm_gem.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_gem.c
   :export:

GEM CMA Helper Functions Reference
----------------------------------

.. kernel-doc:: drivers/gpu/drm/drm_gem_cma_helper.c
   :doc: cma helpers

.. kernel-doc:: include/drm/drm_gem_cma_helper.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_gem_cma_helper.c
   :export:

VMA Offset Manager
==================

.. kernel-doc:: drivers/gpu/drm/drm_vma_manager.c
   :doc: vma offset manager

.. kernel-doc:: include/drm/drm_vma_manager.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_vma_manager.c
   :export:

PRIME Buffer Sharing
====================

PRIME is the cross-device buffer sharing framework in DRM, originally
created for the OPTIMUS range of multi-GPU platforms. To userspace,
PRIME buffers are dma-buf based file descriptors.

Overview and Driver Interface
-----------------------------

Similar to GEM global names, PRIME file descriptors are also used to
share buffer objects across processes. They offer additional security:
as file descriptors must be explicitly sent over UNIX domain sockets to
be shared between applications, they can't be guessed like the globally
unique GEM names.

Drivers that support the PRIME API must set the DRIVER_PRIME bit in the
:c:type:`struct drm_driver <drm_driver>`
driver_features field, and implement the prime_handle_to_fd and
prime_fd_to_handle operations.

.. code-block:: c

	int (*prime_handle_to_fd)(struct drm_device *dev,
				  struct drm_file *file_priv,
				  uint32_t handle, uint32_t flags,
				  int *prime_fd);
	int (*prime_fd_to_handle)(struct drm_device *dev,
				  struct drm_file *file_priv,
				  int prime_fd, uint32_t *handle);

Those two operations convert a handle to a PRIME file descriptor and
vice versa. Drivers must use the kernel dma-buf buffer sharing framework
to manage the PRIME file descriptors. Similar to the mode setting API,
PRIME is agnostic to the underlying buffer object manager, as long as
handles are 32bit unsigned integers.

While non-GEM drivers must implement the operations themselves, GEM
drivers must use the :c:func:`drm_gem_prime_handle_to_fd()` and
:c:func:`drm_gem_prime_fd_to_handle()` helper functions. Those
helpers rely on the driver gem_prime_export and gem_prime_import
operations to create a dma-buf instance from a GEM object (dma-buf
exporter role) and to create a GEM object from a dma-buf instance
(dma-buf importer role).

.. code-block:: c

	struct dma_buf * (*gem_prime_export)(struct drm_device *dev,
					     struct drm_gem_object *obj,
					     int flags);
	struct drm_gem_object * (*gem_prime_import)(struct drm_device *dev,
						    struct dma_buf *dma_buf);

These two operations are mandatory for GEM drivers that
support PRIME.
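
A wiring sketch for a GEM driver relying entirely on the generic PRIME
helpers follows; the foo_driver name is hypothetical, and drivers using
the default export path typically also provide gem_prime_pin,
gem_prime_get_sg_table and related hooks.

.. code-block:: c

	static struct drm_driver foo_driver = {
		.driver_features	= DRIVER_GEM | DRIVER_PRIME,
		.prime_handle_to_fd	= drm_gem_prime_handle_to_fd,
		.prime_fd_to_handle	= drm_gem_prime_fd_to_handle,
		.gem_prime_export	= drm_gem_prime_export,
		.gem_prime_import	= drm_gem_prime_import,
	};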

PRIME Helper Functions
----------------------

.. kernel-doc:: drivers/gpu/drm/drm_prime.c
   :doc: PRIME Helpers

PRIME Function References
-------------------------

.. kernel-doc:: include/drm/drm_prime.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_prime.c
   :export:

DRM MM Range Allocator
======================

Overview
--------

.. kernel-doc:: drivers/gpu/drm/drm_mm.c
   :doc: Overview

LRU Scan/Eviction Support
-------------------------

.. kernel-doc:: drivers/gpu/drm/drm_mm.c
   :doc: lru scan roster

DRM MM Range Allocator Function References
------------------------------------------

.. kernel-doc:: include/drm/drm_mm.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_mm.c
   :export:

DRM Cache Handling
==================

.. kernel-doc:: drivers/gpu/drm/drm_cache.c
   :export:

DRM Sync Objects
================

.. kernel-doc:: drivers/gpu/drm/drm_syncobj.c
   :doc: Overview

.. kernel-doc:: include/drm/drm_syncobj.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_syncobj.c
   :export:
499