=====================
DRM Memory Management
=====================

Modern Linux systems require large amounts of graphics memory to store
frame buffers, textures, vertices and other graphics-related data. Given
the very dynamic nature of much of that data, managing graphics memory
efficiently is crucial for the graphics stack and plays a central role
in the DRM infrastructure.

The DRM core includes two memory managers, namely Translation Table Maps
(TTM) and Graphics Execution Manager (GEM). TTM was the first DRM memory
manager to be developed and tried to be a one-size-fits-all solution. It
provides a single userspace API to accommodate the needs of all
hardware, supporting both Unified Memory Architecture (UMA) devices and
devices with dedicated video RAM (i.e. most discrete video cards). This
resulted in a large, complex piece of code that turned out to be hard to
use for driver development.

GEM started as an Intel-sponsored project in reaction to TTM's
complexity. Its design philosophy is completely different: instead of
providing a solution to every graphics memory-related problem, GEM
identified common code between drivers and created a support library to
share it. GEM has simpler initialization and execution requirements than
TTM, but has no video RAM management capabilities and is thus limited to
UMA devices.

The Translation Table Manager (TTM)
===================================

TTM design background and information belongs here.

TTM initialization
------------------

    **Warning**
    This section is outdated.

Drivers wishing to support TTM must pass a filled :c:type:`ttm_bo_driver
<ttm_bo_driver>` structure to ttm_bo_device_init(), together with an
initialized global reference to the memory manager. The ttm_bo_driver
structure contains several fields with function pointers for
initializing the TTM, allocating and freeing memory, waiting for command
completion and fence synchronization, and memory migration.

The :c:type:`struct drm_global_reference <drm_global_reference>` is made
up of several fields:

.. code-block:: c

	struct drm_global_reference {
		enum ttm_global_types global_type;
		size_t size;
		void *object;
		int (*init) (struct drm_global_reference *);
		void (*release) (struct drm_global_reference *);
	};

There should be one global reference structure for your memory manager
as a whole, and there will be others for each object created by the
memory manager at runtime. Your global TTM should have a type of
TTM_GLOBAL_TTM_MEM. The size field for the global object should be
sizeof(struct ttm_mem_global), and the init and release hooks should
point at your driver-specific init and release routines, which probably
eventually call ttm_mem_global_init and ttm_mem_global_release,
respectively.

Once your global TTM accounting structure is set up and initialized by
calling ttm_global_item_ref() on it, you need to create a buffer
object TTM to provide a pool for buffer object allocation by clients and
the kernel itself. The type of this object should be
TTM_GLOBAL_TTM_BO, and its size should be sizeof(struct
ttm_bo_global). Again, driver-specific init and release functions may
be provided, likely eventually calling ttm_bo_global_ref_init() and
ttm_bo_global_ref_release(), respectively. Also, like the previous
object, ttm_global_item_ref() is used to create an initial reference
count for the TTM, which will call your initialization function.
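
As a rough sketch of this setup, using the names from this (outdated)
section; the ``mydrv`` device structure and its embedded references are
hypothetical placeholders:

.. code-block:: c

	static int mydrv_ttm_mem_global_init(struct drm_global_reference *ref)
	{
		return ttm_mem_global_init(ref->object);
	}

	static void mydrv_ttm_mem_global_release(struct drm_global_reference *ref)
	{
		ttm_mem_global_release(ref->object);
	}

	/* In driver initialization code; the global reference framework
	 * allocates ref->object of the given size before calling init. */
	mydrv->mem_global_ref.global_type = TTM_GLOBAL_TTM_MEM;
	mydrv->mem_global_ref.size = sizeof(struct ttm_mem_global);
	mydrv->mem_global_ref.init = &mydrv_ttm_mem_global_init;
	mydrv->mem_global_ref.release = &mydrv_ttm_mem_global_release;
	ret = ttm_global_item_ref(&mydrv->mem_global_ref);

	/* Repeat with TTM_GLOBAL_TTM_BO and sizeof(struct ttm_bo_global),
	 * with hooks that end up calling ttm_bo_global_ref_init() and
	 * ttm_bo_global_ref_release(), for the buffer object TTM. */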

See the radeon_ttm.c file for an example of usage.

The Graphics Execution Manager (GEM)
====================================

The GEM design approach has resulted in a memory manager that doesn't
provide full coverage of all (or even all common) use cases in its
userspace or kernel API. GEM exposes a set of standard memory-related
operations to userspace and a set of helper functions to drivers, and
lets drivers implement hardware-specific operations with their own
private API.

The GEM userspace API is described in the `GEM - the Graphics Execution
Manager <http://lwn.net/Articles/283798/>`__ article on LWN. While
slightly outdated, the document provides a good overview of the GEM API
principles. Buffer allocation and read and write operations, described
as part of the common GEM API, are currently implemented using
driver-specific ioctls.

GEM is data-agnostic. It manages abstract buffer objects without knowing
what individual buffers contain. APIs that require knowledge of buffer
contents or purpose, such as buffer allocation or synchronization
primitives, are thus outside of the scope of GEM and must be implemented
using driver-specific ioctls.

On a fundamental level, GEM involves several operations:

-  Memory allocation and freeing
-  Command execution
-  Aperture management at command execution time

Buffer object allocation is relatively straightforward and largely
provided by Linux's shmem layer, which provides memory to back each
object.

Device-specific operations, such as command execution, pinning, buffer
read & write, mapping, and domain ownership transfers, are left to
driver-specific ioctls.

GEM Initialization
------------------

Drivers that use GEM must set the DRIVER_GEM bit in the
:c:type:`struct drm_driver <drm_driver>` driver_features
field. The DRM core will then automatically initialize the GEM core
before calling the load operation. Behind the scenes, this will create a
DRM Memory Manager object which provides an address space pool for
object allocation.
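
For instance, a driver (here with a hypothetical ``mydrv`` name) would
advertise GEM support like this:

.. code-block:: c

	static struct drm_driver mydrv_driver = {
		/* DRIVER_GEM enables GEM core initialization; the other
		 * flag shown here is just a typical companion. */
		.driver_features = DRIVER_GEM | DRIVER_MODESET,
		/* ... */
	};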

In a KMS configuration, drivers need to allocate and initialize a
command ring buffer following core GEM initialization if required by the
hardware. UMA devices usually have what is called a "stolen" memory
region, which provides space for the initial framebuffer and large,
contiguous memory regions required by the device. This space is
typically not managed by GEM, and must be initialized separately into
its own DRM MM object.

GEM Objects Creation
--------------------

GEM splits creation of GEM objects and allocation of the memory that
backs them into two distinct operations.

GEM objects are represented by an instance of :c:type:`struct
drm_gem_object <drm_gem_object>`. Drivers usually need to
extend GEM objects with private information and thus create a
driver-specific GEM object structure type that embeds an instance of
:c:type:`struct drm_gem_object <drm_gem_object>`.

To create a GEM object, a driver allocates memory for an instance of its
specific GEM object type and initializes the embedded
:c:type:`struct drm_gem_object <drm_gem_object>` with a call
to :c:func:`drm_gem_object_init()`. The function takes a pointer
to the DRM device, a pointer to the GEM object and the buffer object
size in bytes.
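
A minimal creation helper might look as follows; the
``mydrv_gem_object`` type is hypothetical, and the size is page-aligned
because GEM objects are managed at page granularity:

.. code-block:: c

	struct mydrv_gem_object {
		struct drm_gem_object base;
		struct page **pages;	/* driver-private bookkeeping */
	};

	static struct mydrv_gem_object *
	mydrv_gem_create(struct drm_device *dev, size_t size)
	{
		struct mydrv_gem_object *obj;
		int ret;

		obj = kzalloc(sizeof(*obj), GFP_KERNEL);
		if (!obj)
			return ERR_PTR(-ENOMEM);

		ret = drm_gem_object_init(dev, &obj->base, PAGE_ALIGN(size));
		if (ret) {
			kfree(obj);
			return ERR_PTR(ret);
		}

		return obj;
	}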

GEM uses shmem to allocate anonymous pageable memory.
:c:func:`drm_gem_object_init()` will create an shmfs file of the
requested size and store it into the :c:type:`struct
drm_gem_object <drm_gem_object>` filp field. The memory is
used as either main storage for the object when the graphics hardware
uses system memory directly or as a backing store otherwise.

Drivers are responsible for the actual allocation of physical pages, by
calling :c:func:`shmem_read_mapping_page_gfp()` for each page.
Note that they can decide to allocate pages when initializing the GEM
object, or to delay allocation until the memory is needed (for instance
when a page fault occurs as a result of a userspace memory access or
when the driver needs to start a DMA transfer involving the memory).
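
A sketch of up-front page allocation, reusing the hypothetical
``mydrv_gem_object`` from above (the :c:func:`drm_gem_get_pages()`
helper wraps essentially this loop):

.. code-block:: c

	static int mydrv_gem_get_pages(struct mydrv_gem_object *obj)
	{
		struct address_space *mapping = obj->base.filp->f_mapping;
		int i, npages = obj->base.size >> PAGE_SHIFT;

		obj->pages = kvmalloc_array(npages, sizeof(*obj->pages),
					    GFP_KERNEL);
		if (!obj->pages)
			return -ENOMEM;

		for (i = 0; i < npages; i++) {
			struct page *page;

			page = shmem_read_mapping_page_gfp(mapping, i,
							   GFP_KERNEL);
			if (IS_ERR(page)) {
				while (--i >= 0)
					put_page(obj->pages[i]);
				kvfree(obj->pages);
				return PTR_ERR(page);
			}
			obj->pages[i] = page;
		}

		return 0;
	}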

Anonymous pageable memory allocation is not always desired, for instance
when the hardware requires physically contiguous system memory as is
often the case in embedded devices. Drivers can create GEM objects with
no shmfs backing (called private GEM objects) by initializing them with
a call to :c:func:`drm_gem_private_object_init()` instead of
:c:func:`drm_gem_object_init()`. Storage for private GEM objects
must be managed by drivers.
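
For example, a driver could back a private GEM object with coherent DMA
memory. This is only a sketch, assuming the hypothetical
``mydrv_gem_object`` gains ``vaddr`` and ``dma_addr`` fields:

.. code-block:: c

	drm_gem_private_object_init(dev, &obj->base, PAGE_ALIGN(size));

	/* The driver owns the storage: here a physically contiguous
	 * buffer from the DMA API instead of shmem pages. */
	obj->vaddr = dma_alloc_coherent(dev->dev, obj->base.size,
					&obj->dma_addr, GFP_KERNEL);
	if (!obj->vaddr)
		return -ENOMEM;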

GEM Objects Lifetime
--------------------

All GEM objects are reference-counted by the GEM core. References can be
acquired and released by calling :c:func:`drm_gem_object_get()` and
:c:func:`drm_gem_object_put()` respectively. The caller must hold the
:c:type:`struct drm_device <drm_device>` struct_mutex lock when calling
:c:func:`drm_gem_object_get()`. As a convenience, GEM provides a
:c:func:`drm_gem_object_put_unlocked()` function that can be called without
holding the lock.

When the last reference to a GEM object is released the GEM core calls
the :c:type:`struct drm_driver <drm_driver>` gem_free_object_unlocked
operation. That operation is mandatory for GEM-enabled drivers and must
free the GEM object and all associated resources:

.. code-block:: c

	void (*gem_free_object_unlocked) (struct drm_gem_object *obj);

Drivers are responsible for freeing all GEM object resources. This
includes the resources created by the GEM core, which need to be
released with :c:func:`drm_gem_object_release()`.
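
A sketch of such a hook for the hypothetical driver-specific object used
above:

.. code-block:: c

	static void mydrv_gem_free_object(struct drm_gem_object *gem_obj)
	{
		struct mydrv_gem_object *obj =
			container_of(gem_obj, struct mydrv_gem_object, base);

		/* Release driver-private resources (pages, mappings, ...)
		 * first, then the resources created by the GEM core. */
		drm_gem_object_release(gem_obj);
		kfree(obj);
	}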

GEM Objects Naming
------------------

Communication between userspace and the kernel refers to GEM objects
using local handles, global names or, more recently, file descriptors.
All of those are 32-bit integer values; the usual Linux kernel limits
apply to the file descriptors.

GEM handles are local to a DRM file. Applications get a handle to a GEM
object through a driver-specific ioctl, and can use that handle to refer
to the GEM object in other standard or driver-specific ioctls. Closing a
DRM file handle frees all its GEM handles and dereferences the
associated GEM objects.

To create a handle for a GEM object, drivers call
:c:func:`drm_gem_handle_create()`. The function takes a pointer
to the DRM file and the GEM object and returns a locally unique handle.
When the handle is no longer needed drivers delete it with a call to
:c:func:`drm_gem_handle_delete()`. Finally, the GEM object
associated with a handle can be retrieved by a call to
:c:func:`drm_gem_object_lookup()`.

Handles don't take ownership of GEM objects; they only take a reference
to the object that will be dropped when the handle is destroyed. To
avoid leaking GEM objects, drivers must make sure they drop the
reference(s) they own (such as the initial reference taken at object
creation time) as appropriate, without any special consideration for the
handle. For example, in the particular case of combined GEM object and
handle creation in the implementation of the dumb_create operation,
drivers must drop the initial reference to the GEM object before
returning the handle.
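
A sketch of that dumb_create pattern, again with the hypothetical
``mydrv`` helpers:

.. code-block:: c

	static int mydrv_dumb_create(struct drm_file *file_priv,
				     struct drm_device *dev,
				     struct drm_mode_create_dumb *args)
	{
		struct mydrv_gem_object *obj;
		u32 handle;
		int ret;

		args->pitch = DIV_ROUND_UP(args->width * args->bpp, 8);
		args->size = PAGE_ALIGN(args->pitch * args->height);

		obj = mydrv_gem_create(dev, args->size);
		if (IS_ERR(obj))
			return PTR_ERR(obj);

		ret = drm_gem_handle_create(file_priv, &obj->base, &handle);
		/* Drop the initial reference: on success the handle now
		 * holds its own reference to the object. */
		drm_gem_object_put_unlocked(&obj->base);
		if (ret)
			return ret;

		args->handle = handle;
		return 0;
	}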

GEM names are similar in purpose to handles but are not local to DRM
files. They can be passed between processes to reference a GEM object
globally. Names can't be used directly to refer to objects in the DRM
API; applications must convert handles to names and names to handles
using the DRM_IOCTL_GEM_FLINK and DRM_IOCTL_GEM_OPEN ioctls
respectively. The conversion is handled by the DRM core without any
driver-specific support.
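
From userspace the conversion is a pair of plain ioctls. A sketch, with
error handling elided (the header path may vary with the libdrm
installation; ``fd`` and ``fd2`` are open DRM device file descriptors,
and ``handle`` is a GEM handle valid on ``fd``):

.. code-block:: c

	#include <sys/ioctl.h>
	#include <drm/drm.h>

	/* Export: convert a local handle into a global name. */
	struct drm_gem_flink flink = { .handle = handle };
	ioctl(fd, DRM_IOCTL_GEM_FLINK, &flink);
	/* flink.name can now be passed to another process. */

	/* Import: convert a global name into a local handle. */
	struct drm_gem_open args = { .name = flink.name };
	ioctl(fd2, DRM_IOCTL_GEM_OPEN, &args);
	/* args.handle now refers to the same object in fd2's handle space. */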

GEM also supports buffer sharing with dma-buf file descriptors through
PRIME. GEM-based drivers must use the provided helper functions to
implement the exporting and importing correctly. See
:ref:`prime_buffer_sharing`. Since sharing file descriptors is
inherently more secure than the easily guessable and global GEM names it
is the preferred buffer sharing mechanism. Sharing buffers through GEM
names is only supported for legacy userspace. Furthermore, PRIME also
allows cross-device buffer sharing since it is based on dma-bufs.

GEM Objects Mapping
-------------------

Because mapping operations are fairly heavyweight GEM favours
read/write-like access to buffers, implemented through driver-specific
ioctls, over mapping buffers to userspace. However, when random access
to the buffer is needed (to perform software rendering for instance),
direct access to the object can be more efficient.

The mmap system call can't be used directly to map GEM objects, as they
don't have their own file handle. Two alternative methods currently
co-exist to map GEM objects to userspace. The first method uses a
driver-specific ioctl to perform the mapping operation, calling
:c:func:`do_mmap()` under the hood. This is often considered
dubious, seems to be discouraged for new GEM-enabled drivers, and will
thus not be described here.

The second method uses the mmap system call on the DRM file handle:

.. code-block:: c

	void *mmap(void *addr, size_t length, int prot, int flags,
		   int fd, off_t offset);

DRM identifies the GEM object to be mapped by a fake offset
passed through the mmap offset argument. Prior to being mapped, a GEM
object must thus be associated with a fake offset. To do so, drivers
must call :c:func:`drm_gem_create_mmap_offset()` on the object.

Once allocated, the fake offset value must be passed to the application
in a driver-specific way and can then be used as the mmap offset
argument.
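
A sketch of a driver-specific ioctl that reports the fake offset for a
handle (the ``mydrv_gem_mmap_offset`` argument structure is
hypothetical):

.. code-block:: c

	static int mydrv_gem_mmap_offset_ioctl(struct drm_device *dev,
					       void *data,
					       struct drm_file *file_priv)
	{
		struct mydrv_gem_mmap_offset *args = data;
		struct drm_gem_object *obj;
		int ret;

		obj = drm_gem_object_lookup(file_priv, args->handle);
		if (!obj)
			return -ENOENT;

		ret = drm_gem_create_mmap_offset(obj);
		if (!ret)
			args->offset = drm_vma_node_offset_addr(&obj->vma_node);

		drm_gem_object_put_unlocked(obj);
		return ret;
	}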

The GEM core provides a helper method :c:func:`drm_gem_mmap()` to
handle object mapping. The method can be set directly as the mmap file
operation handler. It will look up the GEM object based on the offset
value and set the VMA operations to the :c:type:`struct drm_driver
<drm_driver>` gem_vm_ops field. Note that
:c:func:`drm_gem_mmap()` doesn't map memory to userspace, but
relies on the driver-provided fault handler to map pages individually.

To use :c:func:`drm_gem_mmap()`, drivers must fill the
:c:type:`struct drm_driver <drm_driver>` gem_vm_ops field
with a pointer to VM operations.

The VM operations are described by a :c:type:`struct vm_operations_struct <vm_operations_struct>`
made up of several fields, the more interesting ones being:

.. code-block:: c

	struct vm_operations_struct {
		void (*open)(struct vm_area_struct * area);
		void (*close)(struct vm_area_struct * area);
		vm_fault_t (*fault)(struct vm_fault *vmf);
	};

The open and close operations must update the GEM object reference
count. Drivers can use the :c:func:`drm_gem_vm_open()` and
:c:func:`drm_gem_vm_close()` helper functions directly as open
and close handlers.

The fault operation handler is responsible for mapping individual pages
to userspace when a page fault occurs. Depending on the memory
allocation scheme, drivers can allocate pages at fault time, or can
decide to allocate memory for the GEM object at the time the object is
created.
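
A sketch of both pieces, assuming the pages were allocated at object
creation time into the hypothetical ``pages`` array used earlier
(:c:func:`drm_gem_mmap()` stores the GEM object in the VMA's
vm_private_data field):

.. code-block:: c

	static vm_fault_t mydrv_gem_fault(struct vm_fault *vmf)
	{
		struct vm_area_struct *vma = vmf->vma;
		struct drm_gem_object *gem_obj = vma->vm_private_data;
		struct mydrv_gem_object *obj =
			container_of(gem_obj, struct mydrv_gem_object, base);
		pgoff_t pgoff = (vmf->address - vma->vm_start) >> PAGE_SHIFT;

		/* Map a single page at the faulting address. */
		return vmf_insert_page(vma, vmf->address, obj->pages[pgoff]);
	}

	static const struct vm_operations_struct mydrv_gem_vm_ops = {
		.open = drm_gem_vm_open,
		.close = drm_gem_vm_close,
		.fault = mydrv_gem_fault,
	};

	/* In struct drm_driver: .gem_vm_ops = &mydrv_gem_vm_ops, */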

Drivers that want to map the GEM object upfront instead of handling page
faults can implement their own mmap file operation handler.

For platforms without an MMU the GEM core provides a helper method
:c:func:`drm_gem_cma_get_unmapped_area`. The mmap() routines will call
this to get a proposed address for the mapping.

To use :c:func:`drm_gem_cma_get_unmapped_area`, drivers must fill the
:c:type:`struct file_operations <file_operations>` get_unmapped_area
field with a pointer to :c:func:`drm_gem_cma_get_unmapped_area`.

More detailed information about get_unmapped_area can be found in
Documentation/nommu-mmap.txt
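
A sketch of the corresponding file operations for a CMA-based driver,
guarded so the no-MMU helper is only used where it applies:

.. code-block:: c

	static const struct file_operations mydrv_fops = {
		.owner = THIS_MODULE,
		.open = drm_open,
		.release = drm_release,
		.unlocked_ioctl = drm_ioctl,
		.mmap = drm_gem_cma_mmap,
	#ifndef CONFIG_MMU
		.get_unmapped_area = drm_gem_cma_get_unmapped_area,
	#endif
	};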

Memory Coherency
----------------

When mapped to the device or used in a command buffer, backing pages for
an object are flushed to memory and marked write combined so as to be
coherent with the GPU. Likewise, if the CPU accesses an object after the
GPU has finished rendering to the object, then the object must be made
coherent with the CPU's view of memory, usually involving GPU cache
flushing of various kinds. This core CPU<->GPU coherency management is
provided by a device-specific ioctl, which evaluates an object's current
domain and performs any necessary flushing or synchronization to put the
object into the desired coherency domain (note that the object may be
busy, i.e. an active render target; in that case, setting the domain
blocks the client and waits for rendering to complete before performing
any necessary flushing operations).

Command Execution
-----------------

Perhaps the most important GEM function for GPU devices is providing a
command execution interface to clients. Client programs construct
command buffers containing references to previously allocated memory
objects, and then submit them to GEM. At that point, GEM takes care to
bind all the objects into the GTT, execute the buffer, and provide
necessary synchronization between clients accessing the same buffers.
This often involves evicting some objects from the GTT and re-binding
others (a fairly expensive operation), and providing relocation support
which hides fixed GTT offsets from clients. Clients must take care not
to submit command buffers that reference more objects than can fit in
the GTT; otherwise, GEM will reject them and no rendering will occur.
Similarly, if several objects in the buffer require fence registers to
be allocated for correct rendering (e.g. 2D blits on pre-965 chips),
care must be taken not to require more fence registers than are
available to the client. Such resource management should be abstracted
from the client in libdrm.

GEM Function Reference
----------------------

.. kernel-doc:: include/drm/drm_gem.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_gem.c
   :export:

GEM CMA Helper Functions Reference
----------------------------------

.. kernel-doc:: drivers/gpu/drm/drm_gem_cma_helper.c
   :doc: cma helpers

.. kernel-doc:: include/drm/drm_gem_cma_helper.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_gem_cma_helper.c
   :export:

VRAM Helper Function Reference
==============================

.. kernel-doc:: drivers/gpu/drm/drm_vram_helper_common.c
   :doc: overview

.. kernel-doc:: include/drm/drm_gem_vram_helper.h
   :internal:

GEM VRAM Helper Functions Reference
-----------------------------------

.. kernel-doc:: drivers/gpu/drm/drm_gem_vram_helper.c
   :doc: overview

.. kernel-doc:: include/drm/drm_gem_vram_helper.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_gem_vram_helper.c
   :export:

GEM TTM Helper Functions Reference
----------------------------------

.. kernel-doc:: drivers/gpu/drm/drm_gem_ttm_helper.c
   :doc: overview

.. kernel-doc:: drivers/gpu/drm/drm_gem_ttm_helper.c
   :export:

VMA Offset Manager
==================

.. kernel-doc:: drivers/gpu/drm/drm_vma_manager.c
   :doc: vma offset manager

.. kernel-doc:: include/drm/drm_vma_manager.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_vma_manager.c
   :export:

.. _prime_buffer_sharing:

PRIME Buffer Sharing
====================

PRIME is the cross-device buffer sharing framework in drm, originally
created for the OPTIMUS range of multi-GPU platforms. To userspace,
PRIME buffers are dma-buf based file descriptors.

Overview and Lifetime Rules
---------------------------

.. kernel-doc:: drivers/gpu/drm/drm_prime.c
   :doc: overview and lifetime rules

PRIME Helper Functions
----------------------

.. kernel-doc:: drivers/gpu/drm/drm_prime.c
   :doc: PRIME Helpers

PRIME Function References
-------------------------

.. kernel-doc:: include/drm/drm_prime.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_prime.c
   :export:

DRM MM Range Allocator
======================

Overview
--------

.. kernel-doc:: drivers/gpu/drm/drm_mm.c
   :doc: Overview

LRU Scan/Eviction Support
-------------------------

.. kernel-doc:: drivers/gpu/drm/drm_mm.c
   :doc: lru scan roster

DRM MM Range Allocator Function References
------------------------------------------

.. kernel-doc:: include/drm/drm_mm.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_mm.c
   :export:

DRM Cache Handling
==================

.. kernel-doc:: drivers/gpu/drm/drm_cache.c
   :export:

DRM Sync Objects
================

.. kernel-doc:: drivers/gpu/drm/drm_syncobj.c
   :doc: Overview

.. kernel-doc:: include/drm/drm_syncobj.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_syncobj.c
   :export:

GPU Scheduler
=============

Overview
--------

.. kernel-doc:: drivers/gpu/drm/scheduler/sched_main.c
   :doc: Overview

Scheduler Function References
-----------------------------

.. kernel-doc:: include/drm/gpu_scheduler.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/scheduler/sched_main.c
   :export: