===================
Userland interfaces
===================

The DRM core exports several interfaces to applications, generally
intended to be used through corresponding libdrm wrapper functions. In
addition, drivers export device-specific interfaces for use by userspace
drivers & device-aware applications through ioctls and sysfs files.

External interfaces include: memory mapping, context management, DMA
operations, AGP management, vblank control, fence management, memory
management, and output management.

Cover generic ioctls and sysfs layout here. We only need high-level
info, since man pages should cover the rest.

libdrm Device Lookup
====================

.. kernel-doc:: drivers/gpu/drm/drm_ioctl.c
   :doc: getunique and setversion story


Primary Nodes, DRM Master and Authentication
============================================

.. kernel-doc:: drivers/gpu/drm/drm_auth.c
   :doc: master and authentication

.. kernel-doc:: drivers/gpu/drm/drm_auth.c
   :export:

.. kernel-doc:: include/drm/drm_auth.h
   :internal:

Render nodes
============

DRM core provides multiple character devices for user-space to use.
Depending on which device is opened, user-space can perform a different
set of operations (mainly ioctls). The primary node is always created
and called card<num>. Additionally, a currently unused control node,
called controlD<num>, is also created. The primary node provides all
legacy operations and historically was the only interface used by
userspace. With KMS, the control node was introduced. However, the
planned KMS control interface has never been written and so the control
node stays unused to date.

With the increased use of offscreen renderers and GPGPU applications,
clients no longer require running compositors or graphics servers to
make use of a GPU. But the DRM API required unprivileged clients to
authenticate to a DRM-Master prior to getting GPU access. To avoid this
step and to grant clients GPU access without authenticating, render
nodes were introduced. Render nodes solely serve render clients, that
is, no modesetting or privileged ioctls can be issued on render nodes.
Only non-global rendering commands are allowed. If a driver supports
render nodes, it must advertise it via the DRIVER_RENDER DRM driver
capability. If not supported, the primary node must be used for render
clients together with the legacy drmAuth authentication procedure.

If a driver advertises render node support, DRM core will create a
separate render node called renderD<num>. There will be one render node
per device. No ioctls except PRIME-related ioctls will be allowed on
this node. In particular, GEM_OPEN is explicitly prohibited. Render
nodes are designed to avoid the buffer leaks that occur if clients
guess the flink names or mmap offsets on the legacy interface. In
addition to this basic interface, drivers must mark their
driver-dependent render-only ioctls as DRM_RENDER_ALLOW so render
clients can use them. Driver authors must be careful not to allow any
privileged ioctls on render nodes.

With render nodes, user-space can now control access to the render node
via basic file-system access modes. A running graphics server which
authenticates clients on the privileged primary/legacy node is no longer
required. Instead, a client can open the render node and is immediately
granted GPU access. Communication between clients (or servers) is done
via PRIME. FLINK from render node to legacy node is not supported. New
clients must not use the insecure FLINK interface.

Besides dropping all modeset/global ioctls, render nodes also drop the
DRM-Master concept. There is no reason to associate render clients with
a DRM-Master as they are independent of any graphics server. Besides,
they must work without any running master, anyway. Drivers must be able
to run without a master object if they support render nodes. If, on the
other hand, a driver requires shared state between clients which is
visible to user-space and accessible beyond open-file boundaries, it
cannot support render nodes.

VBlank event handling
=====================

The DRM core exposes two vertical blank related ioctls:

DRM_IOCTL_WAIT_VBLANK
    This takes a struct drm_wait_vblank structure as its argument, and
    it is used to block or request a signal when a specified vblank
    event occurs.

DRM_IOCTL_MODESET_CTL
    This was only used for user-mode-setting drivers around modesetting
    changes to allow the kernel to update the vblank interrupt after
    mode setting, since on many devices the vertical blank counter is
    reset to 0 at some point during modeset. Modern drivers should not
    call this any more since with kernel mode setting it is a no-op.

This second part of the GPU Driver Developer's Guide documents driver
code, implementation details and also all the driver-specific userspace
interfaces. Especially since all hardware-acceleration interfaces to
userspace are driver specific for efficiency and other reasons, these
interfaces can be rather substantial. Hence every driver has its own
chapter.
