config DRM_I915_USERFAULT_AUTOSUSPEND
	int "Runtime autosuspend delay for userspace GGTT mmaps (ms)"
	default 250 # milliseconds
	help
	  On runtime suspend, as we suspend the device, we have to revoke
	  userspace GGTT mmaps and force userspace to take a pagefault on
	  their next access. The revocation and subsequent recreation of
	  the GGTT mmap can be very slow and so we impose a small hysteresis
	  that complements the runtime-pm autosuspend and provides a lower
	  floor on the autosuspend delay.

	  May be 0 to disable the extra delay and solely use the device-level
	  runtime-pm autosuspend delay tunable.

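# The option above only sets a floor; the device-level delay it complements
# is a standard runtime-pm sysfs attribute. A minimal userspace sketch in C
# that reads it (the card0 path is an assumption; adjust for your system):
#
#	#include <stdio.h>
#
#	int main(void)
#	{
#		const char *path =
#			"/sys/class/drm/card0/device/power/autosuspend_delay_ms";
#		long delay_ms;
#		FILE *f = fopen(path, "r");
#
#		if (!f || fscanf(f, "%ld", &delay_ms) != 1) {
#			perror(path);
#			if (f)
#				fclose(f);
#			return 1;
#		}
#		fclose(f);
#		printf("runtime-pm autosuspend delay: %ld ms\n", delay_ms);
#		return 0;
#	}
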
config DRM_I915_HEARTBEAT_INTERVAL
	int "Interval between heartbeat pulses (ms)"
	default 2500 # milliseconds
	help
	  The driver sends a periodic heartbeat down all active engines to
	  check the health of the GPU and undertake regular house-keeping of
	  internal driver state.

	  This is adjustable via
	  /sys/class/drm/card?/engine/*/heartbeat_interval_ms

	  May be 0 to disable heartbeats and therefore disable automatic GPU
	  hang detection.

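# A minimal userspace sketch in C of tuning the sysfs control named above
# at runtime (card0 and the rcs0 engine directory are assumptions; engine
# names vary by platform, and writing requires sufficient privileges):
#
#	#include <stdio.h>
#
#	int main(void)
#	{
#		const char *path =
#			"/sys/class/drm/card0/engine/rcs0/heartbeat_interval_ms";
#		FILE *f = fopen(path, "w");
#
#		if (!f) {
#			perror(path);
#			return 1;
#		}
#		fprintf(f, "5000");	/* stretch pulses to 5 seconds */
#		return fclose(f) ? 1 : 0;
#	}
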
config DRM_I915_PREEMPT_TIMEOUT
	int "Preempt timeout (ms, jiffy granularity)"
	default 640 # milliseconds
	help
	  How long to wait (in milliseconds) for a preemption event to occur
	  when submitting a new context via execlists. If the current context
	  does not hit an arbitration point and yield to HW before the timer
	  expires, the HW will be reset to allow the more important context
	  to execute.

	  This is adjustable via
	  /sys/class/drm/card?/engine/*/preempt_timeout_ms

	  May be 0 to disable the timeout.

	  The compiled-in default may get overridden at driver probe time on
	  certain platforms and certain engines, which will be reflected in
	  the sysfs control.

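# "Jiffy granularity" means the millisecond value is rounded up to whole
# scheduler ticks. A self-contained C sketch mirroring the kernel's
# msecs_to_jiffies() rounding (HZ=250 is an assumed tick rate; the real
# value is fixed by the kernel configuration):
#
#	#include <stdio.h>
#
#	#define HZ 250	/* assumed: 4 ms per jiffy */
#
#	static unsigned long to_jiffies(unsigned long ms)
#	{
#		return (ms * HZ + 999) / 1000;	/* round up to the next tick */
#	}
#
#	int main(void)
#	{
#		unsigned long ms = 640;	/* the compiled-in default */
#		unsigned long j = to_jiffies(ms);
#
#		printf("%lu ms -> %lu jiffies (%lu ms effective)\n",
#		       ms, j, j * 1000 / HZ);	/* 640 ms -> 160 jiffies */
#		return 0;
#	}
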
config DRM_I915_MAX_REQUEST_BUSYWAIT
	int "Busywait for request completion limit (ns)"
	default 8000 # nanoseconds
	help
	  Before sleeping waiting for a request (GPU operation) to complete,
	  we may spend some time polling for its completion. As the IRQ may
	  take a non-negligible time to set up, we do a short spin first to
	  check if the request will complete in the time it would have taken
	  us to enable the interrupt.

	  This is adjustable via
	  /sys/class/drm/card?/engine/*/max_busywait_duration_ns

	  May be 0 to disable the initial spin. In practice, we estimate
	  the cost of enabling the interrupt (if currently disabled) to be
	  a few microseconds.

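# A userspace analogue in C of the spin-then-sleep idea above: poll a
# completion flag for a short budget before paying for a blocking wait.
# The flag and helper names are illustrative stand-ins, not i915 code:
#
#	#include <stdbool.h>
#	#include <stdio.h>
#	#include <time.h>
#
#	static long long now_ns(void)
#	{
#		struct timespec ts;
#
#		clock_gettime(CLOCK_MONOTONIC, &ts);
#		return ts.tv_sec * 1000000000LL + ts.tv_nsec;
#	}
#
#	/* Spin for up to budget_ns; on timeout the caller falls back to an
#	 * interrupt-driven sleep, which is what the spin tries to avoid. */
#	static bool busywait(volatile bool *done, long long budget_ns)
#	{
#		long long deadline = now_ns() + budget_ns;
#
#		while (now_ns() < deadline)
#			if (*done)
#				return true;
#		return false;
#	}
#
#	int main(void)
#	{
#		volatile bool done = false;
#
#		printf("completed in spin: %d\n", busywait(&done, 8000));
#		return 0;
#	}
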
config DRM_I915_STOP_TIMEOUT
	int "How long to wait for an engine to quiesce gracefully before reset (ms)"
	default 100 # milliseconds
	help
	  By stopping submission and sleeping for a short time before resetting
	  the GPU, we allow the innocent contexts also running on the system to
	  quiesce. It is then less likely for a hanging context to cause
	  collateral damage as the system is reset in order to recover. The
	  corollary is that the reset itself may take longer and so be more
	  disruptive to interactive or low-latency workloads.

	  This is adjustable via
	  /sys/class/drm/card?/engine/*/stop_timeout_ms

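# The shape of that policy as a toy C sketch: halt submission, give
# in-flight work a bounded grace period to drain, then reset regardless.
# All names below are hypothetical stand-ins, not driver internals:
#
#	#include <stdbool.h>
#	#include <stdio.h>
#	#include <unistd.h>
#
#	#define STOP_TIMEOUT_MS 100
#
#	static bool engine_idle(void)
#	{
#		return false;	/* stand-in: pretend the hang persists */
#	}
#
#	int main(void)
#	{
#		int waited;
#
#		printf("submission stopped\n");	/* innocent contexts drain */
#		for (waited = 0; waited < STOP_TIMEOUT_MS && !engine_idle();
#		     waited += 10)
#			usleep(10 * 1000);	/* poll in 10 ms steps */
#		printf("resetting after %d ms grace\n", waited);
#		return 0;
#	}
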
config DRM_I915_TIMESLICE_DURATION
	int "Scheduling quantum for userspace batches (ms, jiffy granularity)"
	default 1 # milliseconds
	help
	  When two user batches of equal priority are executing, we will
	  alternate execution of each batch to ensure forward progress of
	  all users. This is necessary in some cases where there may be
	  an implicit dependency between those batches that requires
	  concurrent execution in order for them to proceed, e.g. they
	  interact with each other via userspace semaphores. Each context
	  is scheduled for execution for the timeslice duration, before
	  switching to the next context.

	  This is adjustable via
	  /sys/class/drm/card?/engine/*/timeslice_duration_ms

	  May be 0 to disable timeslicing.

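# A toy round-robin sketch in C of the alternation described above: two
# equal-priority contexts each run for one quantum before yielding, so
# both make forward progress even if they spin on each other's semaphores.
# Purely illustrative; this is not the execlists scheduler:
#
#	#include <stdio.h>
#
#	#define QUANTUM_MS 1
#
#	int main(void)
#	{
#		const char *context[] = { "ctx A", "ctx B" };
#		int slice;
#
#		for (slice = 0; slice < 6; slice++)
#			printf("t=%2d ms: %s runs for %d ms\n",
#			       slice * QUANTUM_MS, context[slice % 2],
#			       QUANTUM_MS);
#		return 0;
#	}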