
.. kernel-doc:: drivers/dma-buf/dma-fence.c
   :doc: DMA fences overview

Indefinite DMA Fences
~~~~~~~~~~~~~~~~~~~~~

At various times struct dma_fence with an indefinite time until dma_fence_wait()
finishes have been proposed. Examples include:
* Future fences, used in HWC1 to signal when a buffer isn't used by the display
  any longer, and created with the screen update that makes the buffer visible.
  The time at which such a fence completes is entirely under userspace's
  control.
* Proxy fences, proposed to handle &drm_syncobj for which the fence has not yet
  materialized from other syncobjs.
* Userspace fences or gpu futexes, fine-grained locking within a command buffer
  that userspace uses for synchronization across engines or with the CPU, which
  is then imported as a DMA fence for integration into existing winsys
  protocols (a minimal sketch of such a fence follows this list).
* Long-running compute command buffers, while still using traditional
  end-of-batch DMA fences for memory management instead of context preemption
  DMA fences which get reattached when the compute job is rescheduled.
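To make the userspace fence idea concrete, here is a minimal userspace sketch
of such a memory fence. It assumes the GPU coherently writes a monotonically
increasing 64-bit seqno to a shared mapping as batches complete; the names and
the busy-wait are purely illustrative, not any real driver ABI:

.. code-block:: c

   #include <stdatomic.h>
   #include <stdbool.h>
   #include <stdint.h>

   /* A "GPU futex": the fence fires once the seqno the GPU writes into
    * shared memory reaches wait_value. Only the GPU program userspace
    * submitted ever advances the seqno, so whether and when this fence
    * signals is entirely under userspace's control. */
   struct userspace_fence {
           _Atomic uint64_t *seqno;  /* shared mapping written by the GPU */
           uint64_t wait_value;
   };

   static bool userspace_fence_signaled(const struct userspace_fence *f)
   {
           return atomic_load_explicit(f->seqno, memory_order_acquire) >=
                  f->wait_value;
   }

   static void userspace_fence_wait(const struct userspace_fence *f)
   {
           /* Busy-wait for simplicity; a real implementation would sleep
            * on a futex. Either way the wait has no upper bound. */
           while (!userspace_fence_signaled(f))
                   ;
   }

Importing something like this as a DMA fence would hand the kernel a wait whose
completion only the submitted GPU program can trigger, which is exactly the
problem discussed next.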
Common to all these schemes is that userspace controls the dependencies of these
fences and controls when they fire. Mixing indefinite fences with normal
in-kernel DMA fences does not work, even when a fallback timeout is included to
protect against malicious userspace:
* Only the kernel knows about all DMA fence dependencies; userspace is not aware
  of dependencies injected due to memory management or scheduler decisions.

* Only userspace knows about all dependencies in indefinite fences and when
  exactly they will complete; the kernel has no visibility.
Furthermore the kernel has to be able to hold up userspace command submission
for memory management needs, which means we must support indefinite fences being
dependent upon DMA fences. If the kernel also supported indefinite fences as
first-class DMA fences, as any of the above proposals would require, there is
the potential for a deadlock:
.. kernel-render:: DOT
   :alt: Indefinite Fencing Dependency Cycle
   :caption: Indefinite Fencing Dependency Cycle

   digraph "Fencing Cycle" {
      node [shape=box bgcolor=grey style=filled]
      kernel [label="Kernel DMA Fences"]
      userspace [label="userspace controlled fences"]
      kernel -> userspace [label="memory management"]
      userspace -> kernel [label="future fence, proxy fence, ..."]

      { rank=same; kernel userspace }
   }
The only solution to avoid such dependency loops is to not allow indefinite
fences in the kernel. This means:
* No future fences, proxy fences or userspace fences imported as DMA fences,
  with or without a timeout.
* No DMA fences that signal the end of a batchbuffer for command submission
  where userspace is allowed to use userspace fencing or long-running compute
  workloads. This also means no implicit fencing for shared buffers in these
  cases.
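For contrast, below is a rough sketch of the contract a legitimate in-kernel
DMA fence has to honour: the driver pairs every fence with a backstop (here a
hangcheck timer standing in for a real reset path) so the fence signals in
bounded time no matter what userspace does. The toy_* names are hypothetical;
only the dma_fence and timer calls are real kernel APIs:

.. code-block:: c

   #include <linux/dma-fence.h>
   #include <linux/errno.h>
   #include <linux/jiffies.h>
   #include <linux/slab.h>
   #include <linux/spinlock.h>
   #include <linux/timer.h>

   struct toy_job_fence {
           struct dma_fence base;
           spinlock_t lock;
           struct timer_list hangcheck; /* backstop so the fence always fires */
   };

   static const char *toy_driver_name(struct dma_fence *f)
   {
           return "toy";
   }

   static const char *toy_timeline_name(struct dma_fence *f)
   {
           return "toy-ring0";
   }

   static const struct dma_fence_ops toy_fence_ops = {
           .get_driver_name = toy_driver_name,
           .get_timeline_name = toy_timeline_name,
   };

   static void toy_hangcheck(struct timer_list *t)
   {
           struct toy_job_fence *jf = from_timer(jf, t, hangcheck);

           /* The hardware hung: a real driver would reset the engine here.
            * The point is that signalling never depends on userspace. */
           dma_fence_set_error(&jf->base, -ETIMEDOUT);
           dma_fence_signal(&jf->base);
   }

   static struct toy_job_fence *toy_job_fence_create(u64 context, u64 seqno)
   {
           struct toy_job_fence *jf = kzalloc(sizeof(*jf), GFP_KERNEL);

           if (!jf)
                   return NULL;
           spin_lock_init(&jf->lock);
           dma_fence_init(&jf->base, &toy_fence_ops, &jf->lock, context, seqno);
           timer_setup(&jf->hangcheck, toy_hangcheck, 0);
           mod_timer(&jf->hangcheck, jiffies + 2 * HZ); /* bounded completion */
           return jf;
   }

On the happy path the driver's interrupt handler would signal the fence and
cancel the timer; the timer exists only so that completion is guaranteed either
way.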
Recoverable Hardware Page Faults Implications
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Modern hardware supports recoverable page faults, which has a lot of
implications for DMA fences.
Resolving a page fault requires allocating memory, so any fence that only
signals once faulting work has finished effectively makes its completion depend
on a memory allocation. But memory allocations are not allowed to gate
completion of DMA fences, which means any workload using recoverable page
faults cannot use DMA fences for synchronization. Synchronization fences
controlled by userspace must be used instead.
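The dependency is only legal in the other direction: memory management may wait
on DMA fences. The sketch below shows why the rule exists, using a hypothetical
eviction path; toy_evict_buffer() is made up, while dma_resv_wait_timeout() is
the reservation-object API found in recent kernels:

.. code-block:: c

   #include <linux/dma-resv.h>
   #include <linux/sched.h>

   /* Reclaim must be able to block until the GPU is done with a buffer
    * before freeing its pages. This is only safe because every DMA fence
    * is guaranteed to signal without allocating memory; a fence stuck on
    * an unresolved page fault here would deadlock reclaim. */
   static int toy_evict_buffer(struct dma_resv *resv)
   {
           long ret;

           /* DMA_RESV_USAGE_BOOKKEEP waits on all fences on the buffer. */
           ret = dma_resv_wait_timeout(resv, DMA_RESV_USAGE_BOOKKEEP,
                                       false, MAX_SCHEDULE_TIMEOUT);
           return ret < 0 ? ret : 0;
   }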
On GPUs this poses a problem, because current desktop compositor protocols on
Linux rely on DMA fences, which means that without an entirely new userspace
stack built on top of userspace fences, compositors cannot benefit from
recoverable page faults. For now this limits recoverable page faults on GPUs to
pure compute workloads.
Furthermore GPUs usually share hardware resources, like compute units or
command submission engines, between the 3D rendering and compute side. If a 3D
job with a DMA fence and a compute workload relying on page fault handling are
both pending they can deadlock: the 3D job may need the compute job to release
shared resources, while repairing the compute job's page fault may, through
memory reclaim, wait on the 3D job's DMA fence. Drivers must prevent this. As a
last resort, if the hardware provides no useful preemption or resource
reservation mechanism, all workloads must be flushed from the GPU when
switching between jobs requiring DMA fences and jobs requiring page fault
handling: all DMA fences must complete before a compute job with page fault
handling can be inserted into the scheduler queue, and vice versa, all compute
workloads must be preempted, flushing their pending page faults, before a DMA
fence may be made visible anywhere in the system. One possible implementation
is sketched below.
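A sketch of that last-resort policy, as one possible driver implementation: a
global mode switch that flushes one class of work before admitting the other.
Every identifier here is hypothetical, including the two helpers standing in
for the driver's real fence-wait and preemption paths:

.. code-block:: c

   #include <linux/mutex.h>

   enum toy_gpu_mode {
           TOY_MODE_DMA_FENCE,   /* only jobs that signal DMA fences */
           TOY_MODE_PAGE_FAULT,  /* only jobs that may page-fault */
   };

   struct toy_gpu {
           struct mutex mode_lock;
           enum toy_gpu_mode mode;
   };

   /* Hypothetical driver internals: wait for every in-flight DMA fence,
    * and preempt all compute so pending page faults are flushed. */
   void toy_wait_all_dma_fences(struct toy_gpu *gpu);
   void toy_preempt_all_compute(struct toy_gpu *gpu);

   /* Called before queueing a job of the given kind: flushes the GPU when
    * switching between the two worlds, so they never overlap in time. */
   static void toy_enter_mode(struct toy_gpu *gpu, enum toy_gpu_mode want)
   {
           mutex_lock(&gpu->mode_lock);
           if (gpu->mode != want) {
                   if (gpu->mode == TOY_MODE_DMA_FENCE)
                           toy_wait_all_dma_fences(gpu);
                   else
                           toy_preempt_all_compute(gpu);
                   gpu->mode = want;
           }
           mutex_unlock(&gpu->mode_lock);
   }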
A more theoretical option would be to untangle these dependencies when
allocating memory to repair page faults, either through separate memory blocks
or by runtime tracking of the full dependency graph of all DMA fences. This
would have a very wide impact on the kernel, since resolving a page fault on
the CPU side can itself involve a page fault; it is much more feasible and
robust to contain page fault handling within the specific driver.
Note that workloads running on independent hardware, like copy engines or other
GPUs, do not have any such impact. This allows us to keep using DMA fences
internally in the kernel even for resolving hardware page faults, e.g. by using
copy engines to clear or copy the memory needed to resolve a fault.
In some ways page fault handling looks like an exception to the rules above,
but it is consistent with the `Indefinite DMA Fences` discussion: indefinite
fences from compute workloads are allowed to depend on DMA fences, but not the
other way around. And the page fault problem is not even new, because some
other CPU thread in userspace might hit a page fault which holds up a userspace
fence - supporting page faults on GPUs doesn't add anything fundamentally new.