=========================
I915 DG1/LMEM RFC Section
=========================

Upstream plan
=============
For upstream the overall plan for landing all the DG1 stuff and turning it on
for real, with all the uAPI bits, is:

* Merge basic HW enabling of DG1 (still without pciid)
* Merge the uAPI bits behind a special CONFIG_BROKEN (or similar) flag

  * At this point we can still make changes, but importantly this lets us
    start running IGTs which can utilize local-memory in CI

* Convert over to TTM, and make sure it all keeps working. Some of the work
  items:

  * TTM shrinker for discrete
  * dma_resv_lockitem for full dma_resv_lock, i.e. not just trylock
  * Use TTM CPU pagefault handler
  * Route shmem backend over to TTM SYSTEM for discrete
  * TTM purgeable object support
  * Move i915 buddy allocator over to TTM
  * MMAP ioctl mode (see `I915 MMAP`_)
  * SET/GET ioctl caching (see `I915 SET/GET CACHING`_)

* Send RFC (with mesa-dev on cc) for final sign-off on the uAPI
* Add pciid for DG1 and turn on the uAPI for real

New object placement and region query uAPI
==========================================
Starting from DG1 we need to give userspace the ability to allocate buffers from
device local-memory. Currently the driver supports gem_create, which can place
buffers in system memory via shmem, and the usual assortment of other
interfaces, like dumb buffers and userptr.

To support this new capability, while also providing a uAPI which will work
beyond just DG1, we propose to offer three new bits of uAPI:

DRM_I915_QUERY_MEMORY_REGIONS
-----------------------------
New query ID which allows userspace to discover the list of supported memory
regions (like system-memory and local-memory) for a given device. We identify
each region with a class and instance pair, which should be unique. The class
here would be DEVICE or SYSTEM, and the instance would be zero on platforms
like DG1.
Side note: The class/instance design is borrowed from our existing engine uAPI,
where we describe every physical engine in terms of its class and particular
instance, since we can have more than one per class.

In the future we also want to expose more information which can further
describe the capabilities of a region.

.. kernel-doc:: include/uapi/drm/i915_drm.h
   :functions: drm_i915_gem_memory_class drm_i915_gem_memory_class_instance drm_i915_memory_region_info drm_i915_query_memory_regions

GEM_CREATE_EXT
--------------
New ioctl which is basically just gem_create but now allows userspace to provide
a chain of possible extensions. Note that if we don't provide any extensions and
set flags=0 then we get the exact same behaviour as gem_create.

Side note: We also need to support PXP[1] in the near future, which is also
applicable to integrated platforms, and adds its own gem_create_ext extension,
which basically lets userspace mark a buffer as "protected".

.. kernel-doc:: include/uapi/drm/i915_drm.h
   :functions: drm_i915_gem_create_ext

I915_GEM_CREATE_EXT_MEMORY_REGIONS
----------------------------------
Implemented as an extension for gem_create_ext, we would now allow userspace to
optionally provide an immutable list of preferred placements at creation time,
in priority order, for a given buffer object. We expect each placement to use
the class/instance encoding, as per the output of the regions query. Having the
list in priority order will be useful in the future when placing an object, say
during eviction.

.. kernel-doc:: include/uapi/drm/i915_drm.h
   :functions: drm_i915_gem_create_ext_memory_regions

One fair criticism here is that this seems a little over-engineered[2]. If we
just consider DG1 then yes, a simple gem_create.flags or similar would be all
that's needed to tell the kernel to allocate the buffer in local-memory.
However, looking to the future we need a uAPI which can also support the
upcoming Xe HP multi-tile architecture in a sane way, where there can be
multiple local-memory instances for a given device, and so using both class and
instance in our uAPI to describe regions is desirable, although specifically
for DG1 it's uninteresting, since we only have a single local-memory instance.

Existing uAPI issues
====================
Some potential issues we still need to resolve.

I915 MMAP
---------
In i915 there are multiple ways to MMAP a GEM object, including mapping the
same object using different mapping types (WC vs WB), i.e. multiple active
mmaps per object. TTM expects one MMAP at most for the lifetime of the object.
If it turns out that we have to backpedal here, there might be some potential
userspace fallout.

I915 SET/GET CACHING
--------------------
In i915 we have the set/get_caching ioctl. TTM doesn't let us change this, but
DG1 doesn't support non-snooped pcie transactions, so we can just always
allocate as WB for smem-only buffers. If/when our hw gains support for
non-snooped pcie transactions then we must fix this mode at allocation time as
a new GEM extension.

This is related to the mmap problem, because in general (meaning, when we're
not running on intel cpus) the cpu mmap must not, ever, be inconsistent with
the allocation mode.

One possible idea is to let the kernel pick the mmap mode for userspace from
the following table:

smem-only: WB. Userspace does not need to call clflush.

smem+lmem: We only ever allow a single mode, so simply allocate this as
uncached memory, and always give userspace a WC mapping. The GPU still does
snooped access here (assuming we can't turn it off like on DG1), which is a bit
inefficient.

lmem only: always WC.

This means on discrete you only get a single mmap mode, and all others must be
rejected.
This single mode would probably become a new default mmap mode or something
along those lines.

Links
=====
[1] https://patchwork.freedesktop.org/series/86798/

[2] https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/5599#note_553791