Revision tags: v6.6.25, v6.6.24, v6.6.23, v6.6.16, v6.6.15, v6.6.14, v6.6.13, v6.6.12, v6.6.11, v6.6.10, v6.6.9, v6.6.8, v6.6.7, v6.6.6, v6.6.5, v6.6.4, v6.6.3, v6.6.2, v6.5.11, v6.6.1, v6.5.10, v6.6, v6.5.9, v6.5.8, v6.5.7, v6.5.6, v6.5.5, v6.5.4, v6.5.3, v6.5.2, v6.1.51, v6.5.1, v6.1.50, v6.5, v6.1.49, v6.1.48, v6.1.46, v6.1.45, v6.1.44, v6.1.43, v6.1.42, v6.1.41, v6.1.40, v6.1.39, v6.1.38, v6.1.37, v6.1.36, v6.4, v6.1.35, v6.1.34, v6.1.33, v6.1.32, v6.1.31, v6.1.30, v6.1.29, v6.1.28, v6.1.27, v6.1.26, v6.3, v6.1.25, v6.1.24, v6.1.23, v6.1.22, v6.1.21, v6.1.20, v6.1.19, v6.1.18, v6.1.17, v6.1.16, v6.1.15, v6.1.14, v6.1.13, v6.2, v6.1.12, v6.1.11, v6.1.10, v6.1.9, v6.1.8, v6.1.7, v6.1.6, v6.1.5, v6.0.19, v6.0.18, v6.1.4, v6.1.3, v6.0.17, v6.1.2, v6.0.16, v6.1.1, v6.0.15, v6.0.14, v6.0.13, v6.1, v6.0.12, v6.0.11, v6.0.10, v5.15.80, v6.0.9, v5.15.79, v6.0.8, v5.15.78, v6.0.7, v5.15.77, v5.15.76, v6.0.6, v6.0.5, v5.15.75, v6.0.4, v6.0.3, v6.0.2, v5.15.74, v5.15.73, v6.0.1, v5.15.72, v6.0, v5.15.71, v5.15.70, v5.15.69, v5.15.68, v5.15.67, v5.15.66, v5.15.65, v5.15.64, v5.15.63, v5.15.62, v5.15.61, v5.15.60, v5.15.59, v5.19, v5.15.58, v5.15.57, v5.15.56, v5.15.55, v5.15.54, v5.15.53, v5.15.52, v5.15.51, v5.15.50, v5.15.49, v5.15.48, v5.15.47, v5.15.46, v5.15.45, v5.15.44, v5.15.43, v5.15.42, v5.18, v5.15.41, v5.15.40, v5.15.39, v5.15.38, v5.15.37, v5.15.36, v5.15.35 |
|
#
ca6c1e21 |
| 16-Apr-2022 |
Christian König <christian.koenig@amd.com> |
drm/amdgpu: use the new drm_exec object for CS v3
Use the new component here as well and remove the old handling.
v2: drop duplicate handling
v3: fix memory leak pointed out by Tatsuyuki
Signed-off-by: Christian König <christian.koenig@amd.com> Acked-by: Alex Deucher <alexander.deucher@amd.com> Link: https://patchwork.freedesktop.org/patch/msgid/20230711133122.3710-7-christian.koenig@amd.com
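The drm_exec pattern this moves CS onto, as a minimal sketch (flag choice, the single-object body and the error handling are illustrative assumptions, and drm_exec_init() grew a third argument in later kernels):

    #include <drm/drm_exec.h>

    static int lock_one_bo(struct drm_gem_object *obj)
    {
            struct drm_exec exec;
            int r;

            drm_exec_init(&exec, DRM_EXEC_INTERRUPTIBLE_WAIT);
            drm_exec_until_all_locked(&exec) {
                    /* lock the object and reserve one fence slot */
                    r = drm_exec_prepare_obj(&exec, obj, 1);
                    /* on ww-mutex contention, unwind and rerun the body */
                    drm_exec_retry_on_contention(&exec);
                    if (r)
                            break;
            }
            /* ... validate and submit while everything stays locked ... */
            drm_exec_fini(&exec);
            return r;
    }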
|
#
a3185f91 |
| 09-May-2022 |
Christian König <christian.koenig@amd.com> |
drm/ttm: merge ttm_bo_api.h and ttm_bo_driver.h v2
Merge and clean up the two headers into a single description of the object API. Also move all the documentation to the implementation and drop unnecessary includes from the header.
No functional change.
v2: minimal checkpatch.pl cleanup
Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Arunpravin Paneer Selvam <Arunpravin.PaneerSelvam@amd.com> Link: https://patchwork.freedesktop.org/patch/msgid/20221125102137.1801-4-christian.koenig@amd.com
|
#
4458da0b |
| 10-Nov-2022 |
Christian König <christian.koenig@amd.com> |
drm/amdgpu: fix userptr HMM range handling v2
The basic problem here is that it's not allowed to page fault while holding the reservation lock.
So it can happen that multiple processes try to validate a userptr at the same time.
Work around that by putting the HMM range object into the mutex protected bo list for now.
v2: make sure range is set to NULL in case of an error
Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> CC: stable@vger.kernel.org Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
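A rough sketch of the workaround (struct layout, field and helper names are assumptions for illustration): the HMM range gathered for a userptr is parked in the mutex-protected bo list entry, and the v2 change keeps the pointer NULL on the error path:

    struct amdgpu_bo_list_entry {
            /* ... */
            struct hmm_range *range;  /* userptr pages faulted outside the resv lock */
    };

    r = amdgpu_ttm_tt_get_user_pages(bo, e->user_pages, &e->range);
    if (r) {
            e->range = NULL;          /* v2: never leave a stale range behind */
            return r;
    }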
|
#
fec8fdb5 |
| 10-Nov-2022 |
Christian König <christian.koenig@amd.com> |
drm/amdgpu: fix userptr HMM range handling v2
The basic problem here is that it's not allowed to page fault while holding the reservation lock.
So it can happen that multiple processes try to validate a userptr at the same time.
Work around that by putting the HMM range object into the mutex protected bo list for now.
v2: make sure range is set to NULL in case of an error
Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> CC: stable@vger.kernel.org Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
|
#
90af0ca0 |
| 20-Jul-2022 |
Luben Tuikov <luben.tuikov@amd.com> |
drm/amdgpu: Protect the amdgpu_bo_list list with a mutex v2
Protect the struct amdgpu_bo_list with a mutex. This is used during command submission in order to avoid buffer object corruption as recorded in the link below.
v2 (chk): Keep the mutex locked for the whole CS to avoid using the list from multiple CS threads at the same time.
Suggested-by: Christian König <christian.koenig@amd.com> Cc: Alex Deucher <Alexander.Deucher@amd.com> Cc: Andrey Grodzovsky <Andrey.Grodzovsky@amd.com> Cc: Vitaly Prosyak <Vitaly.Prosyak@amd.com> Link: https://gitlab.freedesktop.org/drm/amd/-/issues/2048 Signed-off-by: Luben Tuikov <luben.tuikov@amd.com> Signed-off-by: Christian König <christian.koenig@amd.com> Tested-by: Luben Tuikov <luben.tuikov@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Cc: stable@vger.kernel.org
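In code terms the v2 note means the new mutex brackets the whole submission, not just the list lookup; a sketch with assumed member names:

    mutex_lock(&p->bo_list->bo_list_mutex);    /* taken when CS picks up the list */

    /* ... parse, validate and submit every bo_list entry ... */

    mutex_unlock(&p->bo_list->bo_list_mutex);  /* dropped only once the CS is done */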
|
#
5df79aeb |
| 20-Jul-2022 |
Luben Tuikov <luben.tuikov@amd.com> |
drm/amdgpu: Protect the amdgpu_bo_list list with a mutex v2
Protect the struct amdgpu_bo_list with a mutex. This is used during command submission in order to avoid buffer object corruption as recorded in the link below.
v2 (chk): Keep the mutex locked for the whole CS to avoid using the list from multiple CS threads at the same time.
Suggested-by: Christian König <christian.koenig@amd.com> Cc: Alex Deucher <Alexander.Deucher@amd.com> Cc: Andrey Grodzovsky <Andrey.Grodzovsky@amd.com> Cc: Vitaly Prosyak <Vitaly.Prosyak@amd.com> Link: https://gitlab.freedesktop.org/drm/amd/-/issues/2048 Signed-off-by: Luben Tuikov <luben.tuikov@amd.com> Signed-off-by: Christian König <christian.koenig@amd.com> Tested-by: Luben Tuikov <luben.tuikov@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
|
Revision tags: v5.15.34, v5.15.33, v5.15.32, v5.15.31, v5.17, v5.15.30, v5.15.29, v5.15.28, v5.15.27, v5.15.26, v5.15.25, v5.15.24, v5.15.23, v5.15.22, v5.15.21, v5.15.20, v5.15.19, v5.15.18, v5.15.17, v5.4.173, v5.15.16, v5.15.15, v5.16, v5.15.10, v5.15.9, v5.15.8, v5.15.7, v5.15.6, v5.15.5 |
|
#
047a1b87 |
| 23-Nov-2021 |
Christian König <christian.koenig@amd.com> |
dma-buf & drm/amdgpu: remove dma_resv workaround
Rework the internals of the dma_resv object to allow adding more than one write fence and remember for each fence what purpose it had.
This allows removing the workaround from amdgpu which used a container for this instead.
Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: amd-gfx@lists.freedesktop.org Link: https://patchwork.freedesktop.org/patch/msgid/20220407085946.744568-4-christian.koenig@amd.com
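After the rework each fence is stored with an explicit usage, so amdgpu can attach several write fences directly instead of hiding them in a container; a sketch against the new API (object names and the trimmed error handling are assumptions):

    dma_resv_lock(obj->resv, NULL);
    r = dma_resv_reserve_fences(obj->resv, 1);
    if (!r)
            /* more than one write fence per object is now allowed */
            dma_resv_add_fence(obj->resv, fence, DMA_RESV_USAGE_WRITE);
    dma_resv_unlock(obj->resv);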
|
Revision tags: v5.15.4, v5.15.3, v5.15.2, v5.15.1, v5.15, v5.14.14, v5.14.13, v5.14.12, v5.14.11, v5.14.10, v5.14.9 |
|
#
335aea75 |
| 27-Sep-2021 |
Arnd Bergmann <arnd@arndb.de> |
drm/amdgpu: fix warning for overflow check
The overflow check in amdgpu_bo_list_create() causes a warning with clang-14 on 64-bit architectures, since the limit can never be exceeded.
drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c:74:18: error: result of comparison of constant 256204778801521549 with expression of type 'unsigned int' is always false [-Werror,-Wtautological-constant-out-of-range-compare]
        if (num_entries > (SIZE_MAX - sizeof(struct amdgpu_bo_list))
            ~~~~~~~~~~~ ^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The check remains useful for 32-bit architectures, so just avoid the warning by using size_t as the type for the count.
Fixes: 920990cb080a ("drm/amdgpu: allocate the bo_list array after the list") Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
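The fix is purely a type change: once the count is a size_t the comparison against SIZE_MAX stays a real overflow check on 32-bit and is no longer tautologically false for clang on 64-bit. A condensed sketch (the real amdgpu_bo_list_create() takes more parameters):

    static int check_entries(size_t num_entries)   /* was: unsigned int */
    {
            if (num_entries > (SIZE_MAX - sizeof(struct amdgpu_bo_list))
                                / sizeof(struct amdgpu_bo_list_entry))
                    return -EINVAL;
            return 0;
    }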
|
#
79841980 |
| 27-Sep-2021 |
Arnd Bergmann <arnd@arndb.de> |
drm/amdgpu: fix warning for overflow check
[ Upstream commit 335aea75b0d95518951cad7c4c676e6f1c02c150 ]
The overflow check in amdgpu_bo_list_create() causes a warning with clang-14 on 64-bit architectures, since the limit can never be exceeded.
drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c:74:18: error: result of comparison of constant 256204778801521549 with expression of type 'unsigned int' is always false [-Werror,-Wtautological-constant-out-of-range-compare]
        if (num_entries > (SIZE_MAX - sizeof(struct amdgpu_bo_list))
            ~~~~~~~~~~~ ^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The check remains useful for 32-bit architectures, so just avoid the warning by using size_t as the type for the count.
Fixes: 920990cb080a ("drm/amdgpu: allocate the bo_list array after the list") Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
|
Revision tags: v5.14.8, v5.14.7, v5.14.6, v5.10.67, v5.10.66, v5.14.5, v5.14.4, v5.10.65, v5.14.3, v5.10.64, v5.14.2, v5.10.63, v5.14.1, v5.10.62, v5.14, v5.10.61, v5.10.60, v5.10.53, v5.10.52, v5.10.51, v5.10.50, v5.10.49, v5.13, v5.10.46, v5.10.43 |
|
#
8c505bdc |
| 09-Jun-2021 |
Christian König <christian.koenig@amd.com> |
drm/amdgpu: rework dma_resv handling v3
Drop the workaround and instead implement a better solution.
Basically we are now chaining all submissions using a dma_fence_chain container and adding them as the exclusive fence to the dma_resv object.
This way other drivers can still sync to the single exclusive fence while amdgpu only syncs to fences from different processes.
v3: add the shared fence first before the exclusive one
Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch> Link: https://patchwork.freedesktop.org/patch/msgid/20210614174536.5188-2-christian.koenig@amd.com
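A sketch of the chaining idea (helper names from roughly that era; dma_fence_chain_alloc() came slightly later, and the object names are assumptions): the new submission fence is linked behind the current exclusive fence, so outside observers still see a single exclusive fence while amdgpu can walk the chain:

    struct dma_fence_chain *chain = dma_fence_chain_alloc();

    if (!chain)
            return -ENOMEM;
    /* prev = whatever exclusive fence is installed, fence = this submission */
    dma_fence_chain_init(chain, dma_fence_get(dma_resv_excl_fence(resv)),
                         dma_fence_get(fence), 1);
    dma_resv_add_excl_fence(resv, &chain->base);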
|
Revision tags: v5.10.42, v5.10.41, v5.10.40, v5.10.39, v5.4.119, v5.10.36, v5.10.35, v5.10.34, v5.4.116, v5.10.33, v5.12, v5.10.32, v5.10.31, v5.10.30, v5.10.27, v5.10.26, v5.10.25, v5.10.24, v5.10.23, v5.10.22, v5.10.21, v5.10.20, v5.10.19, v5.4.101, v5.10.18, v5.10.17, v5.11, v5.10.16, v5.10.15, v5.10.14, v5.10, v5.8.17, v5.8.16, v5.8.15, v5.9, v5.8.14, v5.8.13, v5.8.12, v5.8.11, v5.8.10, v5.8.9, v5.8.8, v5.8.7, v5.8.6, v5.4.62, v5.8.5, v5.8.4, v5.4.61, v5.8.3, v5.4.60, v5.8.2, v5.4.59, v5.8.1, v5.4.58, v5.4.57, v5.4.56, v5.8, v5.7.12, v5.4.55, v5.7.11, v5.4.54, v5.7.10, v5.4.53, v5.4.52, v5.7.9, v5.7.8, v5.4.51, v5.4.50, v5.7.7, v5.4.49, v5.7.6, v5.7.5, v5.4.48, v5.7.4, v5.7.3, v5.4.47, v5.4.46, v5.7.2, v5.4.45, v5.7.1, v5.4.44, v5.7, v5.4.43, v5.4.42, v5.4.41, v5.4.40, v5.4.39, v5.4.38, v5.4.37, v5.4.36, v5.4.35, v5.4.34, v5.4.33, v5.4.32, v5.4.31, v5.4.30, v5.4.29, v5.6, v5.4.28, v5.4.27, v5.4.26, v5.4.25, v5.4.24, v5.4.23, v5.4.22, v5.4.21, v5.4.20, v5.4.19, v5.4.18, v5.4.17, v5.4.16, v5.5, v5.4.15, v5.4.14, v5.4.13, v5.4.12, v5.4.11, v5.4.10, v5.4.9, v5.4.8, v5.4.7, v5.4.6, v5.4.5, v5.4.4, v5.4.3, v5.3.15, v5.4.2, v5.4.1, v5.3.14, v5.4, v5.3.13, v5.3.12, v5.3.11, v5.3.10, v5.3.9, v5.3.8, v5.3.7, v5.3.6, v5.3.5, v5.3.4, v5.3.3, v5.3.2, v5.3.1, v5.3, v5.2.14, v5.3-rc8, v5.2.13, v5.2.12, v5.2.11, v5.2.10, v5.2.9, v5.2.8, v5.2.7, v5.2.6, v5.2.5, v5.2.4, v5.2.3, v5.2.2, v5.2.1, v5.2, v5.1.16, v5.1.15, v5.1.14, v5.1.13, v5.1.12, v5.1.11, v5.1.10, v5.1.9, v5.1.8, v5.1.7, v5.1.6, v5.1.5, v5.1.4, v5.1.3, v5.1.2, v5.1.1, v5.0.14, v5.1, v5.0.13, v5.0.12, v5.0.11, v5.0.10, v5.0.9, v5.0.8, v5.0.7, v5.0.6, v5.0.5, v5.0.4, v5.0.3, v4.19.29, v5.0.2, v4.19.28, v5.0.1, v4.19.27, v5.0, v4.19.26, v4.19.25, v4.19.24, v4.19.23, v4.19.22, v4.19.21, v4.19.20, v4.19.19, v4.19.18, v4.19.17, v4.19.16, v4.19.15, v4.19.14, v4.19.13, v4.19.12, v4.19.11, v4.19.10 |
|
#
899fbde1 |
| 13-Dec-2018 |
Philip Yang <Philip.Yang@amd.com> |
drm/amdgpu: replace get_user_pages with HMM mirror helpers
Use HMM helper function hmm_vma_fault() to get physical pages backing userptr and start CPU page table update track of those pages. Then use hmm_vma_range_done() to check if those pages are updated before amdgpu_cs_submit for gfx or before user queues are resumed for kfd.
If userptr pages are updated, for gfx, amdgpu_cs_ioctl will restart from scratch, for kfd, restore worker is rescheduled to retry.
HMM simplifies the concurrent CPU page table update check, so remove the guptasklock, mmu_invalidations and last_set_pages fields from the amdgpu_ttm_tt struct.
HMM does not pin the pages (increase the page ref count), so remove related operations like release_pages(), put_page(), mark_page_dirty().
Signed-off-by: Philip Yang <Philip.Yang@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
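Sketched with the helper names the commit quotes (the exact signatures of that HMM API generation are assumed here): fault the pages up front, build the mapping, then check that nothing changed underneath before committing:

    ret = hmm_vma_fault(&range, true);          /* fault in the userptr pages */
    if (ret)
            return ret;

    /* ... build the userptr mapping from the faulted pages ... */

    if (!hmm_vma_range_done(&range)) {
            /* CPU page tables changed meanwhile: gfx restarts the CS,
             * kfd reschedules its restore worker */
            return -EAGAIN;
    }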
|
#
318c3f4b |
| 28-Mar-2019 |
Alex Deucher <alexander.deucher@amd.com> |
Revert "drm/amdgpu: replace get_user_pages with HMM mirror helpers"
This reverts commit 915d3eecfa23693bac9e54cdacf84fb4efdcc5c4.
This depends on an HMM fix which is not upstream yet.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
|
#
915d3eec |
| 13-Dec-2018 |
Philip Yang <Philip.Yang@amd.com> |
drm/amdgpu: replace get_user_pages with HMM mirror helpers
Use HMM helper function hmm_vma_fault() to get physical pages backing userptr and start CPU page table update track of those pages. Then use hmm_vma_range_done() to check if those pages are updated before amdgpu_cs_submit for gfx or before user queues are resumed for kfd.
If userptr pages are updated, for gfx, amdgpu_cs_ioctl will restart from scratch, for kfd, restore worker is rescheduled to retry.
HMM simplifies the concurrent CPU page table update check, so remove the guptasklock, mmu_invalidations and last_set_pages fields from the amdgpu_ttm_tt struct.
HMM does not pin the pages (increase the page ref count), so remove related operations like release_pages(), put_page(), mark_page_dirty().
Signed-off-by: Philip Yang <Philip.Yang@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
|
Revision tags: v4.19.9, v4.19.8, v4.19.7, v4.19.6, v4.19.5, v4.19.4, v4.18.20, v4.19.3, v4.18.19, v4.19.2, v4.18.18, v4.18.17, v4.19.1, v4.19, v4.18.16, v4.18.15, v4.18.14, v4.18.13, v4.18.12, v4.18.11, v4.18.10, v4.18.9 |
|
#
e83dfe4d |
| 10-Sep-2018 |
Christian König <christian.koenig@amd.com> |
drm/amdgpu: remove amdgpu_bo_list_entry.robj (v2)
We can get that just by casting tv.bo.
v2: squash in kfd fix (Alex)
Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Chunming Zhou <david1.zhou@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
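The cast in question, roughly: the ttm_buffer_object in the validation entry is embedded in the amdgpu_bo, so container_of() recovers it and the separate robj pointer becomes redundant (the entry name is assumed):

    struct amdgpu_bo *bo = container_of(e->tv.bo, struct amdgpu_bo, tbo);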
|
Revision tags: v4.18.7, v4.18.6, v4.18.5, v4.17.18, v4.18.4, v4.18.3, v4.17.17, v4.18.2, v4.17.16, v4.17.15, v4.18.1, v4.18, v4.17.14, v4.17.13, v4.17.12 |
|
#
920990cb |
| 30-Jul-2018 |
Christian König <christian.koenig@amd.com> |
drm/amdgpu: allocate the bo_list array after the list
This avoids multiple allocations for the head and the array.
Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Chunming Zhou <david1.zhou@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
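A sketch of the new layout (sizes and member names are illustrative): the entry array sits directly behind the list header, so one kvmalloc() covers both:

    size_t size = sizeof(struct amdgpu_bo_list) +
                  num_entries * sizeof(struct amdgpu_bo_list_entry);
    struct amdgpu_bo_list *list = kvmalloc(size, GFP_KERNEL);

    if (!list)
            return -ENOMEM;
    /* entries live at (void *)list + sizeof(*list) */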
|
#
39f7f69a |
| 30-Jul-2018 |
Christian König <christian.koenig@amd.com> |
drm/amdgpu: add bo_list iterators
Add helpers to iterate over all entries in a bo_list.
Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Chunming Zhou <david1.zhou@amd.com> Acked-by: Huang Rui <ray.huang@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
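Roughly the kind of helper this adds (the macro body below is a simplified assumption; the real one goes through an array-entry accessor rather than a named member):

    #define amdgpu_bo_list_for_each_entry(e, list)                        \
            for ((e) = &(list)->entries[0];                               \
                 (e) != &(list)->entries[(list)->num_entries]; ++(e))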
|
#
a0f20845 |
| 30-Jul-2018 |
Christian König <christian.koenig@amd.com> |
drm/amdgpu: nuke amdgpu_bo_list_free
The RCU grace period is harmless and avoiding it is not worth the effort of doubling the implementation.
Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Chunming Zhou <david1.zhou@amd.com> Reviewed-by: Huang Rui <ray.huang@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
|
#
81c6dabc |
| 30-Jul-2018 |
Christian König <christian.koenig@amd.com> |
drm/amdgpu: always recreate bo_list
The bo_list handle is allocated by OP_CREATE, so in OP_UPDATE here we just re-create the bo_list object and replace the handle. This way we don't need locking to protect the bo_list because it's always re-created when changed.
Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Chunming Zhou <david1.zhou@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
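In practice OP_UPDATE turns into "build a fresh list, swap it into the existing handle"; a sketch around idr_replace() with the surrounding bookkeeping assumed:

    old = idr_replace(&fpriv->bo_list_handles, new_list, handle);
    if (IS_ERR(old))
            return PTR_ERR(old);
    amdgpu_bo_list_put(old);    /* drop the reference the handle used to hold */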
|
#
4a8c21a1 |
| 30-Jul-2018 |
Christian König <christian.koenig@amd.com> |
drm/amdgpu: move bo_list defines to amdgpu_bo_list.h
Further demangle amdgpu.h
Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Chunming Zhou <david1.zhou@amd.com> Acked-by: Huang Rui <ray.huang@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
|