Searched hist:"295b6c02" (Results 1 – 3 of 3) sorted by relevance

/openbmc/linux/drivers/gpu/drm/etnaviv/
etnaviv_buffer.c  295b6c02  Fri Jun 16 06:02:57 CDT 2023  Lucas Stach <l.stach@pengutronix.de>  drm/etnaviv: slow down FE idle polling

Currently the FE is spinning way too fast when polling for new work in
the FE idle loop. As each poll fetches 16 bytes from memory, a GPU running
at 1 GHz with the current setting of 200 wait cycles between fetches causes
80 MB/s of memory traffic just to check for new work when the GPU is
otherwise idle, which is more FE traffic than in some GPU-loaded cases.
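
For reference, the 80 MB/s figure follows directly from the quoted clock
rate, wait-cycle count and fetch size. A quick standalone check of the
arithmetic (illustrative only, not etnaviv driver code):

#include <stdio.h>

int main(void)
{
	const unsigned long long core_hz = 1000000000ULL;   /* 1 GHz GPU core clock */
	const unsigned long long wait_cycles = 200ULL;      /* current WAIT setting */
	const unsigned long long fetch_bytes = 16ULL;       /* bytes fetched per FE poll */

	unsigned long long polls_per_sec = core_hz / wait_cycles;       /* 5,000,000 */
	unsigned long long bytes_per_sec = polls_per_sec * fetch_bytes; /* 80,000,000 */

	printf("idle FE traffic: %llu MB/s\n", bytes_per_sec / 1000000ULL);
	return 0;
}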

Significantly increase the number of wait cycles to slow down the poll
interval to ~30µs, limiting the FE idle memory traffic to 512 KB/s while
providing a maximum latency that should not hurt most use cases. The FE
WAIT command seems to have some unknown discrete steps in the wait cycle
count, so we may over- or undershoot the target a bit, but that should be
harmless.
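
For reference (not part of the original message), the two targets are
consistent with each other: fetching 16 bytes every 31.25 µs works out to
16 B / 31.25 µs = 512,000 B/s = 512 KB/s, and reaching an interval in that
range on a 1 GHz core takes on the order of 1 GHz × 31.25 µs ≈ 31250 wait
cycles rather than 200, hence the large increase.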

If the GPU core base frequency is unknown, keep the 200 wait cycles as
a sane default.
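
A minimal sketch of how a frequency-dependent wait count with this
fallback could look; the helper name, macros and the simple
rate * interval formula below are assumptions for illustration, not the
actual etnaviv change:

#include <stdio.h>

#define FE_WAIT_DEFAULT_CYCLES	200ULL		/* previous fixed setting */
#define FE_TARGET_POLL_NS	30000ULL	/* ~30µs between FE fetches */

/* Derive a WAIT cycle count for the target poll interval, falling back
 * to the old default when the core base frequency is not known.
 */
static unsigned long long fe_wait_cycles(unsigned long long base_rate_hz)
{
	unsigned long long cycles;

	if (!base_rate_hz)
		return FE_WAIT_DEFAULT_CYCLES;

	cycles = base_rate_hz * FE_TARGET_POLL_NS / 1000000000ULL;

	return cycles ? cycles : FE_WAIT_DEFAULT_CYCLES;
}

int main(void)
{
	printf("1 GHz core -> %llu wait cycles\n", fe_wait_cycles(1000000000ULL));
	printf("unknown    -> %llu wait cycles\n", fe_wait_cycles(0));
	return 0;
}

On top of such a calculation, the result would presumably still need to be
clamped to whatever WAIT cycle values the hardware actually supports, given
the discrete steps mentioned above.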

Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Sui Jingfeng <suijingfeng@loongson.cn>
Tested-by: Sui Jingfeng <suijingfeng@loongson.cn>
Reviewed-by: Christian Gmeiner <cgmeiner@igalia.com>
etnaviv_gpu.h  295b6c02  Fri Jun 16 06:02:57 CDT 2023  Lucas Stach <l.stach@pengutronix.de>  drm/etnaviv: slow down FE idle polling

etnaviv_gpu.c  295b6c02  Fri Jun 16 06:02:57 CDT 2023  Lucas Stach <l.stach@pengutronix.de>  drm/etnaviv: slow down FE idle polling
