0b483871 | 14-Jul-2023 |
Rob Herring <robh@kernel.org> |
memory: Explicitly include correct DT includes
The DT of_device.h and of_platform.h date back to the separate of_platform_bus_type before it was merged into the regular platform bus. As part of that merge prepping Arm DT support 13 years ago, they "temporarily" include each other. They also include platform_device.h and of.h. As a result, there's a pretty much random mix of those include files used throughout the tree. In order to detangle these headers and replace the implicit includes with struct declarations, users need to explicitly include the correct includes.
Signed-off-by: Rob Herring <robh@kernel.org> Link: https://lore.kernel.org/r/20230714174717.4059518-1-robh@kernel.org Signed-off-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
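For illustration, the kind of change this series makes in each driver: include only what is actually used instead of relying on of_device.h/of_platform.h to pull headers in implicitly (the exact header set varies per driver):

/* Before: of_device.h implicitly dragged in of.h and platform_device.h */
#include <linux/of_device.h>

/* After: include only the headers the driver actually needs */
#include <linux/of.h>			/* of_* helpers, struct device_node */
#include <linux/platform_device.h>	/* struct platform_device */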
6e1547f9 | 14-Jul-2023 |
Thierry Reding <treding@nvidia.com> |
memory: tegra: Prefer octal over symbolic permissions
checkpatch recommends using octal permissions instead of symbolic permissions. Switch the debugfs files to use the former to silence these warnings.
Signed-off-by: Thierry Reding <treding@nvidia.com> Link: https://lore.kernel.org/r/20230714150116.2823766-1-thierry.reding@gmail.com Signed-off-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
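A minimal sketch of the pattern, assuming a hypothetical debugfs u32 attribute (not the driver's actual files):

#include <linux/debugfs.h>

static void example_create_debugfs(struct dentry *root, u32 *rate)
{
	/* Before: debugfs_create_u32("rate", S_IRUGO, root, rate); */
	debugfs_create_u32("rate", 0444, root, rate);	/* read-only for all */
}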
0a7e4578 | 21-Jun-2023 |
Sumit Gupta <sumitg@nvidia.com> |
memory: tegra: add check if MRQ_EMC_DVFS_LATENCY is supported
Add a check to ensure that "MRQ_EMC_DVFS_LATENCY" is supported by the BPMP-FW before making the MRQ request. Currently, if the BPMP-FW doesn't support this MRQ, then "tegra186_emc_probe" fails and, as a result, the Memory Interconnect initialization also doesn't happen. The Memory Interconnect does not depend on this MRQ and can initialize even on a platform where the MRQ is not supported. The check ensures that the MRQ is issued only when the BPMP-FW supports it and that the Interconnect initializes independently of this MRQ. Also, move the code to a new function for better readability.
Signed-off-by: Sumit Gupta <sumitg@nvidia.com> Acked-by: Thierry Reding <treding@nvidia.com> Link: https://lore.kernel.org/r/20230621134400.23070-4-sumitg@nvidia.com Signed-off-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
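A minimal sketch of the guard described above, assuming the driver already has a BPMP handle; the function name is illustrative and the real MRQ plumbing is omitted:

#include <soc/tegra/bpmp.h>
#include <soc/tegra/bpmp-abi.h>

static int example_query_emc_dvfs_latency(struct tegra_bpmp *bpmp)
{
	/* Skip the query (and keep probing) when the firmware lacks the MRQ */
	if (!tegra_bpmp_mrq_is_supported(bpmp, MRQ_EMC_DVFS_LATENCY))
		return 0;

	/* ... build and send the MRQ_EMC_DVFS_LATENCY request here ... */
	return 0;
}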
b18e5259 | 21-Jun-2023 |
Sumit Gupta <sumitg@nvidia.com> |
memory: tegra: Add clients used by DRM in Tegra234
Add entries for the VIC, NVDEC, NVENC and NVJPG memory controller clients to the 'tegra234_mc_clients' table.
Signed-off-by: Johnny Liu <johnliu@nvidia.com> Signed-off-by: Sumit Gupta <sumitg@nvidia.com> Acked-by: Thierry Reding <treding@nvidia.com> Link: https://lore.kernel.org/r/20230621134400.23070-3-sumitg@nvidia.com Signed-off-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
6d0c4aa5 | 21-Jun-2023 |
Sumit Gupta <sumitg@nvidia.com> |
memory: tegra: sort tegra234_mc_clients table as per register offsets
Sort the MC client entries in the "tegra234_mc_clients" table by their override and security register offsets. This will help to avoid creating duplicate entries.
Signed-off-by: Sumit Gupta <sumitg@nvidia.com> Acked-by: Thierry Reding <treding@nvidia.com> Link: https://lore.kernel.org/r/20230621134400.23070-2-sumitg@nvidia.com Signed-off-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
faafd6ca | 21-Jun-2023 |
Sumit Gupta <sumitg@nvidia.com> |
memory: tegra: make icc_set_bw return zero if BWMGR not supported
Return zero from icc_set_bw() to the MC client driver if MRQ_BWMGR_INT is not supported by the BPMP-FW. Currently, 'EINVAL' is returned, which causes error messages in client drivers even when the platform doesn't support scaling.
Fixes: 9365bf006f53 ("PCI: tegra194: Add interconnect support in Tegra234") Signed-off-by: Sumit Gupta <sumitg@nvidia.com> Acked-by: Thierry Reding <treding@nvidia.com> Link: https://lore.kernel.org/r/20230621134400.23070-5-sumitg@nvidia.com Signed-off-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
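A hedged sketch of the fix's shape; the structure and flag names are assumptions rather than the exact tegra234 driver code:

#include <linux/interconnect-provider.h>

struct example_mc {
	bool bwmgr_mrq_supported;	/* set at probe via tegra_bpmp_mrq_is_supported() */
};

static int example_icc_set(struct icc_node *src, struct icc_node *dst)
{
	struct example_mc *mc = src->provider->data;

	/* No DRAM scaling support in firmware: report success silently */
	if (!mc->bwmgr_mrq_supported)
		return 0;

	/* ... otherwise translate the request into an MRQ_BWMGR_INT call ... */
	return 0;
}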
e852af72 | 11-May-2023 |
Sumit Gupta <sumitg@nvidia.com> |
memory: tegra: Make CPU cluster BW request a multiple of MC channels
Make the CPU cluster's bandwidth (BW) request a multiple of the number of MC channels. CPU OPP tables carry BW info per MC channel, but the actual BW depends on the number of MC channels, which can change with the boot config. Get the number of MC channels that are actually enabled in the current boot configuration and multiply the BW request from a CPU cluster by the number of enabled channels. This is not required for other MC clients.
Signed-off-by: Sumit Gupta <sumitg@nvidia.com> Acked-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org> Signed-off-by: Thierry Reding <treding@nvidia.com>
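The scaling itself reduces to the sketch below; the helper name and types are illustrative:

#include <linux/types.h>

/*
 * CPU OPP tables express bandwidth per MC channel, so the cluster's request
 * is scaled by the number of channels enabled in the current boot config.
 * Other MC clients pass their bandwidth through unchanged.
 */
static u32 example_scale_cpu_cluster_bw(u32 bw_per_channel, unsigned int num_channels)
{
	return bw_per_channel * num_channels;
}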
80b19e09 | 11-May-2023 |
Sumit Gupta <sumitg@nvidia.com> |
memory: tegra: Add software memory clients in Tegra234
Add dummy memory controller clients to represent CPU clusters. They will be used by the CPUFREQ driver to scale DRAM FREQ with the CPU FREQ.
Signed-off-by: Sumit Gupta <sumitg@nvidia.com> Acked-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org> Signed-off-by: Thierry Reding <treding@nvidia.com>
aecc83f1 | 11-May-2023 |
Sumit Gupta <sumitg@nvidia.com> |
memory: tegra: Add memory clients for Tegra234
Add a few isochronous (ISO) and non-ISO memory clients. ISO clients have a guaranteed bandwidth requirement. The PCIe clients added to the memory client table represent each controller in Tegra234.
Signed-off-by: Sumit Gupta <sumitg@nvidia.com> Acked-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org> Signed-off-by: Thierry Reding <treding@nvidia.com>
be4c5c6e | 22-Mar-2023 |
Diogo Ivo <diogo.ivo@tecnico.ulisboa.pt> |
memory: tegra: read values from correct device
When reading MR18 for Dev1, the code was incorrectly reading the value corresponding to Dev0, so fix this by adjusting the index according to the Tegra X1 TRM.
Signed-off-by: Diogo Ivo <diogo.ivo@tecnico.ulisboa.pt> Acked-by: Thierry Reding <treding@nvidia.com> Link: https://lore.kernel.org/r/20230322234050.47332-1-diogo.ivo@tecnico.ulisboa.pt Signed-off-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
2ae66ecc | 19-Mar-2023 |
Diogo Ivo <diogo.ivo@tecnico.ulisboa.pt> |
memory: tegra: remove redundant variable initialization
tegra210_emc_table_device_init() sets count = 0 twice, so remove the second instance.
Signed-off-by: Diogo Ivo <diogo.ivo@tecnico.ulisboa.pt> Link: https://lore.kernel.org/r/20230319171303.120777-1-diogo.ivo@tecnico.ulisboa.pt Signed-off-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
9db481c9 | 06-Mar-2023 |
Johan Hovold <johan+linaro@kernel.org> |
memory: tegra30-emc: fix interconnect registration race
The current interconnect provider registration interface is inherently racy as nodes are not added until after the provider has been added. This can specifically cause racing DT lookups to fail.
Switch to using the new API where the provider is not registered until after it has been fully initialised.
Fixes: d5ef16ba5fbe ("memory: tegra20: Support interconnect framework") Cc: stable@vger.kernel.org # 5.11 Cc: Dmitry Osipenko <digetx@gmail.com> Acked-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org> Signed-off-by: Johan Hovold <johan+linaro@kernel.org> Link: https://lore.kernel.org/r/20230306075651.2449-21-johan+linaro@kernel.org Signed-off-by: Georgi Djakov <djakov@kernel.org>
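A minimal sketch of the new registration order, which applies equally to the tegra20 and tegra124 fixes below; node IDs and error handling are simplified:

#include <linux/err.h>
#include <linux/interconnect-provider.h>

static int example_emc_icc_register(struct icc_provider *provider)
{
	struct icc_node *node;

	/* Initialise the provider first, without making it visible yet */
	icc_provider_init(provider);

	node = icc_node_create(1);
	if (IS_ERR(node))
		return PTR_ERR(node);

	icc_node_add(node, provider);

	/* Only now expose the fully populated provider to DT lookups */
	return icc_provider_register(provider);
}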
c5587f61 | 06-Mar-2023 |
Johan Hovold <johan+linaro@kernel.org> |
memory: tegra20-emc: fix interconnect registration race
The current interconnect provider registration interface is inherently racy as nodes are not added until after the provider has been added. This can specifically cause racing DT lookups to fail.
Switch to using the new API where the provider is not registered until after it has been fully initialised.
Fixes: d5ef16ba5fbe ("memory: tegra20: Support interconnect framework") Cc: stable@vger.kernel.org # 5.11 Cc: Dmitry Osipenko <digetx@gmail.com> Acked-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org> Signed-off-by: Johan Hovold <johan+linaro@kernel.org> Link: https://lore.kernel.org/r/20230306075651.2449-20-johan+linaro@kernel.org Signed-off-by: Georgi Djakov <djakov@kernel.org>
abd9f1b4 | 06-Mar-2023 |
Johan Hovold <johan+linaro@kernel.org> |
memory: tegra124-emc: fix interconnect registration race
The current interconnect provider registration interface is inherently racy as nodes are not added until after the provider has been added. This can specifically cause racing DT lookups to fail.
Switch to using the new API where the provider is not registered until after it has been fully initialised.
Fixes: 380def2d4cf2 ("memory: tegra124: Support interconnect framework") Cc: stable@vger.kernel.org # 5.12 Cc: Dmitry Osipenko <digetx@gmail.com> Acked-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org> Signed-off-by: Johan Hovold <johan+linaro@kernel.org> Link: https://lore.kernel.org/r/20230306075651.2449-19-johan+linaro@kernel.org Signed-off-by: Georgi Djakov <djakov@kernel.org>