.. SPDX-License-Identifier: GPL-2.0

=====================================
Generic System Interconnect Subsystem
=====================================

Introduction
------------

This framework is designed to provide a standard kernel interface to control
the settings of the interconnects on an SoC. These settings can be throughput,
latency and priority between multiple interconnected devices or functional
blocks. This can be controlled dynamically in order to save power or provide
maximum performance.

The interconnect bus is hardware with configurable parameters, which can be
set on a data path according to the requests received from various drivers.
Examples of interconnect buses are the interconnects between various
components or functional blocks in chipsets. There can be multiple
interconnects on an SoC, and these can be multi-tiered.

Below is a simplified diagram of a real-world SoC interconnect bus topology.

::

 +----------------+    +----------------+
 | HW Accelerator |--->|      M NoC     |<---------------+
 +----------------+    +----------------+                |
                         |      |                    +------------+
  +-----+  +-------------+      V       +------+     |            |
  | DDR |  |                +--------+  | PCIe |     |            |
  +-----+  |                | Slaves |  +------+     |            |
    ^ ^    |                +--------+     |         |   C NoC    |
    | |    V                               V         |            |
 +------------------+   +------------------------+   |            |   +-----+
 |                  |-->|                        |-->|            |-->| CPU |
 |                  |-->|                        |<--|            |   +-----+
 |     Mem NoC      |   |         S NoC          |   +------------+
 |                  |<--|                        |---------+    |
 |                  |<--|                        |<------+ |    |   +--------+
 +------------------+   +------------------------+       | |    +-->| Slaves |
   ^  ^    ^    ^          ^                             | |        +--------+
   |  |    |    |          |                             | V
 +------+  |  +-----+   +-----+  +---------+   +----------------+   +--------+
 | CPUs |  |  | GPU |   | DSP |  | Masters |-->|       P NoC    |-->| Slaves |
 +------+  |  +-----+   +-----+  +---------+   +----------------+   +--------+
           |
       +-------+
       | Modem |
       +-------+

Terminology
-----------

Interconnect provider is the software definition of the interconnect hardware.
The interconnect providers on the above diagram are M NoC, S NoC, C NoC, P NoC
and Mem NoC.

Interconnect node is the software definition of the interconnect hardware
port. Each interconnect provider consists of multiple interconnect nodes,
which are connected to other SoC components including other interconnect
providers. The point on the diagram where the CPUs connect to the memory is
called an interconnect node, which belongs to the Mem NoC interconnect provider.

Interconnect endpoints are the first or the last element of the path. Every
endpoint is a node, but not every node is an endpoint.

Interconnect path is everything between two endpoints, including all the nodes
that have to be traversed to reach from a source to a destination node. It may
include multiple master-slave pairs across several interconnect providers.

Interconnect consumers are the entities which make use of the data paths
exposed by the providers. The consumers send requests to providers requesting
various throughput, latency and priority. Usually the consumers are device
drivers that send requests based on their needs. An example of a consumer is
a video decoder that supports various formats and image sizes.

Interconnect providers
----------------------

Interconnect provider is an entity that implements methods to initialize and
configure interconnect bus hardware. The interconnect provider drivers should
be registered with the interconnect provider core.

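The following is a minimal sketch of how a provider driver might register a
small topology with the framework. The ``foo`` names, the node IDs and the
``->set()`` callback body are hypothetical; the registration calls themselves
(``icc_provider_init()``, ``icc_node_create()``, ``icc_node_add()``,
``icc_link_create()`` and ``icc_provider_register()``) are the API declared
in ``include/linux/interconnect-provider.h``::

        #include <linux/interconnect-provider.h>
        #include <linux/platform_device.h>

        /* Hypothetical node IDs for this example. */
        #define FOO_MASTER      1
        #define FOO_SLAVE       2

        static int foo_set(struct icc_node *src, struct icc_node *dst)
        {
                /*
                 * Program the hardware with the aggregated bandwidth
                 * values stored in src->avg_bw and src->peak_bw.
                 */
                return 0;
        }

        static int foo_probe(struct platform_device *pdev)
        {
                struct icc_provider *provider;
                struct icc_node *node;
                int ret;

                provider = devm_kzalloc(&pdev->dev, sizeof(*provider),
                                        GFP_KERNEL);
                if (!provider)
                        return -ENOMEM;

                provider->dev = &pdev->dev;
                provider->set = foo_set;
                provider->aggregate = icc_std_aggregate;

                icc_provider_init(provider);

                /* Create the topology: one master linked to one slave. */
                node = icc_node_create(FOO_MASTER);
                if (IS_ERR(node)) {
                        ret = PTR_ERR(node);
                        goto err;
                }
                node->name = "foo_master";
                icc_node_add(node, provider);
                icc_link_create(node, FOO_SLAVE);

                node = icc_node_create(FOO_SLAVE);
                if (IS_ERR(node)) {
                        ret = PTR_ERR(node);
                        goto err;
                }
                node->name = "foo_slave";
                icc_node_add(node, provider);

                ret = icc_provider_register(provider);
                if (ret)
                        goto err;

                return 0;
        err:
                icc_nodes_remove(provider);
                return ret;
        }

A real driver would usually also set ``provider->xlate`` (for example to
``of_icc_xlate_onecell`` together with a ``struct icc_onecell_data``) so that
consumer devices can look up nodes through their devicetree ``interconnects``
property.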

.. kernel-doc:: include/linux/interconnect-provider.h

Interconnect consumers
----------------------

Interconnect consumers are the clients which use the interconnect APIs to
get paths between endpoints and set their bandwidth/latency/QoS requirements
for these interconnect paths. These interfaces are not currently
documented.
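
As a rough illustration, below is a minimal consumer sketch using the public
API from ``include/linux/interconnect.h``. The ``"dma-mem"`` path name and
the bandwidth values are assumptions for the example; in practice the name
comes from the consumer's devicetree ``interconnect-names`` property and the
values from the driver's actual use case::

        #include <linux/interconnect.h>

        static int foo_start_transfer(struct device *dev)
        {
                struct icc_path *path;
                int ret;

                /* Look up the path by name from "interconnect-names". */
                path = of_icc_get(dev, "dma-mem");
                if (IS_ERR(path))
                        return PTR_ERR(path);

                /*
                 * Request 10 MB/s average and 100 MB/s peak bandwidth.
                 * The framework aggregates this with all other requests
                 * and reconfigures every node along the path.
                 */
                ret = icc_set_bw(path, MBps_to_icc(10), MBps_to_icc(100));
                if (ret) {
                        icc_put(path);
                        return ret;
                }

                /* ... perform the bandwidth-hungry work ... */

                /* Drop the vote and release the path when done. */
                icc_set_bw(path, 0, 0);
                icc_put(path);

                return 0;
        }

Paths can also be toggled with ``icc_enable()`` and ``icc_disable()``, which
keep the previously requested bandwidth values so that a path can be restored
without another ``icc_set_bw()`` call.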

Interconnect debugfs interfaces
-------------------------------

Like several other subsystems, interconnect will create some files for
debugging and introspection. Files in debugfs are not considered ABI, so
application software shouldn't rely on format details, which may change
between kernel versions.

``/sys/kernel/debug/interconnect/interconnect_summary``:

Show all interconnect nodes in the system with their aggregated bandwidth
requests. The individual bandwidth requests from each device are shown
indented under the corresponding node.

``/sys/kernel/debug/interconnect/interconnect_graph``:

Show the interconnect graph in the graphviz dot format. It shows all
interconnect nodes and links in the system and groups together nodes from the
same provider as subgraphs. The format is human-readable and can also be piped
through dot to generate diagrams in many graphical formats::

        $ cat /sys/kernel/debug/interconnect/interconnect_graph | \
                dot -Tsvg > interconnect_graph.svg

The ``test-client`` directory provides interfaces for issuing BW requests to
any arbitrary path. Note that for safety reasons, this feature is disabled by
default without a Kconfig to enable it. Enabling it requires code changes to
``#define INTERCONNECT_ALLOW_WRITE_DEBUGFS``. Example usage::

        cd /sys/kernel/debug/interconnect/test-client/

        # Configure node endpoints for the path from CPU to DDR on
        # qcom/sm8550.
        echo chm_apps > src_node
        echo ebi > dst_node

        # Get the path between src_node and dst_node. This is only
        # necessary after updating the node endpoints.
        echo 1 > get

        # Set desired BW to 1GBps avg and 2GBps peak (values are in KBps).
        echo 1000000 > avg_bw
        echo 2000000 > peak_bw

        # Vote for avg_bw and peak_bw on the latest path from "get".
        # Voting for multiple paths is possible by repeating this
        # process for different node endpoints.
        echo 1 > commit