xref: /openbmc/docs/designs/mctp/mctp-userspace.md (revision f4febd00)
# OpenBMC platform communication channel: MCTP & PLDM in userspace

Author: Jeremy Kerr <jk@ozlabs.org> <jk>

Please refer to the [MCTP Overview](mctp.md) document for a general description
of the MCTP design, its background and requirements.

This document describes a userspace implementation of MCTP infrastructure,
allowing a straightforward mechanism of supporting MCTP messaging within an
OpenBMC system.

## Proposed Design

The MCTP core specification just provides the packetisation, routing and
addressing mechanisms. The actual transmit/receive of those packets is up to
the hardware binding of the MCTP transport.

For OpenBMC, we would introduce an MCTP daemon, which implements the transport
over a configurable hardware channel (eg, serial UART, I2C or PCIe), and
provides a socket-based interface for other processes to send and receive
complete MCTP messages. This daemon is responsible for the packetisation and
routing of MCTP messages from external endpoints, and for forwarding these
messages to and from individual handler applications. This includes handling
local MCTP-stack configuration, like local EID assignments.

This daemon has a few components:

1.  the core MCTP stack

2.  one or more binding implementations (eg, MCTP-over-serial), which interact
    with the hardware channel(s).

3.  an interface to handler applications over a unix-domain socket.

The proposed implementation here is to produce an MCTP "library" which provides
the packetisation and routing functions, between:

- an "upper" messaging transmit/receive interface, for tx/rx of a full message
  to a specific endpoint (ie, (1) above)

- a "lower" hardware binding for transmit/receive of individual packets,
  providing a method for the core to tx/rx each packet to hardware, and
  defining the parameters of the common packetisation code (ie, (2) above).

The lower interface would be plugged in to one of a number of hardware-specific
binding implementations. Most of these would be included in the library source
tree, but others can be plugged-in too, perhaps where the physical layer
implementation does not make sense to include in the platform-agnostic library.

The reason for a library is to allow the same MCTP implementation to be used in
both OpenBMC and host firmware; the library should be bidirectional. To allow
this, the library would be written in portable C (structured in a way that can
be compiled as "extern C" in C++ codebases), and be able to be configured to
suit those runtime environments (for example, POSIX IO may not be available on
all platforms; we should be able to compile the library to suit). The licence
for the library should also allow this re-use; a dual Apache & GPLv2+ licence
may be best.

These "lower" binding implementations may have very different methods of
transferring packets to the physical layer. For example, a serial binding
implementation running in a Linux environment may be implemented through
read()/write() syscalls to a PTY device. An I2C binding for use in low-level
host firmware environments may interact directly with hardware registers to
perform packet transfers.

The application-specific handlers implement the actual functionality provided
over the MCTP channel, and connect to the central daemon over a UNIX domain
socket. Each of these would register with the MCTP daemon to receive MCTP
messages of a certain type, and would transmit MCTP messages of that same type.

The daemon's sockets to these handlers are configured for non-blocking IO, to
allow the daemon to be decoupled from any blocking behaviour of handlers. The
daemon would use a message queue to enable message reception/transmission to a
blocked handler, but this would be of a limited size. Handlers whose sockets
exceed this queue would be disconnected from the daemon.

One design intention of the multiplexer daemon is to allow a future kernel-based
MCTP implementation without requiring major structural changes to handler
applications. The socket-based interface facilitates this, as the unix-domain
socket interface could be fairly easily swapped out with a new kernel-based
socket type.

MCTP is intended to be an optional component of OpenBMC. Platforms using OpenBMC
are free to adopt it as they see fit.

### Demultiplexer daemon interface

MCTP handlers (ie, clients of the demultiplexer) connect using a unix-domain
socket, at the abstract socket address:

```
\0mctp-demux
```

The socket type used should be `SOCK_SEQPACKET`.

Once connected, the client sends a single-byte message, indicating which type of
MCTP messages should be forwarded to the client. Types must be greater than
zero.

Subsequent messages sent over the socket are MCTP messages sent/received by the
demultiplexer that match the specified MCTP message type. Clients should use the
send/recv syscalls to interact with the socket.

Each message has a fixed small header:

```
uint8_t eid
```

For messages coming from the demux daemon, this indicates the source EID of the
incoming MCTP message. For messages going to the demux daemon, this indicates
the destination EID.

The rest of the message data is the complete MCTP message, including the MCTP
message type field.

The daemon does not provide a facility for clients to specify or retrieve values
for the tag field in individual MCTP packets.

## Alternatives Considered

In terms of an MCTP daemon structure, an alternative is to have the MCTP
implementation contained within a single process, using the libmctp API
directly for passing messages from the core code to application-level handlers.
The drawback of this approach is that this single process needs to implement
all possible functionality that is available over MCTP, which may be quite a
disjoint set. This would likely lead to unnecessary restrictions on the
implementation of those application-level handlers (programming language,
frameworks used, etc). Also, this single-process approach would likely need
more significant modifications if/when MCTP protocol support is moved to the
kernel.

The interface between the demultiplexer daemon and clients is currently defined
as a socket-based interface. However, an alternative here would be to pass MCTP
messages over dbus instead. The reason for the choice of sockets rather than
dbus is that the former allows a direct transition to a kernel-based socket API
when suitable.

## Testing

For the core MCTP library, we are able to run tests in complete isolation (I
have already been able to run a prototype MCTP stack through the afl fuzzer) to
ensure that the core transport protocol works.

For MCTP hardware bindings, we would develop channel-specific tests that would
be run in CI on both host and BMC.

For the OpenBMC MCTP daemon implementation, testing models would depend on the
structure we adopt in the design section.
150