# OpenBMC platform communication channel: MCTP & PLDM

Author: Jeremy Kerr <jk@ozlabs.org> <jk>

## Problem Description

Currently, we have a few different methods of communication between
host and BMC. These are primarily IPMI-based, but also include a few
hardware-specific side channels, like hiomap. On OpenPOWER hardware at
least, we have started to hit some of the limitations of IPMI (for
example, we need more than 255 sensors), as well as of the hardware
channels that IPMI typically uses.

This design aims to use the Management Component Transport Protocol
(MCTP) to provide a common transport layer over the multiple channels
that OpenBMC platforms provide. Then, on top of MCTP, we have the
opportunity to move to newer host/BMC messaging protocols to overcome
some of the limitations we've encountered with IPMI.

## Background and References

Separating the "transport" and "messaging protocol" parts of the
current stack allows us to design these parts separately. Currently,
IPMI defines both of these; we have BT and KCS (both defined as part of
the IPMI 2.0 standard) as the transports, and IPMI itself as the
messaging protocol.

Some efforts at improving the hardware transport mechanism of IPMI have
been attempted, but not in a cross-implementation manner so far, and
they do not address the limitations of the IPMI data model.

MCTP defines a standard transport protocol, plus a number of separate
physical layer bindings for the actual transport of MCTP packets. These
are defined by the DMTF's Platform Management Working Group; the
standards are available at:

  https://www.dmtf.org/standards/pmci

The following diagram shows how these standards map to the areas of
functionality that we may want to implement for OpenBMC. The DSP
numbers provided are references to DMTF standard documents.

![](mctp-standards.svg)

One of the key concepts here is the separation of the transport
protocol from the physical layer bindings; this means that an MCTP
"stack" may be using an I2C, PCI, serial or custom hardware channel,
without the higher layers of that stack needing to be aware of the
hardware implementation. These higher levels only need to be aware that
they are communicating with a certain entity, identified by an Endpoint
ID (MCTP EID). These entities may be any element of the platform that
communicates over MCTP - for example, the host device, the BMC, or any
other system peripheral - static or hot-pluggable.

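To illustrate that separation, a transport-agnostic stack might expose
each physical channel through a common binding interface, with the
upper layers addressing peers only by EID. This is a hypothetical C
sketch; the `mctp_binding` and `mctp_message_tx` names here are
illustrative, not a defined API:

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative only: each physical-layer binding (I2C, PCI, serial,
 * ...) provides the same small interface, so the core stack stays
 * unaware of the underlying hardware. */
struct mctp_binding {
	const char *name;
	size_t pkt_size; /* per-packet payload limit for this medium */
	int (*tx)(struct mctp_binding *binding, const void *pkt,
		  size_t len);
};

/* Upper layers address peers purely by endpoint ID; the core routes
 * each message to whichever binding reaches that EID. */
int mctp_message_tx(uint8_t dest_eid, const void *msg, size_t len);
```

The important property is that `tx` is the only hardware-specific
operation; everything above it deals in EIDs and messages.
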
This document is focused on the "transport" part of the platform
design. While this does enable new messaging protocols (mainly PLDM),
those components are not covered in detail here; we will propose those
parts in separate design efforts, such as the PLDM design in
[pldm-stack.md](pldm-stack.md).

Throughout this design, the references to MCTP "messages" and "packets"
are intentional, to match the definitions in the MCTP standard. MCTP
messages are the higher-level data transferred between MCTP endpoints,
while packets are typically smaller, and are what is sent over the
hardware. Messages that are larger than the hardware Maximum
Transmission Unit (MTU) are split into individual packets by the
transmit implementation, and reassembled at the receive implementation.

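As a worked example of that split, a transmit path might fragment a
message into MTU-sized packets, marking the first with SOM ("start of
message") and the last with EOM ("end of message"). This is a minimal
sketch: the flag and sequence-number positions follow DSP0236, but for
brevity each packet carries only the final byte of the real four-byte
transport header, and `phys_tx` is a hypothetical stand-in for a
binding's per-packet send primitive:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define MCTP_HDR_FLAG_SOM (1 << 7) /* start of message */
#define MCTP_HDR_FLAG_EOM (1 << 6) /* end of message */

/* Stand-in for a binding's per-packet send primitive. */
static int phys_tx(const uint8_t *pkt, size_t len)
{
	(void)pkt;
	(void)len;
	return 0; /* a real binding would drive the hardware here */
}

/* Split a message into MTU-sized packets; a receiver reassembles by
 * concatenating payloads from SOM through EOM, checking the 2-bit
 * sequence number for dropped packets. */
int mctp_msg_tx(const uint8_t *msg, size_t len, size_t mtu)
{
	size_t off = 0;
	uint8_t seq = 0;

	while (off < len) {
		size_t payload = (len - off < mtu) ? len - off : mtu;
		uint8_t pkt[1 + mtu]; /* flags/seq byte + payload */
		uint8_t flags = (seq & 0x3) << 4;

		if (off == 0)
			flags |= MCTP_HDR_FLAG_SOM;
		if (off + payload == len)
			flags |= MCTP_HDR_FLAG_EOM;

		pkt[0] = flags;
		memcpy(&pkt[1], msg + off, payload);
		if (phys_tx(pkt, 1 + payload))
			return -1;

		off += payload;
		seq++;
	}
	return 0;
}
```
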
## Requirements

Any channel between host and BMC should:

 - Have a simple serialisation and deserialisation format, to enable
   implementations in host firmware, which have widely varying runtime
   capabilities (see the sketch after this list)

 - Allow different hardware channels, as we have a wide variety of
   target platforms for OpenBMC

 - Be usable over simple hardware implementations, but have a facility
   for higher bandwidth messaging on platforms that require it

 - Ideally, integrate with newer messaging protocols

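On the first point: the MCTP transport header defined in DSP0236 is
only four bytes, so even minimal host firmware can serialise and parse
it. The field layout below follows the base specification, but the
`mctp_hdr` struct and helper are an illustrative sketch, not a defined
OpenBMC API:

```c
#include <stdint.h>

/* Four-byte MCTP transport header, per DSP0236 (sketch). */
struct mctp_hdr {
	uint8_t ver;           /* header version, low 4 bits */
	uint8_t dest_eid;      /* destination endpoint ID */
	uint8_t src_eid;       /* source endpoint ID */
	uint8_t flags_seq_tag; /* SOM/EOM, pkt sequence, tag owner, tag */
};

static void mctp_hdr_pack(const struct mctp_hdr *hdr, uint8_t buf[4])
{
	buf[0] = hdr->ver & 0x0f; /* top four bits are reserved */
	buf[1] = hdr->dest_eid;
	buf[2] = hdr->src_eid;
	buf[3] = hdr->flags_seq_tag;
}
```
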
## Proposed Designs

The MCTP infrastructure in OpenBMC is implemented in two approaches:

 - A userspace-based approach, using a core library, plus a
   demultiplexing daemon. This is the current implementation, and is
   described in [MCTP Userspace](mctp-userspace.md).

 - A kernel-based approach, using a sockets API for client and server
   applications. This approach is in a design stage, and is described
   in [MCTP Kernel](mctp-kernel.md). A sketch of what such an API might
   look like follows below.

Design details for both approaches are covered in their respective
documents, but both share the Problem Description, Background,
Requirements, Alternatives and Impacts sections defined by this
document.

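For a rough sense of what the kernel-based approach could look like to
a client, here is a hypothetical sketch. The `AF_MCTP` family number,
`sockaddr_mctp` layout and constants below are assumptions made for
illustration, not a settled ABI; see [MCTP Kernel](mctp-kernel.md) for
the actual proposal:

```c
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

/* Assumptions for this sketch: an MCTP address family number and a
 * minimal address structure, pending the kernel design. */
#ifndef AF_MCTP
#define AF_MCTP 45
#endif

struct sockaddr_mctp {
	unsigned short smctp_family; /* AF_MCTP */
	unsigned char  smctp_addr;   /* peer endpoint ID */
	unsigned char  smctp_type;   /* MCTP message type (eg, PLDM) */
};

int main(void)
{
	struct sockaddr_mctp addr = {
		.smctp_family = AF_MCTP,
		.smctp_addr = 8, /* example peer EID */
		.smctp_type = 1, /* example message type */
	};
	unsigned char req[] = { 0x80, 0x00 }; /* placeholder payload */
	int sd = socket(AF_MCTP, SOCK_DGRAM, 0);

	if (sd < 0) {
		perror("socket");
		return 1;
	}

	/* Message-oriented send to an EID; packetisation and reassembly
	 * would happen below the sockets layer, in the kernel. */
	if (sendto(sd, req, sizeof(req), 0,
		   (struct sockaddr *)&addr, sizeof(addr)) < 0)
		perror("sendto");

	close(sd);
	return 0;
}
```
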
## Alternatives Considered

There have been two main alternatives to an MCTP implementation in
OpenBMC.

The first is to continue using IPMI, but start making more use of OEM
extensions to suit the requirements of new platforms. However, given
that the IPMI standard is no longer under active development, we would
likely end up with a large amount of platform-specific customisations.
This also does not solve the hardware channel issues in a standard
manner.

112
113Redfish between host and BMC. This would mean that host firmware needs a
114HTTP client, a TCP/IP stack, a JSON (de)serialiser, and support for
115Redfish schema. While this may be present in some environments (for
116example, UEFI-based firmware), this is may not be feasible for all host
117firmware implementations (for example, OpenPOWER). It's possible that we
118could run a simplified Redfish stack - indeed, MCTP has a proposal for a
119Redfish-over-MCTP channel (DSP0218), which uses simplified serialisation
120format and no requirement on HTTP. However, this may involve a large
121amount of complexity in host firmware.
122
## Impacts

Development would be required to implement the MCTP transport, plus any
new users of the MCTP messaging (eg, a PLDM implementation). These
would somewhat duplicate the work we have in the IPMI handlers.

We'd want to keep IPMI running in parallel, so the "upgrade" path
should be fairly straightforward.

Design and development need to involve the potential host, management
controller and managed device implementations.
134