Currently, we have a few different methods of communication between host and
BMC. This is primarily IPMI-based, but also includes a few hardware-specific
side-channels, like hiomap. On OpenPOWER hardware at least, we've definitely
started to hit some of the limitations of IPMI (for example, we need more
than 255 sensors), as well as limitations of the hardware channels that IPMI
typically uses.
This design aims to use the Management Component Transport Protocol (MCTP) to
provide a common transport layer over the multiple channels that OpenBMC
platforms provide. On top of MCTP, we then have the opportunity to move to
newer host/BMC messaging protocols to overcome some of the limitations we've
encountered with IPMI.
Separating the "transport" and "messaging protocol" parts of the current
stack allows us to design these parts separately. Currently, IPMI defines
both of these: we have BT and KCS (both defined as part of the IPMI 2.0
standard) as the transports, and IPMI itself as the messaging protocol.
There have been some attempts to improve the hardware transport mechanism of
IPMI, but none in a cross-implementation manner so far; nor do they address
the limitations of the IPMI data model.

The higher layers of an MCTP stack only need to be aware that they are
communicating with a certain entity, identified by an Endpoint ID (MCTP EID).
These entities may be any element of the platform that communicates over
MCTP - for example, the host device, the BMC, or any other system
peripheral - static or hot-pluggable.
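To make the EID abstraction concrete, here is a small hypothetical sketch of
how a stack might route by EID while keeping hardware details behind a
binding; the type and field names are illustrative only, not taken from
libmctp or any DMTF standard:

```c
#include <stdint.h>

/* Illustrative only: upper layers address a peer purely by EID; a routing
 * entry maps that EID to whichever hardware binding (I2C, PCIe VDM, serial,
 * ...) reaches it. Only the binding layer touches the hardware. */
struct mctp_hw_binding {
	const char *name;                               /* e.g. "i2c", "serial" */
	int (*tx_pkt)(const uint8_t *pkt, uint8_t len); /* send one packet */
};

struct mctp_route_entry {
	uint8_t eid;                     /* MCTP Endpoint ID */
	struct mctp_hw_binding *binding; /* channel used to reach this EID */
};
```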
Those messaging protocols will be proposed in separate design efforts; for
example, the PLDM design at [pldm-stack.md].
MCTP "messages" are the higher-level data transferred between MCTP endpoints,
while "packets" are the smaller units actually sent over the hardware;
messages larger than the hardware MTU (maximum transmission unit) are split
into multiple packets on transmit and reassembled on receive.
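As a rough sketch of the message/packet relationship (the 64-byte baseline
payload size is from DSP0236; the helper itself is ours, not from any OpenBMC
codebase):

```c
#include <stddef.h>

/* How many packets a message of msg_len bytes needs when each packet
 * carries at most mtu bytes of payload. The transmitter sets SOM (start of
 * message) on the first packet and EOM (end of message) on the last, which
 * is what lets the receiver reassemble the original message. */
#define MCTP_BASELINE_MTU 64 /* baseline transmission unit, per DSP0236 */

static size_t mctp_packet_count(size_t msg_len, size_t mtu)
{
	return (msg_len + mtu - 1) / mtu; /* ceiling division */
}
```

For example, a 1024-byte message over a binding limited to the baseline MTU
would be carried in 16 packets.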
Any channel between host and BMC should:

- Have a simple serialisation and deserialisation format, to enable
  implementations in host firmware, which have widely varying runtime
  capabilities (a sketch of the minimal packing involved follows this list)

- Allow different hardware channels, as we have a wide variety of target
  platforms for OpenBMC

- Be usable over simple hardware implementations, but have a facility for
  higher-bandwidth messaging on platforms that require it

- Ideally, integrate with newer messaging protocols
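To illustrate the first requirement above: the MCTP transport header is a
fixed four bytes (layout per DSP0236), so even constrained host firmware can
serialise it with plain byte packing. The struct and helper below are a
sketch of ours, not an API from libmctp or the standard:

```c
#include <stdint.h>

/* The fixed 4-byte MCTP transport header, per DSP0236. */
struct mctp_transport_hdr {
	uint8_t ver;      /* header version (currently 1), low 4 bits */
	uint8_t dest_eid; /* destination endpoint ID */
	uint8_t src_eid;  /* source endpoint ID */
	uint8_t flags;    /* SOM, EOM, packet sequence, TO, message tag */
};

static void mctp_hdr_pack(const struct mctp_transport_hdr *hdr, uint8_t out[4])
{
	out[0] = hdr->ver & 0x0f; /* top 4 bits of byte 0 are reserved */
	out[1] = hdr->dest_eid;
	out[2] = hdr->src_eid;
	out[3] = hdr->flags;
}
```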
The MCTP infrastructure in OpenBMC takes two approaches:

- A userspace-based approach, using a core library plus a demultiplexing
  daemon. This is described in [MCTP Userspace](mctp-userspace.md).
- A kernel-based approach, using a sockets API for client and server
  applications. This is described in [MCTP Kernel](mctp-kernel.md); a minimal
  socket sketch follows below.
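For the kernel-based approach, the following is a minimal sketch of sending
one message over Linux's AF_MCTP sockets (available in mainline kernels since
v5.15); the destination EID of 8 and the message type of 1 (PLDM, per
DSP0239) are example values:

```c
#include <linux/mctp.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Send one MCTP message; note that addressing is purely by network + EID,
 * with no knowledge of the underlying hardware binding needed here. */
static ssize_t mctp_send_example(const void *buf, size_t len)
{
	struct sockaddr_mctp addr;
	ssize_t rc;
	int sd;

	sd = socket(AF_MCTP, SOCK_DGRAM, 0);
	if (sd < 0)
		return -1;

	memset(&addr, 0, sizeof(addr));
	addr.smctp_family = AF_MCTP;
	addr.smctp_network = MCTP_NET_ANY; /* any local MCTP network */
	addr.smctp_addr.s_addr = 8;        /* example destination EID */
	addr.smctp_type = 1;               /* message type: PLDM (example) */
	addr.smctp_tag = MCTP_TAG_OWNER;   /* kernel allocates a request tag */

	rc = sendto(sd, buf, len, 0, (struct sockaddr *)&addr, sizeof(addr));
	close(sd);
	return rc;
}
```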
One alternative considered is to continue using IPMI, but start making more
use of OEM extensions to suit the requirements of new platforms. However,
given that the IPMI standard is no longer under active development, we would
likely end up with a large amount of platform-specific customisations. This
also does not solve the hardware channel limitations in a standard manner.
Another alternative is Redfish between host and BMC. While this may be
present in some environments (for example, UEFI-based firmware), it may not
be feasible for all host firmware implementations. It's possible that we
could run a simplified HTTP stack - indeed, MCTP has a proposal for a
Redfish-over-MCTP channel (DSP0218), which uses a simplified serialisation
format and has no requirement on HTTP.
Adopting a new messaging protocol will mean that, at least during the
transition, we duplicate the work we have in IPMI handlers.
We'd want to keep IPMI running in parallel, so the "upgrade" path should be
fairly straightforward.