Searched refs:NVMe (Results 1 – 25 of 57) sorted by relevance
 4  tristate "NVMe Target support"
10  This enables target side support for the NVMe protocol, that is
11  it allows the Linux kernel to implement NVMe subsystems and
12  controllers and export Linux block devices as NVMe namespaces.
16  To configure the NVMe target you probably want to use the nvmetcli
20  bool "NVMe Target Passthrough support"
24  This enables target side NVMe passthru controller support for the
25  NVMe Over Fabrics protocol. It allows for hosts to manage and
26  directly access an actual NVMe controller residing on the target
32  tristate "NVMe loopback device support"
[all …]
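These Kconfig entries describe the NVMe target subsystem. As a minimal sketch of enabling the symbols named in the excerpt from a kernel source tree (assuming an existing .config and the in-tree scripts/config helper):

    # Enable the NVMe target core as a module, the passthru feature (a bool),
    # and the loopback transport, then refresh dependent options.
    scripts/config --module NVME_TARGET \
                   --enable NVME_TARGET_PASSTHRU \
                   --module NVME_TARGET_LOOP
    make olddefconfig

Runtime configuration of the resulting target is then usually done with the nvmetcli tool mentioned in the help text.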
 6  What is NVMe
 9  NVM Express (NVMe) is a register level interface that allows host software to
21  memory that is formatted into logical blocks. An NVMe namespace is equivalent
26  There is an NVMe uclass driver (driver name "nvme"), an NVMe host controller
27  driver (driver name "nvme") and an NVMe namespace block driver (driver name
31  is triggered by the NVMe uclass driver and the actual work is done in the NVMe
36  It only supports basic block read/write functions in the NVMe driver.
40  CONFIG_NVME      Enable NVMe device support
41  CONFIG_CMD_NVME  Enable basic NVMe commands
45  To use an NVMe hard disk from U-Boot shell, a 'nvme scan' command needs to
[all …]
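The U-Boot document above describes the 'nvme scan' flow; a hedged, annotated example session (device, partition and file names are placeholders) looks like:

    => nvme scan                              # probe controllers and namespaces
    => nvme info                              # print details of the current device
    => ls nvme 0:1 /                          # list partition 1 of device 0
    => load nvme 0:1 ${loadaddr} /boot/Image  # read a file into memory

The scan step is what triggers the uclass driver to bind the namespace block devices, after which the generic block and filesystem commands can address them as 'nvme <dev>:<part>'.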
19  bool "NVMe multipath support"
22  This option enables support for multipath access to NVMe
24  /dev/nvmeXnY device will show up for each NVMe namespace,
28  bool "NVMe verbose error reporting"
31  This option enables verbose reporting for NVMe errors. The
36  bool "NVMe hardware monitoring"
39  This provides support for NVMe hardware monitoring. If enabled,
40  a hardware monitoring device will be created for each NVMe drive
53  This provides support for the NVMe over Fabrics protocol using
55  to use remote block devices exported using the NVMe protocol set.
[all …]
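The fabrics-related option at the end of that excerpt is what a host needs before connecting to a remote target; a hedged nvme-cli example (TCP transport assumed, address and NQN are placeholders) is:

    modprobe nvme-tcp
    nvme discover -t tcp -a 192.0.2.10 -s 4420
    nvme connect  -t tcp -a 192.0.2.10 -s 4420 -n nqn.2014-08.org.example:testnqn

With the multipath option enabled, each namespace shows up as a single /dev/nvmeXnY node no matter how many paths reach it, which is the behaviour the first help text describes.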
 4  Linux NVMe feature and quirk policy
 8  Linux NVMe driver and what is not.
16  The Linux NVMe host driver in drivers/nvme/host/ supports devices
17  implementing the NVM Express (NVMe) family of specifications, which
20  - the NVMe Base specification
23  - the NVMe Management Interface specification
25  See https://nvmexpress.org/developers/ for the NVMe specifications.
31  NVMe is a large suite of specifications, and contains features that are only
36  maintainability of the NVMe host driver.
38  Any feature implemented in the Linux NVMe host driver must support the
[all …]
 1  ### NVMe-MI over SMBus
 9  Currently, OpenBMC does not support NVMe drive information. The NVMe-MI
10  specification defines a command that can read NVMe drive information via
11  SMBus directly. The NVMe drive can provide its information or status, like
13  monitor NVMe drives so appropriate action can be taken.
17  The NVMe-MI specification defines a command called
18  `NVM Express Basic Management Command` that can read NVMe drive information
22  Our purpose is to retrieve NVMe drive information, so the NVM
23  Express Basic Management Command described in the NVMe-MI specification is used to
24  communicate with NVMe drives. Depending on the platform, temperature
[all …]
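As a sketch of what the Basic Management Command read looks like from a shell, assuming i2c-tools, a drive answering at the commonly used 7-bit address 0x6a and bus number 3 (all of these are platform-specific assumptions):

    # Write the command/offset byte 0x00, then read back the first 8 bytes of
    # the status block (status flags, SMART warnings, composite temperature, ...).
    i2ctransfer -y 3 w1@0x6a 0x00 r8

A BMC daemon would issue this kind of read periodically and publish the decoded temperature and warning bits, which is the monitoring use case the design document describes.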
 2  DESCRIPTION = "This package contains the command line interface to the NVMe \
11  # nvmet service will start and stop the NVMe Target configuration on boot and
12  # shutdown from a saved NVMe Target configuration in the /etc/nvmet/config.json
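That comment describes a systemd unit wrapping nvmetcli; a hedged sketch of the commands such a service typically runs (the JSON path matches the comment above) is:

    nvmetcli restore /etc/nvmet/config.json   # on start: recreate the saved target config
    nvmetcli save    /etc/nvmet/config.json   # persist the current configfs state
    nvmetcli clear                            # on stop: tear the target configuration down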
46  For example, in the NVMe Target Copy Offload implementation:
48  * The NVMe PCI driver is both a client, provider and orchestrator
54    can DMA directly to the memory exposed by the NVMe device.
55  * The NVMe Target driver (nvmet) can orchestrate the data from the RNIC
56    to the P2P memory (CMB) and then to the NVMe device (and vice versa).
62  then the NVMe Target could use the RNIC's memory instead of the CMB
63  in cases where the NVMe cards in use do not have CMB support.
96  example, the NVMe Target driver creates a list including the namespace
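One quick way to check whether a device such as an NVMe card with a CMB is acting as a P2P memory provider is to look at its p2pmem sysfs attributes; a hedged example (the PCI address is a placeholder, and the directory only exists once the driver has registered P2P memory) is:

    grep . /sys/bus/pci/devices/0000:01:00.0/p2pmem/{size,available,published}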
 1  SUMMARY = "NVMe Drive Manager"
 2  DESCRIPTION = "Daemon to monitor and report the status of NVMe drives"
18  NVMe, enumerator
63  {Protocol::NVMe, "NVMe"},
 8  NVMe/MMC drives and resulted in the swap partition being used without
34  + NVMe/MMC drives and resulted in the swap partition being used without
52  + # Correctly handle NVMe/MMC drives, as well as any similar physical
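The underlying issue is a naming difference: NVMe and MMC block devices insert a 'p' before the partition number (nvme0n1p1, mmcblk0p2) while sdX style names do not (sda1). A hypothetical shell helper, not code from the patch, illustrating the distinction being handled:

    part_name() {
        disk="$1"; num="$2"
        case "$disk" in
            *[0-9]) echo "${disk}p${num}" ;;   # nvme0n1, mmcblk0, ...
            *)      echo "${disk}${num}" ;;    # sda, vdb, ...
        esac
    }
    part_name /dev/nvme0n1 2   # -> /dev/nvme0n1p2
    part_name /dev/sda 2       # -> /dev/sda2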
  2  NVMe Emulation
  5  QEMU provides NVMe emulation through the ``nvme``, ``nvme-ns`` and
 10  * `Adding NVMe Devices`_, `additional namespaces`_ and `NVM subsystems`_.
 15  Adding NVMe Devices
 21  The QEMU emulated NVMe controller implements version 1.4 of the NVM Express
 29  The simplest way to attach an NVMe controller on the QEMU PCI bus is to add the
305  by the NVMe device. Virtual function controllers will not report SR-IOV.
336  The minimum steps required to configure a functional NVMe secondary
367  * bind the NVMe driver to the VF
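For reference, a hedged minimal invocation in the spirit of that documentation (image files, the serial number and the subsystem NQN are placeholders):

    # Single controller with one namespace backed by nvm.img.
    qemu-system-x86_64 ... \
        -drive file=nvm.img,if=none,id=nvm,format=raw \
        -device nvme,serial=deadbeef,drive=nvm

    # Controller attached to an NVM subsystem, with a namespace added via nvme-ns.
    qemu-system-x86_64 ... \
        -device nvme-subsys,id=nvme-subsys-0,nqn=subsys0 \
        -device nvme,serial=deadbeef,subsys=nvme-subsys-0 \
        -drive file=ns1.img,if=none,id=ns1,format=raw \
        -device nvme-ns,drive=ns1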
58  (2) PCI device pass-through: While NVMe ZNS emulation is available for testing
60  the NVMe ZNS device to the guest, use VFIO PCI to pass the entire NVMe PCI adapter
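Hedged command-line sketches of the two approaches (zone sizes and the PCI address are placeholders):

    # (1) Emulated zoned namespace attached to a QEMU nvme controller.
    qemu-system-x86_64 ... \
        -device nvme,id=nvme0,serial=zns \
        -drive file=zns.img,if=none,id=znsimg,format=raw \
        -device nvme-ns,drive=znsimg,zoned=true,zoned.zone_size=128M,zoned.zone_capacity=124M

    # (2) Pass the whole physical adapter through with VFIO.
    qemu-system-x86_64 ... -device vfio-pci,host=0000:01:00.0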
 85  Intel NVMe drives contain two cores on the physical device.
113  unstriped on top of Intel NVMe device that has 2 cores
121  There will now be two devices that expose Intel NVMe core 0 and 1
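A hedged dmsetup sketch of splitting such a drive back into its two cores (the 256-sector chunk corresponds to 128K striping; <size> would be half of what blockdev --getsz /dev/nvme0n1 reports):

    dmsetup create nvmset0 --table "0 <size> unstriped 2 256 0 /dev/nvme0n1 0"
    dmsetup create nvmset1 --table "0 <size> unstriped 2 256 1 /dev/nvme0n1 0"
    # /dev/mapper/nvmset0 and /dev/mapper/nvmset1 now map to core 0 and core 1.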
3 * Google Herobrine dts fragment for NVMe SKUs
15 model = "Google Zombie with LTE and NVMe";
15 model = "Google Zombie with NVMe";
2 Description=NVMe Sensor
11 It supports basic functions of NVMe (read/write).
2 Description=NVMe management
3 libnvme provides type definitions for NVMe specification structures, \
1469 // Remove NVMe temperature objects from cache when they are removed from
564 // Remove NVMe temperature objects from cache when they are removed from
1154 // Remove NVMe temperature objects from cache when they are removed from
1656 // Remove NVMe temperature objects from cache when they are removed from