/openbmc/linux/Documentation/admin-guide/mm/ |
H A D | memory-hotplug.rst | 2 Memory Hot(Un)Plug 5 This document describes generic Linux support for memory hot(un)plug with 13 Memory hot(un)plug allows for increasing and decreasing the size of physical 14 memory available to a machine at runtime. In the simplest case, it consists of 18 Memory hot(un)plug is used for various purposes: 20 - The physical memory available to a machine can be adjusted at runtime, up- or 21 downgrading the memory capacity. This dynamic memory resizing, sometimes 26 example is replacing failing memory modules. 28 - Reducing energy consumption either by physically unplugging memory modules or 29 by logically unplugging (parts of) memory modules from Linux. [all …]
|
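The memory-hotplug.rst entry above describes growing and shrinking physical memory at memory-block granularity. A minimal sketch of the block arithmetic, assuming the sysfs layout that document describes (the 128 MiB block size used here is a hypothetical sample; on a real system it is read from `/sys/devices/system/memory/block_size_bytes`):

```shell
#!/bin/sh
# Sketch: map a physical address to its sysfs memory block index.
# Block N covers physical addresses [N * block_size, (N + 1) * block_size).
phys_to_block() {
    # $1 = physical address, $2 = block size in bytes
    echo $(( $1 / $2 ))
}

# Hypothetical 128 MiB block size, as would be read (in hex) from
# /sys/devices/system/memory/block_size_bytes.
block_size=$(( 0x8000000 ))
phys_to_block $(( 0x40000000 )) "$block_size"   # 1 GiB boundary -> block 8
```

The resulting index names the sysfs directory (`memory8` for the example above) whose `state` file is used to online or offline that block.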
H A D | concepts.rst | 5 The memory management in Linux is a complex system that evolved over the 7 systems from MMU-less microcontrollers to supercomputers. The memory 16 Virtual Memory Primer 19 The physical memory in a computer system is a limited resource and 20 even for systems that support memory hotplug there is a hard limit on 21 the amount of memory that can be installed. The physical memory is not 27 All this makes dealing directly with physical memory quite complex and 28 to avoid this complexity a concept of virtual memory was developed. 30 The virtual memory abstracts the details of physical memory from the 32 physical memory (demand paging) and provides a mechanism for the [all …]
|
H A D | numaperf.rst | 2 NUMA Memory Performance 8 Some platforms may have multiple types of memory attached to a compute 9 node. These disparate memory ranges may share some characteristics, such 13 A system supports such heterogeneous memory by grouping each memory type 15 characteristics. Some memory may share the same node as a CPU, and others 16 are provided as memory only nodes. While memory only nodes do not provide 19 nodes with local memory and a memory only node for each of compute node:: 30 A "memory initiator" is a node containing one or more devices such as 31 CPUs or separate memory I/O devices that can initiate memory requests. 32 A "memory target" is a node containing one or more physical address [all …]
|
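numaperf.rst describes memory initiators and targets; the performance attributes it covers surface per target node under `access0/initiators/`. A hedged sketch (the sysfs root is parameterized so the logic can be tried against a scratch tree; the numeric values below are made up):

```shell
#!/bin/sh
# Sketch: dump the access attributes a memory target node advertises to
# its nearest initiators. ROOT stands in for /sys/devices/system/node.
node_access() {
    root=$1; node=$2
    for f in read_latency write_latency read_bandwidth write_bandwidth; do
        p="$root/node$node/access0/initiators/$f"
        [ -r "$p" ] && echo "$f: $(cat "$p")"
    done
}

# Exercise against a throwaway tree with made-up values
# (latency in ns, bandwidth in MiB/s).
root=$(mktemp -d)
mkdir -p "$root/node1/access0/initiators"
echo 120 > "$root/node1/access0/initiators/read_latency"
echo 5000 > "$root/node1/access0/initiators/read_bandwidth"
node_access "$root" 1
```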
/openbmc/linux/tools/testing/selftests/memory-hotplug/ |
H A D | mem-on-off-test.sh | 25 if ! ls $SYSFS/devices/system/memory/memory* > /dev/null 2>&1; then 26 echo $msg memory hotplug is not supported >&2 30 if ! grep -q 1 $SYSFS/devices/system/memory/memory*/removable; then 31 echo $msg no hot-pluggable memory >&2 37 # list all hot-pluggable memory 43 for memory in $SYSFS/devices/system/memory/memory*; do 44 if grep -q 1 $memory/removable && 45 grep -q $state $memory/state; then 46 echo ${memory##/*/memory} 63 grep -q online $SYSFS/devices/system/memory/memory$1/state [all …]
|
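The selftest above filters memory blocks on their `removable` and `state` files. The same logic, restated with the sysfs root parameterized so it can be exercised against a scratch directory instead of a live `/sys`:

```shell
#!/bin/sh
# Sketch of mem-on-off-test.sh's block listing: print the indices of
# memory blocks that are both removable and in the requested state.
hotpluggable_memory() {
    sysfs=$1; state=$2
    for memory in "$sysfs"/devices/system/memory/memory*; do
        if grep -q 1 "$memory/removable" &&
           grep -q "$state" "$memory/state"; then
            echo "${memory##*/memory}"
        fi
    done
}

# Build a fake tree: blocks 0 and 1 removable+online, block 2 not removable.
sysfs=$(mktemp -d)
for i in 0 1 2; do
    mkdir -p "$sysfs/devices/system/memory/memory$i"
    echo 1 > "$sysfs/devices/system/memory/memory$i/removable"
    echo online > "$sysfs/devices/system/memory/memory$i/state"
done
echo 0 > "$sysfs/devices/system/memory/memory2/removable"
hotpluggable_memory "$sysfs" online   # lists blocks 0 and 1
```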
/openbmc/openbmc/meta-quanta/meta-s6q/recipes-phosphor/configuration/s6q-yaml-config/ |
H A D | ipmi-sensors.yaml | 107 path: /xyz/openbmc_project/metric/bmc/memory/available 527 path: /xyz/openbmc_project/metrics/memory/Memory Error 534 xyz.openbmc_project.Memory.MemoryECC: 539 assert: xyz.openbmc_project.Memory.MemoryECC.ECCStatus.CE 542 assert: xyz.openbmc_project.Memory.MemoryECC.ECCStatus.UE 545 assert: xyz.openbmc_project.Memory.MemoryECC.ECCStatus.LogFull 551 path: /xyz/openbmc_project/metrics/memory/Other IIO Error 558 xyz.openbmc_project.Memory.MemoryECC: 563 assert: xyz.openbmc_project.Memory.MemoryECC.ECCStatus.CE 566 assert: xyz.openbmc_project.Memory.MemoryECC.ECCStatus.UE [all …]
|
/openbmc/bmcweb/redfish-core/schema/dmtf/csdl/ |
H A D | Memory_v1.xml | 4 <!--# Redfish Schema: Memory v1.20.0 --> 73 <Schema xmlns="http://docs.oasis-open.org/odata/ns/edm" Namespace="Memory"> 77 <EntityType Name="Memory" BaseType="Resource.v1_0_0.Resource" Abstract="true"> 78 …<Annotation Term="OData.Description" String="The `Memory` schema represents a memory device, such … 79 …<Annotation Term="OData.LongDescription" String="This resource shall represent a memory device in … 98 <String>/redfish/v1/Systems/{ComputerSystemId}/Memory/{MemoryId}</String> 100 <String>/redfish/v1/Chassis/{ChassisId}/Memory/{MemoryId}</String> 101 … <String>/redfish/v1/CompositionService/ResourceBlocks/{ResourceBlockId}/Memory/{MemoryId}</String> 102 …itionService/ResourceBlocks/{ResourceBlockId}/Systems/{ComputerSystemId}/Memory/{MemoryId}</String> 103 <String>/redfish/v1/ResourceBlocks/{ResourceBlockId}/Memory/{MemoryId}</String> [all …]
|
/openbmc/bmcweb/redfish-core/schema/dmtf/installed/ |
H A D | Memory_v1.xml | 4 <!--# Redfish Schema: Memory v1.20.0 --> 73 <Schema xmlns="http://docs.oasis-open.org/odata/ns/edm" Namespace="Memory"> 77 <EntityType Name="Memory" BaseType="Resource.v1_0_0.Resource" Abstract="true"> 78 …<Annotation Term="OData.Description" String="The `Memory` schema represents a memory device, such … 79 …<Annotation Term="OData.LongDescription" String="This resource shall represent a memory device in … 98 <String>/redfish/v1/Systems/{ComputerSystemId}/Memory/{MemoryId}</String> 100 <String>/redfish/v1/Chassis/{ChassisId}/Memory/{MemoryId}</String> 101 … <String>/redfish/v1/CompositionService/ResourceBlocks/{ResourceBlockId}/Memory/{MemoryId}</String> 102 …itionService/ResourceBlocks/{ResourceBlockId}/Systems/{ComputerSystemId}/Memory/{MemoryId}</String> 103 <String>/redfish/v1/ResourceBlocks/{ResourceBlockId}/Memory/{MemoryId}</String> [all …]
|
/openbmc/qemu/include/hw/mem/ |
H A D | memory-device.h | 2 * Memory Device Interface 20 #define TYPE_MEMORY_DEVICE "memory-device" 33 * All memory devices need to implement TYPE_MEMORY_DEVICE as an interface. 35 * A memory device is a device that owns a memory region which is 37 * address in guest physical memory can either be specified explicitly 40 * Some memory devices might not own a memory region in certain device 42 * empty memory devices are mostly ignored by the memory device code. 44 * Conceptually, memory devices only span one memory region. If multiple 45 * successive memory regions are used, a covering memory region has to 46 * be provided. Scattered memory regions are not supported for single [all …]
|
/openbmc/linux/Documentation/admin-guide/cgroup-v1/ |
H A D | memory.rst | 2 Memory Resource Controller 12 The Memory Resource Controller has generically been referred to as the 13 memory controller in this document. Do not confuse memory controller 14 used here with the memory controller that is used in hardware. 17 When we mention a cgroup (cgroupfs's directory) with memory controller, 18 we call it "memory cgroup". When you see git-log and source code, you'll 22 Benefits and Purpose of the memory controller 25 The memory controller isolates the memory behaviour of a group of tasks 27 uses of the memory controller. The memory controller can be used to 30 Memory-hungry applications can be isolated and limited to a smaller [all …]
|
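The cgroup-v1 memory.rst entry describes isolating and limiting a group of tasks through the memory controller's files. A minimal sketch of that interface (the cgroup root is parameterized; on a real system it would be the memory controller mount point, often `/sys/fs/cgroup/memory`, and the kernel may round the written limit to a page-size multiple):

```shell
#!/bin/sh
# Sketch: create a memory cgroup and cap its usage via
# memory.limit_in_bytes, as the cgroup-v1 document describes.
limit_group() {
    cgroot=$1; name=$2; bytes=$3
    mkdir -p "$cgroot/$name"
    echo "$bytes" > "$cgroot/$name/memory.limit_in_bytes"
    cat "$cgroot/$name/memory.limit_in_bytes"
}

# Exercised against a scratch directory, so no root or cgroupfs is
# needed for the sketch itself.
limit_group "$(mktemp -d)" demo $(( 256 * 1024 * 1024 ))
```

On a live cgroupfs the kernel creates `memory.limit_in_bytes` itself; only the `mkdir`/`echo` steps are performed by the administrator.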
/openbmc/phosphor-dbus-interfaces/yaml/xyz/openbmc_project/Inventory/Item/ |
H A D | Dimm.interface.yaml | 7 Data width of Memory. 11 Memory size of DIMM in kilobytes. 15 Socket on base board where Memory is located, for example CPU1_DIMM_B1. 19 Type of memory. 23 Additional detail on Memory, such as Synchronous, Static column, etc. 27 The maximum capable clock speed of Memory, in megahertz. 31 Rank attributes of Memory, indicating how many groups of memory chips on 36 Configured clock speed to Memory, in megahertz. 46 strobe signal are presented to the memory module and the time at which 47 the corresponding data is made available by the memory module. [all …]
|
H A D | PersistentMemory.interface.yaml | 2 Implement to provide persistent memory attributes. 7 The manufacturer ID of this memory module as defined by JEDEC in 12 The product ID of this memory module as defined by JEDEC in JEP-106. 16 The manufacturer ID of the memory subsystem controller of this memory 21 The product ID of the memory subsystem controller of this memory 34 Total size of the volatile portion of memory in kibibytes (KiB). 38 Total size of the non-volatile portion of memory in kibibytes (KiB). 42 Total size of the cache portion of memory in kibibytes (KiB). 54 The size of the smallest unit of allocation for a memory region in 59 The boundary that memory regions are allocated on, measured in [all …]
|
/openbmc/linux/Documentation/ABI/testing/ |
H A D | sysfs-devices-memory | 1 What: /sys/devices/system/memory 5 The /sys/devices/system/memory contains a snapshot of the 6 internal state of the kernel memory blocks. Files could be 9 Users: hotplug memory add/remove tools 12 What: /sys/devices/system/memory/memoryX/removable 16 The file /sys/devices/system/memory/memoryX/removable is a 17 legacy interface used to indicate whether a memory block is 19 "1" if and only if the kernel supports memory offlining. 20 Users: hotplug memory remove tools 24 What: /sys/devices/system/memory/memoryX/phys_device [all …]
|
/openbmc/linux/Documentation/mm/ |
H A D | memory-model.rst | 4 Physical Memory Model 7 Physical memory in a system may be addressed in different ways. The 8 simplest case is when the physical memory starts at address 0 and 13 different memory banks are attached to different CPUs. 15 Linux abstracts this diversity using one of the two memory models: 17 memory models it supports, what the default memory model is and 20 All the memory models track the status of physical page frames using 23 Regardless of the selected memory model, there exists one-to-one 27 Each memory model defines :c:func:`pfn_to_page` and :c:func:`page_to_pfn` 34 The simplest memory model is FLATMEM. This model is suitable for [all …]
|
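memory-model.rst notes that every memory model defines `pfn_to_page` and `page_to_pfn`. Under FLATMEM the conversion is plain array indexing into `mem_map`. A sketch of that arithmetic with hypothetical numbers (`sizeof(struct page)` is kernel-config dependent; 64 bytes is a common value, and the base address here is invented):

```shell
#!/bin/sh
# FLATMEM sketch: pfn_to_page is conceptually
#   page = &mem_map[pfn - ARCH_PFN_OFFSET]
# computed here as byte arithmetic on a hypothetical mem_map base.
pfn_to_page() {
    mem_map=$1; pfn_offset=$2; pfn=$3
    page_sz=64   # assumed sizeof(struct page); config-dependent
    printf '0x%x\n' $(( mem_map + (pfn - pfn_offset) * page_sz ))
}

pfn_to_page $(( 0xc0000000 )) 0 16   # 16 pages in -> mem_map + 16*64
```

SPARSEMEM replaces the single array with per-section lookups, but the pfn/page correspondence stays one-to-one, as the document says.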
H A D | hmm.rst | 2 Heterogeneous Memory Management (HMM) 5 Provide infrastructure and helpers to integrate non-conventional memory (device 6 memory like GPU on-board memory) into the regular kernel path, with the cornerstone 7 of this being specialized struct page for such memory (see sections 5 to 7 of 10 HMM also provides optional helpers for SVM (Shared Virtual Memory), i.e., 18 related to using device specific memory allocators. In the second section, I 22 fifth section deals with how device memory is represented inside the kernel. 28 Problems of using a device specific memory allocator 31 Devices with a large amount of on-board memory (several gigabytes) like GPUs 32 have historically managed their memory through dedicated driver specific APIs. [all …]
|
H A D | numa.rst | 12 or more CPUs, local memory, and/or IO buses. For brevity and to 26 Coherent NUMA or ccNUMA systems. With ccNUMA systems, all memory is visible 30 Memory access time and effective memory bandwidth varies depending on how far 31 away the cell containing the CPU or IO bus making the memory access is from the 32 cell containing the target memory. For example, access to memory by CPUs 34 bandwidths than accesses to memory on other, remote cells. NUMA platforms 39 memory bandwidth. However, to achieve scalable memory bandwidth, system and 40 application software must arrange for a large majority of the memory references 41 [cache misses] to be to "local" memory--memory on the same cell, if any--or 42 to the closest cell with memory. [all …]
|
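numa.rst stresses keeping memory references local to a cell or to the closest cell with memory. The kernel exposes the relative cost between nodes as a SLIT-style distance vector per node. A sketch of reading it, with the sysfs root parameterized so it runs on a fake tree (the distances 10/21 below are sample values; 10 conventionally means local):

```shell
#!/bin/sh
# Sketch: print each node's NUMA distance vector, as found under
# /sys/devices/system/node/nodeX/distance on a real system.
node_distances() {
    root=$1
    for d in "$root"/node*/distance; do
        n=${d%/distance}; n=${n##*/node}
        echo "node$n: $(cat "$d")"
    done
}

root=$(mktemp -d)
mkdir -p "$root/node0" "$root/node1"
echo "10 21" > "$root/node0/distance"
echo "21 10" > "$root/node1/distance"
node_distances "$root"
```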
/openbmc/linux/Documentation/devicetree/bindings/memory-controllers/fsl/ |
H A D | fsl,ddr.yaml | 4 $id: http://devicetree.org/schemas/memory-controllers/fsl/fsl,ddr.yaml# 7 title: Freescale DDR memory controller 15 pattern: "^memory-controller@[0-9a-f]+$" 21 - fsl,qoriq-memory-controller-v4.4 22 - fsl,qoriq-memory-controller-v4.5 23 - fsl,qoriq-memory-controller-v4.7 24 - fsl,qoriq-memory-controller-v5.0 25 - const: fsl,qoriq-memory-controller 27 - fsl,bsc9132-memory-controller 28 - fsl,mpc8536-memory-controller [all …]
|
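The fsl,ddr.yaml binding above constrains the node name pattern and compatible strings. A hypothetical device-tree node satisfying it; the unit address, `reg`, and `interrupts` values are placeholders, not taken from a real board file:

```
/* Hypothetical example for the fsl,ddr.yaml binding; only the
 * compatible strings come from the schema itself. */
memory-controller@8000 {
    compatible = "fsl,qoriq-memory-controller-v4.5",
                 "fsl,qoriq-memory-controller";
    reg = <0x8000 0x1000>;
    interrupts = <16 2 1 23>;
};
```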
/openbmc/bmcweb/redfish-core/schema/dmtf/json-schema/ |
H A D | Memory.v1_20_0.json | 2 "$id": "http://redfish.dmtf.org/schemas/v1/Memory.v1_20_0.json", 3 "$ref": "#/definitions/Memory", 26 "#Memory.DisableMasterPassphrase": { 29 "#Memory.DisablePassphrase": { 32 "#Memory.FreezeSecurityState": { 35 "#Memory.InjectPersistentPoison": { 38 "#Memory.OverwriteUnit": { 41 "#Memory.Reset": { 44 "#Memory.ResetToDefaults": { 47 "#Memory.ScanMedia": { [all …]
|
H A D | MemoryRegion.v1_0_3.json | 36 "description": "Definition of memory chunk providing capacity for memory region.", 37 …ription": "This type shall contain the definition of a memory chunk providing capacity for memory … 55 … "description": "The link to the memory chunk providing capacity to the memory region.", 56 … contain a link to a resource of type `MemoryChunks` that provides capacity to the memory region.", 60 … "description": "Offset of the memory chunk within the memory region in mebibytes (MiB).", 61 …ion": "The value of this property shall be the offset of the memory chunk within the memory region… 71 …"description": "Definition of memory extent identifying an available address range in the memory r… 72 … shall contain the definition of a memory extent identifying an available address range in the dyn… 89 … "description": "Offset of the memory extent within the memory region in mebibytes (MiB).", 90 …ion": "The value of this property shall be the offset of the memory extent within the memory regio… [all …]
|
/openbmc/bmcweb/redfish-core/schema/dmtf/json-schema-installed/ |
H A D | Memory.v1_20_0.json | 2 "$id": "http://redfish.dmtf.org/schemas/v1/Memory.v1_20_0.json", 3 "$ref": "#/definitions/Memory", 26 "#Memory.DisableMasterPassphrase": { 29 "#Memory.DisablePassphrase": { 32 "#Memory.FreezeSecurityState": { 35 "#Memory.InjectPersistentPoison": { 38 "#Memory.OverwriteUnit": { 41 "#Memory.Reset": { 44 "#Memory.ResetToDefaults": { 47 "#Memory.ScanMedia": { [all …]
|
/openbmc/linux/Documentation/arch/arm64/ |
H A D | kdump.rst | 2 crashkernel memory reservation on arm64 9 reserved memory is needed to pre-load the kdump kernel and boot such 12 That reserved memory for kdump is adapted to be able to minimally 19 Through the kernel parameters below, memory can be reserved accordingly 21 large chunk of memory can be found. The low memory reservation needs to 22 be considered if the crashkernel is reserved from the high memory area. 28 Low memory and high memory 31 For kdump reservations, low memory is the memory area under a specific 34 vmcore dumping can be ignored. On arm64, the low memory upper bound is 37 whole system RAM is low memory. Outside of the low memory described [all …]
|
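The kdump document above reserves crash memory through a `crashkernel=` kernel parameter. A small sketch of extracting that request from a command line (the sample cmdline string is made up; on a live system it would come from `/proc/cmdline`):

```shell
#!/bin/sh
# Sketch: pull the crashkernel= reservation request out of a kernel
# command line string.
crashkernel_param() {
    for tok in $1; do
        case $tok in
            crashkernel=*) echo "${tok#crashkernel=}" ;;
        esac
    done
}

crashkernel_param "console=ttyAMA0 crashkernel=256M root=/dev/vda1"
```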
/openbmc/linux/Documentation/core-api/ |
H A D | memory-hotplug.rst | 4 Memory hotplug 7 Memory hotplug event notifier 12 There are six types of notification defined in ``include/linux/memory.h``: 15 Generated before new memory becomes available in order to be able to 16 prepare subsystems to handle memory. The page allocator is still unable 17 to allocate from the new memory. 23 Generated when memory has been successfully brought online. The callback may 24 allocate pages from the new memory. 27 Generated to begin the process of offlining memory. Allocations are no 28 longer possible from the memory but some of the memory to be offlined [all …]
|
/openbmc/linux/tools/testing/selftests/cgroup/ |
H A D | test_memcontrol.c | 29 * the memory controller. 37 /* Create two nested cgroups with the memory controller enabled */ in test_memcg_subtree_control() 46 if (cg_write(parent, "cgroup.subtree_control", "+memory")) in test_memcg_subtree_control() 52 if (cg_read_strstr(child, "cgroup.controllers", "memory")) in test_memcg_subtree_control() 55 /* Create two nested cgroups without enabling memory controller */ in test_memcg_subtree_control() 70 if (!cg_read_strstr(child2, "cgroup.controllers", "memory")) in test_memcg_subtree_control() 109 current = cg_read_long(cgroup, "memory.current"); in alloc_anon_50M_check() 116 anon = cg_read_key_long(cgroup, "memory.stat", "anon "); in alloc_anon_50M_check() 143 current = cg_read_long(cgroup, "memory.current"); in alloc_pagecache_50M_check() 147 file = cg_read_key_long(cgroup, "memory.stat", "file "); in alloc_pagecache_50M_check() [all …]
|
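test_memcontrol.c verifies that writing `+memory` to a parent's `cgroup.subtree_control` is what makes the controller appear in a child's `cgroup.controllers`. A sketch of the write side of that interface; the scratch directory only mimics the file layout, not the kernel's controller propagation:

```shell
#!/bin/sh
# Sketch: enable the memory controller for a cgroup's children,
# as the selftest does with cg_write(parent, "cgroup.subtree_control",
# "+memory").
enable_memory() {
    parent=$1
    echo "+memory" > "$parent/cgroup.subtree_control"
}

cgroot=$(mktemp -d)
: > "$cgroot/cgroup.subtree_control"
enable_memory "$cgroot"
cat "$cgroot/cgroup.subtree_control"
```

On a real cgroup2 mount, `memory.current` and `memory.stat` in the children (read in the test via `cg_read_long`/`cg_read_key_long`) become available once the controller is enabled this way.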
/openbmc/linux/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/ |
H A D | memory.json | 4 …memory accesses issued by the CPU load store unit, where those accesses are issued due to load or … 8 …memory errors (ECC or parity) in protected CPUs RAMs. On the core, this event counts errors in the… 16 …memory accesses issued by the CPU due to load operations. The event counts any memory load access,… 20 …memory accesses issued by the CPU due to store operations. The event counts any memory store acces… 24 …"PublicDescription": "Counts the number of memory read and write accesses in a cycle that incurred… 28 …"PublicDescription": "Counts the number of memory read accesses in a cycle that incurred additiona… 32 …"PublicDescription": "Counts the number of memory write access in a cycle that incurred additional… 36 …icDescription": "Counts the number of memory read and write accesses in a cycle that are tag check… 40 …"PublicDescription": "Counts the number of memory read accesses in a cycle that are tag checked by… 44 …"PublicDescription": "Counts the number of memory write accesses in a cycle that is tag checked by…
|
/openbmc/qemu/docs/ |
H A D | memory-hotplug.txt | 1 QEMU memory hotplug 4 This document explains how to use the memory hotplug feature in QEMU, 7 Guest support is required for memory hotplug to work. 12 In order to be able to hotplug memory, QEMU has to be told how many 13 hotpluggable memory slots to create and the maximum amount of 14 memory to which the guest can grow. This is done at startup time by means of 22 - "slots" is the number of hotpluggable memory slots 29 Creates a guest with 1GB of memory and three hotpluggable memory slots. 30 The hotpluggable memory slots are empty when the guest is booted, so all 31 memory the guest will see after boot is 1GB. The maximum memory the [all …]
|
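The QEMU document above configures hotplug capacity at startup with `-m <initial>,slots=<n>,maxmem=<max>`. A sketch assembling that option string; the `mem_opts` helper itself is hypothetical, only the option syntax comes from the document:

```shell
#!/bin/sh
# Sketch: build the -m option described in memory-hotplug.txt.
mem_opts() {
    # $1 = initial memory, $2 = hotpluggable slots, $3 = maximum memory
    printf '%s\n' "-m $1,slots=$2,maxmem=$3"
}

mem_opts 1G 3 4G   # matches the document's 1GB, three-slot example
```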
/openbmc/linux/include/linux/ |
H A D | memory.h | 3 * include/linux/memory.h - generic memory definition 9 * Basic handling of the devices is done in drivers/base/memory.c 12 * Memory block are exported via sysfs in the class/memory/devices/ 26 * struct memory_group - a logical group of memory blocks 27 * @nid: The node id for all memory blocks inside the memory group. 28 * @blocks: List of all memory blocks belonging to this memory group. 29 * @present_kernel_pages: Present (online) memory outside ZONE_MOVABLE of this 30 * memory group. 31 * @present_movable_pages: Present (online) memory in ZONE_MOVABLE of this 32 * memory group. [all …]
|