/openbmc/qemu/tests/tcg/multiarch/gdbstub/test-proc-mappings.py:
    1: """Test that gdbstub has access to proc mappings.
    16: mappings = gdb.execute("info proc mappings", False, True)
    17: report(isinstance(mappings, str), "Fetched the mappings from the inferior")
    19: # report("/sha1" in mappings, "Found the test binary name in the mappings")

/openbmc/linux/Documentation/mm/highmem.rst:
    15: at all times. This means the kernel needs to start using temporary mappings of
    48: Temporary Virtual Mappings
    51: The kernel contains several ways of creating temporary mappings. The following
    55: short term mappings. They can be invoked from any context (including
    56: interrupts) but the mappings can only be used in the context which acquired
    64: These mappings are thread-local and CPU-local, meaning that the mapping
    89: mappings, the local mappings are only valid in the context of the caller
    94: Most code can be designed to use thread local mappings. User should
    99: Nesting kmap_local_page() and kmap_atomic() mappings is allowed to a certain
    103: mappings.
    [all …]

/openbmc/linux/Documentation/mm/hugetlbfs_reserv.rst:
    87: of mappings. Location differences are:
    89: - For private mappings, the reservation map hangs off the VMA structure.
    92: - For shared mappings, the reservation map hangs off the inode. Specifically,
    93: inode->i_mapping->private_data. Since shared mappings are always backed
    121: One of the big differences between PRIVATE and SHARED mappings is the way
    124: - For shared mappings, an entry in the reservation map indicates a reservation
    127: - For private mappings, the lack of an entry in the reservation map indicates
    133: For private mappings, hugetlb_reserve_pages() creates the reservation map and
    138: are needed for the current mapping/segment. For private mappings, this is
    139: always the value (to - from). However, for shared mappings it is possible that
    [all …]

/openbmc/linux/drivers/gpu/drm/panfrost/panfrost_gem.c:
    33: * If we still have mappings attached to the BO, there's a problem in  [panfrost_gem_free_object()]
    36: WARN_ON_ONCE(!list_empty(&bo->mappings.list));  [panfrost_gem_free_object()]
    61: mutex_lock(&bo->mappings.lock);  [panfrost_gem_mapping_get()]
    62: list_for_each_entry(iter, &bo->mappings.list, node) {  [panfrost_gem_mapping_get()]
    69: mutex_unlock(&bo->mappings.lock);  [panfrost_gem_mapping_get()]
    110: list_for_each_entry(mapping, &bo->mappings.list, node)  [panfrost_gem_teardown_mappings_locked()]
    158: mutex_lock(&bo->mappings.lock);  [panfrost_gem_open()]
    160: list_add_tail(&mapping->node, &bo->mappings.list);  [panfrost_gem_open()]
    161: mutex_unlock(&bo->mappings.lock);  [panfrost_gem_open()]
    175: mutex_lock(&bo->mappings.lock);  [panfrost_gem_close()]
    [all …]

/openbmc/linux/drivers/gpu/drm/nouveau/nouveau_exec.c:
    27: * and unmap memory. Mappings may be flagged as sparse. Sparse mappings are not
    31: * Userspace may request memory backed mappings either within or outside of the
    33: * mapping. Subsequently requested memory backed mappings within a sparse
    35: * mapping. If such memory backed mappings are unmapped the kernel will make
    37: * Requests to unmap a sparse mapping that still contains memory backed mappings
    38: * will result in those memory backed mappings being unmapped first.
    40: * Unmap requests are not bound to the range of existing mappings and can even
    41: * overlap the bounds of sparse mappings. For such a request the kernel will
    42: * make sure to unmap all memory backed mappings within the given range,
    43: * splitting up memory backed mappings which are only partially contained
    [all …]

/openbmc/qemu/util/vfio-helpers.c:
    86: * - Fixed mappings of HVAs are assigned "low" IOVAs in the range of
    93: * mappings. At each qemu_vfio_dma_reset_temporary() call, the whole area
    95: * mappings are completed before calling.
    99: IOVAMapping *mappings;  [member]
    509: * VFIO may pin all memory inside mappings, resulting it in pinning  [qemu_vfio_open_pci()]
    532: trace_qemu_vfio_dump_mapping(s->mappings[i].host,  [qemu_vfio_dump_mappings()]
    533: s->mappings[i].iova,  [qemu_vfio_dump_mappings()]
    534: s->mappings[i].size);  [qemu_vfio_dump_mappings()]
    547: IOVAMapping *p = s->mappings;  [qemu_vfio_find_mapping()]
    570: } else if (mid < &s->mappings[s->nr_mappings - 1]  [qemu_vfio_find_mapping()]
    [all …]

/openbmc/qemu/util/mmap-alloc.c:
    97: * shared mappings. For shared mappings, all mappers have to specify  [map_noreserve_effective()]
    106: * Accountable mappings in the kernel that can be affected by MAP_NORESEVE  [map_noreserve_effective()]
    107: * are private writable mappings (see mm/mmap.c:accountable_mapping() in  [map_noreserve_effective()]
    108: * Linux). For all shared or readonly mappings, MAP_NORESERVE is always  [map_noreserve_effective()]
    118: * MAP_NORESERVE is globally ignored for applicable !hugetlb mappings when  [map_noreserve_effective()]
    158: * On ppc64 mappings in the same segment (aka slice) must share the same  [mmap_reserve()]
    240: /* Mappings in the same segment must share the same page size */  [mmap_guard_pagesize()]

/openbmc/linux/Documentation/arch/ia64/aliasing.rst:
    64: Kernel Identity Mappings
    67: Linux/ia64 identity mappings are done with large pages, currently
    68: either 16MB or 64MB, referred to as "granules." Cacheable mappings
    78: Uncacheable mappings are not speculative, so the processor will
    80: software. This allows UC identity mappings to cover granules that
    84: User Mappings
    87: User mappings are typically done with 16K or 64K pages. The smaller
    94: There are several ways the kernel creates new mappings:
    99: This uses remap_pfn_range(), which creates user mappings. These
    100: mappings may be either WB or UC. If the region being mapped
    [all …]

/openbmc/openbmc-test-automation/redfish/account_service/test_redfish_privilege_registry.robot:
    62: Verify Redfish Privilege Registry Mappings Properties For Account Service
    63: [Documentation] Verify Privilege Registry Account Service Mappings resource properties.
    69: # "Mappings": [
    109: # Get mappings properties for Entity: Account Service.
    110: @{mappings}= Get From Dictionary ${resp.dict} Mappings
    112: Should Be Equal ${mappings[${account_service}]['OperationMap']['GET'][0]['Privilege'][0]}
    114: Should Be Equal ${mappings[${account_service}]['OperationMap']['HEAD'][0]['Privilege'][0]}
    116: Should Be Equal ${mappings[${account_service}]['OperationMap']['PATCH'][0]['Privilege'][0]}
    118: Should Be Equal ${mappings[${account_service}]['OperationMap']['PUT'][0]['Privilege'][0]}
    120: Should Be Equal ${mappings[${account_service}]['OperationMap']['DELETE'][0]['Privilege'][0]}
    [all …]

/openbmc/linux/arch/x86/include/asm/invpcid.h:
    13: * mappings, we don't want the compiler to reorder any subsequent  [__invpcid()]
    25: /* Flush all mappings for a given pcid and addr, not including globals. */
    32: /* Flush all mappings for a given PCID, not including globals. */
    38: /* Flush all mappings, including globals, for all PCIDs. */
    44: /* Flush all mappings for all PCIDs except globals. */

/openbmc/linux/Documentation/admin-guide/mm/nommu-mmap.rst:
    29: These behave very much like private mappings, except that they're
    133: In the no-MMU case, however, anonymous mappings are backed by physical
    147: (#) A list of all the private copy and anonymous mappings on the system is
    150: (#) A list of all the mappings in use by a process is visible through
    176: mappings made by a process or if the mapping in which the address lies does not
    191: Shared mappings may not be moved. Shareable mappings may not be moved either,
    196: mappings, move parts of existing mappings or resize parts of mappings. It must
    243: mappings may still be mapped directly off the device under some
    250: Provision of shared mappings on memory backed files is similar to the provision
    253: of pages and permit mappings to be made on that.
    [all …]

/openbmc/linux/drivers/gpu/drm/drm_gem_atomic_helper.c:
    21: * synchronization helpers, and plane state and framebuffer BO mappings
    43: * a mapping of the shadow buffer into kernel address space. The mappings
    47: * The helpers for shadow-buffered planes establish and release mappings,
    70: * In the driver's atomic-update function, shadow-buffer mappings are available
    87: * struct &drm_shadow_plane_state.map. The mappings are valid while the state
    92: * callbacks. Access to shadow-buffer mappings is similar to regular
    212: * The function does not duplicate existing mappings of the shadow buffers.
    213: * Mappings are maintained during the atomic commit by the plane's prepare_fb
    234: * The function does not duplicate existing mappings of the shadow buffers.
    235: * Mappings are maintained during the atomic commit by the plane's prepare_fb
    [all …]

/openbmc/linux/Documentation/arch/arm/memory.rst:
    62: Machine specific static mappings are also
    72: PKMAP_BASE    PAGE_OFFSET-1    Permanent kernel mappings
    78: placed here using dynamic mappings.
    85: 00001000    TASK_SIZE-1    User space mappings
    86: Per-thread mappings are placed here via
    96: Please note that mappings which collide with the above areas may result
    103: must set up their own mappings using open() and mmap().

/openbmc/linux/arch/x86/mm/mem_encrypt_identity.c:
    13: * Since we're dealing with identity mappings, physical and virtual
    257: * entries that are needed. Those mappings will be covered mostly  [sme_pgtable_calc()]
    260: * mappings. For mappings that are not 2MB aligned, PTE mappings  [sme_pgtable_calc()]
    355: * One PGD for both encrypted and decrypted mappings and a set of  [sme_encrypt_kernel()]
    356: * PUDs and PMDs for each of the encrypted and decrypted mappings.  [sme_encrypt_kernel()]
    381: * mappings are populated.  [sme_encrypt_kernel()]
    402: * decrypted kernel mappings are created.  [sme_encrypt_kernel()]
    423: /* Add encrypted kernel (identity) mappings */  [sme_encrypt_kernel()]
    429: /* Add decrypted, write-protected kernel (non-identity) mappings */  [sme_encrypt_kernel()]
    436: /* Add encrypted initrd (identity) mappings */  [sme_encrypt_kernel()]
    [all …]

/openbmc/linux/Documentation/driver-api/io-mapping.rst:
    44: used with mappings created by io_mapping_create_wc()
    46: Temporary mappings are only valid in the context of the caller. The mapping
    56: Nested mappings need to be undone in reverse order because the mapping
    65: The mappings are released with::
    83: The mappings are released with::

/openbmc/linux/tools/testing/selftests/kvm/kvm_page_table_test.c:
    112: * Then KVM will create normal page mappings or huge block  [guest_code()]
    113: * mappings for them.  [guest_code()]
    128: * normal page mappings from RO to RW if memory backing src type  [guest_code()]
    130: * mappings into normal page mappings if memory backing src type  [guest_code()]
    152: * this will create new mappings at the smallest  [guest_code()]
    166: * split page mappings back to block mappings. And a TLB  [guest_code()]
    168: * page mappings are not fully invalidated.  [guest_code()]
    367: /* Test the stage of KVM creating mappings */  [run_test()]
    377: /* Test the stage of KVM updating mappings */  [run_test()]
    390: /* Test the stage of KVM adjusting mappings */  [run_test()]

/openbmc/linux/arch/sh/mm/pmb.c:
    50: /* Adjacent entry link for contiguous multi-entry mappings */
    172: * Finally for sizes that involve compound mappings, walk  [pmb_mapping_exists()]
    424: * Small mappings need to go through the TLB.  [pmb_remap_caller()]
    530: pr_info("PMB: boot mappings:\n");  [pmb_notify()]
    551: * Sync our software copy of the PMB mappings with those in hardware. The
    552: * mappings in the hardware PMB were either set up by the bootloader or
    561: * Run through the initial boot mappings, log the established  [pmb_synchronize()]
    563: * PPN range. Specifically, we only care about existing mappings  [pmb_synchronize()]
    567: * loader can establish multi-page mappings with the same caching  [pmb_synchronize()]
    573: * jumping between the cached and uncached mappings and tearing  [pmb_synchronize()]
    [all …]

/openbmc/linux/mm/Kconfig.debug:
    99: bool "Check for invalid mappings in user page tables"
    163: bool "Warn on W+X mappings at boot"
    168: Generate a warning if any W+X mappings are found at boot.
    171: mappings after applying NX, as such mappings are a security risk.
    175: <arch>/mm: Checked W+X mappings: passed, no W+X pages found.
    179: <arch>/mm: Checked W+X mappings: failed, <N> W+X pages found.
    182: still fine, as W+X mappings are not a security hole in

/openbmc/linux/arch/hexagon/include/asm/mem-layout.h:
    71: * Permanent IO mappings will live at 0xfexx_xxxx
    80: * "permanent kernel mappings", defined as long-lasting mappings of
    92: * "Permanent Kernel Mappings"; fancy (or less fancy) PTE table

/openbmc/linux/arch/arm64/mm/pageattr.c:
    83: * Kernel VA mappings are always live, and splitting live section  [change_memory_common()]
    84: * mappings into page mappings may cause TLB conflicts. This means  [change_memory_common()]
    88: * Let's restrict ourselves to mappings created by vmalloc (or vmap).  [change_memory_common()]
    89: * Those are guaranteed to consist entirely of page mappings, and  [change_memory_common()]
    210: * p?d_present(). When debug_pagealloc is enabled, sections mappings are

/openbmc/linux/drivers/gpu/drm/tegra/submit.c:
    150: xa_lock(&context->mappings);  [tegra_drm_mapping_get()]
    152: mapping = xa_load(&context->mappings, id);  [tegra_drm_mapping_get()]
    156: xa_unlock(&context->mappings);  [tegra_drm_mapping_get()]
    261: struct tegra_drm_used_mapping *mappings;  [submit_process_bufs(), local]
    273: mappings = kcalloc(args->num_bufs, sizeof(*mappings), GFP_KERNEL);  [submit_process_bufs()]
    274: if (!mappings) {  [submit_process_bufs()]
    303: mappings[i].mapping = mapping;  [submit_process_bufs()]
    304: mappings[i].flags = buf->flags;  [submit_process_bufs()]
    307: job_data->used_mappings = mappings;  [submit_process_bufs()]
    316: tegra_drm_mapping_put(mappings[i].mapping);  [submit_process_bufs()]
    [all …]

/openbmc/linux/drivers/soc/aspeed/Kconfig:
    13: Control LPC firmware cycle mappings through ioctl()s. The driver
    43: Control ASPEED P2A VGA MMIO to BMC mappings through ioctl()s. The
    44: driver also provides an interface for userspace mappings to a

/openbmc/linux/Documentation/driver-api/usb/dma.rst:
    19: manage dma mappings for existing dma-ready buffers (see below).
    27: don't manage dma mappings for URBs.
    41: IOMMU to manage the DMA mappings. It can cost MUCH more to set up and
    42: tear down the IOMMU mappings with each request than perform the I/O!
    64: "streaming" DMA mappings.)

/openbmc/linux/include/drm/drm_cache.h:
    56: * for some buffers, both the CPU and the GPU use uncached mappings,  [drm_arch_can_wc_memory()]
    59: * The use of uncached GPU mappings relies on the correct implementation  [drm_arch_can_wc_memory()]
    61: * will use cached mappings nonetheless. On x86 platforms, this does not  [drm_arch_can_wc_memory()]
    62: * seem to matter, as uncached CPU mappings will snoop the caches in any  [drm_arch_can_wc_memory()]

/openbmc/linux/arch/sh/kernel/head_32.S:
    91: * Reconfigure the initial PMB mappings setup by the hardware.
    102: * our address space and the initial mappings may not map PAGE_OFFSET
    105: * Once we've setup cached and uncached mappings we clear the rest of the
    156: * existing mappings that match the initial mappings VPN/PPN.
    175: cmp/eq r0, r8 /* Check for valid __MEMORY_START mappings */
    185: * mappings.
