.. _page_migration:

==============
Page migration
==============

Page migration allows moving the physical location of pages between
nodes in a NUMA system while the process is running. This means that the
virtual addresses that the process sees do not change. However, the
system rearranges the physical location of those pages.

Also see :ref:`Heterogeneous Memory Management (HMM) <hmm>`
for migrating pages to or from device private memory.

The main intent of page migration is to reduce the latency of memory accesses
by moving pages near to the processor where the process accessing that memory
is running.

Page migration allows a process to manually relocate the node on which its
pages are located through the MPOL_MF_MOVE and MPOL_MF_MOVE_ALL options while
setting a new memory policy via mbind(). The pages of a process can also be
relocated from another process using the sys_migrate_pages() function call.
The migrate_pages() function call takes two sets of nodes and moves pages of a
process that are located on the from nodes to the destination nodes.
Page migration functions are provided by the numactl package by Andi Kleen
(a version later than 0.9.3 is required; get it from
https://github.com/numactl/numactl.git). numactl provides libnuma,
which provides an interface similar to other NUMA functionality for page
migration. ``cat /proc/<pid>/numa_maps`` allows an easy review of where the
pages of a process are located. See also the numa_maps documentation in the
proc(5) man page.

Manual migration is useful if, for example, the scheduler has relocated
a process to a processor on a distant node. A batch scheduler or an
administrator may detect the situation and move the pages of the process
nearer to the new processor. The kernel itself only provides
manual page migration support. Automatic page migration may be implemented
through user space processes that move pages.
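The mbind()-based interface described above can be sketched from user space
roughly as follows. This is a minimal illustration, not taken from the
numactl sources; it issues the raw syscall so libnuma is not needed, and it
tolerates failure, since mbind() may return ENOSYS on kernels without NUMA
support or EPERM in sandboxes whose seccomp policy blocks it::

.. code-block:: c

    /* Sketch: bind an anonymous region to node 0 and ask the kernel to
     * migrate any of its pages that currently live elsewhere (MPOL_MF_MOVE).
     */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    #define MPOL_BIND    2        /* policy constant, as in <numaif.h> */
    #define MPOL_MF_MOVE (1 << 1) /* move pages owned by this process  */

    int main(void)
    {
        long pagesz = sysconf(_SC_PAGESIZE);
        void *buf = mmap(NULL, pagesz, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED)
            return 1;
        memset(buf, 0, pagesz);       /* fault the page in somewhere */

        unsigned long nodemask = 1UL; /* bit 0 set: node 0 only */
        long ret = syscall(SYS_mbind, buf, (unsigned long)pagesz, MPOL_BIND,
                           &nodemask, 8 * sizeof(nodemask), MPOL_MF_MOVE);
        if (ret == 0)
            printf("pages now bound to node 0\n");
        else
            perror("mbind");          /* e.g. EPERM under seccomp */
        printf("done\n");
        return 0;
    }

On a single-node machine the call is effectively a no-op: the policy is
recorded and there is nowhere else for the pages to move from.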

A special function call, move_pages(), allows the moving of individual pages
within a process. For example, a NUMA profiler may obtain a log showing
frequent off-node accesses and may use the result to move pages to more
advantageous locations.

Larger installations usually partition the system using cpusets into
sections of nodes. Paul Jackson has equipped cpusets with the ability to
move pages when a task is moved to another cpuset (see
:ref:`CPUSETS <cpusets>`).
Cpusets allow the automation of process locality. If a task is moved to
a new cpuset, then all of its pages are moved with it so that the
performance of the process does not sink dramatically. The pages of
processes in a cpuset are also moved if the allowed memory nodes of the
cpuset are changed.

Page migration preserves the relative location of pages within a group of
nodes: a particular memory allocation pattern is maintained even after the
process has been migrated. This is necessary in order to preserve the
memory latencies, so that processes run with similar performance after
migration.

Page migration occurs in several steps. First a high level
description for those trying to use migrate_pages() from the kernel
(for userspace usage see Andi Kleen's numactl package mentioned above)
and then a low level description of how the low level details work.

In kernel use of migrate_pages()
================================

1. Remove pages from the LRU.

   Lists of pages to be migrated are generated by scanning over
   pages and moving them into lists. This is done by
   calling isolate_lru_page().
   Calling isolate_lru_page() increases the references to the page
   so that it cannot vanish while the page migration occurs.
   It also prevents the swapper or other scans from encountering
   the page.

2. We need to have a function of type new_page_t that can be
   passed to migrate_pages().
   This function should figure out
   how to allocate the correct new page given the old page.

3. The migrate_pages() function is called, which attempts
   to do the migration. It will call the function to allocate
   the new page for each page that is considered for moving.

How migrate_pages() works
=========================

migrate_pages() does several passes over its list of pages. A page is moved
if all references to it are removable at the time. The page has
already been removed from the LRU via isolate_lru_page() and the refcount
has been increased so that the page cannot be freed while page migration
occurs.

Steps:

1. Lock the page to be migrated.

2. Ensure that writeback is complete.

3. Lock the new page that we want to move to. It is locked so that accesses
   to this (not yet up-to-date) page immediately block while the move is in
   progress.

4. All the page table references to the page are converted to migration
   entries. This decreases the mapcount of the page. If the resulting
   mapcount is not zero then we do not migrate the page. All user space
   processes that attempt to access the page will now wait on the page lock
   or wait for the migration page table entry to be removed.

5. The i_pages lock is taken. This will cause all processes trying
   to access the page via the mapping to block on the spinlock.

6. The refcount of the page is examined and we back out if references remain.
   Otherwise, we know that we are the only one referencing this page.

7. The radix tree is checked and if it does not contain the pointer to this
   page then we back out because someone else modified the radix tree.

8. The new page is prepped with some settings from the old page so that
   accesses to the new page will discover a page with the correct settings.

9. The radix tree is changed to point to the new page.
10. The reference count of the old page is dropped because the address space
    reference is gone. A reference to the new page is established because
    the new page is referenced by the address space.

11. The i_pages lock is dropped. With that, lookups in the mapping
    become possible again. Processes will move from spinning on the lock
    to sleeping on the locked new page.

12. The page contents are copied to the new page.

13. The remaining page flags are copied to the new page.

14. The old page flags are cleared to indicate that the page does
    not provide any information anymore.

15. Queued up writeback on the new page is triggered.

16. If migration entries were inserted into the page table, then replace them
    with real ptes. Doing so will enable access for user space processes not
    already waiting for the page lock.

17. The page locks are dropped from the old and new page.
    Processes waiting on the page lock will redo their page faults
    and will reach the new page.

18. The new page is moved to the LRU and can be scanned by the swapper,
    etc. again.

Non-LRU page migration
======================

Although migration originally aimed for reducing the latency of memory
accesses for NUMA, compaction also uses migration to create high-order
pages. For compaction purposes, it is also useful to be able to move
non-LRU pages, such as zsmalloc and virtio-balloon pages.

If a driver wants to make its pages movable, it should define a struct
movable_operations. It then needs to call __SetPageMovable() on each
page that it may be able to move. This uses the ``page->mapping`` field,
so this field is not available for the driver to use for other purposes.

Monitoring Migration
====================

The following events (counters) can be used to monitor page migration.

1. PGMIGRATE_SUCCESS: Normal page migration success.
   Each count means that a page was migrated. If the page was a non-THP and
   non-hugetlb page, then this counter is increased by one. If the page was
   a THP or hugetlb page, then this counter is increased by the number of
   THP or hugetlb subpages. For example, migration of a single 2MB THP that
   has 4KB-size base pages (subpages) will cause this counter to increase
   by 512.

2. PGMIGRATE_FAIL: Normal page migration failure. Same counting rules as for
   PGMIGRATE_SUCCESS, above: this will be increased by the number of subpages
   if it was a THP or hugetlb page.

3. THP_MIGRATION_SUCCESS: A THP was migrated without being split.

4. THP_MIGRATION_FAIL: A THP could not be migrated, nor could it be split.

5. THP_MIGRATION_SPLIT: A THP was migrated, but not as such: first, the THP
   had to be split. After splitting, a migration retry was used for its
   sub-pages.

THP_MIGRATION_* events also update the appropriate PGMIGRATE_SUCCESS or
PGMIGRATE_FAIL events. For example, a THP migration failure will cause both
THP_MIGRATION_FAIL and PGMIGRATE_FAIL to increase.

Christoph Lameter, May 8, 2006.
Minchan Kim, Mar 28, 2016.

.. kernel-doc:: include/linux/migrate.h
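The counters above are exported through ``/proc/vmstat``. A small userspace
sketch that reads them is shown below; the field names match the event names
above in lower case, and a field is simply absent on kernels built without
the corresponding support, which the sketch reports rather than treating as
an error::

.. code-block:: c

    /* Print the page migration counters from /proc/vmstat. */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *events[] = { "pgmigrate_success", "pgmigrate_fail",
                                 "thp_migration_success", "thp_migration_fail",
                                 "thp_migration_split" };
        unsigned long long val[5] = { 0 };
        int seen[5] = { 0 };
        char line[256];

        FILE *f = fopen("/proc/vmstat", "r");
        if (!f) {
            perror("/proc/vmstat");
            return 1;
        }
        while (fgets(line, sizeof(line), f)) {
            for (int i = 0; i < 5; i++) {
                size_t n = strlen(events[i]);
                /* match "name value"; the space guards against prefixes */
                if (!strncmp(line, events[i], n) && line[n] == ' ') {
                    sscanf(line + n, "%llu", &val[i]);
                    seen[i] = 1;
                }
            }
        }
        fclose(f);

        for (int i = 0; i < 5; i++)
            printf("%s: %s%llu\n", events[i],
                   seen[i] ? "" : "(not exported) ", val[i]);
        return 0;
    }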