/openbmc/linux/include/linux/ |
memcontrol.h | diff bb2a0de92c891b8feeedc0178acb3ae009d899a8 Tue Jul 26 18:08:22 CDT 2011 KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> memcg: consolidate memory cgroup lru stat functions
In mm/memcontrol.c there are many LRU stat functions, such as:
mem_cgroup_zone_nr_lru_pages
mem_cgroup_node_nr_file_lru_pages
mem_cgroup_nr_file_lru_pages
mem_cgroup_node_nr_anon_lru_pages
mem_cgroup_nr_anon_lru_pages
mem_cgroup_node_nr_unevictable_lru_pages
mem_cgroup_nr_unevictable_lru_pages
mem_cgroup_node_nr_lru_pages
mem_cgroup_nr_lru_pages
mem_cgroup_get_local_zonestat
Some of them are compiled only under #if MAX_NUMNODES > 1 and others are not, which is inconsistent. This patch consolidates them all into three functions:
mem_cgroup_zone_nr_lru_pages()
mem_cgroup_node_nr_lru_pages()
mem_cgroup_nr_lru_pages()
For these functions, the "which LRU?" information is passed as a bit mask.
Example: mem_cgroup_nr_lru_pages(mem, BIT(LRU_ACTIVE_ANON))
I also added macros such as ALL_LRU, ALL_LRU_FILE, and ALL_LRU_ANON; a sketch of the idea follows below.
Example: mem_cgroup_nr_lru_pages(mem, ALL_LRU)
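The following is a minimal, self-contained userspace sketch of the mask-based interface described above, not the kernel code itself: the lru_list values mirror the kernel's enum, but struct mem_cgroup, its lru_count[] array, and main() are invented here purely for illustration.

/*
 * Toy model of the "which LRU?" mask interface from this commit.
 * The counter storage is illustrative only.
 */
#include <stdio.h>

#define BIT(nr) (1U << (nr))

enum lru_list {
    LRU_INACTIVE_ANON,
    LRU_ACTIVE_ANON,
    LRU_INACTIVE_FILE,
    LRU_ACTIVE_FILE,
    LRU_UNEVICTABLE,
    NR_LRU_LISTS
};

/* Convenience masks, using the names from the commit message. */
#define ALL_LRU_ANON (BIT(LRU_INACTIVE_ANON) | BIT(LRU_ACTIVE_ANON))
#define ALL_LRU_FILE (BIT(LRU_INACTIVE_FILE) | BIT(LRU_ACTIVE_FILE))
#define ALL_LRU      ((1U << NR_LRU_LISTS) - 1)

/* Stand-in for a memory cgroup's per-LRU page counters. */
struct mem_cgroup {
    unsigned long lru_count[NR_LRU_LISTS];
};

/* Sum the counters for every LRU list selected by lru_mask. */
static unsigned long
mem_cgroup_nr_lru_pages(struct mem_cgroup *mem, unsigned int lru_mask)
{
    unsigned long total = 0;
    int lru;

    for (lru = 0; lru < NR_LRU_LISTS; lru++)
        if (lru_mask & BIT(lru))
            total += mem->lru_count[lru];
    return total;
}

int main(void)
{
    struct mem_cgroup memcg = {
        .lru_count = { 10, 20, 30, 40, 5 },
    };

    /* The two example calls from the commit message. */
    printf("%lu\n", mem_cgroup_nr_lru_pages(&memcg, BIT(LRU_ACTIVE_ANON)));
    printf("%lu\n", mem_cgroup_nr_lru_pages(&memcg, ALL_LRU));
    return 0;
}

One call with a single-bit mask and one call with ALL_LRU replace what used to be a family of per-LRU helper functions.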
Incidentally, considering the NUMA placement of the counters, this patch also improves the memory-access pattern.
Before the patch, when we gather all LRU information, we scan in the following order: for_each_lru -> for_each_node -> for_each_zone.
This means we touch cache lines on different nodes in turn.
After the patch, we scan for_each_node -> for_each_zone -> for_each_lru(mask).
Then we gather counters that sit in the same cache line at once; the loop-order sketch below shows the difference.
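To make the two orderings concrete, here is a loop-order sketch reusing the BIT() macro and enum lru_list from the previous example. The per_node_zone_lru[] array and the NR_NODES/NR_ZONES limits are invented for illustration; only the nesting order of the loops reflects the commit.

#define NR_NODES 2
#define NR_ZONES 3

/* Counters laid out per node, then per zone, then per LRU list. */
unsigned long per_node_zone_lru[NR_NODES][NR_ZONES][NR_LRU_LISTS];

/*
 * Before the patch: for_each_lru -> for_each_node -> for_each_zone.
 * Consecutive reads jump between nodes, touching remote cache lines.
 */
unsigned long sum_before(unsigned int lru_mask)
{
    unsigned long total = 0;
    int lru, nid, zid;

    for (lru = 0; lru < NR_LRU_LISTS; lru++) {
        if (!(lru_mask & BIT(lru)))
            continue;
        for (nid = 0; nid < NR_NODES; nid++)
            for (zid = 0; zid < NR_ZONES; zid++)
                total += per_node_zone_lru[nid][zid][lru];
    }
    return total;
}

/*
 * After the patch: for_each_node -> for_each_zone -> for_each_lru(mask).
 * All selected counters of one zone are read together, so values that
 * share a cache line are gathered at once.
 */
unsigned long sum_after(unsigned int lru_mask)
{
    unsigned long total = 0;
    int nid, zid, lru;

    for (nid = 0; nid < NR_NODES; nid++)
        for (zid = 0; zid < NR_ZONES; zid++)
            for (lru = 0; lru < NR_LRU_LISTS; lru++)
                if (lru_mask & BIT(lru))
                    total += per_node_zone_lru[nid][zid][lru];
    return total;
}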
[akpm@linux-foundation.org: fix warnings, build error]
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Cc: Balbir Singh <bsingharora@gmail.com>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Ying Han <yinghan@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
mmzone.h | diff bb2a0de92c891b8feeedc0178acb3ae009d899a8 Tue Jul 26 18:08:22 CDT 2011 KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> memcg: consolidate memory cgroup lru stat functions
|
/openbmc/linux/mm/ |
vmscan.c | diff bb2a0de92c891b8feeedc0178acb3ae009d899a8 Tue Jul 26 18:08:22 CDT 2011 KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> memcg: consolidate memory cgroup lru stat functions
|
memcontrol.c | diff bb2a0de92c891b8feeedc0178acb3ae009d899a8 Tue Jul 26 18:08:22 CDT 2011 KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> memcg: consolidate memory cgroup lru stat functions
|