/openbmc/linux/net/iucv/
af_iucv.c | diff 87fb4b7b533073eeeaed0b6bf7c2328995f6c075 Thu Oct 13 02:28:54 CDT 2011 Eric Dumazet <eric.dumazet@gmail.com> net: more accurate skb truesize
skb truesize currently accounts for the sk_buff struct and part of the skb head; kmalloc() rounding is also ignored.
Considering that skb_shared_info is larger than sk_buff, it's time to take it into account for better memory accounting.
This patch introduces the SKB_TRUESIZE(X) macro to centralize these assumptions in a single place.
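For reference, the macro added to include/linux/skbuff.h charges each skb for its cache-line-aligned struct overhead as well as its data bytes; its definition is roughly:

/* return minimum truesize of one skb containing X bytes of data */
#define SKB_TRUESIZE(X) ((X) +						\
			 SKB_DATA_ALIGN(sizeof(struct sk_buff)) +	\
			 SKB_DATA_ALIGN(sizeof(struct skb_shared_info)))

A limit can then be expressed in terms of real memory cost, for instance something like 2 * SKB_TRUESIZE(64 * 1024) for a socket send buffer, instead of a bare byte count.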
At skb allocation time, we put the skb_shared_info struct at the exact end of the skb head, to make better use of memory (lowering the number of reallocations), since kmalloc() gives us power-of-two memory blocks.
Unless SLAB/SLUB debug is active, both skb->head and skb_shared_info are aligned to cache lines, as before.
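A minimal sketch of the allocation idea, not the actual __alloc_skb code (the helper name, signature, and use of plain kmalloc() are invented for illustration):

#include <linux/skbuff.h>
#include <linux/slab.h>

/* Illustrative only: reserve room for skb_shared_info, let kmalloc()
 * round the request up as it sees fit, then use ksize() to turn the
 * whole returned block into head room, which places skb_shared_info
 * at the exact end of the allocation.
 */
static u8 *skb_head_alloc_sketch(unsigned int len, gfp_t gfp_mask,
				 unsigned int *head_size)
{
	unsigned int reserve = SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
	u8 *data = kmalloc(len + reserve, gfp_mask);

	if (!data)
		return NULL;

	/* kmalloc() may have given us more than we asked for; claim it. */
	*head_size = ksize(data) - reserve;
	return data;
}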
Note: this patch might trigger performance regressions in misconfigured protocol stacks, by hitting per-socket or global memory limits that were previously not reached. But it's a necessary step toward more accurate memory accounting.
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
CC: Andi Kleen <ak@linux.intel.com>
CC: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

/openbmc/linux/net/sctp/
protocol.c | diff 87fb4b7b533073eeeaed0b6bf7c2328995f6c075 Thu Oct 13 02:28:54 CDT 2011 Eric Dumazet <eric.dumazet@gmail.com> net: more accurate skb truesize

/openbmc/linux/net/ipv6/
icmp.c | diff 87fb4b7b533073eeeaed0b6bf7c2328995f6c075 Thu Oct 13 02:28:54 CDT 2011 Eric Dumazet <eric.dumazet@gmail.com> net: more accurate skb truesize

/openbmc/linux/net/ipv4/
icmp.c | diff 87fb4b7b533073eeeaed0b6bf7c2328995f6c075 Thu Oct 13 02:28:54 CDT 2011 Eric Dumazet <eric.dumazet@gmail.com> net: more accurate skb truesize
tcp_input.c | diff 87fb4b7b533073eeeaed0b6bf7c2328995f6c075 Thu Oct 13 02:28:54 CDT 2011 Eric Dumazet <eric.dumazet@gmail.com> net: more accurate skb truesize

/openbmc/linux/net/core/
skbuff.c | diff bc417e30f8dff6e8657005c4317cd71239e53375 Wed Nov 02 08:40:28 CDT 2011 Tony Lindgren <tony@atomide.com> net: Add back alignment for size for __alloc_skb
Commit 87fb4b7b533073eeeaed0b6bf7c2328995f6c075 (net: more accurate skb truesize) changed the alignment of size. This can cause problems at least on some machines with NFS root:
Unhandled fault: alignment exception (0x801) at 0xc183a43a
Internal error: : 801 [#1] PREEMPT
Modules linked in:
CPU: 0 Not tainted (3.1.0-08784-g5eeee4a #733)
pc : [<c02fbba0>] lr : [<c02fbb9c>] psr: 60000013
sp : c180fef8 ip : 00000000 fp : c181f580
r10: 00000000 r9 : c044b28c r8 : 00000001
r7 : c183a3a0 r6 : c1835be0 r5 : c183a412 r4 : 000001f2
r3 : 00000000 r2 : 00000000 r1 : ffffffe6 r0 : c183a43a
Flags: nZCv IRQs on FIQs on Mode SVC_32 ISA ARM Segment kernel
Control: 0005317f Table: 10004000 DAC: 00000017
Process swapper (pid: 1, stack limit = 0xc180e270)
Stack: (0xc180fef8 to 0xc1810000)
fee0: 00000024 00000000
ff00: 00000000 c183b9c0 c183b8e0 c044b28c c0507ccc c019dfc4 c180ff2c c0503cf8
ff20: c180ff4c c180ff4c 00000000 c1835420 c182c740 c18349c0 c05233c0 00000000
ff40: 00000000 c00e6bb8 c180e000 00000000 c04dd82c c0507e7c c050cc18 c183b9c0
ff60: c05233c0 00000000 00000000 c01f34f4 c0430d70 c019d364 c04dd898 c04dd898
ff80: c04dd82c c0507e7c c180e000 00000000 c04c584c c01f4918 c04dd898 c04dd82c
ffa0: c04ddd28 c180e000 00000000 c0008758 c181fa60 3231d82c 00000037 00000000
ffc0: 00000000 c04dd898 c04dd82c c04ddd28 00000013 00000000 00000000 00000000
ffe0: 00000000 c04b2224 00000000 c04b21a0 c001056c c001056c 00000000 00000000
Function entered at [<c02fbba0>] from [<c019dfc4>]
Function entered at [<c019dfc4>] from [<c01f34f4>]
Function entered at [<c01f34f4>] from [<c01f4918>]
Function entered at [<c01f4918>] from [<c0008758>]
Function entered at [<c0008758>] from [<c04b2224>]
Function entered at [<c04b2224>] from [<c001056c>]
Code: e1a00005 e3a01028 ebfa7cb0 e35a0000 (e5858028)
Here PC is at __alloc_skb and &shinfo->dataref is unaligned because skb->end can be unaligned without this patch.
As explained by Eric Dumazet <eric.dumazet@gmail.com>, this happens only with SLOB, and not with SLAB or SLUB:
* Eric Dumazet <eric.dumazet@gmail.com> [111102 15:56]:
>
> Your patch is absolutely needed, I completely forgot about SLOB :(
>
> since, kmalloc(386) on SLOB gives exactly ksize=386 bytes, not nearest
> power of two.
>
> [ 60.305763] malloc(size=385)->ffff880112c11e38 ksize=386 -> nsize=2
> [ 60.305921] malloc(size=385)->ffff88007c92ce28 ksize=386 -> nsize=2
> [ 60.306898] malloc(size=656)->ffff88007c44ad28 ksize=656 -> nsize=272
> [ 60.325385] malloc(size=656)->ffff88007c575868 ksize=656 -> nsize=272
> [ 60.325531] malloc(size=656)->ffff88011c777230 ksize=656 -> nsize=272
> [ 60.325701] malloc(size=656)->ffff880114011008 ksize=656 -> nsize=272
> [ 60.346716] malloc(size=385)->ffff880114142008 ksize=386 -> nsize=2
> [ 60.346900] malloc(size=385)->ffff88011c777690 ksize=386 -> nsize=2
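A minimal sketch of the idea behind the fix, assuming 64-byte cache lines for the figures in the comment; the helper name and signature are invented for illustration and this is not the literal patch:

#include <linux/skbuff.h>
#include <linux/slab.h>

/* Rounding the requested head size up to a cache-line multiple before
 * the allocation keeps the head size derived from ksize() aligned even
 * on SLOB, where ksize() returns (essentially) the exact requested size
 * rather than a power-of-two block.
 *
 * With 64-byte cache lines: a 385-byte request without the rounding
 * yields a head size of roughly 385, so skb->end and &shinfo->dataref
 * land at an unaligned address and fault on strict-alignment CPUs;
 * with the rounding the head size becomes SKB_DATA_ALIGN(385) == 448
 * and skb_shared_info stays cache-line aligned.
 */
static u8 *skb_head_alloc_aligned_sketch(unsigned int len, gfp_t gfp_mask,
					 unsigned int *head_size)
{
	unsigned int reserve = SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
	u8 *data;

	len = SKB_DATA_ALIGN(len);	/* the alignment this patch adds back */
	data = kmalloc(len + reserve, gfp_mask);
	if (!data)
		return NULL;

	/* Even when ksize(data) == len + reserve (SLOB), this is aligned. */
	*head_size = ksize(data) - reserve;
	return data;
}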
Signed-off-by: Tony Lindgren <tony@atomide.com>
Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

skbuff.c | diff 87fb4b7b533073eeeaed0b6bf7c2328995f6c075 Thu Oct 13 02:28:54 CDT 2011 Eric Dumazet <eric.dumazet@gmail.com> net: more accurate skb truesize
sock.c | diff 87fb4b7b533073eeeaed0b6bf7c2328995f6c075 Thu Oct 13 02:28:54 CDT 2011 Eric Dumazet <eric.dumazet@gmail.com> net: more accurate skb truesize

/openbmc/linux/include/linux/
skbuff.h | diff 87fb4b7b533073eeeaed0b6bf7c2328995f6c075 Thu Oct 13 02:28:54 CDT 2011 Eric Dumazet <eric.dumazet@gmail.com> net: more accurate skb truesize