/openbmc/linux/include/net/ |
H A D | secure_seq.h | diff 73f156a6e8c1074ac6327e0abd1169e95eb66463 Mon Jun 02 07:26:03 CDT 2014 Eric Dumazet <edumazet@google.com> inetpeer: get rid of ip_id_count
Ideally, we would generate the IP ID using a per-destination-IP generator.
Linux kernels used the inet_peer cache for this purpose, but this had a huge cost on servers that disable MTU discovery:
1) each inet_peer struct consumes 192 bytes.
2) the inetpeer cache uses a binary tree of inet_peer structs, with a nominal size of ~66000 elements under load.
3) lookups in this tree hit a lot of cache lines, as the tree depth is about 20.
4) if the server handles many TCP flows, there is a high probability of not finding the inet_peer, allocating a fresh one, and inserting it into the tree with the same initial ip_id_count (cf. secure_ip_id()).
5) inet_peer entries are garbage collected aggressively.
IP ID generation does not have to be 'perfect'.
The goal is to avoid duplicates over a short period of time, so that reassembly units have a chance to complete reassembly of fragments belonging to one message before receiving other fragments with a recycled ID.
We simply use an array of generators and a Jenkins hash using the dst IP as a key.
ipv6_select_ident() is put back into net/ipv6/ip6_output.c where it belongs (it is only used from this file).
secure_ip_id() and secure_ipv6_id() are no longer needed.
Rename ip_select_ident_more() to ip_select_ident_segs() to avoid unnecessary decrement/increment of the number of segments.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
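As a concrete illustration of the scheme described above (an array of ID generators indexed by a Jenkins-style hash of the destination address, each generator advanced by the number of segments being sent), here is a minimal, hedged C sketch. The names NR_ID_GENERATORS, hash_daddr() and ip_id_reserve(), the userspace atomics, and the simplified hash are illustrative assumptions, not the kernel's actual implementation.

/*
 * Hedged sketch of per-destination IP ID generation with an array of
 * counters.  All identifiers here are hypothetical.
 */
#include <stdint.h>
#include <stdatomic.h>
#include <stdio.h>

#define NR_ID_GENERATORS 2048   /* power of two, so the hash maps cheaply to a bucket */

static atomic_uint id_generators[NR_ID_GENERATORS];

/* Simplified Jenkins-style mix of the IPv4 destination address. */
static uint32_t hash_daddr(uint32_t daddr)
{
	uint32_t h = daddr;

	h += h << 10; h ^= h >> 6;
	h += h << 3;  h ^= h >> 11;
	h += h << 15;
	return h;
}

/*
 * Reserve 'segs' consecutive IP IDs for this destination and return the
 * first one.  Flows to different destinations usually hash to different
 * buckets, so they do not serialize on a single global counter.
 */
static uint16_t ip_id_reserve(uint32_t daddr, unsigned int segs)
{
	atomic_uint *gen = &id_generators[hash_daddr(daddr) % NR_ID_GENERATORS];

	return (uint16_t)atomic_fetch_add(gen, segs);
}

int main(void)
{
	uint32_t dst_a = 0x0a000001;  /* 10.0.0.1 */
	uint32_t dst_b = 0x0a000002;  /* 10.0.0.2 */

	printf("first ID for dst_a: %u\n", ip_id_reserve(dst_a, 4));
	printf("next  ID for dst_a: %u\n", ip_id_reserve(dst_a, 1));
	printf("first ID for dst_b: %u\n", ip_id_reserve(dst_b, 1));
	return 0;
}

Because the bucket is chosen from the destination address, IDs for a given destination advance monotonically within their bucket, while unrelated destinations rarely contend for the same counter.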
|
H A D | inetpeer.h | diff 73f156a6e8c1074ac6327e0abd1169e95eb66463 Mon Jun 02 07:26:03 CDT 2014 Eric Dumazet <edumazet@google.com> inetpeer: get rid of ip_id_count
|
H A D | ip.h | diff 73f156a6e8c1074ac6327e0abd1169e95eb66463 Mon Jun 02 07:26:03 CDT 2014 Eric Dumazet <edumazet@google.com> inetpeer: get rid of ip_id_count
|
H A D | ipv6.h | diff 73f156a6e8c1074ac6327e0abd1169e95eb66463 Mon Jun 02 07:26:03 CDT 2014 Eric Dumazet <edumazet@google.com> inetpeer: get rid of ip_id_count
|
/openbmc/linux/net/core/ |
H A D | secure_seq.c | diff 73f156a6e8c1074ac6327e0abd1169e95eb66463 Mon Jun 02 07:26:03 CDT 2014 Eric Dumazet <edumazet@google.com> inetpeer: get rid of ip_id_count
|
/openbmc/linux/net/ipv6/ |
H A D | output_core.c | diff 73f156a6e8c1074ac6327e0abd1169e95eb66463 Mon Jun 02 07:26:03 CDT 2014 Eric Dumazet <edumazet@google.com> inetpeer: get rid of ip_id_count
|
H A D | ip6_output.c | diff 73f156a6e8c1074ac6327e0abd1169e95eb66463 Mon Jun 02 07:26:03 CDT 2014 Eric Dumazet <edumazet@google.com> inetpeer: get rid of ip_id_count
|
/openbmc/linux/drivers/net/ppp/ |
H A D | pptp.c | diff 73f156a6e8c1074ac6327e0abd1169e95eb66463 Mon Jun 02 07:26:03 CDT 2014 Eric Dumazet <edumazet@google.com> inetpeer: get rid of ip_id_count
|
/openbmc/linux/net/ipv4/ |
H A D | inetpeer.c | diff 73f156a6e8c1074ac6327e0abd1169e95eb66463 Mon Jun 02 07:26:03 CDT 2014 Eric Dumazet <edumazet@google.com> inetpeer: get rid of ip_id_count
|
H A D | ip_tunnel_core.c | diff 73f156a6e8c1074ac6327e0abd1169e95eb66463 Mon Jun 02 07:26:03 CDT 2014 Eric Dumazet <edumazet@google.com> inetpeer: get rid of ip_id_count
|
H A D | igmp.c | diff 73f156a6e8c1074ac6327e0abd1169e95eb66463 Mon Jun 02 07:26:03 CDT 2014 Eric Dumazet <edumazet@google.com> inetpeer: get rid of ip_id_count
|
H A D | raw.c | diff 73f156a6e8c1074ac6327e0abd1169e95eb66463 Mon Jun 02 07:26:03 CDT 2014 Eric Dumazet <edumazet@google.com> inetpeer: get rid of ip_id_count
|
H A D | ipmr.c | diff 73f156a6e8c1074ac6327e0abd1169e95eb66463 Mon Jun 02 07:26:03 CDT 2014 Eric Dumazet <edumazet@google.com> inetpeer: get rid of ip_id_count
|
H A D | ip_output.c | diff 73f156a6e8c1074ac6327e0abd1169e95eb66463 Mon Jun 02 07:26:03 CDT 2014 Eric Dumazet <edumazet@google.com> inetpeer: get rid of ip_id_count
|
H A D | route.c | diff 73f156a6e8c1074ac6327e0abd1169e95eb66463 Mon Jun 02 07:26:03 CDT 2014 Eric Dumazet <edumazet@google.com> inetpeer: get rid of ip_id_count
|
/openbmc/linux/net/netfilter/ipvs/ |
H A D | ip_vs_xmit.c | diff 73f156a6e8c1074ac6327e0abd1169e95eb66463 Mon Jun 02 07:26:03 CDT 2014 Eric Dumazet <edumazet@google.com> inetpeer: get rid of ip_id_count
|