Searched hist:"6 c1c5097781f563b70a81683ea6fdac21637573b" (Results 1 – 3 of 3) sorted by relevance
/openbmc/linux/include/net/ |
H A D | dst.h | diff 6c1c5097781f563b70a81683ea6fdac21637573b Tue Nov 15 02:53:55 CST 2022 Eric Dumazet <edumazet@google.com> net: add atomic_long_t to net_device_stats fields
Long-standing KCSAN issues are caused by data races around some dev->stats changes.
Most performance-critical paths already use per-cpu variables, or per-queue ones.
It is reasonable (and more correct) to use atomic operations for the slow paths.
This patch adds a union for each field of net_device_stats, so that we can convert paths that are not yet protected by a spinlock or a mutex.
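A minimal userspace sketch of that per-field union idiom, with C11 atomics standing in for the kernel's atomic_long_t; the NET_DEV_STAT and DEV_STATS_INC macro names and the two-field struct are illustrative stand-ins here, not the exact in-tree definitions:

/* Sketch only: approximates the commit's union-per-field idea in
 * portable C11. Each field overlays the legacy unsigned long counter
 * with an atomic view of the same storage. */
#include <stdatomic.h>
#include <stdio.h>

#define NET_DEV_STAT(FIELD)             \
	union {                         \
		unsigned long FIELD;    \
		atomic_ulong __##FIELD; \
	}

struct net_device_stats {
	NET_DEV_STAT(rx_packets);
	NET_DEV_STAT(rx_dropped);
	/* ...the real struct carries many more counters... */
};

/* Unlocked slow paths bump the atomic member instead of FIELD++. */
#define DEV_STATS_INC(STATS, FIELD)                               \
	atomic_fetch_add_explicit(&(STATS)->__##FIELD, 1UL,       \
				  memory_order_relaxed)

int main(void)
{
	struct net_device_stats stats = {0};

	DEV_STATS_INC(&stats, rx_dropped);
	printf("rx_dropped = %lu\n",
	       atomic_load_explicit(&stats.__rx_dropped,
				    memory_order_relaxed));
	return 0;
}

Because both union members alias the same storage, paths that already hold a spinlock or mutex can keep doing plain FIELD++, while unprotected paths migrate to the atomic member one call site at a time.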
netdev_stats_to_stats64() no longer needs an #if BITS_PER_LONG==64 special case.
Note that the memcpy() we were using on 64-bit arches had no provision to avoid load-tearing, while atomic_long_read() provides the needed protection at no cost.
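A hedged sketch of that conversion, again in userspace C11: toy struct layouts stand in for net_device_stats and rtnl_link_stats64, and the function and field names are assumptions for illustration. The source struct is treated as a flat array of word-sized counters, each read with a tear-free load instead of memcpy():

/* Sketch only: every legacy field is one word wide, and the 64-bit
 * mirror struct has extra trailing fields, matching the layout
 * assumption the kernel code relies on. */
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct dev_stats { atomic_ulong rx_packets, rx_dropped; };
struct stats64   { uint64_t rx_packets, rx_dropped, rx_missed; };

static void stats_to_stats64(struct stats64 *dst, struct dev_stats *src)
{
	atomic_ulong *s = (atomic_ulong *)src;
	uint64_t *d = (uint64_t *)dst;
	size_t i, n = sizeof(*src) / sizeof(atomic_ulong);

	/* One tear-free relaxed load per counter, on 32-bit and
	 * 64-bit builds alike, replacing a 64-bit-only memcpy(). */
	for (i = 0; i < n; i++)
		d[i] = atomic_load_explicit(&s[i], memory_order_relaxed);

	/* Zero the counters that exist only in the wider struct. */
	memset(d + n, 0, sizeof(*dst) - n * sizeof(uint64_t));
}

int main(void)
{
	struct dev_stats ds = {0};
	struct stats64 out;

	atomic_fetch_add_explicit(&ds.rx_packets, 42UL,
				  memory_order_relaxed);
	stats_to_stats64(&out, &ds);
	printf("rx_packets=%llu rx_missed=%llu\n",
	       (unsigned long long)out.rx_packets,
	       (unsigned long long)out.rx_missed);
	return 0;
}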
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
/openbmc/linux/include/linux/ |
H A D | netdevice.h | diff 6c1c5097781f563b70a81683ea6fdac21637573b Tue Nov 15 02:53:55 CST 2022 Eric Dumazet <edumazet@google.com> net: add atomic_long_t to net_device_stats fields
|
/openbmc/linux/net/core/ |
H A D | dev.c | diff 6c1c5097781f563b70a81683ea6fdac21637573b Tue Nov 15 02:53:55 CST 2022 Eric Dumazet <edumazet@google.com> net: add atomic_long_t to net_device_stats fields
|