Searched hist:b6c336528926ef73b0f70260f2636de2c3b94c14 (Results 1 – 2 of 2) sorted by relevance
/openbmc/linux/kernel/ucount.c | diff b6c336528926ef73b0f70260f2636de2c3b94c14 Thu Apr 22 07:27:10 CDT 2021 Alexey Gladkov <legion@kernel.org> Use atomic_t for ucounts reference counting
The current implementation of the ucounts reference counter requires the use of a spin_lock. We're going to use get_ucounts() in more performance-critical areas, such as the handling of RLIMIT_SIGPENDING.
Now the spin_lock is needed only when the hashtable itself is changed.
v10: * Always try to put ucounts if we cannot increase ucounts->count. This covers the case where all consumers return their ucounts at once.
v9: * Use a negative value to detect that ucounts->count is close to overflow.
Signed-off-by: Alexey Gladkov <legion@kernel.org>
Link: https://lkml.kernel.org/r/94d1dbecab060a6b116b0a2d1accd8ca1bbb4f5f.1619094428.git.legion@kernel.org
Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
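The scheme the commit message describes is a lockless reference take: increment the counter atomically, and if the result shows the counter is close to overflow (negative), immediately drop the reference again and report failure. Below is a minimal user-space sketch of that idea using C11 atomics in place of the kernel's atomic_t; the struct layout and the ucounts_get()/ucounts_put() helpers are illustrative stand-ins, not the kernel's actual code.

    #include <stdatomic.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Illustrative stand-in for struct ucounts; only the reference
     * counter matters for this sketch. */
    struct ucounts {
        atomic_int count;
        /* ... hashtable linkage, uid, per-ucount values elided ... */
    };

    /* Drop a reference. In the kernel, hitting zero would take the
     * hashtable spin_lock, unhash the entry, and free it; here the
     * cleanup is elided. */
    static void ucounts_put(struct ucounts *uc)
    {
        if (atomic_fetch_sub(&uc->count, 1) == 1) {
            /* last reference: unhash and free (elided) */
        }
    }

    /* Take a reference without any lock. If the counter was already
     * negative (close to overflow), drop the just-taken reference
     * again — the "always try to put" behaviour from v10 — and
     * return NULL so the caller can fail gracefully. */
    static struct ucounts *ucounts_get(struct ucounts *uc)
    {
        if (uc && atomic_fetch_add(&uc->count, 1) < 0) {
            ucounts_put(uc);
            uc = NULL;
        }
        return uc;
    }

    int main(void)
    {
        struct ucounts uc;
        atomic_init(&uc.count, 1);          /* one initial reference */

        if (ucounts_get(&uc))
            printf("got reference, count is now %d\n",
                   atomic_load(&uc.count));
        ucounts_put(&uc);                   /* drop it again */
        return 0;
    }

The point of the negative-value check (the v9 change) is that overflow proximity can be detected from the sign bit alone, so the fast path stays a single atomic add with no lock; the spin_lock is only needed when the hashtable is modified.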
/openbmc/linux/include/linux/user_namespace.h | diff b6c336528926ef73b0f70260f2636de2c3b94c14 Thu Apr 22 07:27:10 CDT 2021 Alexey Gladkov <legion@kernel.org> Use atomic_t for ucounts reference counting