# 19efbd93 | 19-Feb-2018 | Kirill Tkhai <ktkhai@virtuozzo.com>
net: Kill net_mutex
We take net_mutex when there are !async pernet_operations registered and read locking of net_sem is not enough. But we can get rid of taking the mutex and just change the logic to write-lock net_sem in such cases. This obviously reduces the number of lock operations we do.
Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com> Signed-off-by: David S. Miller <davem@davemloft.net>
# ff291d00 | 13-Feb-2018 | Kirill Tkhai <ktkhai@virtuozzo.com>
net: Convert net_defaults_ops
net_defaults_ops introduces only the net_defaults_init_net method, and it acts on net::core::sysctl_somaxconn, which is of no interest to the rest of the pernet_subsys and pernet_device lists. So, make them async.
Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com> Acked-by: Andrei Vagin <avagin@virtuozzo.com> Signed-off-by: David S. Miller <davem@davemloft.net>
# 3fc3b827 | 13-Feb-2018 | Kirill Tkhai <ktkhai@virtuozzo.com>
net: Convert net_ns_ops methods
This patch starts converting the pernet_subsys entries registered from pure initcalls.
The net_ns_ops methods, net_ns_net_init()/net_ns_net_exit(), use only the ida_simple_* functions, which do not need external synchronization: they are synchronized by the idr subsystem.
So, the net_ns_ops methods can be executed in parallel with the methods of other pernet operations.
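As a rough illustration of why no extra lock is required (hypothetical demo_* names, not code from this patch), the ida_simple_* helpers serialize internally:

static DEFINE_IDA(demo_ida);	/* hypothetical ida, for illustration only */

static int demo_alloc_ns_id(void)
{
	/* ida_simple_get() serializes internally, so it is safe to call in
	 * parallel with any other pernet_operations method. */
	return ida_simple_get(&demo_ida, 0, 0, GFP_KERNEL);
}

static void demo_free_ns_id(int id)
{
	ida_simple_remove(&demo_ida, id);
}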
Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com> Acked-by: Andrei Vagin <avagin@virtuozzo.com> Signed-off-by: David S. Miller <davem@davemloft.net>
# 447cd7a0 | 13-Feb-2018 | Kirill Tkhai <ktkhai@virtuozzo.com>
net: Allow pernet_operations to be executed in parallel
This adds a new pernet_operations::async flag to mark operations whose ->init(), ->exit() and ->exit_batch() methods are allowed to be executed in parallel with the methods of any other pernet_operations.
When there are only asynchronous pernet_operations in the system, net_mutex won't be taken for net construction and destruction.
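A minimal sketch of how an operation might opt in, using hypothetical demo_* names (only the members relevant to the flag are shown):

static int __net_init demo_net_init(struct net *net)
{
	return 0;	/* per-netns setup would go here */
}

static void __net_exit demo_net_exit(struct net *net)
{
	/* per-netns teardown */
}

static struct pernet_operations demo_net_ops = {
	.init  = demo_net_init,
	.exit  = demo_net_exit,
	.async = true,	/* may run in parallel with other pernet_operations */
};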
Also, remove the BUG_ON(mutex_is_locked()) from net_assign_generic() without replacing it with an equivalent net_sem check, as there is one more lockdep assert below.
v3: Add comment near net_mutex.
Suggested-by: Eric W. Biederman <ebiederm@xmission.com> Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com> Acked-by: Andrei Vagin <avagin@virtuozzo.com> Signed-off-by: David S. Miller <davem@davemloft.net>
# bcab1ddd | 13-Feb-2018 | Kirill Tkhai <ktkhai@virtuozzo.com>
net: Move mutex_unlock() in cleanup_net() up
net_sem protects against pernet_list changes, while ops_free_list() does a simple kfree() and can't race with other pernet_operations callbacks.
So we may release net_mutex earlier than before.
Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com> Acked-by: Andrei Vagin <avagin@virtuozzo.com> Signed-off-by: David S. Miller <davem@davemloft.net>
# 1a57feb8 | 13-Feb-2018 | Kirill Tkhai <ktkhai@virtuozzo.com>
net: Introduce net_sem for protection of pernet_list
Currently, the mutex is mostly used to protect the pernet operations list. It orders setup_net() and cleanup_net() with parallel {un,}register_pernet_operations() calls, so that the ->exit{,_batch} methods of a dying net are called for the same pernet operations whose ->init methods were used, even after the net namespace is unlinked from net_namespace_list in cleanup_net().
But there are several problems with scalability. The first one is that no more than one net can be created or destroyed at the same moment on the node. For big machines with many cpus running many containers this is very noticeable.
The second one is that we need to synchronize_rcu() after a net is removed from net_namespace_list():
Destroy net_ns:
    cleanup_net()
        mutex_lock(&net_mutex)
        list_del_rcu(&net->list)
        synchronize_rcu()                                  <--- Sleep there for ages
        list_for_each_entry_reverse(ops, &pernet_list, list)
            ops_exit_list(ops, &net_exit_list)
        list_for_each_entry_reverse(ops, &pernet_list, list)
            ops_free_list(ops, &net_exit_list)
        mutex_unlock(&net_mutex)
This primitive is not fast, especially on systems with many processors and/or when preemptible RCU is enabled in the config. So, all the time cleanup_net() is waiting for an RCU grace period, creation of new net namespaces is not possible, and the tasks that try it are sleeping on the same mutex:
Create net_ns:
    copy_net_ns()
        mutex_lock_killable(&net_mutex)                    <--- Sleep there for ages
I observed 20-30 second hangs of "unshare -n" on an ordinary 8-cpu laptop with preemptible RCU enabled after a CRIU test round was finished.
The solution is to convert net_mutex to an rw_semaphore and to add fine-grained locks to the really small number of pernet_operations that actually need them.
Then, pernet_operations::init/::exit methods modifying net-related data will require only down_read() locking, while down_write() will be used for changing pernet_list (i.e., when modules are being loaded and unloaded).
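A minimal sketch of the resulting locking scheme, using hypothetical demo_* helpers rather than the exact functions touched by the patch:

static DECLARE_RWSEM(net_sem);

static void demo_setup_net(struct net *net)
{
	down_read(&net_sem);	/* many nets may be constructed concurrently */
	/* call ->init() of all registered pernet_operations for this net */
	up_read(&net_sem);
}

static void demo_register_pernet_operations(struct pernet_operations *ops)
{
	down_write(&net_sem);	/* exclusive: pernet_list is being changed */
	/* add ops to pernet_list and call its ->init() for existing nets */
	up_write(&net_sem);
}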
This gives a significant performance increase after the whole patch set is applied, as you may see here:
%for i in {1..10000}; do unshare -n bash -c exit; done

*before*
real    1m40,377s
user    0m9,672s
sys     0m19,928s

*after*
real    0m17,007s
user    0m5,311s
sys     0m11,779s

(5.8 times faster)
This patch starts replacing net_mutex with net_sem. It adds the rw_semaphore, describes the variables it protects, and uses it where appropriate. net_mutex is still present, and the next patches will kick it out step by step.
Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com> Acked-by: Andrei Vagin <avagin@virtuozzo.com> Signed-off-by: David S. Miller <davem@davemloft.net>
# 5ba049a5 | 13-Feb-2018 | Kirill Tkhai <ktkhai@virtuozzo.com>
net: Cleanup in copy_net_ns()
Line up destructor actions in the reverse order of the constructors. Next patches will add more actions, and it will be convenient to have such an order.
Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com> Acked-by: Andrei Vagin <avagin@virtuozzo.com> Signed-off-by: David S. Miller <davem@davemloft.net>
# 98f6c533 | 13-Feb-2018 | Kirill Tkhai <ktkhai@virtuozzo.com>
net: Assign net to net_namespace_list in setup_net()
This patch merges two repeated pieces of code into one; they will live in setup_net() now.
The only change is that assignment:
init_net_initialized = true;
becomes reordered with:
list_add_tail_rcu(&net->list, &net_namespace_list);
The order does not have a visible effect, and it is a simple cleanup, because:
init_net_initialized is used in the !CONFIG_NET_NS case to order proc_net_ns_ops registration occurring at boot time:
start_kernel()->proc_root_init()->proc_net_init(), with net_ns_init()->setup_net(&init_net, &init_user_ns)
which also occurs at boot time from the same init_task.
When there are no other tasks to race with, it does not matter for a single task in which order two sequential independent stores are made. So we let them be reordered.
Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com> Acked-by: Andrei Vagin <avagin@virtuozzo.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Revision tags: v4.15
# fb07a820 | 19-Jan-2018 | Kirill Tkhai <ktkhai@virtuozzo.com>
net: Move net::netns_ids destruction out of rtnl_lock() and document locking scheme
Currently, we unhash a dying net from the netns_ids lists under rtnl_lock(). It's a leftover from the time when net::netns_ids was introduced. There was no net::nsid_lock, and rtnl_lock() was mostly needed to order modification of the nsid idrs of alive nets, i.e. for:

for_each_net(tmp) {
    ...
    id = __peernet2id(tmp, net);
    idr_remove(&tmp->netns_ids, id);
    ...
}
Since we have net::nsid_lock, the modifications are protected by this local lock, and now we may introduce a better scheme of netns_ids destruction.
Let's look at the functions peernet2id_alloc() and get_net_ns_by_id(). Previous commits taught these functions to work well with a dying net acquired from rtnl-unlocked lists. And they are the only functions which can hash a net to netns_ids or obtain one from there. And, as is easy to check, the other functions operating on netns_ids work with ids, not with net pointers. So, we do not need rtnl_lock to synchronize cleanup_net() with all of them.
Another property, which is used in the patch, is that a net is unhashed from net_namespace_list in only one place and by only one process. So, we avoid excess rcu_read_lock() or rtnl_lock() when we are iterating over the list in unhash_nsid().
All the above makes it possible to keep rtnl_lock() locked only for net->list deletion, and to completely avoid it for netns_ids unhashing and destruction. As these two operations may take a long time (e.g., memory allocation to send an skb), the patch should have a positive effect on scalability and significantly decrease the time rtnl_lock() is held in cleanup_net().
Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com> Signed-off-by: David S. Miller <davem@davemloft.net>
# 42157277 | 16-Jan-2018 | Kirill Tkhai <ktkhai@virtuozzo.com>
net: Remove spinlock from get_net_ns_by_id()
idr_find() is safe under rcu_read_lock() and maybe_get_net() guarantees that net is alive.
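A sketch of the resulting lock-free lookup, mirroring the shape described above (illustrative, not a verbatim copy of the patch):

struct net *demo_get_net_ns_by_id(struct net *net, int id)
{
	struct net *peer;

	if (id < 0)
		return NULL;

	rcu_read_lock();
	peer = idr_find(&net->netns_ids, id);
	if (peer)
		peer = maybe_get_net(peer);	/* NULL if the refcount already hit zero */
	rcu_read_unlock();

	return peer;
}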
Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com> Signed-off-by: David S. Miller <davem@davemloft.net>
# 0c06bea9 | 16-Jan-2018 | Kirill Tkhai <ktkhai@virtuozzo.com>
net: Fix possible race in peernet2id_alloc()
peernet2id_alloc() is racy without rtnl_lock(), as refcount_read(&peer->count) under net->nsid_lock does not guarantee that peer is alive:
rcu_read_lock()
peernet2id_alloc()
  spin_lock_bh(&net->nsid_lock)
  refcount_read(&peer->count) (!= 0)
                                          put_net()
                                            cleanup_net()
                                              for_each_net(tmp)
                                                spin_lock_bh(&tmp->nsid_lock)
                                                __peernet2id(tmp, net) == -1
  __peernet2id_alloc(alloc == true)
rcu_read_unlock()
                                              synchronize_rcu()
                                              kmem_cache_free(net)
After the above situation, net::netns_ids contains an id pointing to freed memory, and any other dereference by that id will operate on this freed memory.
Currently, peernet2id_alloc() is used under rtnl_lock() everywhere except ovs_vport_cmd_fill_info(), and this race can't occur. But peernet2id_alloc() is a generic interface, and it's better to fix it before someone really starts to use it in the wrong context.
v2: Don't place refcount_read(&net->count) under net->nsid_lock, as suggested by Eric W. Biederman <ebiederm@xmission.com>
v3: Rebase on top of net-next
Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com> Signed-off-by: David S. Miller <davem@davemloft.net>
# 273c28bc | 12-Jan-2018 | Kirill Tkhai <ktkhai@virtuozzo.com>
net: Convert atomic_t net::count to refcount_t
Since a net can be obtained from RCU lists, and there is a race with net destruction, the patch converts net::count to refcount_t.
This provides sanity checks for the case of incrementing the counter of an already dead net, when maybe_get_net() has to be used instead of get_net().
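A simplified sketch of the distinction this conversion enforces (field and helper shapes assumed, struct abbreviated):

struct net {
	refcount_t count;	/* was atomic_t before this conversion */
	/* ... */
};

static inline struct net *get_net(struct net *net)
{
	refcount_inc(&net->count);	/* WARNs if the count was already zero */
	return net;
}

static inline struct net *maybe_get_net(struct net *net)
{
	/* RCU-safe variant: fails instead of resurrecting a dead net */
	if (!refcount_inc_not_zero(&net->count))
		net = NULL;
	return net;
}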
Drivers: allyesconfig and allmodconfig are OK.
Suggested-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com> Reviewed-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
# 21b59443 | 19-Dec-2017 | Eric W. Biederman <ebiederm@xmission.com>
net: Fix double free and memory corruption in get_net_ns_by_id()
(I can trivially verify that the idr_remove in cleanup_net happens after the network namespace count has dropped to zero --EWB)
The function get_net_ns_by_id() does not check net::count after it has found a peer in the netns_ids idr.
It may dereference a peer after its count has already been finally decremented. This leads to a double free and memory corruption:
put_net(peer)                                   rtnl_lock()
  atomic_dec_and_test(&peer->count) [count=0]   ...
  __put_net(peer)                               get_net_ns_by_id(net, id)
    spin_lock(&cleanup_list_lock)
    list_add(&net->cleanup_list, &cleanup_list)
    spin_unlock(&cleanup_list_lock)
queue_work()                                    peer = idr_find(&net->netns_ids, id)
  |                                             get_net(peer) [count=1]
  |                                             ...
  |                                             (use after final put)
  v                                             ...
cleanup_net()                                   ...
  spin_lock(&cleanup_list_lock)                 ...
  list_replace_init(&cleanup_list, ..)          ...
  spin_unlock(&cleanup_list_lock)               ...
  ...                                           ...
  ...                                           put_net(peer)
  ...                                           atomic_dec_and_test(&peer->count) [count=0]
  ...                                           spin_lock(&cleanup_list_lock)
  ...                                           list_add(&net->cleanup_list, &cleanup_list)
  ...                                           spin_unlock(&cleanup_list_lock)
  ...                                           queue_work()
  ...                                           rtnl_unlock()
  rtnl_lock()                                   ...
  for_each_net(tmp) {                           ...
    id = __peernet2id(tmp, peer)                ...
    spin_lock_irq(&tmp->nsid_lock)              ...
    idr_remove(&tmp->netns_ids, id)             ...
    ...                                         ...
    net_drop_ns()                               ...
      net_free(peer)                            ...
  }                                             ...
                                                  |
                                                  v
                                                cleanup_net()
                                                ...
                                                (Second free of peer)
Also, put_net() on the right cpu may reorder with the left cpu's list_replace_init(&cleanup_list, ..), and then cleanup_list will be corrupted.
Since cleanup_net() is executed in a worker thread, while put_net(peer) can happen anywhere, there should be enough time for a concurrent get_net_ns_by_id() to pick the peer up, and the race does not seem to be unlikely. The patch fixes the problem in the standard way.
(Also, there is a possible problem in peernet2id_alloc(), which requires a check of net::count under nsid_lock and maybe_get_net(peer), but in the current stable kernel it's used under rtnl_lock() and has to be safe. Open vSwitch has begun to use peernet2id_alloc(), and possibly it should be fixed too. Since this is not in a stable kernel yet, I'll send a separate message to netdev@ later.)
Cc: Nicolas Dichtel <nicolas.dichtel@6wind.com> Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com> Fixes: 0c7aecd4bde4 "netns: add rtnl cmd to add and get peer netns ids" Reviewed-by: Andrey Ryabinin <aryabinin@virtuozzo.com> Reviewed-by: "Eric W. Biederman" <ebiederm@xmission.com> Signed-off-by: Eric W. Biederman <ebiederm@xmission.com> Reviewed-by: Eric Dumazet <edumazet@google.com> Acked-by: Nicolas Dichtel <nicolas.dichtel@6wind.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Revision tags: v4.13.16, v4.14
# 7cbebc8a | 02-Nov-2017 | Jiri Benc <jbenc@redhat.com>
net: export peernet2id_alloc
It will be used by openvswitch.
Signed-off-by: Jiri Benc <jbenc@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Revision tags: v4.13.5, v4.13
# 165b9117 | 09-Aug-2017 | Florian Westphal <fw@strlen.de>
net: call newid/getid without rtnl mutex held
Both functions take nsid_lock and don't rely on rtnl lock.
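A sketch of what registering the rtnl-unlocked handlers looks like with the flags parameter added by the companion patch below (flag name and call sites assumed from the surrounding series, not quoted from this commit):

/* both handlers do their own nsid_lock locking, so rtnl_mutex can be skipped */
rtnl_register(PF_UNSPEC, RTM_NEWNSID, rtnl_net_newid, NULL,
	      RTNL_FLAG_DOIT_UNLOCKED);
rtnl_register(PF_UNSPEC, RTM_GETNSID, rtnl_net_getid, rtnl_net_dumpid,
	      RTNL_FLAG_DOIT_UNLOCKED);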
Signed-off-by: Florian Westphal <fw@strlen.de> Reviewed-by: Hannes Frederic Sowa <hannes@stressinduktion.org> Signed-off-by: David S. Miller <davem@davemloft.net>
# b97bac64 | 09-Aug-2017 | Florian Westphal <fw@strlen.de>
rtnetlink: make rtnl_register accept a flags parameter
This change allows us to later indicate to rtnetlink core that certain doit functions should be called without acquiring rtnl_mutex.
This change should have no effect: we simply replace the last (now unused) calcit argument with the new flag.
Signed-off-by: Florian Westphal <fw@strlen.de> Reviewed-by: Hannes Frederic Sowa <hannes@stressinduktion.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Revision tags: v4.12
# c122e14d | 30-Jun-2017 | Reshetova, Elena <elena.reshetova@intel.com>
net: convert net.passive from atomic_t to refcount_t
The refcount_t type and corresponding API should be used instead of atomic_t when the variable is used as a reference counter. This allows us to avoid accidental refcounter overflows that might lead to use-after-free situations.
Signed-off-by: Elena Reshetova <elena.reshetova@intel.com> Signed-off-by: Hans Liljestrand <ishkamiel@gmail.com> Signed-off-by: Kees Cook <keescook@chromium.org> Signed-off-by: David Windsor <dwindsor@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
# 7866cc57 | 30-May-2017 | Florian Westphal <fw@strlen.de>
netns: add and use net_ns_barrier
Quoting Joe Stringer: If a user loads nf_conntrack_ftp, sends FTP traffic through a network namespace, destroys that namespace then unloads the FTP helper module, then the kernel will crash.
Events that lead to the crash:
1. conntrack is created with the ftp helper in netns x
2. This netns is destroyed
3. netns destruction is scheduled
4. netns destruction wq starts, removes netns from the global list
5. ftp helper is unloaded, which resets all helpers of the conntracks via for_each_net(),
   but because the netns is already gone from the list, the for_each_net() loop doesn't include it; therefore all of these conntracks are unaffected.
6. helper module unload finishes
7. netns wq invokes the destructor for the rmmod'ed helper
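Conceptually, the barrier added here can be as simple as taking and releasing the lock that namespace cleanup runs under, so any cleanup_net() that started before the barrier has finished before module unload proceeds. A sketch of the idea (the exact in-tree form may differ):

void demo_net_ns_barrier(void)
{
	/* wait for any cleanup_net() already holding the mutex to complete */
	mutex_lock(&net_mutex);
	mutex_unlock(&net_mutex);
}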
CC: "Eric W. Biederman" <ebiederm@xmission.com> Reported-by: Joe Stringer <joe@ovn.org> Signed-off-by: Florian Westphal <fw@strlen.de> Acked-by: David S. Miller <davem@davemloft.net> Acked-by: "Eric W. Biederman" <ebiederm@xmission.com> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
# 10d486a3 | 09-Jun-2017 | Nicolas Dichtel <nicolas.dichtel@6wind.com>
netns: fix error code when the nsid is already used
When the user tries to assign a specific nsid, idr_alloc() is called with the range [nsid, nsid+1]. If this nsid is already used, idr_alloc() returns ENOSPC (No space left on device). In our case, it's better to return EEXIST to make it clear that the nsid is not available.
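A sketch of the error mapping described above (hypothetical helper name, simplified compared to the in-tree code):

static int demo_alloc_requested_nsid(struct net *net, struct net *peer, int reqid)
{
	int id;

	spin_lock_bh(&net->nsid_lock);
	/* the range [reqid, reqid + 1) requests exactly this nsid */
	id = idr_alloc(&net->netns_ids, peer, reqid, reqid + 1, GFP_ATOMIC);
	spin_unlock_bh(&net->nsid_lock);

	/* idr_alloc() reports a taken slot as -ENOSPC; -EEXIST is clearer */
	return id == -ENOSPC ? -EEXIST : id;
}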
CC: Jamal Hadi Salim <jhs@mojatatu.com> Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com> Signed-off-by: David S. Miller <davem@davemloft.net>
# 4a7f7bc6 | 09-Jun-2017 | Nicolas Dichtel <nicolas.dichtel@6wind.com>
netns: define extack error msg for nsid cmds
It helps the user to identify errors.
CC: Jamal Hadi Salim <jhs@mojatatu.com> Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com> Signed-off-by: David S. Miller <davem@davemloft.net>
# 7c3f1875 | 24-May-2017 | Roman Kapl <roman.kapl@sysgo.com>
net: move somaxconn init from sysctl code
The default value for somaxconn is set in sysctl_core_net_init(), but this function is not called when kernel is configured without CONFIG_SYSCTL.
This results in the kernel not being able to accept TCP connections, because the backlog has zero size. Usually, the user ends up with: "TCP: request_sock_TCP: Possible SYN flooding on port 7. Dropping request. Check SNMP counters." If SYN cookies are not enabled, the connection is rejected.
Before ef547f2ac16 (tcp: remove max_qlen_log), the effects were less severe, because the backlog was always at least eight slots long.
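A sketch of where the default can live instead, so it is applied even without CONFIG_SYSCTL (assumed shape, hypothetical demo_ prefix):

static int __net_init demo_net_defaults_init_net(struct net *net)
{
	net->core.sysctl_somaxconn = SOMAXCONN;	/* default accept backlog limit */
	return 0;
}

static struct pernet_operations demo_net_defaults_ops = {
	.init = demo_net_defaults_init_net,
};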
Signed-off-by: Roman Kapl <roman.kapl@sysgo.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Revision tags: v4.10.17, v4.10.16, v4.10.15, v4.10.14
# b5082df8 | 27-Apr-2017 | David Howells <dhowells@redhat.com>
net: Initialise init_net.count to 1
Initialise init_net.count to 1 for its pointer from init_nsproxy, lest someone tries to do a get_net() and a put_net() in a process in which current->nsproxy->net_ns points to the initial network namespace.
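A sketch of the initializer change (abbreviated and assumed; at this point net::count was still an atomic_t):

struct net init_net = {
	.count		= ATOMIC_INIT(1),	/* reference held via init_nsproxy */
	.dev_base_head	= LIST_HEAD_INIT(init_net.dev_base_head),
};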
Signed-off-by: David Howells <dhowells@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Revision tags: v4.10.13, v4.10.12, v4.10.11
# c21ef3e3 | 16-Apr-2017 | David Ahern <dsa@cumulusnetworks.com>
net: rtnetlink: plumb extended ack to doit function
Add netlink_ext_ack arg to rtnl_doit_func. Pass extack arg to nlmsg_parse for doit functions that call it directly.
This is the first step to using extended error reporting in rtnetlink. From here, individual subsystems can be updated to set netlink_ext_ack as needed.
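A sketch of the resulting doit shape (attribute and policy names borrowed from the netns code for illustration; not a verbatim excerpt of the patch):

static int demo_net_newid(struct sk_buff *skb, struct nlmsghdr *nlh,
			  struct netlink_ext_ack *extack)
{
	struct nlattr *tb[NETNSA_MAX + 1];
	int err;

	/* the doit handler now receives extack and forwards it to the parser */
	err = nlmsg_parse(nlh, sizeof(struct rtgenmsg), tb, NETNSA_MAX,
			  rtnl_net_policy, extack);
	if (err < 0)
		return err;
	/* ... handle NETNSA_NSID / NETNSA_PID / NETNSA_FD attributes ... */
	return 0;
}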
Signed-off-by: David Ahern <dsa@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
# fceb6435 | 12-Apr-2017 | Johannes Berg <johannes.berg@intel.com>
netlink: pass extended ACK struct to parsing functions
Pass the new extended ACK reporting struct to all of the generic netlink parsing functions. For now, pass NULL in almost all callers (except for some in the core.)
Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Revision tags: v4.10.10, v4.10.9, v4.10.8, v4.10.7, v4.10.6, v4.10.5, v4.10.4, v4.10.3, v4.10.2, v4.10.1, v4.10
# f719ff9b | 06-Feb-2017 | Ingo Molnar <mingo@kernel.org>
sched/headers: Prepare to move the task_lock()/unlock() APIs to <linux/sched/task.h>
But first update the code that uses these facilities with the new header.
Acked-by: Linus Torvalds <torvalds@lin
sched/headers: Prepare to move the task_lock()/unlock() APIs to <linux/sched/task.h>
But first update the code that uses these facilities with the new header.
Acked-by: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>