History log of /openbmc/linux/net/rxrpc/ar-internal.h (Results 51 – 75 of 559)
Revision    Date    Author    Comments
# a2cf3264 16-Nov-2022 David Howells <dhowells@redhat.com>

rxrpc: Fold __rxrpc_unuse_local() into rxrpc_unuse_local()

Fold __rxrpc_unuse_local() into rxrpc_unuse_local() as the latter is now
the only user of the former.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: linux-afs@lists.infradead.org


# 5086d9a9 11-Nov-2022 David Howells <dhowells@redhat.com>

rxrpc: Move the cwnd degradation after transmitting packets

When we've gone for >1RTT without transmitting a packet, we should reduce
the ssthresh and cut the cwnd by half (as suggested in RFC2861 sec 3.1).

However, we may receive ACK packets in a batch and the first of these may
cut the cwnd, preventing further transmission, and each subsequent one cuts
the cwnd yet further, reducing it to the floor and killing performance.

Fix this by moving the cwnd reset to after doing the transmission and
resetting the base time such that we don't cut the cwnd by half again for
at least another RTT.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: linux-afs@lists.infradead.org
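
The congestion-window handling described above can be pictured with a small
userspace sketch (illustrative only; struct cong_state, its fields and
maybe_degrade_cwnd() are invented names, and the ssthresh policy is
simplified, not the kernel's):

    #include <stdint.h>

    struct cong_state {
        uint32_t cwnd;           /* congestion window, in packets */
        uint32_t ssthresh;       /* slow-start threshold */
        uint64_t srtt_ns;        /* smoothed round-trip time estimate */
        uint64_t tx_base_ns;     /* time of the last transmission activity */
    };

    /* Called from the transmit path, after sending, rather than from the
     * ACK-input path, so a batch of ACKs processed together cannot drive
     * the window down to the floor. */
    static void maybe_degrade_cwnd(struct cong_state *c, uint64_t now_ns)
    {
        if (now_ns - c->tx_base_ns <= c->srtt_ns)
            return;                    /* we transmitted within the last RTT */

        c->ssthresh = c->cwnd;         /* one possible policy; simplified */
        if (c->cwnd > 1)
            c->cwnd /= 2;              /* halve, per the RFC 2861 sec 3.1 idea */
        c->tx_base_ns = now_ns;        /* don't halve again for another RTT */
    }

Resetting tx_base_ns on the transmit path is what stops a burst of ACKs from
halving the window more than once per RTT.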


# 32cf8edb 11-Nov-2022 David Howells <dhowells@redhat.com>

rxrpc: Trace/count transmission underflows and cwnd resets

Add a tracepoint to log when a cwnd reset occurs due to lack of
transmission on a call.

Add stat counters to count transmission underflows (ie. when we have tx
window space, but sendmsg doesn't manage to keep up), cwnd resets and
transmission failures.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: linux-afs@lists.infradead.org
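
A tiny sketch of the kind of bookkeeping this adds (hypothetical names; the
real counters live in rxrpc's internal stats): atomic counters bumped when
the transmit window has space but nothing has been staged, when the window is
reset, and when transmission fails.

    #include <stdatomic.h>

    struct tx_stats {
        atomic_ulong tx_underflow;   /* window open but nothing staged to send */
        atomic_ulong cwnd_reset;     /* window cut back after an idle period */
        atomic_ulong tx_failure;     /* sendmsg/UDP transmission error */
    };

    static void note_tx_pass(struct tx_stats *s, int window_space, int queued, int err)
    {
        if (window_space && !queued)
            atomic_fetch_add(&s->tx_underflow, 1);   /* sendmsg didn't keep up */
        if (err)
            atomic_fetch_add(&s->tx_failure, 1);
    }

    static void note_cwnd_reset(struct tx_stats *s)
    {
        atomic_fetch_add(&s->cwnd_reset, 1);
    }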


Revision tags: v5.4.16, v5.5, v5.4.15
# 5e6ef4f1 23-Jan-2020 David Howells <dhowells@redhat.com>

rxrpc: Make the I/O thread take over the call and local processor work

Move the functions from the call->processor and local->processor work items
into the domain of the I/O thread.

The call event processor, now called from the I/O thread, then takes over
the job of cranking the call state machine, processing incoming packets and
transmitting DATA, ACK and ABORT packets. In a future patch,
rxrpc_send_ACK() will transmit the ACK on the spot rather than queuing it
for later transmission.

The call event processor becomes purely received-skb driven. It only
transmits things in response to events. We use "pokes" to queue a dummy
skb to make it do things like start/resume transmitting data. Timer expiry
also results in pokes.

The connection event processor becomes similar, though crypto events, such as
dealing with CHALLENGE and RESPONSE packets, are offloaded to a work item to
avoid doing crypto in the I/O thread.

The local event processor is removed and VERSION response packets are
generated directly from the packet parser. Similarly, ABORTs generated in
response to protocol errors will be transmitted immediately rather than
being pushed onto a queue for later transmission.

Changes:
========
ver #2)
- Fix a couple of introduced lock context imbalances.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: linux-afs@lists.infradead.org
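
A rough userspace analogue of the "poke" mechanism (the queue, flag and
function names here are made up): a poke just records that the I/O thread has
something to attend to and wakes it, standing in for queuing a dummy skb; no
call work is done in the poker's context.

    #include <pthread.h>
    #include <stdbool.h>

    struct io_queue {
        pthread_mutex_t lock;
        pthread_cond_t  wake;
        bool            poke_pending;   /* at most one dummy entry at a time */
        /* ... received packets would be chained here too ... */
    };

    /* Ask the I/O thread to attend to a call (start/resume transmission,
     * handle a timer expiry, ...) without doing any of that work here. */
    static void poke_io_thread(struct io_queue *q)
    {
        pthread_mutex_lock(&q->lock);
        if (!q->poke_pending) {
            q->poke_pending = true;      /* stands in for queuing a dummy skb */
            pthread_cond_signal(&q->wake);
        }
        pthread_mutex_unlock(&q->lock);
    }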


# 393a2a20 20-Oct-2022 David Howells <dhowells@redhat.com>

rxrpc: Extract the peer address from an incoming packet earlier

Extract the peer address from an incoming packet earlier, at the beginning
of rxrpc_input_packet() and thence pass a pointer to it to various
functions that use it as part of the lookup rather than doing it on several
separate paths.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: linux-afs@lists.infradead.org


# cd21effb 08-Oct-2022 David Howells <dhowells@redhat.com>

rxrpc: Reduce the use of RCU in packet input

Shrink the region of rxrpc_input_packet() that is covered by the RCU read
lock so that it only covers the connection and call lookup. This means
that the bits now outside of that can call sleepable functions such as
kmalloc and sendmsg.

Also take a ref on the conn or call we're going to use before we drop the
RCU read lock.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: linux-afs@lists.infradead.org
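
The locking pattern being described, sketched in userspace form (a plain
mutex stands in for the RCU read lock and all the names are invented): the
lookup section stays short and non-sleeping, a reference is taken before the
lock is dropped, and anything that may sleep happens afterwards.

    #include <pthread.h>
    #include <stdatomic.h>

    struct call {
        atomic_int refs;
        /* ... call state ... */
    };

    static pthread_mutex_t lookup_lock = PTHREAD_MUTEX_INITIALIZER; /* RCU stand-in */
    static struct call *published_call;                  /* lookup-table stand-in */

    /* Keep the locked section short and non-sleeping: look up, take a ref,
     * unlock. */
    static struct call *lookup_call_and_ref(void)
    {
        struct call *call;

        pthread_mutex_lock(&lookup_lock);
        call = published_call;
        if (call)
            atomic_fetch_add(&call->refs, 1);  /* pin it before dropping the lock */
        pthread_mutex_unlock(&lookup_lock);

        /* From here on it is safe to call things that may sleep (allocation,
         * sendmsg, ...): we hold a reference, not the lookup lock. */
        return call;
    }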


# cf37b598 31-Mar-2022 David Howells <dhowells@redhat.com>

rxrpc: Move DATA transmission into call processor work item

Move DATA transmission into the call processor work item. In a future
patch, this will be called from the I/O thread rather than being its own
work item.

This will allow DATA transmission to be driven directly by incoming ACKs,
pokes and timers as those are processed.

The Tx queue is also split: The queue of packets prepared by sendmsg is now
placed in call->tx_sendmsg and the packet dispatcher decants the packets
into call->tx_buffer as space becomes available in the transmission
window. This allows sendmsg to run ahead of the available space to try and
prevent an underflow in transmission.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: linux-afs@lists.infradead.org
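
A simplified sketch of the two-stage Tx queue (invented types; the real code
moves entries between call->tx_sendmsg and call->tx_buffer): sendmsg appends
to a staging list regardless of the window, and the dispatcher decants
packets only while window space remains.

    #include <stddef.h>

    struct txpkt {
        struct txpkt *next;
    };

    struct tx_queues {
        struct txpkt *sendmsg_head, *sendmsg_tail;  /* staged by sendmsg */
        struct txpkt *buffer_head, *buffer_tail;    /* awaiting transmission */
        unsigned int  nr_queued;                    /* packets already decanted */
        unsigned int  window;                       /* current transmission window */
    };

    /* Run by the packet dispatcher: move staged packets into the transmit
     * buffer only while window space remains, letting sendmsg run ahead of
     * the window so the buffer is less likely to underflow. */
    static void decant_tx_queue(struct tx_queues *q)
    {
        while (q->sendmsg_head && q->nr_queued < q->window) {
            struct txpkt *pkt = q->sendmsg_head;

            q->sendmsg_head = pkt->next;
            if (!q->sendmsg_head)
                q->sendmsg_tail = NULL;

            pkt->next = NULL;
            if (q->buffer_tail)
                q->buffer_tail->next = pkt;
            else
                q->buffer_head = pkt;
            q->buffer_tail = pkt;
            q->nr_queued++;
        }
    }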


# f3441d41 20-Oct-2022 David Howells <dhowells@redhat.com>

rxrpc: Copy client call parameters into rxrpc_call earlier

Copy client call parameters into rxrpc_call earlier so that it can be
used to convey them to the connection code - which can then be offloaded to
the I/O thread.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: linux-afs@lists.infradead.org


# 15f661dc 10-Oct-2022 David Howells <dhowells@redhat.com>

rxrpc: Implement a mechanism to send an event notification to a call

Provide a means by which an event notification can be sent to a call such
that the I/O thread can process it rather than it being done in a separate
workqueue. This will allow a lot of locking to be removed.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: linux-afs@lists.infradead.org
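
One way to picture the notification mechanism (names and locking are
simplified inventions): an event is a bit set on the call, and the call is
placed on the I/O thread's attend list at most once, so the notifier does no
call processing itself.

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdbool.h>

    enum call_event {
        CALL_EV_TX_START = 1 << 0,
        CALL_EV_RESEND   = 1 << 1,
        CALL_EV_EXPIRED  = 1 << 2,
    };

    struct call {
        atomic_uint  events;          /* pending event bits */
        struct call *attend_next;     /* link on the I/O thread's attend list */
        bool         on_attend_list;
    };

    struct io_thread {
        pthread_mutex_t lock;
        pthread_cond_t  wake;
        struct call    *attend_list;
    };

    static void notify_call(struct io_thread *io, struct call *call,
                            unsigned int event)
    {
        atomic_fetch_or(&call->events, event);

        pthread_mutex_lock(&io->lock);
        if (!call->on_attend_list) {          /* queue the call at most once */
            call->on_attend_list = true;
            call->attend_next = io->attend_list;
            io->attend_list = call;
            pthread_cond_signal(&io->wake);
        }
        pthread_mutex_unlock(&io->lock);
    }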


# 4041a8ff 23-Jan-2020 David Howells <dhowells@redhat.com>

rxrpc: Remove call->input_lock

Remove call->input_lock as it was only necessary to serialise access to the
state stored in the rxrpc_call struct by simultaneous softirq handlers
presenting received packets. They now dump the packets in a queue and a
single process-context handler now processes them.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: linux-afs@lists.infradead.org


# ff734825 10-Oct-2022 David Howells <dhowells@redhat.com>

rxrpc: Move error processing into the local endpoint I/O thread

Move the processing of error packets into the local endpoint I/O thread,
leaving the handover from UDP to merely transfer them into the local
endpoint queue.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: linux-afs@lists.infradead.org


# 446b3e14 10-Oct-2022 David Howells <dhowells@redhat.com>

rxrpc: Move packet reception processing into I/O thread

Split the packet input handler to make the softirq side just dump the
received packet into the local endpoint receive queue and then call the
remainder of the input function from the I/O thread.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: linux-afs@lists.infradead.org


# a275da62 10-Oct-2022 David Howells <dhowells@redhat.com>

rxrpc: Create a per-local endpoint receive queue and I/O thread

Create a per-local receive queue to which, in a future patch, all incoming
packets will be directed and an I/O thread that will process those packets
and perform all transmission of packets.

Destruction of the local endpoint is also moved from the local processor
work item (which will be absorbed) to the thread.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: linux-afs@lists.infradead.org
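
A userspace skeleton of the arrangement (pthreads standing in for the kernel
I/O thread; the packet and endpoint types are invented): the receive path
only appends to the per-endpoint queue and wakes the thread, which does all
the processing and transmission.

    #include <pthread.h>
    #include <stddef.h>

    struct rx_packet {
        struct rx_packet *next;
    };

    struct local_endpoint {
        pthread_mutex_t   lock;
        pthread_cond_t    wake;
        struct rx_packet *rx_head, *rx_tail;  /* per-endpoint receive queue */
        int               dead;               /* teardown sets this and signals ->wake */
    };

    /* Producer side, called from the receive path: queue and wake, nothing more. */
    static void endpoint_enqueue(struct local_endpoint *lp, struct rx_packet *pkt)
    {
        pkt->next = NULL;
        pthread_mutex_lock(&lp->lock);
        if (lp->rx_tail)
            lp->rx_tail->next = pkt;
        else
            lp->rx_head = pkt;
        lp->rx_tail = pkt;
        pthread_cond_signal(&lp->wake);
        pthread_mutex_unlock(&lp->lock);
    }

    /* The per-endpoint I/O thread: the only place packets are processed. */
    static void *endpoint_io_thread(void *arg)
    {
        struct local_endpoint *lp = arg;

        pthread_mutex_lock(&lp->lock);
        while (!lp->dead || lp->rx_head) {
            struct rx_packet *pkt = lp->rx_head;

            if (!pkt) {
                pthread_cond_wait(&lp->wake, &lp->lock);
                continue;
            }
            lp->rx_head = pkt->next;
            if (!lp->rx_head)
                lp->rx_tail = NULL;
            pthread_mutex_unlock(&lp->lock);

            /* ... route the packet to its connection/call and transmit replies ... */

            pthread_mutex_lock(&lp->lock);
        }
        pthread_mutex_unlock(&lp->lock);
        return NULL;
    }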


# 96b2d69b 23-Jan-2020 David Howells <dhowells@redhat.com>

rxrpc: Split the receive code

Split the code that handles packet reception in softirq mode as a prelude
to moving all the packet processing beyond routing to the appropriate call
and setting up of a new call out into process context.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: linux-afs@lists.infradead.org


# 3cec055c 25-Nov-2022 David Howells <dhowells@redhat.com>

rxrpc: Don't hold a ref for connection workqueue

Currently, rxrpc gives the connection's work item a ref on the connection
when it queues it - and this is called from the timer expiration function.
The problem comes when queue_work() fails (ie. the work item is already
queued): the timer routine must put the ref - but this may cause the
cleanup code to run.

This has the unfortunate effect that the cleanup code may then be run in
softirq context - which means that any spinlocks it might need to touch
have to be guarded to disable softirqs (ie. they need a "_bh" suffix).

Fix this by:

(1) Don't give a ref to the work item.

(2) Simplify handling of service connections by adding a separate active
count so that the refcount isn't also used for this.

(3) Connection destruction for both client and service connections can
then be cleaned up by putting rxrpc_put_connection() out of line and
making a tidy progression through the destruction code (offloaded to a
workqueue if put from softirq or processor function context). The RCU
part of the cleanup then only deals with the freeing at the end.

(4) Make rxrpc_queue_conn() return immediately if it sees the active count
is -1 rather than queuing the connection.

(5) Make sure that the cleanup routine waits for the work item to
complete.

(6) Stash the rxrpc_net pointer in the conn struct so that the rcu free
routine can use it, even if the local endpoint has been freed.

Unfortunately, neither the timer nor the work item can simply get around
the problem by just using refcount_inc_not_zero() as the waits would still
have to be done, and there would still be the possibility of having to put
the ref in the expiration function.

Note the connection work item is mostly going to go away with the main
event work being transferred to the I/O thread, so the wait in (5) will
become obsolete.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: linux-afs@lists.infradead.org


# 3feda9d6 25-Nov-2022 David Howells <dhowells@redhat.com>

rxrpc: Don't hold a ref for call timer or workqueue

Currently, rxrpc gives the call timer a ref on the call when it starts it
and this is passed along to the workqueue by the timer expiration function.
The problem comes when queue_work() fails (ie. the work item is already
queued): the timer routine must put the ref - but this may cause the
cleanup code to run.

This has the unfortunate effect that the cleanup code may then be run in
softirq context - which means that any spinlocks it might need to touch
have to be guarded to disable softirqs (ie. they need a "_bh" suffix).

Fix this by:

(1) Don't give a ref to the timer.

(2) Make the expiration function not do anything if the refcount is 0.
Note that this is more of an optimisation.

(3) Make sure that the cleanup routine waits for the timer to complete.

However, this has the consequence that the timer cannot give a ref to the work
item. Therefore the following fixes are also necessary:

(4) Don't give a ref to the work item.

(5) Make the work item return asap if it sees the ref count is 0.

(6) Make sure that the cleanup routine waits for the work item to
complete.

Unfortunately, neither the timer nor the work item can simply get around
the problem by just using refcount_inc_not_zero() as the waits would still
have to be done, and there would still be the possibility of having to put
the ref in the expiration function.

Note the call work item is going to go away with the work being transferred
to the I/O thread, so the wait in (6) will become obsolete.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: linux-afs@lists.infradead.org


# fa3492ab 02-Nov-2022 David Howells <dhowells@redhat.com>

rxrpc: Trace rxrpc_bundle refcount

Add a tracepoint for the rxrpc_bundle refcounting.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: linux-afs@lists.infradead.org


# cb0fc0c9 21-Oct-2022 David Howells <dhowells@redhat.com>

rxrpc: trace: Don't use __builtin_return_address for rxrpc_call tracing

In rxrpc tracing, use enums to generate lists of points of interest rather
than __builtin_return_address() for the rxrpc_call tracepoint

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: linux-afs@lists.infradead.org
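
The shape of the change, as a userspace sketch (the enum values and trace
function are hypothetical): each get/put site passes a small enum saying why
it is taking or dropping the reference, which the tracepoint can print
symbolically, instead of recording __builtin_return_address(0).

    #include <stdio.h>

    enum call_trace_why {
        call_get_input,          /* ref taken to process an incoming packet */
        call_get_userspace,      /* ref taken on behalf of a user call */
        call_put_release,        /* ref dropped when the socket releases it */
    };

    static const char *const call_trace_names[] = {
        [call_get_input]     = "GET input",
        [call_get_userspace] = "GET user",
        [call_put_release]   = "PUT release",
    };

    /* Stand-in for the tracepoint: the "why" is an enum, not a return address. */
    static void trace_call_ref(unsigned int debug_id, int refs,
                               enum call_trace_why why)
    {
        printf("call=%08x ref=%d %s\n", debug_id, refs, call_trace_names[why]);
    }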


# 7fa25105 21-Oct-2022 David Howells <dhowells@redhat.com>

rxrpc: trace: Don't use __builtin_return_address for rxrpc_conn tracing

In rxrpc tracing, use enums to generate lists of points of interest rather
than __builtin_return_address() for the rxrpc_conn tracepoint

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: linux-afs@lists.infradead.org


# 47c810a7 21-Oct-2022 David Howells <dhowells@redhat.com>

rxrpc: trace: Don't use __builtin_return_address for rxrpc_peer tracing

In rxrpc tracing, use enums to generate lists of points of interest rather
than __builtin_return_address() for the rxrpc_peer tracepoint

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: linux-afs@lists.infradead.org


# 0fde882f 21-Oct-2022 David Howells <dhowells@redhat.com>

rxrpc: trace: Don't use __builtin_return_address for rxrpc_local tracing

In rxrpc tracing, use enums to generate lists of points of interest rather
than __builtin_return_address() for the rxrpc_local tracepoint

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: linux-afs@lists.infradead.org


# 2cc80086 19-Oct-2022 David Howells <dhowells@redhat.com>

rxrpc: Drop rxrpc_conn_parameters from rxrpc_connection and rxrpc_bundle

Remove the rxrpc_conn_parameters struct from the rxrpc_connection and
rxrpc_bundle structs and emplace the members directly. These are going to
get filled in from the rxrpc_call struct in future.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: linux-afs@lists.infradead.org


# e969c92c 20-Oct-2022 David Howells <dhowells@redhat.com>

rxrpc: Remove the [_k]net() debugging macros

Remove the _net() and knet() debugging macros in favour of tracepoints.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: linux-afs@lists.infradead.org


# 2ebdb26e 20-Oct-2022 David Howells <dhowells@redhat.com>

rxrpc: Remove the [k_]proto() debugging macros

Remove the kproto() and _proto() debugging macros in preference to using
tracepoints for this.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: linux-afs@lists.infradead.org


# 3bcd6c7e 16-Nov-2022 David Howells <dhowells@redhat.com>

rxrpc: Fix race between conn bundle lookup and bundle removal [ZDI-CAN-15975]

After rxrpc_unbundle_conn() has removed a connection from a bundle, it
checks to see if there are any conns with available channels and, if not,
removes and attempts to destroy the bundle.

Whilst it does check, after grabbing client_bundles_lock, that there are no
connections attached, this races with rxrpc_look_up_bundle() retrieving the
bundle without yet attaching a connection to it (the connection is attached
later).

There is therefore a window in which the bundle can get destroyed before we
manage to attach a new connection to it.

Fix this by adding an "active" counter to struct rxrpc_bundle:

(1) rxrpc_connect_call() obtains an active count by prepping/looking up a
bundle and ditches it before returning.

(2) If, during rxrpc_connect_call(), a connection is added to the bundle,
this obtains an active count, which is held until the connection is
discarded.

(3) rxrpc_deactivate_bundle() is created to drop an active count on a
bundle and destroy it when the active count reaches 0. The active
count is checked inside client_bundles_lock() to prevent a race with
rxrpc_look_up_bundle().

(4) rxrpc_unbundle_conn() then calls rxrpc_deactivate_bundle().

Fixes: 245500d853e9 ("rxrpc: Rewrite the client connection manager")
Reported-by: zdi-disclosures@trendmicro.com # ZDI-CAN-15975
Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: zdi-disclosures@trendmicro.com
cc: Marc Dionne <marc.dionne@auristor.com>
cc: linux-afs@lists.infradead.org
Signed-off-by: David S. Miller <davem@davemloft.net>
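
A compact sketch of the core of the fix (simplified, with invented names; the
real code also manages a refcount and RCU): a lookup takes an active count
while holding the bundle lock, and deactivation unpublishes and destroys the
bundle only if the active count falls to zero under that same lock, closing
the window in which a concurrent lookup could see a bundle that is about to
be destroyed.

    #include <pthread.h>
    #include <stdlib.h>

    struct bundle {
        int active;              /* lookups + attached connections */
        struct bundle **slot;    /* where the bundle is published */
    };

    static pthread_mutex_t client_bundles_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Lookup path: take an active count under the lock. */
    static struct bundle *lookup_bundle(struct bundle **slot)
    {
        struct bundle *b;

        pthread_mutex_lock(&client_bundles_lock);
        b = *slot;
        if (b)
            b->active++;
        pthread_mutex_unlock(&client_bundles_lock);
        return b;
    }

    /* Deactivation path: only destroy once the last active user has gone,
     * checked under the same lock the lookup uses. */
    static void deactivate_bundle(struct bundle *b)
    {
        int destroy = 0;

        pthread_mutex_lock(&client_bundles_lock);
        if (--b->active == 0) {
            *b->slot = NULL;     /* unpublish so lookups can't find it */
            destroy = 1;
        }
        pthread_mutex_unlock(&client_bundles_lock);

        if (destroy)
            free(b);
    }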

