Revision tags: v5.10.41, v5.10.40, v5.10.39, v5.4.119, v5.10.36, v5.10.35, v5.10.34, v5.4.116, v5.10.33, v5.12, v5.10.32, v5.10.31, v5.10.30, v5.10.27, v5.10.26, v5.10.25, v5.10.24, v5.10.23, v5.10.22, v5.10.21, v5.10.20, v5.10.19, v5.4.101, v5.10.18, v5.10.17, v5.11, v5.10.16, v5.10.15, v5.10.14, v5.10 |
|
#
a6acbe62 |
| 24-Nov-2020 |
Vadim Fedorenko <vfedorenko@novek.ru> |
net/tls: add CHACHA20-POLY1305 specific behavior
RFC 7905 defines special behavior for ChaCha-Poly TLS sessions. The differences are in the calculation of the nonce and the absence of an explicit IV. This behavior partly matches TLSv1.3.
Signed-off-by: Vadim Fedorenko <vfedorenko@novek.ru> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
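For reference, a minimal userspace sketch of the RFC 7905 nonce construction described above (illustrative only; the function name is hypothetical and the kernel implementation differs):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* RFC 7905 nonce: the 64-bit record sequence number, left-padded
     * to 12 bytes, is XOR-ed into the 12-byte static IV. No explicit
     * IV is carried on the wire, unlike AES-GCM in TLS 1.2. */
    static void chacha_tls_nonce(uint8_t nonce[12], const uint8_t iv[12],
                                 uint64_t seq)
    {
        memcpy(nonce, iv, 12);
        for (int i = 0; i < 8; i++)
            nonce[4 + i] ^= (uint8_t)(seq >> (56 - 8 * i));
    }

    int main(void)
    {
        uint8_t iv[12] = { 0x01 }, nonce[12];

        chacha_tls_nonce(nonce, iv, 1); /* record with sequence number 1 */
        for (int i = 0; i < 12; i++)
            printf("%02x", nonce[i]);
        printf("\n");
        return 0;
    }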
|
#
923c40c4 |
| 24-Nov-2020 |
Vadim Fedorenko <vfedorenko@novek.ru> |
net/tls: add CHACHA20-POLY1305 specific defines and structures
To provide support for ChaCha-Poly cipher we need to define specific constants and structures.
Signed-off-by: Vadim Fedorenko <vfedorenko@novek.ru> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
#
6942a284 |
| 24-Nov-2020 |
Vadim Fedorenko <vfedorenko@novek.ru> |
net/tls: make inline helpers protocol-aware
Inline functions defined in tls.h have a lot of AES-specific constants. Remove these constants and change the argument to struct tls_prot_info, giving access to the cipher type in later patches.
Signed-off-by: Vadim Fedorenko <vfedorenko@novek.ru> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
#
f1d4184f |
| 01-Jun-2021 |
Maxim Mikityanskiy <maximmi@nvidia.com> |
net/tls: Fix use-after-free after the TLS device goes down and up
[ Upstream commit c55dcdd435aa6c6ad6ccac0a4c636d010ee367a4 ]
When a netdev with active TLS offload goes down, tls_device_down is called to stop the offload and tear down the TLS context. However, the socket stays alive, and it still points to the TLS context, which is now deallocated. If the netdev goes up while the connection is still active and the data flow resumes after a number of TCP retransmissions, this leads to a use-after-free of the TLS context.
This commit addresses this bug by keeping the context alive until its normal destruction, and implements the necessary fallbacks, so that the connection can resume in software (non-offloaded) kTLS mode.
On the TX side tls_sw_fallback is used to encrypt all packets. The RX side already has all the necessary fallbacks, because receiving non-decrypted packets is supported. The thing needed on the RX side is to block resync requests, which are normally produced after receiving non-decrypted packets.
The necessary synchronization is implemented for a graceful teardown: first the fallbacks are deployed, then the driver resources are released (it used to be possible to have a tls_dev_resync after tls_dev_del).
A new flag called TLS_RX_DEV_DEGRADED is added to indicate the fallback mode. It's used to skip the RX resync logic completely, as it becomes useless, and some objects may be released (for example, resync_async, which is allocated and freed by the driver).
Fixes: e8f69799810c ("net/tls: Add generic NIC offload infrastructure") Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Sasha Levin <sashal@kernel.org>
|
#
874ece25 |
| 01-Jun-2021 |
Maxim Mikityanskiy <maximmi@nvidia.com> |
net/tls: Replace TLS_RX_SYNC_RUNNING with RCU
[ Upstream commit 05fc8b6cbd4f979a6f25759c4a17dd5f657f7ecd ]
RCU synchronization is guaranteed to finish in finite time, unlike a busy loop that polls a flag. This patch is a preparation for the bugfix in the next patch, where the same synchronize_net() call will also be used to sync with the TX datapath.
Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Sasha Levin <sashal@kernel.org>
|
#
025cc2fb |
| 25-Nov-2020 |
Maxim Mikityanskiy <maximmi@mellanox.com> |
net/tls: Protect from calling tls_dev_del for TLS RX twice
tls_device_offload_cleanup_rx doesn't clear tls_ctx->netdev after calling tls_dev_del if TLS TX offload is also enabled. Clearing tls_ctx->netdev gets postponed until tls_device_gc_task. This leaves a window during which tls_device_down may get called and invoke tls_dev_del for RX one extra time, confusing the driver, which may lead to a crash.
This patch corrects this racy behavior by adding a flag to prevent tls_device_down from calling tls_dev_del the second time.
Fixes: e8f69799810c ("net/tls: Add generic NIC offload infrastructure") Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Link: https://lore.kernel.org/r/20201125221810.69870-1-saeedm@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
#
138559b9 |
| 15-Nov-2020 |
Tariq Toukan <tariqt@nvidia.com> |
net/tls: Fix wrong record sn in async mode of device resync
In async_resync mode, we log the TCP seq of records until the async request is completed. Later, in case one of the logged seqs matches the resync request, we return it, together with its record serial number. Before this fix, we mistakenly returned the serial number of the current record instead.
Fixes: ed9b7646b06a ("net/tls: Add asynchronous resync") Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Reviewed-by: Boris Pismenny <borisp@nvidia.com> Link: https://lore.kernel.org/r/20201115131448.2702-1-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
Revision tags: v5.8.17, v5.8.16, v5.8.15, v5.9 |
|
#
923527dc |
| 09-Oct-2020 |
Randy Dunlap <rdunlap@infradead.org> |
net/tls: remove a duplicate function prototype
Remove one of the two instances of the function prototype for tls_validate_xmit_skb().
Signed-off-by: Randy Dunlap <rdunlap@infradead.org> Cc: Boris Pismenny <borisp@nvidia.com> Cc: Aviad Yehezkel <aviadye@nvidia.com> Cc: John Fastabend <john.fastabend@gmail.com> Cc: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
Revision tags: v5.8.14, v5.8.13, v5.8.12, v5.8.11, v5.8.10, v5.8.9, v5.8.8, v5.8.7, v5.8.6, v5.4.62, v5.8.5, v5.8.4, v5.4.61, v5.8.3, v5.4.60, v5.8.2, v5.4.59, v5.8.1, v5.4.58, v5.4.57, v5.4.56, v5.8, v5.7.12, v5.4.55, v5.7.11, v5.4.54, v5.7.10, v5.4.53, v5.4.52, v5.7.9, v5.7.8, v5.4.51, v5.4.50, v5.7.7 |
|
#
a6ed3ebc |
| 30-Jun-2020 |
Colin Ian King <colin.king@canonical.com> |
net/tls: fix sign extension issue when left shifting u16 value
Left shifting the u16 value promotes it to an int, which then gets sign extended to a u64. If len << 16 is greater than 0x7fffffff, the upper bits get set to 1 because of the implicit sign extension. Fix this by casting len to u64 before shifting it.
Addresses-Coverity: ("integer handling issues") Fixes: ed9b7646b06a ("net/tls: Add asynchronous resync") Signed-off-by: Colin Ian King <colin.king@canonical.com> Reviewed-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
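A standalone sketch of the promotion bug described above (illustrative, not the kernel code; the shift of the promoted value is shown only to demonstrate the symptom):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint16_t len = 0xffff;

        /* Buggy: len is promoted to int; the shifted value has bit 31
         * set, so assigning to a u64 sign-extends the upper 32 bits. */
        uint64_t bad = len << 16;

        /* Fixed: cast to u64 first, so the shift happens unsigned. */
        uint64_t good = (uint64_t)len << 16;

        printf("bad  = 0x%016llx\n", (unsigned long long)bad);  /* 0xffffffffffff0000 */
        printf("good = 0x%016llx\n", (unsigned long long)good); /* 0x00000000ffff0000 */
        return 0;
    }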
|
Revision tags: v5.4.49, v5.7.6, v5.7.5, v5.4.48, v5.7.4, v5.7.3, v5.4.47, v5.4.46, v5.7.2 |
|
#
ed9b7646 |
| 08-Jun-2020 |
Boris Pismenny <borisp@mellanox.com> |
net/tls: Add asynchronous resync
This patch adds support for asynchronous resynchronization in tls_device. Async resync follows two distinct stages:
1. The NIC driver indicates that it would like to resync on some TLS record within the received packet (P), but it does not yet know which of the TLS records within the packet. At this stage, the NIC driver queries the device for the exact TCP sequence for resync (tcpsn); however, it does not wait for the device to provide the response.
2. Eventually, the device responds, and the driver provides the tcpsn within the resync packet to KTLS. Now, KTLS can check the tcpsn against any processed TLS records within packet P, and also against any record that is processed in the future within packet P.
The asynchronous resync path simplifies the device driver, as it can save bits on the packet completion (32-bit TCP sequence), and pass this information on an asynchronous command instead.
Signed-off-by: Boris Pismenny <borisp@mellanox.com> Signed-off-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
|
#
acb5a07a |
| 08-Jun-2020 |
Boris Pismenny <borisp@mellanox.com> |
Revert "net/tls: Add force_resync for driver resync"
This reverts commit b3ae2459f89773adcbf16fef4b68deaaa3be1929. The force resync API is not in use and is to be replaced by a better async resync API downstream.
Signed-off-by: Boris Pismenny <borisp@mellanox.com> Signed-off-by: Tariq Toukan <tariqt@mellanox.com> Reviewed-by: Maxim Mikityanskiy <maximmi@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
|
Revision tags: v5.4.45, v5.7.1, v5.4.44, v5.7 |
|
#
e91de6af |
| 29-May-2020 |
John Fastabend <john.fastabend@gmail.com> |
bpf: Fix running sk_skb program types with ktls
KTLS uses a stream parser to collect TLS messages and send them to the upper layer tls receive handler. This ensures the tls receiver has a full TLS header to parse when it is run. However, when a socket has BPF_SK_SKB_STREAM_VERDICT program attached before KTLS is enabled we end up with two stream parsers running on the same socket.
The result is that both try to run on the same socket. First the KTLS stream parser runs and calls read_sock(), which calls tcp_read_sock(), which in turn calls tcp_rcv_skb(). This dequeues the skb from the sk_receive_queue. When this is done, the KTLS code invokes the data_ready() callback which, because we stacked KTLS on top of the BPF stream verdict program, has been replaced with sk_psock_start_strp(). This in turn kicks the stream parser again and eventually does the same thing KTLS did above, calling into tcp_rcv_skb() and dequeuing a skb from the sk_receive_queue.
At this point the data stream is broken. Part of the stream was handled by the KTLS side, and other bytes may have been handled by the BPF side. Generally this results in either missing data or, more likely, a "Bad Message" complaint from the kTLS receive handler, as the BPF program steals some bytes meant to be in a TLS header and/or the TLS header length is no longer correct.
We've already broken the idealized model where we can stack ULPs in any order with generic callbacks on the TX side to handle this. So in this patch we do the same thing but for the RX side. We add a sk_psock_strp_enabled() helper so TLS can learn a BPF verdict program is running, and a tls_sw_has_ctx_rx() helper so the BPF side can learn there is a TLS ULP on the socket.
Then on the BPF side we omit calling our stream parser to avoid breaking the data stream for the KTLS receiver. Then on the KTLS side we call BPF_SK_SKB_STREAM_VERDICT once the KTLS receiver is done with the packet but before it posts the msg to userspace. This gives us symmetry between the TX and RX halves and IMO makes it usable again. On the TX side we process packets in this order: BPF -> TLS -> TCP; on the receive side in the reverse order: TCP -> TLS -> BPF.
Discovered while testing OpenSSL 3.0 Alpha2.0 release.
Fixes: d829e9c4112b5 ("tls: convert to generic sk_msg interface") Signed-off-by: John Fastabend <john.fastabend@gmail.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/159079361946.5745.605854335665044485.stgit@john-Precision-5820-Tower Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
Revision tags: v5.4.43 |
|
#
b3ae2459 |
| 27-May-2020 |
Tariq Toukan <tariqt@mellanox.com> |
net/tls: Add force_resync for driver resync
This patch adds a field to the tls rx offload context which enables drivers to force a send_resync call.
This field can be used by drivers to request a resync at the next possible TLS record. It is beneficial for hardware that provides the resync sequence number asynchronously. In such cases, the packet that triggered the resync does not contain the information required for a resync. Instead, the driver requests resync for all the following TLS records until the asynchronous notification with the resync request TCP sequence arrives.
A following series for mlx5e ConnectX-6DX TLS RX offload support will use this mechanism.
Signed-off-by: Boris Pismenny <borisp@mellanox.com> Signed-off-by: Tariq Toukan <tariqt@mellanox.com> Reviewed-by: Maxim Mikityanskiy <maximmi@mellanox.com> Reviewed-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
|
#
0cada332 |
| 22-May-2020 |
Vinay Kumar Yadav <vinay.yadav@chelsio.com> |
net/tls: fix race condition causing kernel panic
tls_sw_recvmsg() and tls_decrypt_done() can be run concurrently:

    // tls_sw_recvmsg()
    if (atomic_read(&ctx->decrypt_pending))
            crypto_wait_req(-EINPROGRESS, &ctx->async_wait);
    else
            reinit_completion(&ctx->async_wait.completion);

    // tls_decrypt_done()
    pending = atomic_dec_return(&ctx->decrypt_pending);
    if (!pending && READ_ONCE(ctx->async_notify))
            complete(&ctx->async_wait.completion);

Consider the scenario where tls_decrypt_done() is about to run complete(), having already evaluated

    if (!pending && READ_ONCE(ctx->async_notify))

and tls_sw_recvmsg() reads decrypt_pending == 0 and does reinit_completion(), after which tls_decrypt_done() runs complete(). This sequence of execution results in a wrong completion. Consequently, the next decrypt request will not wait for completion and, on connection close, the crypto resources are freed with no way left to handle the pending decrypt response.
This race condition can be avoided by making atomic_read() mutually exclusive with atomic_dec_return()/complete(). Introduce a spin lock to ensure the mutual exclusion.
A similar problem in the TX direction is addressed as well.
v1->v2:
- More readable commit message.
- Corrected the lock to fix a new race scenario.
- Removed a barrier which is not needed now.
Fixes: a42055e8d2c3 ("net/tls: Add support for async encryption of records for performance") Signed-off-by: Vinay Kumar Yadav <vinay.yadav@chelsio.com> Reviewed-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
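A userspace analogue of the fix, with hypothetical names (the kernel patch takes a spin lock around the kernel completion API; pthread primitives stand in here):

    #include <pthread.h>
    #include <stdbool.h>

    /* Guard the check-then-wait/reinit path and the
     * decrement-then-complete path with one lock, so neither
     * can interleave with the other. */
    struct decrypt_ctx {
        pthread_mutex_t lock;
        int decrypt_pending;
        bool done;
    };

    /* Mirrors tls_decrypt_done(): decrement and signal under the lock. */
    static void decrypt_done(struct decrypt_ctx *ctx)
    {
        pthread_mutex_lock(&ctx->lock);
        if (--ctx->decrypt_pending == 0)
            ctx->done = true;           /* stands in for complete() */
        pthread_mutex_unlock(&ctx->lock);
    }

    /* Mirrors tls_sw_recvmsg(): check and re-arm under the same lock. */
    static bool recvmsg_check(struct decrypt_ctx *ctx)
    {
        bool must_wait;

        pthread_mutex_lock(&ctx->lock);
        must_wait = ctx->decrypt_pending > 0;
        if (!must_wait)
            ctx->done = false;          /* stands in for reinit_completion() */
        pthread_mutex_unlock(&ctx->lock);
        return must_wait;
    }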
|
Revision tags: v5.4.42, v5.4.41, v5.4.40, v5.4.39, v5.4.38, v5.4.37, v5.4.36, v5.4.35, v5.4.34, v5.4.33, v5.4.32, v5.4.31, v5.4.30, v5.4.29, v5.6, v5.4.28, v5.4.27, v5.4.26, v5.4.25, v5.4.24, v5.4.23, v5.4.22, v5.4.21, v5.4.20, v5.4.19, v5.4.18, v5.4.17, v5.4.16, v5.5, v5.4.15, v5.4.14, v5.4.13, v5.4.12, v5.4.11, v5.4.10, v5.4.9, v5.4.8, v5.4.7, v5.4.6, v5.4.5 |
|
#
8d5a49e9 |
| 17-Dec-2019 |
Jakub Kicinski <jakub.kicinski@netronome.com> |
net/tls: add helper for testing if socket is RX offloaded
There is currently no way for a driver to reliably check that the socket it has looked up is in fact RX offloaded. Add a helper. This allows drivers to catch misbehaving firmware.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
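A sketch of driver-side use; the helper name tls_is_sk_rx_device_offloaded follows this commit, but the surrounding function is hypothetical:

    #include <net/tls.h>

    /* Hypothetical driver-side check: only trust a firmware-reported
     * socket if TLS RX device offload is really enabled on it. */
    static bool drv_sock_rx_offloaded(struct sock *sk)
    {
        return tls_is_sk_rx_device_offloaded(sk);
    }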
|
Revision tags: v5.4.4, v5.4.3, v5.3.15, v5.4.2, v5.4.1, v5.3.14 |
|
#
c5daa6cc |
| 27-Nov-2019 |
Jakub Kicinski <jakub.kicinski@netronome.com> |
net/tls: use sg_next() to walk sg entries
The partially sent record cleanup path increments an SG entry directly instead of using sg_next(). This should not be a problem today, as encrypted messages are always allocated as arrays, but since this is a cleanup path, the mistake would be easy to miss were this ever to change. Use sg_next(), and simplify the code.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Simon Horman <simon.horman@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
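A minimal sketch of the pattern (hypothetical helper; the actual cleanup code does more than release pages):

    #include <linux/scatterlist.h>
    #include <linux/mm.h>

    /* Walk the record's sg list with sg_next() rather than sg++, so
     * chained (non-array) scatterlists are traversed correctly too. */
    static void tls_free_partial_sgs(struct scatterlist *sgl)
    {
        struct scatterlist *sg;

        for (sg = sgl; sg; sg = sg_next(sg))
            put_page(sg_page(sg));
    }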
|
#
9e5ffed3 |
| 27-Nov-2019 |
Jakub Kicinski <jakub.kicinski@netronome.com> |
net/tls: remove the dead inplace_crypto code
Looks like when BPF support was added by commit d3b18ad31f93 ("tls: add bpf support to sk_msg handling") and commit d829e9c4112b ("tls: convert to generic sk_msg interface") it broke/removed the support for in-place crypto as added by commit 4e6d47206c32 ("tls: Add support for inplace records encryption").
The inplace_crypto member of struct tls_rec is dead: it is initialized to zero and sometimes set to zero again. It used to be set to 1 when a record was allocated, but the skmsg code doesn't seem to have been written with in-place crypto in mind.
Since non-trivial effort is required to bring the feature back, and we don't really have the hardware to measure the benefit, just remove the leftover support for now to avoid confusing readers.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Simon Horman <simon.horman@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
|
Revision tags: v5.4, v5.3.13, v5.3.12 |
|
#
d4ffb02d |
| 18-Nov-2019 |
Willem de Bruijn <willemb@google.com> |
net/tls: enable sk_msg redirect to tls socket egress
Bring back tls_sw_sendpage_locked. sk_msg redirection into a socket with TLS_TX takes the following path:
tcp_bpf_sendmsg_redir
  tcp_bpf_push_locked
    tcp_bpf_push
      kernel_sendpage_locked
        sock->ops->sendpage_locked
Also update the flags test in tls_sw_sendpage_locked to allow flag MSG_NO_SHARED_FRAGS. bpf_tcp_sendmsg sets this.
Link: https://lore.kernel.org/netdev/CA+FuTSdaAawmZ2N8nfDDKu3XLpXBbMtcCT0q4FntDD2gn8ASUw@mail.gmail.com/T/#t Link: https://github.com/wdebruij/kerneltools/commits/icept.2 Fixes: 0608c69c9a80 ("bpf: sk_msg, sock{map|hash} redirect through ULP") Fixes: f3de19af0f5b ("Revert \"net/tls: remove unused function tls_sw_sendpage_locked\"") Signed-off-by: Willem de Bruijn <willemb@google.com> Acked-by: John Fastabend <john.fastabend@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
|
Revision tags: v5.3.11, v5.3.10, v5.3.9 |
|
#
79ffe608 |
| 05-Nov-2019 |
Jakub Kicinski <jakub.kicinski@netronome.com> |
net/tls: add a TX lock
TLS TX needs to release and re-acquire the socket lock if the send buffer fills up.
The TLS SW TX path currently depends on allowing only one thread to enter the function, by abuse of sk_write_pending. If another writer is already waiting for memory, no new ones are allowed in.
This has two problems:
- writers don't wake other threads up when they leave the kernel, meaning that this scheme works for a single extra thread (a second application thread or delayed work), because memory becoming available will send a wake-up request, but as Mallesham and Pooja report, with a larger number of threads it leads to threads being put to sleep indefinitely;
- the delayed work does not get _scheduled_ but it may _run_ when other writers are present, leading to crashes as writers don't expect state to change under their feet (the same records get pushed and freed multiple times); it's hard to reliably bail from the work, however, because the mere presence of a writer does not guarantee that the writer will push pending records before exiting.
Ensuring wakeups always happen would make the code basically open-code a mutex. Just use a mutex.
The TLS HW TX path does not have any locking (not even the sk_write_pending hack), yet it uses a per-socket sg_tx_data array to push records.
Fixes: a42055e8d2c3 ("net/tls: Add support for async encryption of records for performance") Reported-by: Mallesham Jatharakonda <mallesh537@gmail.com> Reported-by: Pooja Trivedi <poojatrivedi@gmail.com> Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Simon Horman <simon.horman@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
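A simplified sketch of the approach (struct and function names assumed; the patch adds such a mutex to the TLS context):

    #include <linux/mutex.h>

    /* One per-socket mutex serializes all TX-path writers, replacing
     * the implicit single-writer scheme built on sk_write_pending. */
    struct tls_tx_state {
        struct mutex tx_lock;
    };

    static void tls_tx_push_pending(struct tls_tx_state *tx)
    {
        mutex_lock(&tx->tx_lock);
        /* Push/encrypt pending records here; the socket lock may be
         * released and re-acquired while tx_lock keeps writers out. */
        mutex_unlock(&tx->tx_lock);
    }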
|
Revision tags: v5.3.8, v5.3.7, v5.3.6, v5.3.5 |
|
#
bc76e5bb |
| 06-Oct-2019 |
Jakub Kicinski <jakub.kicinski@netronome.com> |
net/tls: store decrypted on a single bit
Use a single bit instead of a boolean to remember if a packet was already decrypted.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Simon Horman <simon.horman@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
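A minimal sketch of the storage change in this commit and the next one (struct name hypothetical):

    /* Booleans packed into single bits of one byte instead of
     * separate full-width integers. */
    struct tls_sw_bits {
        unsigned char decrypted : 1;     /* record already decrypted by the device */
        unsigned char async_capable : 1; /* decryption may complete asynchronously */
    };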
|
#
5c5458ec |
| 06-Oct-2019 |
Jakub Kicinski <jakub.kicinski@netronome.com> |
net/tls: store async_capable on a single bit
Store async_capable on a single bit instead of a full integer to save space.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Simon Horman <simon.horman@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
|
#
4de30a8d |
| 06-Oct-2019 |
Jakub Kicinski <jakub.kicinski@netronome.com> |
net/tls: pass context to tls_device_decrypted()
Avoid unnecessary pointer chasing and calculations, callers already have most of the state tls_device_decrypted() needs.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
|
Revision tags: v5.3.4, v5.3.3 |
|
#
d26b698d |
| 04-Oct-2019 |
Jakub Kicinski <jakub.kicinski@netronome.com> |
net/tls: add skeleton of MIB statistics
Add a skeleton structure for adding TLS statistics.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
|
#
8538d29c |
| 04-Oct-2019 |
Jakub Kicinski <jakub.kicinski@netronome.com> |
net/tls: add tracing for device/offload events
Add tracing of device-related interaction to aid performance analysis, especially around resync:
tls:tls_device_offload_set
tls:tls_device_rx_resync_send
tls:tls_device_rx_resync_nh_schedule
tls:tls_device_rx_resync_nh_delay
tls:tls_device_tx_resync_req
tls:tls_device_tx_resync_send
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
|
#
08700dab |
| 03-Oct-2019 |
Jakub Kicinski <jakub.kicinski@netronome.com> |
net/tls: move TOE-related code to a separate file
Move tls_hw_* functions to a new, separate source file to avoid confusion with normal, non-TOE offload.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: John Hurley <john.hurley@netronome.com> Reviewed-by: Simon Horman <simon.horman@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
|