Revision tags: v6.6.25, v6.6.24, v6.6.23, v6.6.16, v6.6.15, v6.6.14, v6.6.13, v6.6.12, v6.6.11, v6.6.10, v6.6.9, v6.6.8, v6.6.7, v6.6.6, v6.6.5, v6.6.4, v6.6.3 |
# d68ac388 | 28-Nov-2023 | Herbert Xu <herbert@gondor.apana.org.au>
crypto: s390/aes - Fix buffer overread in CTR mode
commit d07f951903fa9922c375b8ab1ce81b18a0034e3b upstream.
When processing the last block, the s390 ctr code will always read a whole block, even if there isn't a whole block of data left. Fix this by using the actual length left and copy it into a buffer first for processing.
Fixes: 0200f3ecc196 ("crypto: s390 - add System z hardware support for CTR mode") Cc: <stable@vger.kernel.org> Reported-by: Guangwu Zhang <guazhang@redhat.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Reviewed-by: Harald Freudenberger <freude@de.ibm.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
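Editor's note: a minimal sketch of the bounce-buffer pattern this fix describes, in generic C. ctr_crypt_blocks() is a hypothetical stand-in for the CPACF CTR primitive that always consumes whole blocks; this is not the actual aes_s390.c code.

    #include <string.h>

    #define AES_BLOCK_SIZE 16

    /* hypothetical stand-in for the hardware CTR primitive (whole blocks only) */
    void ctr_crypt_blocks(unsigned char *dst, const unsigned char *src,
                          unsigned int nblocks, unsigned char *ctr);

    /* Handle a trailing partial block without reading past the end of src:
     * bounce the remaining bytes through a full-size temporary block. */
    static void ctr_final_partial(unsigned char *dst, const unsigned char *src,
                                  unsigned int nbytes, unsigned char *ctr)
    {
        unsigned char buf[AES_BLOCK_SIZE];

        memcpy(buf, src, nbytes);            /* copy only the bytes that are left */
        ctr_crypt_blocks(buf, buf, 1, ctr);  /* hardware still sees one full block */
        memcpy(dst, buf, nbytes);            /* write back only the bytes that are left */
        memset(buf, 0, sizeof(buf));         /* don't leave key stream on the stack */
    }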
Revision tags: v6.6.2, v6.5.11, v6.6.1, v6.5.10, v6.6, v6.5.9, v6.5.8, v6.5.7, v6.5.6, v6.5.5, v6.5.4, v6.5.3, v6.5.2, v6.1.51, v6.5.1, v6.1.50, v6.5, v6.1.49, v6.1.48, v6.1.46, v6.1.45, v6.1.44, v6.1.43, v6.1.42, v6.1.41, v6.1.40, v6.1.39, v6.1.38, v6.1.37, v6.1.36, v6.4, v6.1.35, v6.1.34, v6.1.33, v6.1.32, v6.1.31, v6.1.30, v6.1.29, v6.1.28, v6.1.27, v6.1.26, v6.3, v6.1.25, v6.1.24, v6.1.23, v6.1.22, v6.1.21, v6.1.20, v6.1.19, v6.1.18, v6.1.17, v6.1.16, v6.1.15, v6.1.14, v6.1.13, v6.2, v6.1.12, v6.1.11, v6.1.10, v6.1.9, v6.1.8, v6.1.7, v6.1.6, v6.1.5, v6.0.19, v6.0.18, v6.1.4, v6.1.3, v6.0.17, v6.1.2, v6.0.16 |
# b6f52780 | 29-Dec-2022 | Vladis Dronov <vdronov@redhat.com>
crypto: s390/aes - drop redundant xts key check
xts_fallback_setkey() in xts_aes_set_key() will now enforce the key size rule in FIPS mode when setting up the fallback algorithm keys, which makes the check in xts_aes_set_key() redundant or unreachable. So just drop this check.
xts_fallback_setkey() now makes a key size check in xts_verify_key():
xts_fallback_setkey()
  crypto_skcipher_setkey()
    [ skcipher_setkey_unaligned() ]
      cipher->setkey() { .setkey = xts_setkey }
        xts_setkey()
          xts_verify_key()
Signed-off-by: Vladis Dronov <vdronov@redhat.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
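Editor's note: a simplified sketch of the kind of checks the fallback setkey path now provides via xts_verify_key(); the exact checks in include/crypto/xts.h may differ by kernel version, and this helper name is illustrative.

    #include <errno.h>
    #include <string.h>

    static int xts_key_check_sketch(const unsigned char *key, unsigned int keylen,
                                    int fips_enabled)
    {
        /* an XTS key is two equally sized keys concatenated */
        if (keylen % 2)
            return -EINVAL;

        /* FIPS mode only allows a combined key length of 256 or 512 bits */
        if (fips_enabled && keylen != 32 && keylen != 64)
            return -EINVAL;

        /* in FIPS mode the data key and the tweak key must also differ */
        if (fips_enabled && !memcmp(key, key + keylen / 2, keylen / 2))
            return -EINVAL;

        return 0;
    }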
Revision tags: v6.1.1, v6.0.15, v6.0.14, v6.0.13, v6.1, v6.0.12, v6.0.11, v6.0.10, v5.15.80, v6.0.9, v5.15.79, v6.0.8, v5.15.78, v6.0.7, v5.15.77, v5.15.76, v6.0.6, v6.0.5, v5.15.75, v6.0.4, v6.0.3, v6.0.2, v5.15.74, v5.15.73, v6.0.1, v5.15.72, v6.0, v5.15.71, v5.15.70, v5.15.69, v5.15.68, v5.15.67, v5.15.66, v5.15.65, v5.15.64, v5.15.63, v5.15.62, v5.15.61, v5.15.60, v5.15.59, v5.19, v5.15.58, v5.15.57, v5.15.56, v5.15.55 |
# 0a5f9b38 | 13-Jul-2022 | Heiko Carstens <hca@linux.ibm.com>
s390/cpufeature: rework to allow more than only hwcap bits
Rework the cpufeature implementation to allow for various cpu feature indications, no longer limited to hwcap bits. This is achieved by adding a sequential list of cpu feature numbers, where each of them is mapped to an entry which indicates what this number is about.
Each entry contains a type member, which indicates which feature name space to look into (e.g. hwcap, or cpu facility). If wanted, this also allows modules to be loaded automatically only in e.g. z/VM configurations.
Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Steffen Eiden <seiden@linux.ibm.com> Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Reviewed-by: Hendrik Brueckner <brueckner@linux.ibm.com> Link: https://lore.kernel.org/r/20220713125644.16121-2-seiden@linux.ibm.com Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
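Editor's note: an illustrative sketch of the mapping the commit describes (feature number mapped to an entry with a type member); the names and numbers below are for illustration only, not the exact arch/s390 code.

    enum feature_type { TYPE_HWCAP, TYPE_FACILITY };

    struct cpu_feature_entry {
        enum feature_type type;   /* which name space to look into */
        unsigned int num;         /* hwcap bit number or facility number */
    };

    static const struct cpu_feature_entry cpu_features[] = {
        { TYPE_FACILITY, 17 },    /* e.g. a facility-based feature */
        { TYPE_HWCAP,    11 },    /* e.g. a hwcap-bit based feature */
    };

    static int cpu_have_feature_sketch(unsigned int nr, unsigned long hwcap,
                                       int (*test_facility)(unsigned int))
    {
        const struct cpu_feature_entry *e = &cpu_features[nr];

        switch (e->type) {
        case TYPE_HWCAP:
            return !!(hwcap & (1UL << e->num));
        case TYPE_FACILITY:
            return test_facility(e->num);
        }
        return 0;
    }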
Revision tags: v5.15.54, v5.15.53, v5.15.52, v5.15.51, v5.15.50, v5.15.49, v5.15.48, v5.15.47, v5.15.46, v5.15.45, v5.15.44, v5.15.43, v5.15.42, v5.18, v5.15.41 |
# bd52cd5e | 17-May-2022 | Jann Horn <jannh@google.com>
s390/crypto: fix scatterwalk_unmap() callers in AES-GCM
The argument of scatterwalk_unmap() is supposed to be the void* that was returned by the previous scatterwalk_map() call. The s390 AES-GCM implementation was instead passing the pointer to the struct scatter_walk.
This doesn't actually break anything because scatterwalk_unmap() only uses its argument under CONFIG_HIGHMEM and ARCH_HAS_FLUSH_ON_KUNMAP.
Fixes: bf7fa038707c ("s390/crypto: add s390 platform specific aes gcm support.") Signed-off-by: Jann Horn <jannh@google.com> Acked-by: Harald Freudenberger <freude@linux.ibm.com> Link: https://lore.kernel.org/r/20220517143047.3054498-1-jannh@google.com Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
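Editor's note: the map/unmap pairing the fix restores, shown schematically; the helper name is illustrative and the surrounding GCM walking code is omitted.

    #include <crypto/scatterwalk.h>

    static void process_one_mapping(struct scatter_walk *walk)
    {
        void *vaddr = scatterwalk_map(walk);  /* returns the mapped virtual address */

        /* ... operate on the bytes at vaddr ... */

        scatterwalk_unmap(vaddr);    /* correct: the pointer scatterwalk_map() returned */
        /* scatterwalk_unmap(walk);     wrong: this passed the struct scatter_walk */
    }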
# a67b4646 | 17-May-2022 | Jann Horn <jannh@google.com>
s390/crypto: fix scatterwalk_unmap() callers in AES-GCM
[ Upstream commit bd52cd5e23f134019b23f0c389db0f9a436e4576 ]
The argument of scatterwalk_unmap() is supposed to be the void* that was returned by the previous scatterwalk_map() call. The s390 AES-GCM implementation was instead passing the pointer to the struct scatter_walk.
This doesn't actually break anything because scatterwalk_unmap() only uses its argument under CONFIG_HIGHMEM and ARCH_HAS_FLUSH_ON_KUNMAP.
Fixes: bf7fa038707c ("s390/crypto: add s390 platform specific aes gcm support.") Signed-off-by: Jann Horn <jannh@google.com> Acked-by: Harald Freudenberger <freude@linux.ibm.com> Link: https://lore.kernel.org/r/20220517143047.3054498-1-jannh@google.com Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
Revision tags: v5.15.40, v5.15.39, v5.15.38, v5.15.37, v5.15.36, v5.15.35, v5.15.34, v5.15.33, v5.15.32, v5.15.31, v5.17, v5.15.30, v5.15.29, v5.15.28, v5.15.27, v5.15.26, v5.15.25, v5.15.24, v5.15.23, v5.15.22, v5.15.21, v5.15.20, v5.15.19, v5.15.18, v5.15.17, v5.4.173, v5.15.16, v5.15.15, v5.16, v5.15.10, v5.15.9, v5.15.8, v5.15.7, v5.15.6, v5.15.5, v5.15.4, v5.15.3, v5.15.2, v5.15.1, v5.15, v5.14.14, v5.14.13, v5.14.12, v5.14.11, v5.14.10, v5.14.9, v5.14.8, v5.14.7, v5.14.6, v5.10.67, v5.10.66, v5.14.5, v5.14.4, v5.10.65, v5.14.3, v5.10.64, v5.14.2, v5.10.63, v5.14.1, v5.10.62, v5.14, v5.10.61, v5.10.60, v5.10.53, v5.10.52, v5.10.51, v5.10.50, v5.10.49, v5.13, v5.10.46, v5.10.43, v5.10.42, v5.10.41, v5.10.40, v5.10.39, v5.4.119, v5.10.36, v5.10.35, v5.10.34, v5.4.116, v5.10.33, v5.12, v5.10.32, v5.10.31, v5.10.30, v5.10.27, v5.10.26, v5.10.25, v5.10.24, v5.10.23, v5.10.22, v5.10.21, v5.10.20, v5.10.19, v5.4.101, v5.10.18, v5.10.17, v5.11, v5.10.16, v5.10.15, v5.10.14, v5.10 |
# 0eb76ba2 | 11-Dec-2020 | Ard Biesheuvel <ardb@kernel.org>
crypto: remove cipher routines from public crypto API
The cipher routines in the crypto API are mostly intended for templates implementing skcipher modes generically in software, and shouldn't be used outside of the crypto subsystem. So move the prototypes and all related definitions to a new header file under include/crypto/internal. Also, let's use the new module namespace feature to move the symbol exports into a new namespace CRYPTO_INTERNAL.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Acked-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
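Editor's note: a sketch of what this change implies for a driver that still uses the bare cipher API; the module and function names are illustrative.

    #include <linux/module.h>
    #include <linux/err.h>
    #include <crypto/internal/cipher.h>   /* prototypes now live here */

    MODULE_IMPORT_NS(CRYPTO_INTERNAL);    /* needed to link against the moved exports;
                                             newer kernels spell this as a string */

    static struct crypto_cipher *tfm;

    static int __init example_init(void)
    {
        tfm = crypto_alloc_cipher("aes", 0, 0);
        return PTR_ERR_OR_ZERO(tfm);
    }

    static void __exit example_exit(void)
    {
        crypto_free_cipher(tfm);
    }

    module_init(example_init);
    module_exit(example_exit);
    MODULE_LICENSE("GPL");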
Revision tags: v5.8.17, v5.8.16, v5.8.15, v5.9, v5.8.14, v5.8.13, v5.8.12, v5.8.11, v5.8.10, v5.8.9, v5.8.8, v5.8.7, v5.8.6, v5.4.62, v5.8.5, v5.8.4, v5.4.61, v5.8.3, v5.4.60, v5.8.2, v5.4.59, v5.8.1, v5.4.58, v5.4.57, v5.4.56, v5.8, v5.7.12, v5.4.55, v5.7.11, v5.4.54, v5.7.10, v5.4.53, v5.4.52, v5.7.9, v5.7.8, v5.4.51, v5.4.50, v5.7.7, v5.4.49, v5.7.6, v5.7.5, v5.4.48, v5.7.4, v5.7.3, v5.4.47, v5.4.46, v5.7.2, v5.4.45, v5.7.1, v5.4.44, v5.7, v5.4.43, v5.4.42, v5.4.41, v5.4.40, v5.4.39, v5.4.38, v5.4.37, v5.4.36, v5.4.35, v5.4.34, v5.4.33, v5.4.32, v5.4.31, v5.4.30, v5.4.29, v5.6, v5.4.28, v5.4.27, v5.4.26, v5.4.25, v5.4.24, v5.4.23 |
# 4a559cd1 | 25-Feb-2020 | Torsten Duwe <duwe@suse.de>
s390/crypto: explicitly memzero stack key material in aes_s390.c
aes_s390.c has several functions which allocate space for key material on the stack and leave the used keys there. It is considered good practice to clean these locations before the function returns.
Link: https://lkml.kernel.org/r/20200221165511.GB6928@lst.de Signed-off-by: Torsten Duwe <duwe@suse.de> Signed-off-by: Harald Freudenberger <freude@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
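Editor's note: a minimal sketch of the practice the commit describes; expand_key() and run_cipher() are hypothetical helpers, only memzero_explicit() is the real kernel API.

    #include <linux/string.h>
    #include <linux/types.h>

    int expand_key(u8 *out, const u8 *key, unsigned int keylen);  /* hypothetical */
    int run_cipher(const u8 *expanded);                           /* hypothetical */

    static int do_one_request_sketch(const u8 *key, unsigned int keylen)
    {
        u8 expanded[64];        /* illustrative size for derived key material */
        int ret;

        ret = expand_key(expanded, key, keylen);
        if (!ret)
            ret = run_cipher(expanded);

        memzero_explicit(expanded, sizeof(expanded));  /* wipe before returning */
        return ret;
    }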
Revision tags: v5.4.22, v5.4.21, v5.4.20, v5.4.19, v5.4.18, v5.4.17, v5.4.16, v5.5, v5.4.15, v5.4.14, v5.4.13, v5.4.12, v5.4.11, v5.4.10, v5.4.9, v5.4.8, v5.4.7 |
# af5034e8 | 30-Dec-2019 | Eric Biggers <ebiggers@google.com>
crypto: remove propagation of CRYPTO_TFM_RES_* flags
The CRYPTO_TFM_RES_* flags were apparently meant as a way to make the ->setkey() functions provide more information about errors. But these flags weren't actually being used or tested, and in many cases they weren't being set correctly anyway. So they've now been removed.
Also, if someone ever actually needs to start better distinguishing ->setkey() errors (which is somewhat unlikely, as this has been unneeded for a long time), we'd be much better off just defining different return values, like -EINVAL if the key is invalid for the algorithm vs. -EKEYREJECTED if the key was rejected by a policy like "no weak keys". That would be much simpler, less error-prone, and easier to test.
So just remove CRYPTO_TFM_RES_MASK and all the unneeded logic that propagates these flags around.
Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
# 674f368a | 30-Dec-2019 | Eric Biggers <ebiggers@google.com>
crypto: remove CRYPTO_TFM_RES_BAD_KEY_LEN
The CRYPTO_TFM_RES_BAD_KEY_LEN flag was apparently meant as a way to make the ->setkey() functions provide more information about errors.
However, no one actually checks for this flag, which makes it pointless.
Also, many algorithms fail to set this flag when given a bad length key. Reviewing just the generic implementations, this is the case for aes-fixed-time, cbcmac, echainiv, nhpoly1305, pcrypt, rfc3686, rfc4309, rfc7539, rfc7539esp, salsa20, seqiv, and xcbc. But there are probably many more in arch/*/crypto/ and drivers/crypto/.
Some algorithms can even set this flag when the key is the correct length. For example, authenc and authencesn set it when the key payload is malformed in any way (not just a bad length), the atmel-sha and ccree drivers can set it if a memory allocation fails, and the chelsio driver sets it for bad auth tag lengths, not just bad key lengths.
So even if someone actually wanted to start checking this flag (which seems unlikely, since it's been unused for a long time), there would be a lot of work needed to get it working correctly. But it would probably be much better to go back to the drawing board and just define different return values, like -EINVAL if the key is invalid for the algorithm vs. -EKEYREJECTED if the key was rejected by a policy like "no weak keys". That would be much simpler, less error-prone, and easier to test.
So just remove this flag.
Signed-off-by: Eric Biggers <ebiggers@google.com> Reviewed-by: Horia Geantă <horia.geanta@nxp.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
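Editor's note: a sketch of the error-reporting style this commit argues for; the algorithm-specific details are placeholders.

    #include <linux/errno.h>
    #include <crypto/skcipher.h>

    static int example_setkey(struct crypto_skcipher *tfm, const u8 *key,
                              unsigned int keylen)
    {
        if (keylen != 16 && keylen != 24 && keylen != 32)
            return -EINVAL;   /* no CRYPTO_TFM_RES_* flag bookkeeping needed */

        /* ... store or expand the key ... */
        return 0;
    }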
Revision tags: v5.4.6, v5.4.5, v5.4.4, v5.4.3, v5.3.15, v5.4.2, v5.4.1, v5.3.14, v5.4, v5.3.13, v5.3.12, v5.3.11, v5.3.10, v5.3.9, v5.3.8, v5.3.7 |
# 7988fb2c | 12-Oct-2019 | Eric Biggers <ebiggers@google.com>
crypto: s390/aes - convert to skcipher API
Convert the glue code for the S390 CPACF implementations of AES-ECB, AES-CBC, AES-XTS, and AES-CTR from the deprecated "blkcipher" API to the "skcipher" API. This is needed in order for the blkcipher API to be removed.
Note: I made CTR use the same function for encryption and decryption, since CTR encryption and decryption are identical.
Signed-off-by: Eric Biggers <ebiggers@google.com> Reviewed-by: Harald Freudenberger <freude@linux.ibm.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
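Editor's note: the rough shape of an skcipher request handler after such a conversion; the CPACF call is commented out as a placeholder and the key/context handling is omitted, so this is a sketch rather than the actual aes_s390.c code.

    #include <crypto/aes.h>
    #include <crypto/internal/skcipher.h>

    static int example_ecb_crypt(struct skcipher_request *req)
    {
        struct skcipher_walk walk;
        unsigned int nbytes, n;
        int ret;

        ret = skcipher_walk_virt(&walk, req, false);
        while ((nbytes = walk.nbytes) != 0) {
            /* only process complete blocks, the walker keeps the remainder */
            n = nbytes & ~(AES_BLOCK_SIZE - 1);
            /* cpacf_km(fc, key, walk.dst.virt.addr, walk.src.virt.addr, n); */
            ret = skcipher_walk_done(&walk, nbytes - n);
        }
        return ret;
    }

Such a handler is then wired into a struct skcipher_alg and registered with crypto_register_skcipher(); since CTR encryption and decryption are identical, both .encrypt and .decrypt can point to the same function, as the note above says.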
Revision tags: v5.3.6, v5.3.5, v5.3.4, v5.3.3, v5.3.2, v5.3.1, v5.3, v5.2.14, v5.3-rc8, v5.2.13, v5.2.12 |
# 9e323d45 | 05-Sep-2019 | Harald Freudenberger <freude@linux.ibm.com>
s390/crypto: xts-aes-s390 fix extra run-time crypto self tests finding
With 'extra run-time crypto self tests' enabled, the selftest for s390-xts fails with
alg: skcipher: xts-aes-s390 encryption unexpectedly succeeded on test vector "random: len=0 klen=64"; expected_error=-22, cfg="random: inplace use_digest nosimd src_divs=[2.61%@+4006, 84.44%@+21, 1.55%@+13, 4.50%@+344, 4.26%@+21, 2.64%@+27]"
This special case with nbytes=0 is not handled correctly; this fix makes sure that -EINVAL is returned when en/decrypt is called with 0 bytes to en/decrypt.
Signed-off-by: Harald Freudenberger <freude@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
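Editor's note: a sketch of the guard described above, using skcipher request fields; whether the driver rejects only zero-length requests or anything shorter than one block is a detail of the actual patch.

    #include <linux/errno.h>
    #include <crypto/aes.h>
    #include <crypto/internal/skcipher.h>

    static int xts_example_crypt(struct skcipher_request *req)
    {
        if (!req->cryptlen)
            return -EINVAL;   /* the case the "len=0" selftest vector exercises */

        /* ... normal XTS processing ... */
        return 0;
    }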
Revision tags: v5.2.11, v5.2.10 |
# 5a74362c | 22-Aug-2019 | Ard Biesheuvel <ard.biesheuvel@linaro.org>
crypto: s390/aes - fix typo in XTS_BLOCK_SIZE identifier
Fix a typo XTS_BLOCKSIZE -> XTS_BLOCK_SIZE, causing the build to break.
Fixes: ce68acbcb6a5 ("crypto: s390/xts-aes - invoke fallback for ciphertext stealing") Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
# ce68acbc | 16-Aug-2019 | Ard Biesheuvel <ard.biesheuvel@linaro.org>
crypto: s390/xts-aes - invoke fallback for ciphertext stealing
For correctness and compliance with the XTS-AES specification, we are adding support for ciphertext stealing to XTS implementations, even though no use cases are known that will be enabled by this.
Since the s390 implementation already has a fallback skcipher standby for other purposes, let's use it for this purpose as well. If ciphertext stealing use cases ever become a bottleneck, we can always revisit this.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
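Editor's note: a sketch of the dispatch described above; xts_fallback_encrypt_sketch() is an illustrative name for handing the request to the already existing fallback skcipher.

    #include <crypto/aes.h>
    #include <crypto/internal/skcipher.h>

    int xts_fallback_encrypt_sketch(struct skcipher_request *req);  /* illustrative */

    static int xts_example_encrypt(struct skcipher_request *req)
    {
        /* a length that is not a multiple of the block size needs ciphertext
         * stealing, which the CPACF path does not implement */
        if (req->cryptlen % AES_BLOCK_SIZE)
            return xts_fallback_encrypt_sketch(req);

        /* ... CPACF-accelerated whole-block XTS path ... */
        return 0;
    }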
Revision tags: v5.2.9, v5.2.8, v5.2.7, v5.2.6, v5.2.5, v5.2.4 |
# 931c940f | 26-Jul-2019 | Ard Biesheuvel <ard.biesheuvel@linaro.org>
crypto: s390/aes - fix name clash after AES library refactor
The newly introduced AES library exposes aes_encrypt/aes_decrypt routines so rename existing occurrences of those identifiers in the s390 driver.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Revision tags: v5.2.3, v5.2.2, v5.2.1, v5.2, v5.1.16, v5.1.15, v5.1.14, v5.1.13, v5.1.12, v5.1.11, v5.1.10, v5.1.9, v5.1.8, v5.1.7, v5.1.6 |
# 1c2c7029 | 27-May-2019 | Harald Freudenberger <freude@linux.ibm.com>
s390/crypto: fix possible sleep during spinlock acquired
This patch fixes a complaint about a possible sleep while a spinlock is held, "BUG: sleeping function called from invalid context at include/crypto/algapi.h:426", for the ctr(aes) and ctr(des) s390 specific ciphers.
Instead of using a spinlock this patch introduces a mutex which is safe to be held in sleeping context. Please note a deadlock is not possible as mutex_trylock() is used.
Signed-off-by: Harald Freudenberger <freude@linux.ibm.com> Reported-by: Julian Wiedmann <jwi@linux.ibm.com> Cc: stable@vger.kernel.org Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
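Editor's note: a sketch of the locking pattern the commit describes; buffer sizes and names are illustrative, not the real aes_s390.c code.

    #include <linux/mutex.h>
    #include <linux/types.h>
    #include <crypto/aes.h>

    static DEFINE_MUTEX(ctrblk_lock_sketch);
    static u8 ctrblk_sketch[4096];            /* shared counter-block buffer */

    static void ctr_crypt_sketch(void)
    {
        u8 local[AES_BLOCK_SIZE];
        bool locked = mutex_trylock(&ctrblk_lock_sketch);  /* never sleeps waiting */
        u8 *buf = locked ? ctrblk_sketch : local;

        /* ... fill buf with counter blocks and run the CPACF primitive; without
         * the shared buffer, simply proceed one block at a time ... */
        (void)buf;

        if (locked)
            mutex_unlock(&ctrblk_lock_sketch);
    }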
Revision tags: v5.1.5 |
# bef9f0ba | 23-May-2019 | Harald Freudenberger <freude@linux.ibm.com>
s390/crypto: fix gcm-aes-s390 selftest failures
The current kernel uses improved crypto selftests. These tests showed that the current implementation of gcm-aes-s390 is not able to deal with chunks of output buffers which are not a multiple of 16 bytes. This patch introduces a rework of the gcm aes s390 scatter walk handling which now is able to handle any input and output scatter list chunk sizes correctly.
Code has been verified by the crypto selftests, the tcrypt kernel module and additional tests ran via the af_alg interface.
Cc: <stable@vger.kernel.org> Reported-by: Julian Wiedmann <jwi@linux.ibm.com> Reviewed-by: Patrick Steuer <steuer@linux.ibm.com> Signed-off-by: Harald Freudenberger <freude@linux.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Revision tags: v5.1.4, v5.1.3, v5.1.2, v5.1.1, v5.0.14, v5.1, v5.0.13, v5.0.12, v5.0.11, v5.0.10, v5.0.9, v5.0.8, v5.0.7, v5.0.6, v5.0.5, v5.0.4, v5.0.3, v4.19.29, v5.0.2, v4.19.28, v5.0.1, v4.19.27, v5.0, v4.19.26, v4.19.25, v4.19.24, v4.19.23, v4.19.22, v4.19.21, v4.19.20, v4.19.19, v4.19.18, v4.19.17, v4.19.16, v4.19.15, v4.19.14, v4.19.13, v4.19.12, v4.19.11, v4.19.10, v4.19.9, v4.19.8, v4.19.7, v4.19.6, v4.19.5, v4.19.4, v4.18.20, v4.19.3 |
# 1ad0f160 | 14-Nov-2018 | Eric Biggers <ebiggers@google.com>
crypto: drop mask=CRYPTO_ALG_ASYNC from 'cipher' tfm allocations
'cipher' algorithms (single block ciphers) are always synchronous, so passing CRYPTO_ALG_ASYNC in the mask to crypto_alloc_cipher() has no effect. Many users therefore already don't pass it, but some still do. This inconsistency can cause confusion, especially since the way the 'mask' argument works is somewhat counterintuitive.
Thus, just remove the unneeded CRYPTO_ALG_ASYNC flags.
This patch shouldn't change any actual behavior.
Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Revision tags: v4.18.19, v4.19.2, v4.18.18, v4.18.17, v4.19.1, v4.19, v4.18.16, v4.18.15, v4.18.14, v4.18.13, v4.18.12, v4.18.11, v4.18.10, v4.18.9 |
# 531fa5d6 | 18-Sep-2018 | Kees Cook <keescook@chromium.org>
In the quest to remove all stack VLA usage from the kernel[1], this replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(), which uses a fixed stack size.
[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: linux-s390@vger.kernel.org Signed-off-by: Kees Cook <keescook@chromium.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
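Editor's note: a sketch of the pattern the driver is converted to, using the sync-skcipher helpers named in the commit; error handling and context plumbing are trimmed.

    #include <crypto/skcipher.h>

    static int fallback_encrypt_sketch(struct crypto_sync_skcipher *fallback,
                                       struct scatterlist *src,
                                       struct scatterlist *dst,
                                       unsigned int nbytes, u8 *iv)
    {
        SYNC_SKCIPHER_REQUEST_ON_STACK(req, fallback);   /* fixed stack size, no VLA */

        skcipher_request_set_sync_tfm(req, fallback);
        skcipher_request_set_callback(req, 0, NULL, NULL);
        skcipher_request_set_crypt(req, src, dst, nbytes, iv);

        return crypto_skcipher_encrypt(req);
    }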
Revision tags: v4.18.7, v4.18.6, v4.18.5, v4.17.18, v4.18.4, v4.18.3, v4.17.17, v4.18.2, v4.17.16, v4.17.15, v4.18.1, v4.18, v4.17.14, v4.17.13, v4.17.12, v4.17.11, v4.17.10, v4.17.9, v4.17.8, v4.17.7, v4.17.6, v4.17.5, v4.17.4 |
# 3f4a537a | 30-Jun-2018 | Eric Biggers <ebiggers@google.com>
crypto: aead - remove useless setting of type flags
Some aead algorithms set .cra_flags = CRYPTO_ALG_TYPE_AEAD. But this is redundant with the C structure type ('struct aead_alg'), and crypto_register_aead() already sets the type flag automatically, clearing any type flag that was already there. Apparently the useless assignment has just been copy+pasted around.
So, remove the useless assignment from all the aead algorithms.
This patch shouldn't change any actual behavior.
Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Revision tags: v4.17.3, v4.17.2, v4.17.1, v4.17 |
# aff304e7 | 05-Apr-2018 | Harald Freudenberger <freude@linux.vnet.ibm.com>
s390/crypto: Adjust s390 aes and paes cipher priorities
Tests with paes-xts and debugging investigations showed that the ciphers are not always correctly resolved. The rules for cipher priorities seem to be:
- The s390 ecb-aes should have a prio greater than the generic ecb-aes.
- The mode-specialized ciphers (like cbc-aes-s390) should have a prio greater than the sum of the more generic combinations (like cbc(aes)).
This patch adjusts the cipher priorities for the s390 aes and paes in kernel crypto implementations.
Signed-off-by: Harald Freudenberger <freude@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
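Editor's note: an illustration of the rule described above with made-up priority values; the concrete numbers in aes_s390.c are not reproduced here.

    #include <linux/crypto.h>

    /* illustrative values only */
    static struct crypto_alg aes_alg_sketch = {
        .cra_name        = "aes",
        .cra_driver_name = "aes-s390",
        .cra_priority    = 300,       /* above the generic aes implementation */
    };

    static struct crypto_alg cbc_aes_alg_sketch = {
        .cra_name        = "cbc(aes)",
        .cra_driver_name = "cbc-aes-s390",
        .cra_priority    = 402,       /* above any cbc(aes) template built on aes-s390 */
    };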
Revision tags: v4.16 |
# c7260ca3 | 01-Mar-2018 | Harald Freudenberger <freude@linux.vnet.ibm.com>
s390/crypto: Fix kernel crash on aes_s390 module remove.
A kernel crash occurs when the aes_s390 kernel module is removed on machines < z14. This only happens on kernel version 4.15 and higher on machines not supporting MSA 8.
The reason for the crash is an unconditional crypto_unregister_aead() invocation where no previous crypto_register_aead() had been called. The fix now remembers whether there has been a successful registration and only then calls the unregister function upon kernel module removal.
The code that now crashes was introduced with "bf7fa03 s390/crypto: add s390 platform specific aes gcm support."
Signed-off-by: Harald Freudenberger <freude@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
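Editor's note: a sketch of the pattern the fix describes; cpu_has_msa8() is a hypothetical capability check standing in for the real MSA 8 detection, and the aead_alg is left unfilled.

    #include <linux/module.h>
    #include <linux/errno.h>
    #include <crypto/internal/aead.h>

    static struct aead_alg gcm_aes_aead_sketch;
    static bool gcm_aes_registered;

    bool cpu_has_msa8(void);   /* hypothetical capability check */

    static int __init example_init(void)
    {
        if (cpu_has_msa8()) {
            if (crypto_register_aead(&gcm_aes_aead_sketch))
                return -ENODEV;
            gcm_aes_registered = true;   /* remember the successful registration */
        }
        return 0;
    }

    static void __exit example_exit(void)
    {
        if (gcm_aes_registered)          /* never unregister what was not registered */
            crypto_unregister_aead(&gcm_aes_aead_sketch);
    }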
Revision tags: v4.15 |
# a876ca4d | 24-Nov-2017 | Greg Kroah-Hartman <gregkh@linuxfoundation.org>
s390: crypto: Remove redundant license text
Now that the SPDX tag is in all arch/s390/crypto/ files, that identifies the license in a specific and legally-defined manner. So the extra GPL text wording can be removed as it is no longer needed at all.
This is done on a quest to remove the 700+ different ways that files in the kernel describe the GPL license text. And there's unneeded stuff like the address (sometimes incorrect) for the FSF which is never needed.
No copyright headers or other non-license-description text was removed.
Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: "David S. Miller" <davem@davemloft.net> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
# 20a884f5 | 24-Nov-2017 | Greg Kroah-Hartman <gregkh@linuxfoundation.org>
s390: crypto: add SPDX identifiers to the remaining files
It's good to have SPDX identifiers in all files to make it easier to audit the kernel tree for correct licenses.
Update the arch/s390/crypto/ files with the correct SPDX license identifier based on the license text in the file itself. The SPDX identifier is a legally binding shorthand, which can be used instead of the full boiler plate text.
This work is based on a script and data from Thomas Gleixner, Philippe Ombredanne, and Kate Stewart.
Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: "David S. Miller" <davem@davemloft.net> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Kate Stewart <kstewart@linuxfoundation.org> Cc: Philippe Ombredanne <pombredanne@nexb.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
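Editor's note: for reference, such an identifier is a single comment at the top of each file; the exact tag depends on the license text already present in that file.

    // SPDX-License-Identifier: GPL-2.0+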
Revision tags: v4.13.16, v4.14, v4.13.5 |
# bf7fa038 | 18-Sep-2017 | Harald Freudenberger <freude@linux.vnet.ibm.com>
s390/crypto: add s390 platform specific aes gcm support.
This patch introduces gcm(aes) support into the aes_s390 kernel module.
Signed-off-by: Patrick Steuer <patrick.steuer@de.ibm.com> Signed-off-by: Harald Freudenberger <freude@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
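Editor's note: the rough shape of a gcm(aes) AEAD registration like the one this commit adds; the callbacks and driver name are placeholders, and the priority value is illustrative.

    #include <linux/module.h>
    #include <crypto/aes.h>
    #include <crypto/internal/aead.h>

    static int example_gcm_setkey(struct crypto_aead *tfm, const u8 *key,
                                  unsigned int keylen);
    static int example_gcm_setauthsize(struct crypto_aead *tfm, unsigned int authsize);
    static int example_gcm_encrypt(struct aead_request *req);
    static int example_gcm_decrypt(struct aead_request *req);

    static struct aead_alg gcm_aes_example = {
        .setkey        = example_gcm_setkey,
        .setauthsize   = example_gcm_setauthsize,
        .encrypt       = example_gcm_encrypt,
        .decrypt       = example_gcm_decrypt,
        .ivsize        = 12,               /* GCM uses a 96-bit IV */
        .maxauthsize   = AES_BLOCK_SIZE,   /* up to a 16-byte tag */
        .base = {
            .cra_name        = "gcm(aes)",
            .cra_driver_name = "gcm-aes-example",
            .cra_blocksize   = 1,
            .cra_priority    = 900,        /* illustrative */
            .cra_module      = THIS_MODULE,
        },
    };
    /* registered in module init with crypto_register_aead(&gcm_aes_example) */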