From patchwork Wed Apr 16 06:42:47 2025
X-Patchwork-Submitter: Herbert Xu
X-Patchwork-Id: 882038
Date: Wed, 16 Apr 2025 14:42:47 +0800
Message-Id: <16ad0ff37c87e9a3d110eb13841cc16aa22ab1d9.1744784515.git.herbert@gondor.apana.org.au>
From: Herbert Xu
Subject: [PATCH 02/67] crypto: blake2b-generic - Use API partial block handling
To: Linux Crypto Mailing List
X-Mailing-List: linux-crypto@vger.kernel.org

Use the Crypto API partial block handling.

Signed-off-by: Herbert Xu
---
 crypto/blake2b_generic.c          | 31 ++++++++------
 include/crypto/blake2b.h          | 14 ++-----
 include/crypto/internal/blake2b.h | 70 ++++++++++++++++++++++++++-----
 3 files changed, 80 insertions(+), 35 deletions(-)

diff --git a/crypto/blake2b_generic.c b/crypto/blake2b_generic.c
index 04a712ddfb43..6fa38965a493 100644
--- a/crypto/blake2b_generic.c
+++ b/crypto/blake2b_generic.c
@@ -15,12 +15,12 @@
  * More information about BLAKE2 can be found at https://blake2.net.
  */

-#include
-#include
-#include
-#include
 #include
 #include
+#include
+#include
+#include
+#include

 static const u8 blake2b_sigma[12][16] = {
 	{ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 },
@@ -111,8 +111,8 @@ static void blake2b_compress_one_generic(struct blake2b_state *S,
 #undef G
 #undef ROUND

-void blake2b_compress_generic(struct blake2b_state *state,
-			      const u8 *block, size_t nblocks, u32 inc)
+static void blake2b_compress_generic(struct blake2b_state *state,
+				     const u8 *block, size_t nblocks, u32 inc)
 {
 	do {
 		blake2b_increment_counter(state, inc);
@@ -120,17 +120,19 @@ void blake2b_compress_generic(struct blake2b_state *state,
 		block += BLAKE2B_BLOCK_SIZE;
 	} while (--nblocks);
 }
-EXPORT_SYMBOL(blake2b_compress_generic);

 static int crypto_blake2b_update_generic(struct shash_desc *desc,
 					 const u8 *in, unsigned int inlen)
 {
-	return crypto_blake2b_update(desc, in, inlen, blake2b_compress_generic);
+	return crypto_blake2b_update_bo(desc, in, inlen,
+					blake2b_compress_generic);
 }

-static int crypto_blake2b_final_generic(struct shash_desc *desc, u8 *out)
+static int crypto_blake2b_finup_generic(struct shash_desc *desc, const u8 *in,
+					unsigned int inlen, u8 *out)
 {
-	return crypto_blake2b_final(desc, out, blake2b_compress_generic);
+	return crypto_blake2b_finup(desc, in, inlen, out,
+				    blake2b_compress_generic);
 }

 #define BLAKE2B_ALG(name, driver_name, digest_size) \
@@ -138,7 +140,9 @@ static int crypto_blake2b_final_generic(struct shash_desc *desc, u8 *out)
 	.base.cra_name		= name, \
 	.base.cra_driver_name	= driver_name, \
 	.base.cra_priority	= 100, \
-	.base.cra_flags		= CRYPTO_ALG_OPTIONAL_KEY, \
+	.base.cra_flags		= CRYPTO_ALG_OPTIONAL_KEY | \
+				  CRYPTO_AHASH_ALG_BLOCK_ONLY | \
+				  CRYPTO_AHASH_ALG_FINAL_NONZERO, \
 	.base.cra_blocksize	= BLAKE2B_BLOCK_SIZE, \
 	.base.cra_ctxsize	= sizeof(struct blake2b_tfm_ctx), \
 	.base.cra_module	= THIS_MODULE, \
@@ -146,8 +150,9 @@ static int crypto_blake2b_final_generic(struct shash_desc *desc, u8 *out)
 	.setkey			= crypto_blake2b_setkey, \
 	.init			= crypto_blake2b_init, \
 	.update			= crypto_blake2b_update_generic, \
-	.final			= crypto_blake2b_final_generic, \
-	.descsize		= sizeof(struct blake2b_state), \
+	.finup			= crypto_blake2b_finup_generic, \
+	.descsize		= BLAKE2B_DESC_SIZE, \
+	.statesize		= BLAKE2B_STATE_SIZE, \
 }

 static struct shash_alg blake2b_algs[] = {
diff --git a/include/crypto/blake2b.h b/include/crypto/blake2b.h
index 0c0176285349..68da368dc182 100644
--- a/include/crypto/blake2b.h
+++ b/include/crypto/blake2b.h
@@ -11,6 +11,8 @@ enum blake2b_lengths {
 	BLAKE2B_BLOCK_SIZE = 128,
 	BLAKE2B_HASH_SIZE = 64,
 	BLAKE2B_KEY_SIZE = 64,
+	BLAKE2B_STATE_SIZE = 80,
+	BLAKE2B_DESC_SIZE = 96,

 	BLAKE2B_160_HASH_SIZE = 20,
 	BLAKE2B_256_HASH_SIZE = 32,
@@ -25,7 +27,6 @@ struct blake2b_state {
 	u64 f[2];
 	u8 buf[BLAKE2B_BLOCK_SIZE];
 	unsigned int buflen;
-	unsigned int outlen;
 };

 enum blake2b_iv {
@@ -40,7 +41,7 @@ enum blake2b_iv {
 };

 static inline void __blake2b_init(struct blake2b_state *state, size_t outlen,
-				  const void *key, size_t keylen)
+				  size_t keylen)
 {
 	state->h[0] = BLAKE2B_IV0 ^ (0x01010000 | keylen << 8 | outlen);
 	state->h[1] = BLAKE2B_IV1;
@@ -52,15 +53,6 @@ static inline void __blake2b_init(struct blake2b_state *state, size_t outlen,
 	state->h[7] = BLAKE2B_IV7;
 	state->t[0] = 0;
 	state->t[1] = 0;
-	state->f[0] = 0;
-	state->f[1] = 0;
-	state->buflen = 0;
-	state->outlen = outlen;
-	if (keylen) {
-		memcpy(state->buf, key, keylen);
-		memset(&state->buf[keylen], 0, BLAKE2B_BLOCK_SIZE - keylen);
-		state->buflen = BLAKE2B_BLOCK_SIZE;
-	}
 }

 #endif /* _CRYPTO_BLAKE2B_H */
diff --git a/include/crypto/internal/blake2b.h b/include/crypto/internal/blake2b.h
index 982fe5e8471c..48dc9830400d 100644
--- a/include/crypto/internal/blake2b.h
+++ b/include/crypto/internal/blake2b.h
@@ -7,16 +7,27 @@
 #ifndef _CRYPTO_INTERNAL_BLAKE2B_H
 #define _CRYPTO_INTERNAL_BLAKE2B_H

+#include
 #include
 #include
+#include
+#include
+#include
+#include
+#include
 #include
-
-void blake2b_compress_generic(struct blake2b_state *state,
-			      const u8 *block, size_t nblocks, u32 inc);
+#include

 static inline void blake2b_set_lastblock(struct blake2b_state *state)
 {
 	state->f[0] = -1;
+	state->f[1] = 0;
+}
+
+static inline void blake2b_set_nonlast(struct blake2b_state *state)
+{
+	state->f[0] = 0;
+	state->f[1] = 0;
 }

 typedef void (*blake2b_compress_t)(struct blake2b_state *state,
@@ -30,6 +41,7 @@ static inline void __blake2b_update(struct blake2b_state *state,
 	if (unlikely(!inlen))
 		return;
+	blake2b_set_nonlast(state);
 	if (inlen > fill) {
 		memcpy(state->buf + state->buflen, in, fill);
 		(*compress)(state, state->buf, 1, BLAKE2B_BLOCK_SIZE);
@@ -49,6 +61,7 @@ static inline void __blake2b_update(struct blake2b_state *state,
 }

 static inline void __blake2b_final(struct blake2b_state *state, u8 *out,
+				   unsigned int outlen,
 				   blake2b_compress_t compress)
 {
 	int i;
@@ -59,13 +72,13 @@ static inline void __blake2b_final(struct blake2b_state *state, u8 *out,
 	(*compress)(state, state->buf, 1, state->buflen);
 	for (i = 0; i < ARRAY_SIZE(state->h); i++)
 		__cpu_to_le64s(&state->h[i]);
-	memcpy(out, state->h, state->outlen);
+	memcpy(out, state->h, outlen);
 }

 /* Helper functions for shash implementations of BLAKE2b */

 struct blake2b_tfm_ctx {
-	u8 key[BLAKE2B_KEY_SIZE];
+	u8 key[BLAKE2B_BLOCK_SIZE];
 	unsigned int keylen;
 };

@@ -74,10 +87,13 @@ static inline int crypto_blake2b_setkey(struct crypto_shash *tfm,
 {
 	struct blake2b_tfm_ctx *tctx = crypto_shash_ctx(tfm);

-	if (keylen == 0 || keylen > BLAKE2B_KEY_SIZE)
+	if (keylen > BLAKE2B_KEY_SIZE)
 		return -EINVAL;

+	BUILD_BUG_ON(BLAKE2B_KEY_SIZE > BLAKE2B_BLOCK_SIZE);
+
 	memcpy(tctx->key, key, keylen);
+	memset(tctx->key + keylen, 0, BLAKE2B_BLOCK_SIZE - keylen);
 	tctx->keylen = keylen;

 	return 0;
@@ -89,8 +105,9 @@ static inline int crypto_blake2b_init(struct shash_desc *desc)
 {
 	struct blake2b_state *state = shash_desc_ctx(desc);
 	unsigned int outlen = crypto_shash_digestsize(desc->tfm);

-	__blake2b_init(state, outlen, tctx->key, tctx->keylen);
-	return 0;
+	__blake2b_init(state, outlen, tctx->keylen);
+	return tctx->keylen ?
+	       crypto_shash_update(desc, tctx->key, BLAKE2B_BLOCK_SIZE) : 0;
 }

 static inline int crypto_blake2b_update(struct shash_desc *desc,
@@ -103,12 +120,43 @@
 	return 0;
 }

-static inline int crypto_blake2b_final(struct shash_desc *desc, u8 *out,
-				       blake2b_compress_t compress)
+static inline int crypto_blake2b_update_bo(struct shash_desc *desc,
+					   const u8 *in, unsigned int inlen,
+					   blake2b_compress_t compress)
 {
 	struct blake2b_state *state = shash_desc_ctx(desc);

-	__blake2b_final(state, out, compress);
+	blake2b_set_nonlast(state);
+	compress(state, in, inlen / BLAKE2B_BLOCK_SIZE, BLAKE2B_BLOCK_SIZE);
+	return inlen - round_down(inlen, BLAKE2B_BLOCK_SIZE);
+}
+
+static inline int crypto_blake2b_final(struct shash_desc *desc, u8 *out,
+				       blake2b_compress_t compress)
+{
+	unsigned int outlen = crypto_shash_digestsize(desc->tfm);
+	struct blake2b_state *state = shash_desc_ctx(desc);
+
+	__blake2b_final(state, out, outlen, compress);
+	return 0;
+}
+
+static inline int crypto_blake2b_finup(struct shash_desc *desc, const u8 *in,
+				       unsigned int inlen, u8 *out,
+				       blake2b_compress_t compress)
+{
+	struct blake2b_state *state = shash_desc_ctx(desc);
+	u8 buf[BLAKE2B_BLOCK_SIZE];
+	int i;
+
+	memcpy(buf, in, inlen);
+	memset(buf + inlen, 0, BLAKE2B_BLOCK_SIZE - inlen);
+	blake2b_set_lastblock(state);
+	compress(state, buf, 1, inlen);
+	for (i = 0; i < ARRAY_SIZE(state->h); i++)
+		__cpu_to_le64s(&state->h[i]);
+	memcpy(out, state->h, crypto_shash_digestsize(desc->tfm));
+	memzero_explicit(buf, sizeof(buf));
 	return 0;
 }

From patchwork Wed Apr 16 06:42:52 2025
X-Patchwork-Submitter: Herbert Xu
X-Patchwork-Id: 882037
Date: Wed, 16 Apr 2025 14:42:52 +0800
Message-Id: <5323accefaa805b6f0c94c330aae7cf0f04eb163.1744784515.git.herbert@gondor.apana.org.au>
From: Herbert Xu
Subject: [PATCH 04/67] crypto: ghash-generic - Use API partial block handling
To: Linux Crypto Mailing List
X-Mailing-List: linux-crypto@vger.kernel.org

Use the Crypto API partial block handling.
Signed-off-by: Herbert Xu
---
 crypto/ghash-generic.c | 56 +++++++++++++-----------------------
 include/crypto/ghash.h |  3 ++-
 2 files changed, 19 insertions(+), 40 deletions(-)

diff --git a/crypto/ghash-generic.c b/crypto/ghash-generic.c
index c70d163c1ac9..b5fc20a0dafc 100644
--- a/crypto/ghash-generic.c
+++ b/crypto/ghash-generic.c
@@ -34,14 +34,14 @@
  * (https://csrc.nist.gov/publications/detail/sp/800-38d/final)
  */

-#include
 #include
 #include
 #include
-#include
-#include
+#include
+#include
 #include
 #include
+#include

 static int ghash_init(struct shash_desc *desc)
 {
@@ -82,59 +82,36 @@ static int ghash_update(struct shash_desc *desc,
 	struct ghash_ctx *ctx = crypto_shash_ctx(desc->tfm);
 	u8 *dst = dctx->buffer;

-	if (dctx->bytes) {
-		int n = min(srclen, dctx->bytes);
-		u8 *pos = dst + (GHASH_BLOCK_SIZE - dctx->bytes);
-
-		dctx->bytes -= n;
-		srclen -= n;
-
-		while (n--)
-			*pos++ ^= *src++;
-
-		if (!dctx->bytes)
-			gf128mul_4k_lle((be128 *)dst, ctx->gf128);
-	}
-
-	while (srclen >= GHASH_BLOCK_SIZE) {
+	do {
 		crypto_xor(dst, src, GHASH_BLOCK_SIZE);
 		gf128mul_4k_lle((be128 *)dst, ctx->gf128);
 		src += GHASH_BLOCK_SIZE;
 		srclen -= GHASH_BLOCK_SIZE;
-	}
+	} while (srclen >= GHASH_BLOCK_SIZE);

-	if (srclen) {
-		dctx->bytes = GHASH_BLOCK_SIZE - srclen;
-		while (srclen--)
-			*dst++ ^= *src++;
-	}
-
-	return 0;
+	return srclen;
 }

-static void ghash_flush(struct ghash_ctx *ctx, struct ghash_desc_ctx *dctx)
+static void ghash_flush(struct shash_desc *desc, const u8 *src,
+			unsigned int len)
 {
+	struct ghash_ctx *ctx = crypto_shash_ctx(desc->tfm);
+	struct ghash_desc_ctx *dctx = shash_desc_ctx(desc);
 	u8 *dst = dctx->buffer;

-	if (dctx->bytes) {
-		u8 *tmp = dst + (GHASH_BLOCK_SIZE - dctx->bytes);
-
-		while (dctx->bytes--)
-			*tmp++ ^= 0;
-
+	if (len) {
+		crypto_xor(dst, src, len);
 		gf128mul_4k_lle((be128 *)dst, ctx->gf128);
 	}
-
-	dctx->bytes = 0;
 }

-static int ghash_final(struct shash_desc *desc, u8 *dst)
+static int ghash_finup(struct shash_desc *desc, const u8 *src,
+		       unsigned int len, u8 *dst)
 {
 	struct ghash_desc_ctx *dctx = shash_desc_ctx(desc);
-	struct ghash_ctx *ctx = crypto_shash_ctx(desc->tfm);
 	u8 *buf = dctx->buffer;

-	ghash_flush(ctx, dctx);
+	ghash_flush(desc, src, len);
 	memcpy(dst, buf, GHASH_BLOCK_SIZE);

 	return 0;
@@ -151,13 +128,14 @@ static struct shash_alg ghash_alg = {
 	.digestsize	= GHASH_DIGEST_SIZE,
 	.init		= ghash_init,
 	.update		= ghash_update,
-	.final		= ghash_final,
+	.finup		= ghash_finup,
 	.setkey		= ghash_setkey,
 	.descsize	= sizeof(struct ghash_desc_ctx),
 	.base		= {
 		.cra_name		= "ghash",
 		.cra_driver_name	= "ghash-generic",
 		.cra_priority		= 100,
+		.cra_flags		= CRYPTO_AHASH_ALG_BLOCK_ONLY,
 		.cra_blocksize		= GHASH_BLOCK_SIZE,
 		.cra_ctxsize		= sizeof(struct ghash_ctx),
 		.cra_module		= THIS_MODULE,
diff --git a/include/crypto/ghash.h b/include/crypto/ghash.h
index f832c9f2aca3..16904f2b5184 100644
--- a/include/crypto/ghash.h
+++ b/include/crypto/ghash.h
@@ -12,13 +12,14 @@
 #define GHASH_BLOCK_SIZE	16
 #define GHASH_DIGEST_SIZE	16

+struct gf128mul_4k;
+
 struct ghash_ctx {
 	struct gf128mul_4k *gf128;
 };

 struct ghash_desc_ctx {
 	u8 buffer[GHASH_BLOCK_SIZE];
-	u32 bytes;
 };

 #endif

From patchwork Wed Apr 16 06:42:56 2025
X-Patchwork-Submitter: Herbert Xu
X-Patchwork-Id: 882036
Date: Wed, 16 Apr 2025 14:42:56 +0800
Message-Id:
From: Herbert Xu
Subject: [PATCH 06/67] crypto: arm/ghash - Use API partial block handling
To: Linux Crypto Mailing List
X-Mailing-List: linux-crypto@vger.kernel.org

Use the Crypto API partial block handling.

Also switch to the generic export format.

Finally remove a couple of stray may_use_simd() calls in gcm.

Signed-off-by: Herbert Xu
---
 arch/arm/crypto/ghash-ce-glue.c | 110 +++++++++++++++-----------------
 1 file changed, 50 insertions(+), 60 deletions(-)

diff --git a/arch/arm/crypto/ghash-ce-glue.c b/arch/arm/crypto/ghash-ce-glue.c
index aabfcf522a2c..a52dcc8c1e33 100644
--- a/arch/arm/crypto/ghash-ce-glue.c
+++ b/arch/arm/crypto/ghash-ce-glue.c
@@ -8,22 +8,22 @@

 #include
 #include
-#include
-#include
 #include
-#include
 #include
-#include
+#include
+#include
+#include
 #include
 #include
-#include
 #include
-#include
 #include
 #include
-#include
+#include
 #include
+#include
 #include
+#include
+#include

 MODULE_DESCRIPTION("GHASH hash function using ARMv8 Crypto Extensions");
 MODULE_AUTHOR("Ard Biesheuvel ");
@@ -32,9 +32,6 @@ MODULE_ALIAS_CRYPTO("ghash");
 MODULE_ALIAS_CRYPTO("gcm(aes)");
 MODULE_ALIAS_CRYPTO("rfc4106(gcm(aes))");

-#define GHASH_BLOCK_SIZE	16
-#define GHASH_DIGEST_SIZE	16
-
 #define RFC4106_NONCE_SIZE	4

 struct ghash_key {
@@ -49,10 +46,8 @@ struct gcm_key {
 	u8	nonce[];	// for RFC4106 nonce
 };

-struct ghash_desc_ctx {
+struct arm_ghash_desc_ctx {
 	u64 digest[GHASH_DIGEST_SIZE/sizeof(u64)];
-	u8 buf[GHASH_BLOCK_SIZE];
-	u32 count;
 };

 asmlinkage void pmull_ghash_update_p64(int blocks, u64 dg[], const char *src,
@@ -65,9 +60,9 @@ static __ro_after_init DEFINE_STATIC_KEY_FALSE(use_p64);

 static int ghash_init(struct shash_desc *desc)
 {
-	struct ghash_desc_ctx *ctx = shash_desc_ctx(desc);
+	struct arm_ghash_desc_ctx *ctx = shash_desc_ctx(desc);

-	*ctx = (struct ghash_desc_ctx){};
+	*ctx = (struct arm_ghash_desc_ctx){};
 	return 0;
 }

@@ -85,54 +80,51 @@ static void ghash_do_update(int blocks, u64 dg[], const char *src,
 static int ghash_update(struct shash_desc *desc, const u8 *src,
 			unsigned int len)
 {
-	struct ghash_desc_ctx *ctx = shash_desc_ctx(desc);
-	unsigned int partial = ctx->count % GHASH_BLOCK_SIZE;
+	struct ghash_key *key = crypto_shash_ctx(desc->tfm);
+	struct arm_ghash_desc_ctx *ctx = shash_desc_ctx(desc);
+	int blocks;

-	ctx->count += len;
+	blocks = len / GHASH_BLOCK_SIZE;
+	ghash_do_update(blocks, ctx->digest, src, key, NULL);
+	return len - blocks * GHASH_BLOCK_SIZE;
+}

-	if ((partial + len) >= GHASH_BLOCK_SIZE) {
-		struct ghash_key *key = crypto_shash_ctx(desc->tfm);
-		int blocks;
+static int ghash_export(struct shash_desc *desc, void *out)
+{
+	struct arm_ghash_desc_ctx *ctx = shash_desc_ctx(desc);
+	u8 *dst = out;

-		if (partial) {
-			int p = GHASH_BLOCK_SIZE - partial;
-
-			memcpy(ctx->buf + partial, src, p);
-			src += p;
-			len -= p;
-		}
-
-		blocks = len / GHASH_BLOCK_SIZE;
-		len %= GHASH_BLOCK_SIZE;
-
-		ghash_do_update(blocks, ctx->digest, src, key,
-				partial ? ctx->buf : NULL);
-		src += blocks * GHASH_BLOCK_SIZE;
-		partial = 0;
-	}
-	if (len)
-		memcpy(ctx->buf + partial, src, len);
+	put_unaligned_be64(ctx->digest[1], dst);
+	put_unaligned_be64(ctx->digest[0], dst + 8);
 	return 0;
 }

-static int ghash_final(struct shash_desc *desc, u8 *dst)
+static int ghash_import(struct shash_desc *desc, const void *in)
 {
-	struct ghash_desc_ctx *ctx = shash_desc_ctx(desc);
-	unsigned int partial = ctx->count % GHASH_BLOCK_SIZE;
+	struct arm_ghash_desc_ctx *ctx = shash_desc_ctx(desc);
+	const u8 *src = in;

-	if (partial) {
-		struct ghash_key *key = crypto_shash_ctx(desc->tfm);
-
-		memset(ctx->buf + partial, 0, GHASH_BLOCK_SIZE - partial);
-		ghash_do_update(1, ctx->digest, ctx->buf, key, NULL);
-	}
-	put_unaligned_be64(ctx->digest[1], dst);
-	put_unaligned_be64(ctx->digest[0], dst + 8);
-
-	*ctx = (struct ghash_desc_ctx){};
+	ctx->digest[1] = get_unaligned_be64(src);
+	ctx->digest[0] = get_unaligned_be64(src + 8);
 	return 0;
 }

+static int ghash_finup(struct shash_desc *desc, const u8 *src,
+		       unsigned int len, u8 *dst)
+{
+	struct ghash_key *key = crypto_shash_ctx(desc->tfm);
+	struct arm_ghash_desc_ctx *ctx = shash_desc_ctx(desc);
+
+	if (len) {
+		u8 buf[GHASH_BLOCK_SIZE] = {};
+
+		memcpy(buf, src, len);
+		ghash_do_update(1, ctx->digest, buf, key, NULL);
+		memzero_explicit(buf, sizeof(buf));
+	}
+	return ghash_export(desc, dst);
+}
+
 static void ghash_reflect(u64 h[], const be128 *k)
 {
 	u64 carry = be64_to_cpu(k->a) >> 63;
@@ -175,13 +167,17 @@ static struct shash_alg ghash_alg = {
 	.digestsize		= GHASH_DIGEST_SIZE,
 	.init			= ghash_init,
 	.update			= ghash_update,
-	.final			= ghash_final,
+	.finup			= ghash_finup,
 	.setkey			= ghash_setkey,
-	.descsize		= sizeof(struct ghash_desc_ctx),
+	.export			= ghash_export,
+	.import			= ghash_import,
+	.descsize		= sizeof(struct arm_ghash_desc_ctx),
+	.statesize		= sizeof(struct ghash_desc_ctx),

 	.base.cra_name		= "ghash",
 	.base.cra_driver_name	= "ghash-ce",
 	.base.cra_priority	= 300,
+	.base.cra_flags		= CRYPTO_AHASH_ALG_BLOCK_ONLY,
 	.base.cra_blocksize	= GHASH_BLOCK_SIZE,
 	.base.cra_ctxsize	= sizeof(struct ghash_key) + sizeof(u64[2]),
 	.base.cra_module	= THIS_MODULE,
@@ -317,9 +313,6 @@ static int gcm_encrypt(struct aead_request *req, const u8 *iv, u32 assoclen)
 	u8 *tag, *dst;
 	int tail, err;

-	if (WARN_ON_ONCE(!may_use_simd()))
-		return -EBUSY;
-
 	err = skcipher_walk_aead_encrypt(&walk, req, false);

 	kernel_neon_begin();
@@ -409,9 +402,6 @@ static int gcm_decrypt(struct aead_request *req, const u8 *iv, u32 assoclen)
 	u8 *tag, *dst;
 	int tail, err, ret;

-	if (WARN_ON_ONCE(!may_use_simd()))
-		return -EBUSY;
-
 	scatterwalk_map_and_copy(otag, req->src,
 				 req->assoclen + req->cryptlen - authsize,
 				 authsize, 0);

From patchwork Wed Apr 16 06:43:01 2025
X-Patchwork-Submitter: Herbert Xu
X-Patchwork-Id: 882035
Date: Wed, 16 Apr 2025 14:43:01 +0800
Message-Id: <659b54ee6c62e804d17d7f49f711e114278450d8.1744784515.git.herbert@gondor.apana.org.au>
From: Herbert Xu
Subject: [PATCH 08/67] crypto: riscv/ghash - Use API partial block handling
To: Linux Crypto Mailing List
X-Mailing-List: linux-crypto@vger.kernel.org

Use the Crypto API partial block handling.

As this was the last user relying on crypto/ghash.h for gf128mul.h,
remove the unnecessary inclusion of gf128mul.h from crypto/ghash.h.

Signed-off-by: Herbert Xu
---
 arch/riscv/crypto/ghash-riscv64-glue.c | 58 ++++++++------------
 include/crypto/ghash.h                 |  1 -
 2 files changed, 18 insertions(+), 41 deletions(-)

diff --git a/arch/riscv/crypto/ghash-riscv64-glue.c b/arch/riscv/crypto/ghash-riscv64-glue.c
index 312e7891fd0a..d86073d25387 100644
--- a/arch/riscv/crypto/ghash-riscv64-glue.c
+++ b/arch/riscv/crypto/ghash-riscv64-glue.c
@@ -11,11 +11,16 @@

 #include
 #include
+#include
+#include
 #include
 #include
 #include
-#include
+#include
+#include
+#include
 #include
+#include

 asmlinkage void ghash_zvkg(be128 *accumulator, const be128 *key,
 			   const u8 *data, size_t len);
@@ -26,8 +31,6 @@ struct riscv64_ghash_tfm_ctx {

 struct riscv64_ghash_desc_ctx {
 	be128 accumulator;
-	u8 buffer[GHASH_BLOCK_SIZE];
-	u32 bytes;
 };

 static int riscv64_ghash_setkey(struct crypto_shash *tfm, const u8 *key,
@@ -78,50 +81,24 @@ static int riscv64_ghash_update(struct shash_desc *desc, const u8 *src,
 {
 	const struct riscv64_ghash_tfm_ctx *tctx = crypto_shash_ctx(desc->tfm);
 	struct riscv64_ghash_desc_ctx *dctx = shash_desc_ctx(desc);
-	unsigned int len;

-	if (dctx->bytes) {
-		if (dctx->bytes + srclen < GHASH_BLOCK_SIZE) {
-			memcpy(dctx->buffer + dctx->bytes, src, srclen);
-			dctx->bytes += srclen;
-			return 0;
-		}
-		memcpy(dctx->buffer + dctx->bytes, src,
-		       GHASH_BLOCK_SIZE - dctx->bytes);
-		riscv64_ghash_blocks(tctx, dctx, dctx->buffer,
-				     GHASH_BLOCK_SIZE);
-		src += GHASH_BLOCK_SIZE - dctx->bytes;
-		srclen -= GHASH_BLOCK_SIZE - dctx->bytes;
-		dctx->bytes = 0;
-	}
-
-	len = round_down(srclen, GHASH_BLOCK_SIZE);
-	if (len) {
-		riscv64_ghash_blocks(tctx, dctx, src, len);
-		src += len;
-		srclen -= len;
-	}
-
-	if (srclen) {
-		memcpy(dctx->buffer, src, srclen);
-		dctx->bytes = srclen;
-	}
-
-	return 0;
+	riscv64_ghash_blocks(tctx, dctx, src,
+			     round_down(srclen, GHASH_BLOCK_SIZE));
+	return srclen - round_down(srclen, GHASH_BLOCK_SIZE);
 }

-static int riscv64_ghash_final(struct shash_desc *desc, u8 *out)
+static int riscv64_ghash_finup(struct shash_desc *desc, const u8 *src,
+			       unsigned int len, u8 *out)
 {
 	const struct riscv64_ghash_tfm_ctx *tctx = crypto_shash_ctx(desc->tfm);
 	struct riscv64_ghash_desc_ctx *dctx = shash_desc_ctx(desc);
-	int i;

-	if (dctx->bytes) {
-		for (i = dctx->bytes; i < GHASH_BLOCK_SIZE; i++)
-			dctx->buffer[i] = 0;
+	if (len) {
+		u8 buf[GHASH_BLOCK_SIZE] = {};

-		riscv64_ghash_blocks(tctx, dctx, dctx->buffer,
-				     GHASH_BLOCK_SIZE);
+		memcpy(buf, src, len);
+		riscv64_ghash_blocks(tctx, dctx, buf, GHASH_BLOCK_SIZE);
+		memzero_explicit(buf, sizeof(buf));
 	}

 	memcpy(out, &dctx->accumulator, GHASH_DIGEST_SIZE);
@@ -131,7 +108,7 @@
 static struct shash_alg riscv64_ghash_alg = {
 	.init = riscv64_ghash_init,
 	.update = riscv64_ghash_update,
-	.final = riscv64_ghash_final,
+	.finup = riscv64_ghash_finup,
 	.setkey = riscv64_ghash_setkey,
 	.descsize = sizeof(struct riscv64_ghash_desc_ctx),
 	.digestsize = GHASH_DIGEST_SIZE,
@@ -139,6 +116,7 @@
 		.cra_blocksize = GHASH_BLOCK_SIZE,
 		.cra_ctxsize = sizeof(struct riscv64_ghash_tfm_ctx),
 		.cra_priority = 300,
+		.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY,
 		.cra_name = "ghash",
 		.cra_driver_name = "ghash-riscv64-zvkg",
 		.cra_module = THIS_MODULE,
diff --git a/include/crypto/ghash.h b/include/crypto/ghash.h
index 16904f2b5184..043d938e9a2c 100644
--- a/include/crypto/ghash.h
+++ b/include/crypto/ghash.h
@@ -7,7 +7,6 @@
 #define __CRYPTO_GHASH_H__

 #include
-#include

 #define GHASH_BLOCK_SIZE	16
 #define GHASH_DIGEST_SIZE	16

From patchwork Wed Apr 16 06:43:05 2025
X-Patchwork-Submitter: Herbert Xu
X-Patchwork-Id: 882034
Date: Wed, 16 Apr 2025 14:43:05 +0800
Message-Id: <32590ddbfbdc9675c254b3f82227a744ab37fc9b.1744784515.git.herbert@gondor.apana.org.au>
From: Herbert Xu
Subject: [PATCH 10/67] crypto: x86/ghash - Use API partial block handling
To: Linux Crypto Mailing List

Use the Crypto API partial block handling. Also remove the unnecessary
SIMD fallback path.
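The conversions in this series all follow one convention: the driver's update() no longer buffers partial blocks itself. It consumes whole blocks only and returns the number of leftover bytes, and the API core (for CRYPTO_AHASH_ALG_BLOCK_ONLY algorithms) buffers that remainder and later hands it to finup(). A minimal userspace sketch of that division of labour; every name here (toy_*, framework_update) is a hypothetical stand-in, and toy_blocks just XORs blocks in place of a real compression function:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define BLOCK_SIZE 16

/* Hypothetical digest state: an accumulator plus the partial-block
 * buffer that, in the new scheme, belongs to the framework. */
struct toy_state {
	unsigned char acc[BLOCK_SIZE];
	unsigned char partial[BLOCK_SIZE];
	size_t partial_len;
};

/* Stand-in for the hardware block function: XOR each full block
 * into the accumulator. */
void toy_blocks(struct toy_state *st, const unsigned char *src,
		size_t nblocks)
{
	while (nblocks--) {
		for (int i = 0; i < BLOCK_SIZE; i++)
			st->acc[i] ^= src[i];
		src += BLOCK_SIZE;
	}
}

/* Driver-side update in the converted style: consume whole blocks only
 * and return the count of trailing bytes left unprocessed. */
size_t toy_update(struct toy_state *st, const unsigned char *src, size_t len)
{
	size_t full = len - (len % BLOCK_SIZE);

	toy_blocks(st, src, full / BLOCK_SIZE);
	return len - full;	/* remainder for the framework to buffer */
}

/* Framework-side wrapper: stitch partial blocks across calls, the way
 * the API core does for CRYPTO_AHASH_ALG_BLOCK_ONLY algorithms. */
void framework_update(struct toy_state *st, const unsigned char *src,
		      size_t len)
{
	size_t rem;

	if (st->partial_len) {
		size_t take = BLOCK_SIZE - st->partial_len;

		if (take > len)
			take = len;
		memcpy(st->partial + st->partial_len, src, take);
		st->partial_len += take;
		src += take;
		len -= take;
		if (st->partial_len == BLOCK_SIZE) {
			toy_blocks(st, st->partial, 1);
			st->partial_len = 0;
		}
	}
	rem = toy_update(st, src, len);
	memcpy(st->partial + st->partial_len, src + len - rem, rem);
	st->partial_len += rem;
}
```

Feeding the same input in arbitrary chunk sizes leaves the accumulator and buffered tail identical to a single call, which is the invariant the framework-side buffering preserves.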
Signed-off-by: Herbert Xu
---
 arch/x86/crypto/ghash-clmulni-intel_asm.S  |   5 +-
 arch/x86/crypto/ghash-clmulni-intel_glue.c | 301 +++------------------
 2 files changed, 43 insertions(+), 263 deletions(-)

diff --git a/arch/x86/crypto/ghash-clmulni-intel_asm.S b/arch/x86/crypto/ghash-clmulni-intel_asm.S
index 99cb983ded9e..c4fbaa82ed7a 100644
--- a/arch/x86/crypto/ghash-clmulni-intel_asm.S
+++ b/arch/x86/crypto/ghash-clmulni-intel_asm.S
@@ -103,8 +103,8 @@ SYM_FUNC_START(clmul_ghash_mul)
 SYM_FUNC_END(clmul_ghash_mul)

 /*
- * void clmul_ghash_update(char *dst, const char *src, unsigned int srclen,
- *			   const le128 *shash);
+ * int clmul_ghash_update(char *dst, const char *src, unsigned int srclen,
+ *			  const le128 *shash);
 */
 SYM_FUNC_START(clmul_ghash_update)
 	FRAME_BEGIN
@@ -127,6 +127,7 @@ SYM_FUNC_START(clmul_ghash_update)
 	pshufb BSWAP, DATA
 	movups DATA, (%rdi)
 .Lupdate_just_ret:
+	mov %rdx, %rax
 	FRAME_END
 	RET
 SYM_FUNC_END(clmul_ghash_update)
diff --git a/arch/x86/crypto/ghash-clmulni-intel_glue.c b/arch/x86/crypto/ghash-clmulni-intel_glue.c
index c759ec808bf1..aea5d4d06be7 100644
--- a/arch/x86/crypto/ghash-clmulni-intel_glue.c
+++ b/arch/x86/crypto/ghash-clmulni-intel_glue.c
@@ -7,41 +7,27 @@
 * Author: Huang Ying
 */

-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
 #include
 #include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
 #include

-#define GHASH_BLOCK_SIZE 16
-#define GHASH_DIGEST_SIZE 16
+asmlinkage void clmul_ghash_mul(char *dst, const le128 *shash);

-void clmul_ghash_mul(char *dst, const le128 *shash);
+asmlinkage int clmul_ghash_update(char *dst, const char *src,
+				  unsigned int srclen, const le128 *shash);

-void clmul_ghash_update(char *dst, const char *src, unsigned int srclen,
-			const le128 *shash);
-
-struct ghash_async_ctx {
-	struct cryptd_ahash *cryptd_tfm;
-};
-
-struct ghash_ctx {
+struct x86_ghash_ctx {
 	le128 shash;
 };

-struct ghash_desc_ctx {
-	u8 buffer[GHASH_BLOCK_SIZE];
-	u32 bytes;
-};
-
 static int ghash_init(struct shash_desc *desc)
 {
 	struct ghash_desc_ctx *dctx = shash_desc_ctx(desc);
@@ -54,7 +40,7 @@ static int ghash_init(struct shash_desc *desc)
 static int ghash_setkey(struct crypto_shash *tfm,
			const u8 *key, unsigned int keylen)
 {
-	struct ghash_ctx *ctx = crypto_shash_ctx(tfm);
+	struct x86_ghash_ctx *ctx = crypto_shash_ctx(tfm);
 	u64 a, b;

 	if (keylen != GHASH_BLOCK_SIZE)
@@ -95,64 +81,38 @@ static int ghash_setkey(struct crypto_shash *tfm,
 static int ghash_update(struct shash_desc *desc,
			 const u8 *src, unsigned int srclen)
 {
+	struct x86_ghash_ctx *ctx = crypto_shash_ctx(desc->tfm);
 	struct ghash_desc_ctx *dctx = shash_desc_ctx(desc);
-	struct ghash_ctx *ctx = crypto_shash_ctx(desc->tfm);
 	u8 *dst = dctx->buffer;
+	int remain;
+
+	kernel_fpu_begin();
+	remain = clmul_ghash_update(dst, src, srclen, &ctx->shash);
+	kernel_fpu_end();
+	return remain;
+}
+
+static void ghash_flush(struct x86_ghash_ctx *ctx, struct ghash_desc_ctx *dctx,
+			const u8 *src, unsigned int len)
+{
+	u8 *dst = dctx->buffer;

 	kernel_fpu_begin();
-	if (dctx->bytes) {
-		int n = min(srclen, dctx->bytes);
-		u8 *pos = dst + (GHASH_BLOCK_SIZE - dctx->bytes);
-
-		dctx->bytes -= n;
-		srclen -= n;
-
-		while (n--)
-			*pos++ ^= *src++;
-
-		if (!dctx->bytes)
-			clmul_ghash_mul(dst, &ctx->shash);
-	}
-
-	clmul_ghash_update(dst, src, srclen, &ctx->shash);
-	kernel_fpu_end();
-
-	if (srclen & 0xf) {
-		src += srclen - (srclen & 0xf);
-		srclen &= 0xf;
-		dctx->bytes = GHASH_BLOCK_SIZE - srclen;
-		while (srclen--)
-			*dst++ ^= *src++;
-	}
-
-	return 0;
-}
-
-static void ghash_flush(struct ghash_ctx *ctx, struct ghash_desc_ctx *dctx)
-{
-	u8 *dst = dctx->buffer;
-
-	if (dctx->bytes) {
-		u8 *tmp = dst + (GHASH_BLOCK_SIZE - dctx->bytes);
-
-		while (dctx->bytes--)
-			*tmp++ ^= 0;
-
-		kernel_fpu_begin();
+	if (len) {
+		crypto_xor(dst, src, len);
 		clmul_ghash_mul(dst, &ctx->shash);
-		kernel_fpu_end();
 	}
-
-	dctx->bytes = 0;
+	kernel_fpu_end();
 }

-static int ghash_final(struct shash_desc *desc, u8 *dst)
+static int ghash_finup(struct shash_desc *desc, const u8 *src,
+		       unsigned int len, u8 *dst)
 {
+	struct x86_ghash_ctx *ctx = crypto_shash_ctx(desc->tfm);
 	struct ghash_desc_ctx *dctx = shash_desc_ctx(desc);
-	struct ghash_ctx *ctx = crypto_shash_ctx(desc->tfm);
 	u8 *buf = dctx->buffer;

-	ghash_flush(ctx, dctx);
+	ghash_flush(ctx, dctx, src, len);
 	memcpy(dst, buf, GHASH_BLOCK_SIZE);

 	return 0;
@@ -162,186 +122,20 @@ static struct shash_alg ghash_alg = {
 	.digestsize = GHASH_DIGEST_SIZE,
 	.init = ghash_init,
 	.update = ghash_update,
-	.final = ghash_final,
+	.finup = ghash_finup,
 	.setkey = ghash_setkey,
 	.descsize = sizeof(struct ghash_desc_ctx),
 	.base = {
-		.cra_name = "__ghash",
-		.cra_driver_name = "__ghash-pclmulqdqni",
-		.cra_priority = 0,
-		.cra_flags = CRYPTO_ALG_INTERNAL,
+		.cra_name = "ghash",
+		.cra_driver_name = "ghash-pclmulqdqni",
+		.cra_priority = 400,
+		.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY,
 		.cra_blocksize = GHASH_BLOCK_SIZE,
-		.cra_ctxsize = sizeof(struct ghash_ctx),
+		.cra_ctxsize = sizeof(struct x86_ghash_ctx),
 		.cra_module = THIS_MODULE,
 	},
 };

-static int ghash_async_init(struct ahash_request *req)
-{
-	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-	struct ghash_async_ctx *ctx = crypto_ahash_ctx(tfm);
-	struct ahash_request *cryptd_req = ahash_request_ctx(req);
-	struct cryptd_ahash *cryptd_tfm = ctx->cryptd_tfm;
-	struct shash_desc *desc = cryptd_shash_desc(cryptd_req);
-	struct crypto_shash *child = cryptd_ahash_child(cryptd_tfm);
-
-	desc->tfm = child;
-	return crypto_shash_init(desc);
-}
-
-static void ghash_init_cryptd_req(struct ahash_request *req)
-{
-	struct ahash_request *cryptd_req = ahash_request_ctx(req);
-	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-	struct ghash_async_ctx *ctx = crypto_ahash_ctx(tfm);
-	struct cryptd_ahash *cryptd_tfm = ctx->cryptd_tfm;
-
-	ahash_request_set_tfm(cryptd_req, &cryptd_tfm->base);
-	ahash_request_set_callback(cryptd_req, req->base.flags,
-				   req->base.complete, req->base.data);
-	ahash_request_set_crypt(cryptd_req, req->src, req->result,
-				req->nbytes);
-}
-
-static int ghash_async_update(struct ahash_request *req)
-{
-	struct ahash_request *cryptd_req = ahash_request_ctx(req);
-	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-	struct ghash_async_ctx *ctx = crypto_ahash_ctx(tfm);
-	struct cryptd_ahash *cryptd_tfm = ctx->cryptd_tfm;
-
-	if (!crypto_simd_usable() ||
-	    (in_atomic() && cryptd_ahash_queued(cryptd_tfm))) {
-		ghash_init_cryptd_req(req);
-		return crypto_ahash_update(cryptd_req);
-	} else {
-		struct shash_desc *desc = cryptd_shash_desc(cryptd_req);
-		return shash_ahash_update(req, desc);
-	}
-}
-
-static int ghash_async_final(struct ahash_request *req)
-{
-	struct ahash_request *cryptd_req = ahash_request_ctx(req);
-	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-	struct ghash_async_ctx *ctx = crypto_ahash_ctx(tfm);
-	struct cryptd_ahash *cryptd_tfm = ctx->cryptd_tfm;
-
-	if (!crypto_simd_usable() ||
-	    (in_atomic() && cryptd_ahash_queued(cryptd_tfm))) {
-		ghash_init_cryptd_req(req);
-		return crypto_ahash_final(cryptd_req);
-	} else {
-		struct shash_desc *desc = cryptd_shash_desc(cryptd_req);
-		return crypto_shash_final(desc, req->result);
-	}
-}
-
-static int ghash_async_import(struct ahash_request *req, const void *in)
-{
-	struct ahash_request *cryptd_req = ahash_request_ctx(req);
-	struct shash_desc *desc = cryptd_shash_desc(cryptd_req);
-	struct ghash_desc_ctx *dctx = shash_desc_ctx(desc);
-
-	ghash_async_init(req);
-	memcpy(dctx, in, sizeof(*dctx));
-	return 0;
-
-}
-
-static int ghash_async_export(struct ahash_request *req, void *out)
-{
-	struct ahash_request *cryptd_req = ahash_request_ctx(req);
-	struct shash_desc *desc = cryptd_shash_desc(cryptd_req);
-	struct ghash_desc_ctx *dctx = shash_desc_ctx(desc);
-
-	memcpy(out, dctx, sizeof(*dctx));
-	return 0;
-
-}
-
-static int ghash_async_digest(struct ahash_request *req)
-{
-	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-	struct ghash_async_ctx *ctx = crypto_ahash_ctx(tfm);
-	struct ahash_request *cryptd_req = ahash_request_ctx(req);
-	struct cryptd_ahash *cryptd_tfm = ctx->cryptd_tfm;
-
-	if (!crypto_simd_usable() ||
-	    (in_atomic() && cryptd_ahash_queued(cryptd_tfm))) {
-		ghash_init_cryptd_req(req);
-		return crypto_ahash_digest(cryptd_req);
-	} else {
-		struct shash_desc *desc = cryptd_shash_desc(cryptd_req);
-		struct crypto_shash *child = cryptd_ahash_child(cryptd_tfm);
-
-		desc->tfm = child;
-		return shash_ahash_digest(req, desc);
-	}
-}
-
-static int ghash_async_setkey(struct crypto_ahash *tfm, const u8 *key,
-			      unsigned int keylen)
-{
-	struct ghash_async_ctx *ctx = crypto_ahash_ctx(tfm);
-	struct crypto_ahash *child = &ctx->cryptd_tfm->base;
-
-	crypto_ahash_clear_flags(child, CRYPTO_TFM_REQ_MASK);
-	crypto_ahash_set_flags(child, crypto_ahash_get_flags(tfm)
-			       & CRYPTO_TFM_REQ_MASK);
-	return crypto_ahash_setkey(child, key, keylen);
-}
-
-static int ghash_async_init_tfm(struct crypto_tfm *tfm)
-{
-	struct cryptd_ahash *cryptd_tfm;
-	struct ghash_async_ctx *ctx = crypto_tfm_ctx(tfm);
-
-	cryptd_tfm = cryptd_alloc_ahash("__ghash-pclmulqdqni",
-					CRYPTO_ALG_INTERNAL,
-					CRYPTO_ALG_INTERNAL);
-	if (IS_ERR(cryptd_tfm))
-		return PTR_ERR(cryptd_tfm);
-	ctx->cryptd_tfm = cryptd_tfm;
-	crypto_ahash_set_reqsize(__crypto_ahash_cast(tfm),
-				 sizeof(struct ahash_request) +
-				 crypto_ahash_reqsize(&cryptd_tfm->base));
-
-	return 0;
-}
-
-static void ghash_async_exit_tfm(struct crypto_tfm *tfm)
-{
-	struct ghash_async_ctx *ctx = crypto_tfm_ctx(tfm);
-
-	cryptd_free_ahash(ctx->cryptd_tfm);
-}
-
-static struct ahash_alg ghash_async_alg = {
-	.init = ghash_async_init,
-	.update = ghash_async_update,
-	.final = ghash_async_final,
-	.setkey = ghash_async_setkey,
-	.digest = ghash_async_digest,
-	.export = ghash_async_export,
-	.import = ghash_async_import,
-	.halg = {
-		.digestsize = GHASH_DIGEST_SIZE,
-		.statesize = sizeof(struct ghash_desc_ctx),
-		.base = {
-			.cra_name = "ghash",
-			.cra_driver_name = "ghash-clmulni",
-			.cra_priority = 400,
-			.cra_ctxsize = sizeof(struct ghash_async_ctx),
-			.cra_flags = CRYPTO_ALG_ASYNC,
-			.cra_blocksize = GHASH_BLOCK_SIZE,
-			.cra_module = THIS_MODULE,
-			.cra_init = ghash_async_init_tfm,
-			.cra_exit = ghash_async_exit_tfm,
-		},
-	},
-};
-
 static const struct x86_cpu_id pcmul_cpu_id[] = {
 	X86_MATCH_FEATURE(X86_FEATURE_PCLMULQDQ, NULL), /* Pickle-Mickle-Duck */
 	{}
@@ -350,29 +144,14 @@ MODULE_DEVICE_TABLE(x86cpu, pcmul_cpu_id);

 static int __init ghash_pclmulqdqni_mod_init(void)
 {
-	int err;
-
 	if (!x86_match_cpu(pcmul_cpu_id))
		return -ENODEV;

-	err = crypto_register_shash(&ghash_alg);
-	if (err)
-		goto err_out;
-	err = crypto_register_ahash(&ghash_async_alg);
-	if (err)
-		goto err_shash;
-
-	return 0;
-
-err_shash:
-	crypto_unregister_shash(&ghash_alg);
-err_out:
-	return err;
+	return crypto_register_shash(&ghash_alg);
 }

 static void __exit ghash_pclmulqdqni_mod_exit(void)
 {
-	crypto_unregister_ahash(&ghash_async_alg);
 	crypto_unregister_shash(&ghash_alg);
 }

From patchwork Wed Apr 16 06:43:10 2025
X-Patchwork-Submitter: Herbert Xu
X-Patchwork-Id: 882033
Date: Wed, 16 Apr 2025 14:43:10 +0800
Message-Id: <348f7934687cd69bb635f4a58c4dd02045ec15c4.1744784515.git.herbert@gondor.apana.org.au>
From: Herbert Xu
Subject: [PATCH 12/67] crypto: mips/octeon-md5 - Use API partial block handling
To: Linux Crypto Mailing List

Use the Crypto API partial block handling. Also switch to the generic
export format.

Signed-off-by: Herbert Xu
---
 arch/mips/cavium-octeon/crypto/octeon-md5.c | 119 +++++++++++---------
 1 file changed, 63 insertions(+), 56 deletions(-)

diff --git a/arch/mips/cavium-octeon/crypto/octeon-md5.c b/arch/mips/cavium-octeon/crypto/octeon-md5.c
index 5ee4ade99b99..fbc84eb7fedf 100644
--- a/arch/mips/cavium-octeon/crypto/octeon-md5.c
+++ b/arch/mips/cavium-octeon/crypto/octeon-md5.c
@@ -19,22 +19,26 @@
 * any later version.
 */

-#include
-#include
-#include
-#include
-#include
-#include
 #include
 #include
+#include
+#include
+#include
+#include
+#include

 #include "octeon-crypto.h"

+struct octeon_md5_state {
+	__le32 hash[MD5_HASH_WORDS];
+	u64 byte_count;
+};
+
 /*
 * We pass everything as 64-bit. OCTEON can handle misaligned data.
 */
-static void octeon_md5_store_hash(struct md5_state *ctx)
+static void octeon_md5_store_hash(struct octeon_md5_state *ctx)
 {
 	u64 *hash = (u64 *)ctx->hash;

@@ -42,7 +46,7 @@ static void octeon_md5_store_hash(struct md5_state *ctx)
 	write_octeon_64bit_hash_dword(hash[1], 1);
 }

-static void octeon_md5_read_hash(struct md5_state *ctx)
+static void octeon_md5_read_hash(struct octeon_md5_state *ctx)
 {
 	u64 *hash = (u64 *)ctx->hash;

@@ -66,13 +70,12 @@ static void octeon_md5_transform(const void *_block)

 static int octeon_md5_init(struct shash_desc *desc)
 {
-	struct md5_state *mctx = shash_desc_ctx(desc);
+	struct octeon_md5_state *mctx = shash_desc_ctx(desc);

-	mctx->hash[0] = MD5_H0;
-	mctx->hash[1] = MD5_H1;
-	mctx->hash[2] = MD5_H2;
-	mctx->hash[3] = MD5_H3;
-	cpu_to_le32_array(mctx->hash, 4);
+	mctx->hash[0] = cpu_to_le32(MD5_H0);
+	mctx->hash[1] = cpu_to_le32(MD5_H1);
+	mctx->hash[2] = cpu_to_le32(MD5_H2);
+	mctx->hash[3] = cpu_to_le32(MD5_H3);
 	mctx->byte_count = 0;

 	return 0;
@@ -81,52 +84,38 @@ static int octeon_md5_init(struct shash_desc *desc)
 static int octeon_md5_update(struct shash_desc *desc, const u8 *data,
			     unsigned int len)
 {
-	struct md5_state *mctx = shash_desc_ctx(desc);
-	const u32 avail = sizeof(mctx->block) - (mctx->byte_count & 0x3f);
+	struct octeon_md5_state *mctx = shash_desc_ctx(desc);
 	struct octeon_cop2_state state;
 	unsigned long flags;

 	mctx->byte_count += len;
-
-	if (avail > len) {
-		memcpy((char *)mctx->block + (sizeof(mctx->block) - avail),
-		       data, len);
-		return 0;
-	}
-
-	memcpy((char *)mctx->block + (sizeof(mctx->block) - avail), data,
-	       avail);
-
 	flags = octeon_crypto_enable(&state);
 	octeon_md5_store_hash(mctx);

-	octeon_md5_transform(mctx->block);
-	data += avail;
-	len -= avail;
-
-	while (len >= sizeof(mctx->block)) {
+	do {
 		octeon_md5_transform(data);
-		data += sizeof(mctx->block);
-		len -= sizeof(mctx->block);
-	}
+		data += MD5_HMAC_BLOCK_SIZE;
+		len -= MD5_HMAC_BLOCK_SIZE;
+	} while (len >= MD5_HMAC_BLOCK_SIZE);

 	octeon_md5_read_hash(mctx);
 	octeon_crypto_disable(&state, flags);
-
-	memcpy(mctx->block, data, len);
-
-	return 0;
+	mctx->byte_count -= len;
+	return len;
 }

-static int octeon_md5_final(struct shash_desc *desc, u8 *out)
+static int octeon_md5_finup(struct shash_desc *desc, const u8 *src,
+			    unsigned int offset, u8 *out)
 {
-	struct md5_state *mctx = shash_desc_ctx(desc);
-	const unsigned int offset = mctx->byte_count & 0x3f;
-	char *p = (char *)mctx->block + offset;
+	struct octeon_md5_state *mctx = shash_desc_ctx(desc);
 	int padding = 56 - (offset + 1);
 	struct octeon_cop2_state state;
+	u32 block[MD5_BLOCK_WORDS];
 	unsigned long flags;
+	char *p;

+	p = memcpy(block, src, offset);
+	p += offset;
 	*p++ = 0x80;

 	flags = octeon_crypto_enable(&state);
@@ -134,39 +123,56 @@ static int octeon_md5_final(struct shash_desc *desc, u8 *out)

 	if (padding < 0) {
 		memset(p, 0x00, padding + sizeof(u64));
-		octeon_md5_transform(mctx->block);
-		p = (char *)mctx->block;
+		octeon_md5_transform(block);
+		p = (char *)block;
 		padding = 56;
 	}

 	memset(p, 0, padding);
-	mctx->block[14] = mctx->byte_count << 3;
-	mctx->block[15] = mctx->byte_count >> 29;
-	cpu_to_le32_array(mctx->block + 14, 2);
-	octeon_md5_transform(mctx->block);
+	mctx->byte_count += offset;
+	block[14] = mctx->byte_count << 3;
+	block[15] = mctx->byte_count >> 29;
+	cpu_to_le32_array(block + 14, 2);
+	octeon_md5_transform(block);

 	octeon_md5_read_hash(mctx);
 	octeon_crypto_disable(&state, flags);
+	memzero_explicit(block, sizeof(block));

 	memcpy(out, mctx->hash, sizeof(mctx->hash));
-	memset(mctx, 0, sizeof(*mctx));

 	return 0;
 }

 static int octeon_md5_export(struct shash_desc *desc, void *out)
 {
-	struct md5_state *ctx = shash_desc_ctx(desc);
+	struct octeon_md5_state *ctx = shash_desc_ctx(desc);
+	union {
+		u8 *u8;
+		u32 *u32;
+		u64 *u64;
+	} p = { .u8 = out };
+	int i;

-	memcpy(out, ctx, sizeof(*ctx));
+	for (i = 0; i < MD5_HASH_WORDS; i++)
+		put_unaligned(le32_to_cpu(ctx->hash[i]), p.u32++);
+	put_unaligned(ctx->byte_count, p.u64);
 	return 0;
 }

 static int octeon_md5_import(struct shash_desc *desc, const void *in)
 {
-	struct md5_state *ctx = shash_desc_ctx(desc);
+	struct octeon_md5_state *ctx = shash_desc_ctx(desc);
+	union {
+		const u8 *u8;
+		const u32 *u32;
+		const u64 *u64;
+	} p = { .u8 = in };
+	int i;

-	memcpy(ctx, in, sizeof(*ctx));
+	for (i = 0; i < MD5_HASH_WORDS; i++)
+		ctx->hash[i] = cpu_to_le32(get_unaligned(p.u32++));
+	ctx->byte_count = get_unaligned(p.u64);
 	return 0;
 }

@@ -174,15 +180,16 @@ static struct shash_alg alg = {
 	.digestsize = MD5_DIGEST_SIZE,
 	.init = octeon_md5_init,
 	.update = octeon_md5_update,
-	.final = octeon_md5_final,
+	.finup = octeon_md5_finup,
 	.export = octeon_md5_export,
 	.import = octeon_md5_import,
-	.descsize = sizeof(struct md5_state),
-	.statesize = sizeof(struct md5_state),
+	.statesize = MD5_STATE_SIZE,
+	.descsize = sizeof(struct octeon_md5_state),
 	.base = {
		.cra_name = "md5",
		.cra_driver_name= "octeon-md5",
		.cra_priority = OCTEON_CR_OPCODE_PRIORITY,
+		.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY,
		.cra_blocksize = MD5_HMAC_BLOCK_SIZE,
		.cra_module = THIS_MODULE,
 	}

From patchwork Wed Apr 16 06:43:15 2025
X-Patchwork-Submitter: Herbert Xu
X-Patchwork-Id: 882032
t=1744785800; c=relaxed/simple; bh=Cy7fC47cEuIs7KRgTcp01SVwQ8Lb7hWDBOauaUxM5hY=; h=Date:Message-Id:In-Reply-To:References:From:Subject:To; b=Xgyt2ySHMU3RcKERoEDwLC07i3ff2Rwq+6SPjQo1LUBvaRMmmVi81XZ1AfoJzWAVURbVObOzIEbfXD1KX8FUe7wgrmQ5WNGZrHIrLAM23VwZqedg4GD4Xh9r5VcmA+Q6cg1mgJKCaFGfxT0MKGvRvbNzJeafdhYnMc0eliT/KMo= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=gondor.apana.org.au; spf=pass smtp.mailfrom=gondor.apana.org.au; dkim=pass (2048-bit key) header.d=hmeau.com header.i=@hmeau.com header.b=qb02u/vO; arc=none smtp.client-ip=144.6.53.87 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=gondor.apana.org.au Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=gondor.apana.org.au Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=hmeau.com header.i=@hmeau.com header.b="qb02u/vO" DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=hmeau.com; s=formenos; h=To:Subject:From:References:In-Reply-To:Message-Id:Date:Sender: Reply-To:Cc:MIME-Version:Content-Type:Content-Transfer-Encoding:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:List-Subscribe: List-Post:List-Owner:List-Archive; bh=0LnuIPY/2WNGVJBnBlN/URT9vWJAi5us80zTkozX0vg=; b=qb02u/vOSdMjcQlYlBZdqafe5t RBZaEgnjW/iqB1mNNmwyyKA0siI8A3jp6C+A5WQN9t/YeBjYRXFOuwQQL73oL2fhVrOQ3ZzUJjQIl VdrZXk+UOBOApBhSYs7rm+qXZG6hJriIXKwuyzl8EXCvkjSzggFUAzKFdNlwWwAwvaeSK+8pV+QjL F4BP6fBS5C5dfLk9qCq5ivuz6KVbtNSvbbyIWAvav4o2260ZThYk0x7flVNWbZU8FjM1/1du6K/M/ gPfbJhkKIghoaZ9wursu0+vxkuomqWZkRIDOrU/lv9UZ18YvdgVahv25zNmWUvkzdqJgguvJoVdrm Ldn8u7uQ==; Received: from loth.rohan.me.apana.org.au ([192.168.167.2]) by formenos.hmeau.com with smtp (Exim 4.96 #2 (Debian)) id 1u4wUN-00G6In-0Z; Wed, 16 Apr 2025 14:43:16 +0800 Received: by loth.rohan.me.apana.org.au (sSMTP sendmail 
emulation); Wed, 16 Apr 2025 14:43:15 +0800 Date: Wed, 16 Apr 2025 14:43:15 +0800 Message-Id: In-Reply-To: References: From: Herbert Xu Subject: [PATCH 14/67] crypto: sparc/md5 - Use API partial block handling To: Linux Crypto Mailing List Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Use the Crypto API partial block handling. Also switch to the generic export format. Signed-off-by: Herbert Xu --- arch/sparc/crypto/md5_glue.c | 141 ++++++++++++++++------------------- 1 file changed, 63 insertions(+), 78 deletions(-) diff --git a/arch/sparc/crypto/md5_glue.c b/arch/sparc/crypto/md5_glue.c index 511db98d590a..5b018c6a376c 100644 --- a/arch/sparc/crypto/md5_glue.c +++ b/arch/sparc/crypto/md5_glue.c @@ -14,121 +14,105 @@ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt -#include -#include -#include -#include -#include -#include - -#include #include +#include +#include +#include +#include +#include +#include +#include +#include #include "opcodes.h" -asmlinkage void md5_sparc64_transform(u32 *digest, const char *data, +struct sparc_md5_state { + __le32 hash[MD5_HASH_WORDS]; + u64 byte_count; +}; + +asmlinkage void md5_sparc64_transform(__le32 *digest, const char *data, unsigned int rounds); static int md5_sparc64_init(struct shash_desc *desc) { - struct md5_state *mctx = shash_desc_ctx(desc); + struct sparc_md5_state *mctx = shash_desc_ctx(desc); - mctx->hash[0] = MD5_H0; - mctx->hash[1] = MD5_H1; - mctx->hash[2] = MD5_H2; - mctx->hash[3] = MD5_H3; - le32_to_cpu_array(mctx->hash, 4); + mctx->hash[0] = cpu_to_le32(MD5_H0); + mctx->hash[1] = cpu_to_le32(MD5_H1); + mctx->hash[2] = cpu_to_le32(MD5_H2); + mctx->hash[3] = cpu_to_le32(MD5_H3); mctx->byte_count = 0; return 0; } -static void __md5_sparc64_update(struct md5_state *sctx, const u8 *data, - unsigned int len, unsigned int partial) -{ - unsigned int done = 0; - - sctx->byte_count += len; - if (partial) { - done = MD5_HMAC_BLOCK_SIZE - partial; - memcpy((u8 
*)sctx->block + partial, data, done); - md5_sparc64_transform(sctx->hash, (u8 *)sctx->block, 1); - } - if (len - done >= MD5_HMAC_BLOCK_SIZE) { - const unsigned int rounds = (len - done) / MD5_HMAC_BLOCK_SIZE; - - md5_sparc64_transform(sctx->hash, data + done, rounds); - done += rounds * MD5_HMAC_BLOCK_SIZE; - } - - memcpy(sctx->block, data + done, len - done); -} - static int md5_sparc64_update(struct shash_desc *desc, const u8 *data, unsigned int len) { - struct md5_state *sctx = shash_desc_ctx(desc); - unsigned int partial = sctx->byte_count % MD5_HMAC_BLOCK_SIZE; + struct sparc_md5_state *sctx = shash_desc_ctx(desc); - /* Handle the fast case right here */ - if (partial + len < MD5_HMAC_BLOCK_SIZE) { - sctx->byte_count += len; - memcpy((u8 *)sctx->block + partial, data, len); - } else - __md5_sparc64_update(sctx, data, len, partial); - - return 0; + sctx->byte_count += round_down(len, MD5_HMAC_BLOCK_SIZE); + md5_sparc64_transform(sctx->hash, data, len / MD5_HMAC_BLOCK_SIZE); + return len - round_down(len, MD5_HMAC_BLOCK_SIZE); } /* Add padding and return the message digest. */ -static int md5_sparc64_final(struct shash_desc *desc, u8 *out) +static int md5_sparc64_finup(struct shash_desc *desc, const u8 *src, + unsigned int offset, u8 *out) { - struct md5_state *sctx = shash_desc_ctx(desc); - unsigned int i, index, padlen; - u32 *dst = (u32 *)out; - __le64 bits; - static const u8 padding[MD5_HMAC_BLOCK_SIZE] = { 0x80, }; + struct sparc_md5_state *sctx = shash_desc_ctx(desc); + __le64 block[MD5_BLOCK_WORDS] = {}; + u8 *p = memcpy(block, src, offset); + __le32 *dst = (__le32 *)out; + __le64 *pbits; + int i; - bits = cpu_to_le64(sctx->byte_count << 3); - - /* Pad out to 56 mod 64 and append length */ - index = sctx->byte_count % MD5_HMAC_BLOCK_SIZE; - padlen = (index < 56) ? 
(56 - index) : ((MD5_HMAC_BLOCK_SIZE+56) - index); - - /* We need to fill a whole block for __md5_sparc64_update() */ - if (padlen <= 56) { - sctx->byte_count += padlen; - memcpy((u8 *)sctx->block + index, padding, padlen); - } else { - __md5_sparc64_update(sctx, padding, padlen, index); - } - __md5_sparc64_update(sctx, (const u8 *)&bits, sizeof(bits), 56); + src = p; + p += offset; + *p++ = 0x80; + sctx->byte_count += offset; + pbits = &block[(MD5_BLOCK_WORDS / (offset > 55 ? 1 : 2)) - 1]; + *pbits = cpu_to_le64(sctx->byte_count << 3); + md5_sparc64_transform(sctx->hash, src, (pbits - block + 1) / 8); + memzero_explicit(block, sizeof(block)); /* Store state in digest */ for (i = 0; i < MD5_HASH_WORDS; i++) dst[i] = sctx->hash[i]; - /* Wipe context */ - memset(sctx, 0, sizeof(*sctx)); - return 0; } static int md5_sparc64_export(struct shash_desc *desc, void *out) { - struct md5_state *sctx = shash_desc_ctx(desc); - - memcpy(out, sctx, sizeof(*sctx)); + struct sparc_md5_state *sctx = shash_desc_ctx(desc); + union { + u8 *u8; + u32 *u32; + u64 *u64; + } p = { .u8 = out }; + int i; + for (i = 0; i < MD5_HASH_WORDS; i++) + put_unaligned(le32_to_cpu(sctx->hash[i]), p.u32++); + put_unaligned(sctx->byte_count, p.u64); return 0; } static int md5_sparc64_import(struct shash_desc *desc, const void *in) { - struct md5_state *sctx = shash_desc_ctx(desc); - - memcpy(sctx, in, sizeof(*sctx)); + struct sparc_md5_state *sctx = shash_desc_ctx(desc); + union { + const u8 *u8; + const u32 *u32; + const u64 *u64; + } p = { .u8 = in }; + int i; + for (i = 0; i < MD5_HASH_WORDS; i++) + sctx->hash[i] = cpu_to_le32(get_unaligned(p.u32++)); + sctx->byte_count = get_unaligned(p.u64); return 0; } @@ -136,15 +120,16 @@ static struct shash_alg alg = { .digestsize = MD5_DIGEST_SIZE, .init = md5_sparc64_init, .update = md5_sparc64_update, - .final = md5_sparc64_final, + .finup = md5_sparc64_finup, .export = md5_sparc64_export, .import = md5_sparc64_import, - .descsize = sizeof(struct md5_state), 
- .statesize = sizeof(struct md5_state), + .descsize = sizeof(struct sparc_md5_state), + .statesize = sizeof(struct sparc_md5_state), .base = { .cra_name = "md5", .cra_driver_name= "md5-sparc64", .cra_priority = SPARC_CR_OPCODE_PRIORITY, + .cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY, .cra_blocksize = MD5_HMAC_BLOCK_SIZE, .cra_module = THIS_MODULE, }

From patchwork Wed Apr 16 06:43:19 2025
Date: Wed, 16 Apr 2025 14:43:19 +0800
Message-Id: <94c4c6056004b4a0b11b3fcc1c1497a1d2bd3ab6.1744784515.git.herbert@gondor.apana.org.au>
From: Herbert Xu
Subject: [PATCH 16/67] crypto: arm64/sha1 - Use API partial block handling
To: Linux Crypto Mailing List

Use the Crypto API partial block handling. Also remove the unnecessary SIMD fallback path.
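The conversions in this series all follow the same contract: a `BLOCK_ONLY` `->update` hook (e.g. `sha1_base_do_update_blocks`) consumes only whole blocks and returns the number of leftover bytes, which the crypto API core, not the driver, buffers. A rough userspace sketch of that return convention (all `demo_*` names are invented stand-ins, not kernel API):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define BLOCK_SIZE 64

/* Stand-in for a driver's compression routine: just count blocks consumed. */
static void demo_block_fn(uint64_t *state, const uint8_t *data, int blocks)
{
    (void)data;
    *state += blocks;
}

/*
 * Shape of a BLOCK_ONLY ->update: feed every complete block to the
 * compression function and return the size of the partial tail.  The
 * API layer keeps that tail and prepends it to the next update call.
 */
static unsigned int demo_update_blocks(uint64_t *state, const uint8_t *data,
                                       unsigned int len)
{
    unsigned int remain = len % BLOCK_SIZE;

    if (len >= BLOCK_SIZE)
        demo_block_fn(state, data, (int)(len / BLOCK_SIZE));
    return remain;
}
```

This is why the converted `sha1_ce_update`/`sha1_neon_update` bodies shrink to a single call that returns `remain` instead of managing a buffer themselves.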
Signed-off-by: Herbert Xu --- arch/arm64/crypto/sha1-ce-glue.c | 66 ++++++++------------------------ 1 file changed, 17 insertions(+), 49 deletions(-) diff --git a/arch/arm64/crypto/sha1-ce-glue.c b/arch/arm64/crypto/sha1-ce-glue.c index cbd14f208f83..1f8c93fe1e64 100644 --- a/arch/arm64/crypto/sha1-ce-glue.c +++ b/arch/arm64/crypto/sha1-ce-glue.c @@ -7,14 +7,14 @@ #include #include -#include #include #include #include #include #include -#include +#include #include +#include MODULE_DESCRIPTION("SHA1 secure hash using ARMv8 Crypto Extensions"); MODULE_AUTHOR("Ard Biesheuvel "); @@ -56,79 +56,47 @@ static int sha1_ce_update(struct shash_desc *desc, const u8 *data, { struct sha1_ce_state *sctx = shash_desc_ctx(desc); - if (!crypto_simd_usable()) - return crypto_sha1_update(desc, data, len); - sctx->finalize = 0; - sha1_base_do_update(desc, data, len, sha1_ce_transform); - - return 0; + return sha1_base_do_update_blocks(desc, data, len, sha1_ce_transform); } static int sha1_ce_finup(struct shash_desc *desc, const u8 *data, unsigned int len, u8 *out) { struct sha1_ce_state *sctx = shash_desc_ctx(desc); - bool finalize = !sctx->sst.count && !(len % SHA1_BLOCK_SIZE) && len; - - if (!crypto_simd_usable()) - return crypto_sha1_finup(desc, data, len, out); + bool finalized = false; /* * Allow the asm code to perform the finalization if there is no * partial data and the input is a round multiple of the block size. 
*/ - sctx->finalize = finalize; + if (len >= SHA1_BLOCK_SIZE) { + unsigned int remain = len - round_down(len, SHA1_BLOCK_SIZE); - sha1_base_do_update(desc, data, len, sha1_ce_transform); - if (!finalize) - sha1_base_do_finalize(desc, sha1_ce_transform); + finalized = !remain; + sctx->finalize = finalized; + sha1_base_do_update_blocks(desc, data, len, sha1_ce_transform); + data += len - remain; + len = remain; + } + if (!finalized) + sha1_base_do_finup(desc, data, len, sha1_ce_transform); return sha1_base_finish(desc, out); } -static int sha1_ce_final(struct shash_desc *desc, u8 *out) -{ - struct sha1_ce_state *sctx = shash_desc_ctx(desc); - - if (!crypto_simd_usable()) - return crypto_sha1_finup(desc, NULL, 0, out); - - sctx->finalize = 0; - sha1_base_do_finalize(desc, sha1_ce_transform); - return sha1_base_finish(desc, out); -} - -static int sha1_ce_export(struct shash_desc *desc, void *out) -{ - struct sha1_ce_state *sctx = shash_desc_ctx(desc); - - memcpy(out, &sctx->sst, sizeof(struct sha1_state)); - return 0; -} - -static int sha1_ce_import(struct shash_desc *desc, const void *in) -{ - struct sha1_ce_state *sctx = shash_desc_ctx(desc); - - memcpy(&sctx->sst, in, sizeof(struct sha1_state)); - sctx->finalize = 0; - return 0; -} - static struct shash_alg alg = { .init = sha1_base_init, .update = sha1_ce_update, - .final = sha1_ce_final, .finup = sha1_ce_finup, - .import = sha1_ce_import, - .export = sha1_ce_export, .descsize = sizeof(struct sha1_ce_state), - .statesize = sizeof(struct sha1_state), + .statesize = SHA1_STATE_SIZE, .digestsize = SHA1_DIGEST_SIZE, .base = { .cra_name = "sha1", .cra_driver_name = "sha1-ce", .cra_priority = 200, + .cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY | + CRYPTO_AHASH_ALG_FINUP_MAX, .cra_blocksize = SHA1_BLOCK_SIZE, .cra_module = THIS_MODULE, } From patchwork Wed Apr 16 06:43:24 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 882030 
Date: Wed, 16 Apr 2025 14:43:24 +0800
Message-Id: <09eaa7d74bca91407a6732788068b0806b142a71.1744784515.git.herbert@gondor.apana.org.au>
From: Herbert Xu
Subject: [PATCH 18/67] crypto: sha1-generic - Use API partial block handling
To: Linux Crypto Mailing List

Use the Crypto API partial block handling.
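The `final`/`finup` merge in these patches leans on helpers like `sha1_base_do_finup` to append the Merkle–Damgård padding: a 0x80 byte, zero fill, and the 64-bit big-endian bit count in the last 8 bytes, spilling into a second block when fewer than 9 bytes of the current block remain. A standalone sketch of that padding computation (the `demo_pad_blocks` helper is invented for illustration; it models only the pad bytes from the current block offset onward):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define BLOCK_SIZE 64

/*
 * Fill in SHA-1/MD5-style length padding.  `total_bytes` is the full
 * message length; `offset` is how far into the current block the
 * message ends.  Returns how many blocks the tail occupies (1 or 2).
 * The caller's buffer must hold 2 * BLOCK_SIZE bytes.
 */
static unsigned int demo_pad_blocks(uint8_t *pad, uint64_t total_bytes)
{
    unsigned int offset = total_bytes % BLOCK_SIZE;
    unsigned int blocks = offset < 56 ? 1 : 2;  /* room for 0x80 + 8 bytes? */
    unsigned int end = blocks * BLOCK_SIZE;
    uint64_t bits = total_bytes << 3;
    int i;

    memset(pad, 0, 2 * BLOCK_SIZE);
    pad[offset] = 0x80;
    for (i = 0; i < 8; i++)                     /* big-endian bit count */
        pad[end - 1 - i] = (uint8_t)(bits >> (8 * i));
    return blocks;
}
```

The same `offset < 56` split is what the removed open-coded `final` routines (`padlen = (index < 56) ? (56 - index) : ((64+56) - index)`) computed by hand in each driver.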
Signed-off-by: Herbert Xu --- crypto/sha1_generic.c | 33 ++++++++++++--------------------- include/crypto/sha1.h | 8 -------- 2 files changed, 12 insertions(+), 29 deletions(-) diff --git a/crypto/sha1_generic.c b/crypto/sha1_generic.c index 325b57fe28dc..7a3c837923b5 100644 --- a/crypto/sha1_generic.c +++ b/crypto/sha1_generic.c @@ -12,13 +12,11 @@ * Copyright (c) Jean-Francois Dive */ #include -#include -#include -#include -#include #include #include -#include +#include +#include +#include const u8 sha1_zero_message_hash[SHA1_DIGEST_SIZE] = { 0xda, 0x39, 0xa3, 0xee, 0x5e, 0x6b, 0x4b, 0x0d, @@ -39,38 +37,31 @@ static void sha1_generic_block_fn(struct sha1_state *sst, u8 const *src, memzero_explicit(temp, sizeof(temp)); } -int crypto_sha1_update(struct shash_desc *desc, const u8 *data, - unsigned int len) +static int crypto_sha1_update(struct shash_desc *desc, const u8 *data, + unsigned int len) { - return sha1_base_do_update(desc, data, len, sha1_generic_block_fn); + return sha1_base_do_update_blocks(desc, data, len, + sha1_generic_block_fn); } -EXPORT_SYMBOL(crypto_sha1_update); -static int sha1_final(struct shash_desc *desc, u8 *out) +static int crypto_sha1_finup(struct shash_desc *desc, const u8 *data, + unsigned int len, u8 *out) { - sha1_base_do_finalize(desc, sha1_generic_block_fn); + sha1_base_do_finup(desc, data, len, sha1_generic_block_fn); return sha1_base_finish(desc, out); } -int crypto_sha1_finup(struct shash_desc *desc, const u8 *data, - unsigned int len, u8 *out) -{ - sha1_base_do_update(desc, data, len, sha1_generic_block_fn); - return sha1_final(desc, out); -} -EXPORT_SYMBOL(crypto_sha1_finup); - static struct shash_alg alg = { .digestsize = SHA1_DIGEST_SIZE, .init = sha1_base_init, .update = crypto_sha1_update, - .final = sha1_final, .finup = crypto_sha1_finup, - .descsize = sizeof(struct sha1_state), + .descsize = SHA1_STATE_SIZE, .base = { .cra_name = "sha1", .cra_driver_name= "sha1-generic", .cra_priority = 100, + .cra_flags = 
CRYPTO_AHASH_ALG_BLOCK_ONLY, .cra_blocksize = SHA1_BLOCK_SIZE, .cra_module = THIS_MODULE, } diff --git a/include/crypto/sha1.h b/include/crypto/sha1.h index dd6de4a4d6e6..f48230b1413c 100644 --- a/include/crypto/sha1.h +++ b/include/crypto/sha1.h @@ -26,14 +26,6 @@ struct sha1_state { u8 buffer[SHA1_BLOCK_SIZE]; }; -struct shash_desc; - -extern int crypto_sha1_update(struct shash_desc *desc, const u8 *data, - unsigned int len); - -extern int crypto_sha1_finup(struct shash_desc *desc, const u8 *data, - unsigned int len, u8 *hash); - /* * An implementation of SHA-1's compression function. Don't use in new code! * You shouldn't be using SHA-1, and even if you *have* to use SHA-1, this isn't

From patchwork Wed Apr 16 06:43:28 2025
Date: Wed, 16 Apr 2025 14:43:28 +0800
From: Herbert Xu
Subject: [PATCH 20/67] crypto: arm/sha1-neon - Use API partial block handling
To: Linux Crypto Mailing List

Use the Crypto API partial block handling.
Also remove the unnecessary SIMD fallback path. Signed-off-by: Herbert Xu --- arch/arm/crypto/sha1_neon_glue.c | 39 ++++++++------------------------ 1 file changed, 10 insertions(+), 29 deletions(-) diff --git a/arch/arm/crypto/sha1_neon_glue.c b/arch/arm/crypto/sha1_neon_glue.c index 9c70b87e69f7..d321850f22a6 100644 --- a/arch/arm/crypto/sha1_neon_glue.c +++ b/arch/arm/crypto/sha1_neon_glue.c @@ -13,18 +13,12 @@ * Copyright (c) Chandramouli Narayanan */ +#include #include -#include -#include -#include -#include -#include #include #include -#include -#include - -#include "sha1.h" +#include +#include asmlinkage void sha1_transform_neon(struct sha1_state *state_h, const u8 *data, int rounds); @@ -32,50 +26,37 @@ asmlinkage void sha1_transform_neon(struct sha1_state *state_h, static int sha1_neon_update(struct shash_desc *desc, const u8 *data, unsigned int len) { - struct sha1_state *sctx = shash_desc_ctx(desc); - - if (!crypto_simd_usable() || - (sctx->count % SHA1_BLOCK_SIZE) + len < SHA1_BLOCK_SIZE) - return sha1_update_arm(desc, data, len); + int remain; kernel_neon_begin(); - sha1_base_do_update(desc, data, len, sha1_transform_neon); + remain = sha1_base_do_update_blocks(desc, data, len, + sha1_transform_neon); kernel_neon_end(); - return 0; + return remain; } static int sha1_neon_finup(struct shash_desc *desc, const u8 *data, unsigned int len, u8 *out) { - if (!crypto_simd_usable()) - return sha1_finup_arm(desc, data, len, out); - kernel_neon_begin(); - if (len) - sha1_base_do_update(desc, data, len, sha1_transform_neon); - sha1_base_do_finalize(desc, sha1_transform_neon); + sha1_base_do_finup(desc, data, len, sha1_transform_neon); kernel_neon_end(); return sha1_base_finish(desc, out); } -static int sha1_neon_final(struct shash_desc *desc, u8 *out) -{ - return sha1_neon_finup(desc, NULL, 0, out); -} - static struct shash_alg alg = { .digestsize = SHA1_DIGEST_SIZE, .init = sha1_base_init, .update = sha1_neon_update, - .final = sha1_neon_final, .finup = 
sha1_neon_finup, - .descsize = sizeof(struct sha1_state), + .descsize = SHA1_STATE_SIZE, .base = { .cra_name = "sha1", .cra_driver_name = "sha1-neon", .cra_priority = 250, + .cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY, .cra_blocksize = SHA1_BLOCK_SIZE, .cra_module = THIS_MODULE, }

From patchwork Wed Apr 16 06:43:33 2025
Date: Wed, 16 Apr 2025 14:43:33 +0800
Message-Id: <25e990c507fe35fef1c24831596f92805efc4155.1744784515.git.herbert@gondor.apana.org.au>
From: Herbert Xu
Subject: [PATCH 22/67] crypto: powerpc/sha1 - Use API partial block handling
To: Linux Crypto Mailing List

Use the Crypto API partial block handling.
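The powerpc conversion below introduces `powerpc_sha_block`, a small adapter that lets the block-oriented API drive an assembly routine which only understands one block at a time. The pattern, sketched in userspace with invented `demo_*` names (the loop assumes at least one block, as the kernel helper does):

```c
#include <assert.h>
#include <stdint.h>

#define BLOCK_SIZE 64

/* One-block transform stand-in: record how many times it ran. */
static void demo_transform(uint32_t *state, const uint8_t *src)
{
    (void)src;
    state[0]++;
}

/*
 * Multi-block adapter: the API hands the driver a run of whole blocks
 * and the adapter feeds them one at a time to the single-block
 * primitive.  `blocks` must be >= 1 (the API only calls in with work).
 */
static void demo_block_loop(uint32_t *state, const uint8_t *data, int blocks)
{
    do {
        demo_transform(state, data);
        data += BLOCK_SIZE;
    } while (--blocks);
}
```

With this adapter in place, the driver's entire open-coded buffering path (`partial`, `done`, the `memcpy` into `sctx->buffer`) can be deleted, which is where most of the 81 removed lines come from.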
Signed-off-by: Herbert Xu --- arch/powerpc/crypto/sha1.c | 101 ++++++++----------------------------- 1 file changed, 20 insertions(+), 81 deletions(-) diff --git a/arch/powerpc/crypto/sha1.c b/arch/powerpc/crypto/sha1.c index f283bbd3f121..4593946aa9b3 100644 --- a/arch/powerpc/crypto/sha1.c +++ b/arch/powerpc/crypto/sha1.c @@ -13,107 +13,46 @@ * Copyright (c) Jean-Francois Dive */ #include -#include -#include -#include -#include #include #include -#include +#include +#include -void powerpc_sha_transform(u32 *state, const u8 *src); +asmlinkage void powerpc_sha_transform(u32 *state, const u8 *src); + +static void powerpc_sha_block(struct sha1_state *sctx, const u8 *data, + int blocks) +{ + do { + powerpc_sha_transform(sctx->state, data); + data += 64; + } while (--blocks); +} static int powerpc_sha1_update(struct shash_desc *desc, const u8 *data, unsigned int len) { - struct sha1_state *sctx = shash_desc_ctx(desc); - unsigned int partial, done; - const u8 *src; - - partial = sctx->count & 0x3f; - sctx->count += len; - done = 0; - src = data; - - if ((partial + len) > 63) { - - if (partial) { - done = -partial; - memcpy(sctx->buffer + partial, data, done + 64); - src = sctx->buffer; - } - - do { - powerpc_sha_transform(sctx->state, src); - done += 64; - src = data + done; - } while (done + 63 < len); - - partial = 0; - } - memcpy(sctx->buffer + partial, src, len - done); - - return 0; + return sha1_base_do_update_blocks(desc, data, len, powerpc_sha_block); } - /* Add padding and return the message digest. */ -static int powerpc_sha1_final(struct shash_desc *desc, u8 *out) +static int powerpc_sha1_finup(struct shash_desc *desc, const u8 *src, + unsigned int len, u8 *out) { - struct sha1_state *sctx = shash_desc_ctx(desc); - __be32 *dst = (__be32 *)out; - u32 i, index, padlen; - __be64 bits; - static const u8 padding[64] = { 0x80, }; - - bits = cpu_to_be64(sctx->count << 3); - - /* Pad out to 56 mod 64 */ - index = sctx->count & 0x3f; - padlen = (index < 56) ? 
(56 - index) : ((64+56) - index); - powerpc_sha1_update(desc, padding, padlen); - - /* Append length */ - powerpc_sha1_update(desc, (const u8 *)&bits, sizeof(bits)); - - /* Store state in digest */ - for (i = 0; i < 5; i++) - dst[i] = cpu_to_be32(sctx->state[i]); - - /* Wipe context */ - memset(sctx, 0, sizeof *sctx); - - return 0; -} - -static int powerpc_sha1_export(struct shash_desc *desc, void *out) -{ - struct sha1_state *sctx = shash_desc_ctx(desc); - - memcpy(out, sctx, sizeof(*sctx)); - return 0; -} - -static int powerpc_sha1_import(struct shash_desc *desc, const void *in) -{ - struct sha1_state *sctx = shash_desc_ctx(desc); - - memcpy(sctx, in, sizeof(*sctx)); - return 0; + sha1_base_do_finup(desc, src, len, powerpc_sha_block); + return sha1_base_finish(desc, out); } static struct shash_alg alg = { .digestsize = SHA1_DIGEST_SIZE, .init = sha1_base_init, .update = powerpc_sha1_update, - .final = powerpc_sha1_final, - .export = powerpc_sha1_export, - .import = powerpc_sha1_import, - .descsize = sizeof(struct sha1_state), - .statesize = sizeof(struct sha1_state), + .finup = powerpc_sha1_finup, + .descsize = SHA1_STATE_SIZE, .base = { .cra_name = "sha1", .cra_driver_name= "sha1-powerpc", + .cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY, .cra_blocksize = SHA1_BLOCK_SIZE, .cra_module = THIS_MODULE, }

From patchwork Wed Apr 16 06:43:38 2025
Date: Wed, 16 Apr 2025 14:43:38 +0800
Message-Id: <6b971860a0853cc3af5c70f324dd2749795c3867.1744784515.git.herbert@gondor.apana.org.au>
From: Herbert Xu
Subject: [PATCH 24/67] crypto: s390/sha1 - Use API partial block handling
To: Linux Crypto Mailing List

Use the Crypto API partial block handling.

Signed-off-by: Herbert Xu --- arch/s390/crypto/sha.h | 13 +++--- arch/s390/crypto/sha1_s390.c | 22 +++++------ arch/s390/crypto/sha_common.c | 74 +++++++++++++++++++++++++++++++++++ 3 files changed, 93 insertions(+), 16 deletions(-) diff --git a/arch/s390/crypto/sha.h b/arch/s390/crypto/sha.h index 2bb22db54c31..b8aeb51b2f3d 100644 --- a/arch/s390/crypto/sha.h +++ b/arch/s390/crypto/sha.h @@ -10,27 +10,30 @@ #ifndef _CRYPTO_ARCH_S390_SHA_H #define _CRYPTO_ARCH_S390_SHA_H -#include -#include -#include #include +#include /* must be big enough for the largest SHA variant */ #define SHA3_STATE_SIZE 200 #define CPACF_MAX_PARMBLOCK_SIZE SHA3_STATE_SIZE #define SHA_MAX_BLOCK_SIZE SHA3_224_BLOCK_SIZE +#define S390_SHA_CTX_SIZE offsetof(struct s390_sha_ctx, buf) struct s390_sha_ctx { u64 count; /* message length in bytes */ u32 state[CPACF_MAX_PARMBLOCK_SIZE / sizeof(u32)]; - u8 buf[SHA_MAX_BLOCK_SIZE]; int func; /* KIMD function to use */ - int first_message_part; + bool first_message_part; + u8 buf[SHA_MAX_BLOCK_SIZE]; }; struct shash_desc; int s390_sha_update(struct shash_desc *desc, const u8 *data, unsigned int len); +int s390_sha_update_blocks(struct shash_desc *desc, const u8 *data, + unsigned
int len); int s390_sha_final(struct shash_desc *desc, u8 *out); +int s390_sha_finup(struct shash_desc *desc, const u8 *src, unsigned int len, + u8 *out); #endif diff --git a/arch/s390/crypto/sha1_s390.c b/arch/s390/crypto/sha1_s390.c index bc3a22704e09..d229cbd2ba22 100644 --- a/arch/s390/crypto/sha1_s390.c +++ b/arch/s390/crypto/sha1_s390.c @@ -18,12 +18,12 @@ * Copyright (c) Andrew McDonald * Copyright (c) Jean-Francois Dive */ -#include -#include -#include -#include -#include #include +#include +#include +#include +#include +#include #include "sha.h" @@ -49,7 +49,6 @@ static int s390_sha1_export(struct shash_desc *desc, void *out) octx->count = sctx->count; memcpy(octx->state, sctx->state, sizeof(octx->state)); - memcpy(octx->buffer, sctx->buf, sizeof(octx->buffer)); return 0; } @@ -60,7 +59,6 @@ static int s390_sha1_import(struct shash_desc *desc, const void *in) sctx->count = ictx->count; memcpy(sctx->state, ictx->state, sizeof(ictx->state)); - memcpy(sctx->buf, ictx->buffer, sizeof(ictx->buffer)); sctx->func = CPACF_KIMD_SHA_1; return 0; } @@ -68,16 +66,18 @@ static int s390_sha1_import(struct shash_desc *desc, const void *in) static struct shash_alg alg = { .digestsize = SHA1_DIGEST_SIZE, .init = s390_sha1_init, - .update = s390_sha_update, - .final = s390_sha_final, + .update = s390_sha_update_blocks, + .finup = s390_sha_finup, .export = s390_sha1_export, .import = s390_sha1_import, - .descsize = sizeof(struct s390_sha_ctx), - .statesize = sizeof(struct sha1_state), + .descsize = S390_SHA_CTX_SIZE, + .statesize = SHA1_STATE_SIZE, .base = { .cra_name = "sha1", .cra_driver_name= "sha1-s390", .cra_priority = 300, + .cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY | + CRYPTO_AHASH_ALG_FINUP_MAX, .cra_blocksize = SHA1_BLOCK_SIZE, .cra_module = THIS_MODULE, } diff --git a/arch/s390/crypto/sha_common.c b/arch/s390/crypto/sha_common.c index 961d7d522af1..013bb37ad3ef 100644 --- a/arch/s390/crypto/sha_common.c +++ b/arch/s390/crypto/sha_common.c @@ -58,6 +58,27 @@ int 
s390_sha_update(struct shash_desc *desc, const u8 *data, unsigned int len) } EXPORT_SYMBOL_GPL(s390_sha_update); +int s390_sha_update_blocks(struct shash_desc *desc, const u8 *data, + unsigned int len) +{ + unsigned int bsize = crypto_shash_blocksize(desc->tfm); + struct s390_sha_ctx *ctx = shash_desc_ctx(desc); + unsigned int n; + int fc; + + fc = ctx->func; + if (ctx->first_message_part) + fc |= test_facility(86) ? CPACF_KIMD_NIP : 0; + + /* process as many blocks as possible */ + n = (len / bsize) * bsize; + ctx->count += n; + cpacf_kimd(fc, ctx->state, data, n); + ctx->first_message_part = 0; + return len - n; +} +EXPORT_SYMBOL_GPL(s390_sha_update_blocks); + static int s390_crypto_shash_parmsize(int func) { switch (func) { @@ -132,5 +153,58 @@ int s390_sha_final(struct shash_desc *desc, u8 *out) } EXPORT_SYMBOL_GPL(s390_sha_final); +int s390_sha_finup(struct shash_desc *desc, const u8 *src, unsigned int len, + u8 *out) +{ + struct s390_sha_ctx *ctx = shash_desc_ctx(desc); + int mbl_offset, fc; + u64 bits; + + ctx->count += len; + + bits = ctx->count * 8; + mbl_offset = s390_crypto_shash_parmsize(ctx->func); + if (mbl_offset < 0) + return -EINVAL; + + mbl_offset = mbl_offset / sizeof(u32); + + /* set total msg bit length (mbl) in CPACF parmblock */ + switch (ctx->func) { + case CPACF_KLMD_SHA_1: + case CPACF_KLMD_SHA_256: + memcpy(ctx->state + mbl_offset, &bits, sizeof(bits)); + break; + case CPACF_KLMD_SHA_512: + /* + * the SHA512 parmblock has a 128-bit mbl field, clear + * high-order u64 field, copy bits to low-order u64 field + */ + memset(ctx->state + mbl_offset, 0x00, sizeof(bits)); + mbl_offset += sizeof(u64) / sizeof(u32); + memcpy(ctx->state + mbl_offset, &bits, sizeof(bits)); + break; + case CPACF_KLMD_SHA3_224: + case CPACF_KLMD_SHA3_256: + case CPACF_KLMD_SHA3_384: + case CPACF_KLMD_SHA3_512: + break; + default: + return -EINVAL; + } + + fc = ctx->func; + fc |= test_facility(86) ? 
CPACF_KLMD_DUFOP : 0; + if (ctx->first_message_part) + fc |= CPACF_KLMD_NIP; + cpacf_klmd(fc, ctx->state, src, len); + + /* copy digest to out */ + memcpy(out, ctx->state, crypto_shash_digestsize(desc->tfm)); + + return 0; +} +EXPORT_SYMBOL_GPL(s390_sha_finup); + MODULE_LICENSE("GPL"); MODULE_DESCRIPTION("s390 SHA cipher common functions");

From patchwork Wed Apr 16 06:43:42 2025
X-Patchwork-Submitter: Herbert Xu
X-Patchwork-Id: 882026
Date: Wed, 16 Apr 2025 14:43:42 +0800
Message-Id: <000b2683d3a561c526149540cda84ef0ed376942.1744784515.git.herbert@gondor.apana.org.au>
Subject: [PATCH 26/67] crypto: sha1_base - Remove partial block helpers
From: Herbert Xu
To: Linux Crypto Mailing List

Now that all sha1_base users have been converted to use the API partial
block handling, remove the partial block helpers.
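The helper being deleted implemented the textbook buffer-and-spill scheme that the API core now provides centrally. A standalone sketch of that scheme, with toy names and a block counter instead of real hashing (nothing below is kernel API):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define BLK 64 /* SHA1_BLOCK_SIZE */

/* Toy stand-in for sha1_state: partial buffer, byte count, block counter. */
struct toy_state {
	unsigned char buffer[BLK];
	size_t count;
	int blocks_run;
};

static void toy_block_fn(struct toy_state *s, const unsigned char *src,
			 int blocks)
{
	(void)src;		/* a real block function would hash here */
	s->blocks_run += blocks;
}

/*
 * Same shape as the removed sha1_base_do_update(): top up the partial
 * buffer, run whole blocks straight from the input, stash the tail.
 */
static void toy_do_update(struct toy_state *s, const unsigned char *data,
			  size_t len)
{
	size_t partial = s->count % BLK;

	s->count += len;
	if (partial + len >= BLK) {
		if (partial) {
			size_t p = BLK - partial;

			memcpy(s->buffer + partial, data, p);
			data += p;
			len -= p;
			toy_block_fn(s, s->buffer, 1);
		}
		if (len >= BLK) {
			toy_block_fn(s, data, (int)(len / BLK));
			data += (len / BLK) * BLK;
			len %= BLK;
		}
		partial = 0;
	}
	if (len)
		memcpy(s->buffer + partial, data, len);
}
```

Feeding 40 bytes then 60 bytes runs the block function once (the first 64 bytes) and leaves 36 bytes buffered; duplicating this bookkeeping in every driver is exactly what the API conversion removes.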
Signed-off-by: Herbert Xu --- include/crypto/sha1_base.h | 61 -------------------------------------- 1 file changed, 61 deletions(-) diff --git a/include/crypto/sha1_base.h b/include/crypto/sha1_base.h index b23cfad18ce2..62701d136c79 100644 --- a/include/crypto/sha1_base.h +++ b/include/crypto/sha1_base.h @@ -31,44 +31,6 @@ static inline int sha1_base_init(struct shash_desc *desc) return 0; } -static inline int sha1_base_do_update(struct shash_desc *desc, - const u8 *data, - unsigned int len, - sha1_block_fn *block_fn) -{ - struct sha1_state *sctx = shash_desc_ctx(desc); - unsigned int partial = sctx->count % SHA1_BLOCK_SIZE; - - sctx->count += len; - - if (unlikely((partial + len) >= SHA1_BLOCK_SIZE)) { - int blocks; - - if (partial) { - int p = SHA1_BLOCK_SIZE - partial; - - memcpy(sctx->buffer + partial, data, p); - data += p; - len -= p; - - block_fn(sctx, sctx->buffer, 1); - } - - blocks = len / SHA1_BLOCK_SIZE; - len %= SHA1_BLOCK_SIZE; - - if (blocks) { - block_fn(sctx, data, blocks); - data += blocks * SHA1_BLOCK_SIZE; - } - partial = 0; - } - if (len) - memcpy(sctx->buffer + partial, data, len); - - return 0; -} - static inline int sha1_base_do_update_blocks(struct shash_desc *desc, const u8 *data, unsigned int len, @@ -82,29 +44,6 @@ static inline int sha1_base_do_update_blocks(struct shash_desc *desc, return remain; } -static inline int sha1_base_do_finalize(struct shash_desc *desc, - sha1_block_fn *block_fn) -{ - const int bit_offset = SHA1_BLOCK_SIZE - sizeof(__be64); - struct sha1_state *sctx = shash_desc_ctx(desc); - __be64 *bits = (__be64 *)(sctx->buffer + bit_offset); - unsigned int partial = sctx->count % SHA1_BLOCK_SIZE; - - sctx->buffer[partial++] = 0x80; - if (partial > bit_offset) { - memset(sctx->buffer + partial, 0x0, SHA1_BLOCK_SIZE - partial); - partial = 0; - - block_fn(sctx, sctx->buffer, 1); - } - - memset(sctx->buffer + partial, 0x0, bit_offset - partial); - *bits = cpu_to_be64(sctx->count << 3); - block_fn(sctx, sctx->buffer, 1); - - 
return 0; -} - static inline int sha1_base_do_finup(struct shash_desc *desc, const u8 *src, unsigned int len, sha1_block_fn *block_fn)

From patchwork Wed Apr 16 06:43:47 2025
X-Patchwork-Submitter: Herbert Xu
X-Patchwork-Id: 882025
Date: Wed, 16 Apr 2025 14:43:47 +0800
Message-Id: <26eb6134cc00ea3133cb0902bc28bbc0f7d3b3d5.1744784515.git.herbert@gondor.apana.org.au>
Subject: [PATCH 28/67] crypto: mips/octeon-sha256 - Use API partial block handling
From: Herbert Xu
To: Linux Crypto Mailing List

Use the Crypto API partial block handling.
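Under the BLOCK_ONLY model, a driver's ->update consumes only whole blocks and returns the leftover byte count for the API core to buffer, which is why the converted octeon update below returns `remain`. A minimal sketch of that contract (hypothetical toy function, not the kernel interface itself):

```c
#include <assert.h>

#define BLK 64 /* SHA256_BLOCK_SIZE */

/*
 * Hypothetical stand-in for a BLOCK_ONLY ->update: consume whole
 * blocks only, report the remainder back for the API core to buffer.
 */
static unsigned int toy_update_blocks(unsigned int len, int *blocks_out)
{
	unsigned int n = (len / BLK) * BLK;	/* bytes actually consumed */

	*blocks_out = (int)(len / BLK);		/* blocks handed to hardware */
	return len - n;				/* leftover for the API */
}
```

With 150 bytes of input, two blocks are processed and 22 bytes are reported back; the driver never touches a partial-block buffer of its own.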
Signed-off-by: Herbert Xu --- .../mips/cavium-octeon/crypto/octeon-sha256.c | 161 ++++-------------- 1 file changed, 37 insertions(+), 124 deletions(-) diff --git a/arch/mips/cavium-octeon/crypto/octeon-sha256.c b/arch/mips/cavium-octeon/crypto/octeon-sha256.c index 435e4a6e7f13..8e85ea65387c 100644 --- a/arch/mips/cavium-octeon/crypto/octeon-sha256.c +++ b/arch/mips/cavium-octeon/crypto/octeon-sha256.c @@ -14,15 +14,12 @@ * SHA224 Support Copyright 2007 Intel Corporation */ -#include -#include -#include -#include -#include -#include -#include #include #include +#include +#include +#include +#include #include "octeon-crypto.h" @@ -30,7 +27,7 @@ * We pass everything as 64-bit. OCTEON can handle misaligned data. */ -static void octeon_sha256_store_hash(struct sha256_state *sctx) +static void octeon_sha256_store_hash(struct crypto_sha256_state *sctx) { u64 *hash = (u64 *)sctx->state; @@ -40,7 +37,7 @@ static void octeon_sha256_store_hash(struct sha256_state *sctx) write_octeon_64bit_hash_dword(hash[3], 3); } -static void octeon_sha256_read_hash(struct sha256_state *sctx) +static void octeon_sha256_read_hash(struct crypto_sha256_state *sctx) { u64 *hash = (u64 *)sctx->state; @@ -50,158 +47,72 @@ static void octeon_sha256_read_hash(struct sha256_state *sctx) hash[3] = read_octeon_64bit_hash_dword(3); } -static void octeon_sha256_transform(const void *_block) +static void octeon_sha256_transform(struct crypto_sha256_state *sctx, + const u8 *src, int blocks) { - const u64 *block = _block; + do { + const u64 *block = (const u64 *)src; - write_octeon_64bit_block_dword(block[0], 0); - write_octeon_64bit_block_dword(block[1], 1); - write_octeon_64bit_block_dword(block[2], 2); - write_octeon_64bit_block_dword(block[3], 3); - write_octeon_64bit_block_dword(block[4], 4); - write_octeon_64bit_block_dword(block[5], 5); - write_octeon_64bit_block_dword(block[6], 6); - octeon_sha256_start(block[7]); -} + write_octeon_64bit_block_dword(block[0], 0); + 
write_octeon_64bit_block_dword(block[1], 1); + write_octeon_64bit_block_dword(block[2], 2); + write_octeon_64bit_block_dword(block[3], 3); + write_octeon_64bit_block_dword(block[4], 4); + write_octeon_64bit_block_dword(block[5], 5); + write_octeon_64bit_block_dword(block[6], 6); + octeon_sha256_start(block[7]); -static void __octeon_sha256_update(struct sha256_state *sctx, const u8 *data, - unsigned int len) -{ - unsigned int partial; - unsigned int done; - const u8 *src; - - partial = sctx->count % SHA256_BLOCK_SIZE; - sctx->count += len; - done = 0; - src = data; - - if ((partial + len) >= SHA256_BLOCK_SIZE) { - if (partial) { - done = -partial; - memcpy(sctx->buf + partial, data, - done + SHA256_BLOCK_SIZE); - src = sctx->buf; - } - - do { - octeon_sha256_transform(src); - done += SHA256_BLOCK_SIZE; - src = data + done; - } while (done + SHA256_BLOCK_SIZE <= len); - - partial = 0; - } - memcpy(sctx->buf + partial, src, len - done); + src += SHA256_BLOCK_SIZE; + } while (--blocks); } static int octeon_sha256_update(struct shash_desc *desc, const u8 *data, unsigned int len) { - struct sha256_state *sctx = shash_desc_ctx(desc); + struct crypto_sha256_state *sctx = shash_desc_ctx(desc); struct octeon_cop2_state state; unsigned long flags; - - /* - * Small updates never reach the crypto engine, so the generic sha256 is - * faster because of the heavyweight octeon_crypto_enable() / - * octeon_crypto_disable(). 
- */ - if ((sctx->count % SHA256_BLOCK_SIZE) + len < SHA256_BLOCK_SIZE) - return crypto_sha256_update(desc, data, len); + int remain; flags = octeon_crypto_enable(&state); octeon_sha256_store_hash(sctx); - __octeon_sha256_update(sctx, data, len); + remain = sha256_base_do_update_blocks(desc, data, len, + octeon_sha256_transform); octeon_sha256_read_hash(sctx); octeon_crypto_disable(&state, flags); - - return 0; + return remain; } -static int octeon_sha256_final(struct shash_desc *desc, u8 *out) +static int octeon_sha256_finup(struct shash_desc *desc, const u8 *src, + unsigned int len, u8 *out) { - struct sha256_state *sctx = shash_desc_ctx(desc); - static const u8 padding[64] = { 0x80, }; + struct crypto_sha256_state *sctx = shash_desc_ctx(desc); struct octeon_cop2_state state; - __be32 *dst = (__be32 *)out; - unsigned int pad_len; unsigned long flags; - unsigned int index; - __be64 bits; - int i; - - /* Save number of bits. */ - bits = cpu_to_be64(sctx->count << 3); - - /* Pad out to 56 mod 64. */ - index = sctx->count & 0x3f; - pad_len = (index < 56) ? (56 - index) : ((64+56) - index); flags = octeon_crypto_enable(&state); octeon_sha256_store_hash(sctx); - __octeon_sha256_update(sctx, padding, pad_len); - - /* Append length (before padding). */ - __octeon_sha256_update(sctx, (const u8 *)&bits, sizeof(bits)); + sha256_base_do_finup(desc, src, len, octeon_sha256_transform); octeon_sha256_read_hash(sctx); octeon_crypto_disable(&state, flags); - - /* Store state in digest */ - for (i = 0; i < 8; i++) - dst[i] = cpu_to_be32(sctx->state[i]); - - /* Zeroize sensitive information. 
*/ - memset(sctx, 0, sizeof(*sctx)); - - return 0; -} - -static int octeon_sha224_final(struct shash_desc *desc, u8 *hash) -{ - u8 D[SHA256_DIGEST_SIZE]; - - octeon_sha256_final(desc, D); - - memcpy(hash, D, SHA224_DIGEST_SIZE); - memzero_explicit(D, SHA256_DIGEST_SIZE); - - return 0; -} - -static int octeon_sha256_export(struct shash_desc *desc, void *out) -{ - struct sha256_state *sctx = shash_desc_ctx(desc); - - memcpy(out, sctx, sizeof(*sctx)); - return 0; -} - -static int octeon_sha256_import(struct shash_desc *desc, const void *in) -{ - struct sha256_state *sctx = shash_desc_ctx(desc); - - memcpy(sctx, in, sizeof(*sctx)); - return 0; + return sha256_base_finish(desc, out); } static struct shash_alg octeon_sha256_algs[2] = { { .digestsize = SHA256_DIGEST_SIZE, .init = sha256_base_init, .update = octeon_sha256_update, - .final = octeon_sha256_final, - .export = octeon_sha256_export, - .import = octeon_sha256_import, - .descsize = sizeof(struct sha256_state), - .statesize = sizeof(struct sha256_state), + .finup = octeon_sha256_finup, + .descsize = sizeof(struct crypto_sha256_state), .base = { .cra_name = "sha256", .cra_driver_name= "octeon-sha256", .cra_priority = OCTEON_CR_OPCODE_PRIORITY, + .cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY, .cra_blocksize = SHA256_BLOCK_SIZE, .cra_module = THIS_MODULE, } @@ -209,11 +120,13 @@ static struct shash_alg octeon_sha256_algs[2] = { { .digestsize = SHA224_DIGEST_SIZE, .init = sha224_base_init, .update = octeon_sha256_update, - .final = octeon_sha224_final, - .descsize = sizeof(struct sha256_state), + .finup = octeon_sha256_finup, + .descsize = sizeof(struct crypto_sha256_state), .base = { .cra_name = "sha224", .cra_driver_name= "octeon-sha224", + .cra_priority = OCTEON_CR_OPCODE_PRIORITY, + .cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY, .cra_blocksize = SHA224_BLOCK_SIZE, .cra_module = THIS_MODULE, } From patchwork Wed Apr 16 06:43:51 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit 
X-Patchwork-Submitter: Herbert Xu
X-Patchwork-Id: 882024
Date: Wed, 16 Apr 2025 14:43:51 +0800
Subject: [PATCH 30/67] crypto: sha256-generic - Use API partial block handling
From: Herbert Xu
To: Linux Crypto Mailing List

Use the Crypto API partial block handling.
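The finup conversion leans on sha256_base_do_finup() to append the MD-style padding: a 0x80 byte, zeroes, then the 64-bit bit count ending the final block. A toy version of the classic pad-length computation (the same formula the removed driver final() paths used, e.g. in the octeon patch above):

```c
#include <assert.h>

#define BLK 64		/* SHA256_BLOCK_SIZE */
#define LEN_OFF 56	/* offset of the 64-bit bit count: BLK - sizeof(__be64) */

/*
 * Bytes of 0x80-plus-zeroes needed before the length field, given the
 * current tail length; mirrors: (idx < 56) ? (56 - idx) : ((64+56) - idx).
 */
static unsigned int toy_pad_len(unsigned int tail)
{
	unsigned int idx = tail % BLK;

	return (idx < LEN_OFF ? LEN_OFF : LEN_OFF + BLK) - idx;
}
```

A tail of 55 bytes needs a single 0x80 byte; a tail of 56 or more spills into an extra block, which is why the padding can cost up to a full block plus 56 bytes.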
Signed-off-by: Herbert Xu --- crypto/sha256_generic.c | 50 +++++++++++++++++------------------------ include/crypto/sha2.h | 6 ----- 2 files changed, 21 insertions(+), 35 deletions(-) diff --git a/crypto/sha256_generic.c b/crypto/sha256_generic.c index b00521f1a6d4..05084e5bbaec 100644 --- a/crypto/sha256_generic.c +++ b/crypto/sha256_generic.c @@ -8,14 +8,10 @@ * SHA224 Support Copyright 2007 Intel Corporation */ #include -#include -#include -#include -#include #include #include -#include -#include +#include +#include const u8 sha224_zero_message_hash[SHA224_DIGEST_SIZE] = { 0xd1, 0x4a, 0x02, 0x8c, 0x2a, 0x3a, 0x2b, 0xc9, 0x47, @@ -33,42 +29,37 @@ const u8 sha256_zero_message_hash[SHA256_DIGEST_SIZE] = { }; EXPORT_SYMBOL_GPL(sha256_zero_message_hash); -int crypto_sha256_update(struct shash_desc *desc, const u8 *data, - unsigned int len) +static void sha256_block(struct crypto_sha256_state *sctx, const u8 *input, + int blocks) { - sha256_update(shash_desc_ctx(desc), data, len); - return 0; -} -EXPORT_SYMBOL(crypto_sha256_update); - -static int crypto_sha256_final(struct shash_desc *desc, u8 *out) -{ - if (crypto_shash_digestsize(desc->tfm) == SHA224_DIGEST_SIZE) - sha224_final(shash_desc_ctx(desc), out); - else - sha256_final(shash_desc_ctx(desc), out); - return 0; + sha256_transform_blocks(sctx, input, blocks); } -int crypto_sha256_finup(struct shash_desc *desc, const u8 *data, - unsigned int len, u8 *hash) +static int crypto_sha256_update(struct shash_desc *desc, const u8 *data, + unsigned int len) { - sha256_update(shash_desc_ctx(desc), data, len); - return crypto_sha256_final(desc, hash); + return sha256_base_do_update_blocks(desc, data, len, sha256_block); +} + +static int crypto_sha256_finup(struct shash_desc *desc, const u8 *data, + unsigned int len, u8 *hash) +{ + sha256_base_do_finup(desc, data, len, sha256_block); + return sha256_base_finish(desc, hash); } -EXPORT_SYMBOL(crypto_sha256_finup); static struct shash_alg sha256_algs[2] = { { .digestsize = 
SHA256_DIGEST_SIZE, .init = sha256_base_init, .update = crypto_sha256_update, - .final = crypto_sha256_final, .finup = crypto_sha256_finup, - .descsize = sizeof(struct sha256_state), + .descsize = sizeof(struct crypto_sha256_state), .base = { .cra_name = "sha256", .cra_driver_name= "sha256-generic", .cra_priority = 100, + .cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY | + CRYPTO_AHASH_ALG_FINUP_MAX, .cra_blocksize = SHA256_BLOCK_SIZE, .cra_module = THIS_MODULE, } @@ -76,13 +67,14 @@ static struct shash_alg sha256_algs[2] = { { .digestsize = SHA224_DIGEST_SIZE, .init = sha224_base_init, .update = crypto_sha256_update, - .final = crypto_sha256_final, .finup = crypto_sha256_finup, - .descsize = sizeof(struct sha256_state), + .descsize = sizeof(struct crypto_sha256_state), .base = { .cra_name = "sha224", .cra_driver_name= "sha224-generic", .cra_priority = 100, + .cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY | + CRYPTO_AHASH_ALG_FINUP_MAX, .cra_blocksize = SHA224_BLOCK_SIZE, .cra_module = THIS_MODULE, } diff --git a/include/crypto/sha2.h b/include/crypto/sha2.h index d9b1b9932393..a913bad5dd3b 100644 --- a/include/crypto/sha2.h +++ b/include/crypto/sha2.h @@ -83,12 +83,6 @@ struct sha512_state { struct shash_desc; -extern int crypto_sha256_update(struct shash_desc *desc, const u8 *data, - unsigned int len); - -extern int crypto_sha256_finup(struct shash_desc *desc, const u8 *data, - unsigned int len, u8 *hash); - extern int crypto_sha512_update(struct shash_desc *desc, const u8 *data, unsigned int len);

From patchwork Wed Apr 16 06:43:56 2025
X-Patchwork-Submitter: Herbert Xu
X-Patchwork-Id: 882023
Date: Wed, 16 Apr 2025 14:43:56 +0800
Subject: [PATCH 32/67] crypto: arm/sha256-neon - Use API partial block handling
From: Herbert Xu
To: Linux Crypto Mailing List

Use the Crypto API partial block handling.

Also remove the unnecessary SIMD fallback path.

Signed-off-by: Herbert Xu --- arch/arm/crypto/sha256_neon_glue.c | 49 ++++++++++-------------------- 1 file changed, 16 insertions(+), 33 deletions(-) diff --git a/arch/arm/crypto/sha256_neon_glue.c b/arch/arm/crypto/sha256_neon_glue.c index ccdcfff71910..76eb3cdc21c9 100644 --- a/arch/arm/crypto/sha256_neon_glue.c +++ b/arch/arm/crypto/sha256_neon_glue.c @@ -9,69 +9,51 @@ * Copyright © 2014 Jussi Kivilinna */ +#include #include -#include -#include -#include #include #include -#include -#include -#include +#include +#include #include "sha256_glue.h" -asmlinkage void sha256_block_data_order_neon(struct sha256_state *digest, - const u8 *data, int num_blks); +asmlinkage void sha256_block_data_order_neon( + struct crypto_sha256_state *digest, const u8 *data, int num_blks); static int crypto_sha256_neon_update(struct shash_desc *desc, const u8 *data, unsigned int len) { - struct sha256_state *sctx = shash_desc_ctx(desc); - - if (!crypto_simd_usable() || - (sctx->count % SHA256_BLOCK_SIZE) + len < SHA256_BLOCK_SIZE) - return crypto_sha256_arm_update(desc, data, len); + int
remain; kernel_neon_begin(); - sha256_base_do_update(desc, data, len, sha256_block_data_order_neon); + remain = sha256_base_do_update_blocks(desc, data, len, + sha256_block_data_order_neon); kernel_neon_end(); - - return 0; + return remain; } static int crypto_sha256_neon_finup(struct shash_desc *desc, const u8 *data, unsigned int len, u8 *out) { - if (!crypto_simd_usable()) - return crypto_sha256_arm_finup(desc, data, len, out); - kernel_neon_begin(); - if (len) - sha256_base_do_update(desc, data, len, - sha256_block_data_order_neon); - sha256_base_do_finalize(desc, sha256_block_data_order_neon); + sha256_base_do_finup(desc, data, len, sha256_block_data_order_neon); kernel_neon_end(); - return sha256_base_finish(desc, out); } -static int crypto_sha256_neon_final(struct shash_desc *desc, u8 *out) -{ - return crypto_sha256_neon_finup(desc, NULL, 0, out); -} - struct shash_alg sha256_neon_algs[] = { { .digestsize = SHA256_DIGEST_SIZE, .init = sha256_base_init, .update = crypto_sha256_neon_update, - .final = crypto_sha256_neon_final, .finup = crypto_sha256_neon_finup, - .descsize = sizeof(struct sha256_state), + .descsize = sizeof(struct crypto_sha256_state), .base = { .cra_name = "sha256", .cra_driver_name = "sha256-neon", .cra_priority = 250, + .cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY | + CRYPTO_AHASH_ALG_FINUP_MAX, .cra_blocksize = SHA256_BLOCK_SIZE, .cra_module = THIS_MODULE, } @@ -79,13 +61,14 @@ struct shash_alg sha256_neon_algs[] = { { .digestsize = SHA224_DIGEST_SIZE, .init = sha224_base_init, .update = crypto_sha256_neon_update, - .final = crypto_sha256_neon_final, .finup = crypto_sha256_neon_finup, - .descsize = sizeof(struct sha256_state), + .descsize = sizeof(struct crypto_sha256_state), .base = { .cra_name = "sha224", .cra_driver_name = "sha224-neon", .cra_priority = 250, + .cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY | + CRYPTO_AHASH_ALG_FINUP_MAX, .cra_blocksize = SHA224_BLOCK_SIZE, .cra_module = THIS_MODULE, } From patchwork Wed Apr 16 06:44:01 2025 
X-Patchwork-Submitter: Herbert Xu
X-Patchwork-Id: 882022
Date: Wed, 16 Apr 2025 14:44:01 +0800
Subject: [PATCH 34/67] crypto: arm64/sha256-ce - Use API partial block handling
From: Herbert Xu
To: Linux Crypto Mailing List

Use the Crypto API partial block handling.

Also remove the unnecessary SIMD fallback path.
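The conversion keeps the driver's trick of letting the Crypto Extensions assembly finalize the digest itself whenever finup() receives a whole, non-zero number of blocks; otherwise it falls through to the generic finup padding path. The predicate in isolation, as a toy stand-alone version:

```c
#include <assert.h>
#include <stdbool.h>

#define BLK 64 /* SHA256_BLOCK_SIZE */

/*
 * Toy version of the glue code's check; mirrors the diff's
 * "finalize = !(len % SHA256_BLOCK_SIZE) && len".
 */
static bool ce_can_finalize(unsigned int len)
{
	return !(len % BLK) && len;
}
```

Zero-length finup must take the padding path (there is always padding to emit), as must any tail that is not block-aligned.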
Signed-off-by: Herbert Xu
---
 arch/arm64/crypto/sha2-ce-glue.c | 90 +++++++-------------------
 1 file changed, 18 insertions(+), 72 deletions(-)

diff --git a/arch/arm64/crypto/sha2-ce-glue.c b/arch/arm64/crypto/sha2-ce-glue.c
index 6b4866a88ded..912c215101eb 100644
--- a/arch/arm64/crypto/sha2-ce-glue.c
+++ b/arch/arm64/crypto/sha2-ce-glue.c
@@ -6,15 +6,13 @@
  */

 #include
-#include
-#include
 #include
-#include
 #include
 #include
 #include
-#include
+#include
 #include
+#include

 MODULE_DESCRIPTION("SHA-224/SHA-256 secure hash using ARMv8 Crypto Extensions");
 MODULE_AUTHOR("Ard Biesheuvel ");
@@ -23,7 +21,7 @@ MODULE_ALIAS_CRYPTO("sha224");
 MODULE_ALIAS_CRYPTO("sha256");

 struct sha256_ce_state {
-        struct sha256_state sst;
+        struct crypto_sha256_state sst;
         u32 finalize;
 };

@@ -33,7 +31,7 @@ extern const u32 sha256_ce_offsetof_finalize;
 asmlinkage int __sha256_ce_transform(struct sha256_ce_state *sst, u8 const *src,
                                      int blocks);

-static void sha256_ce_transform(struct sha256_state *sst, u8 const *src,
+static void sha256_ce_transform(struct crypto_sha256_state *sst, u8 const *src,
                                 int blocks)
 {
         while (blocks) {
@@ -54,42 +52,21 @@ const u32 sha256_ce_offsetof_count = offsetof(struct sha256_ce_state,
 const u32 sha256_ce_offsetof_finalize = offsetof(struct sha256_ce_state,
                                                  finalize);

-asmlinkage void sha256_block_data_order(u32 *digest, u8 const *src, int blocks);
-
-static void sha256_arm64_transform(struct sha256_state *sst, u8 const *src,
-                                   int blocks)
-{
-        sha256_block_data_order(sst->state, src, blocks);
-}
-
 static int sha256_ce_update(struct shash_desc *desc, const u8 *data,
                             unsigned int len)
 {
         struct sha256_ce_state *sctx = shash_desc_ctx(desc);

-        if (!crypto_simd_usable())
-                return sha256_base_do_update(desc, data, len,
-                                             sha256_arm64_transform);
-
         sctx->finalize = 0;
-        sha256_base_do_update(desc, data, len, sha256_ce_transform);
-
-        return 0;
+        return sha256_base_do_update_blocks(desc, data, len,
+                                            sha256_ce_transform);
 }

 static int sha256_ce_finup(struct shash_desc *desc, const u8 *data,
                            unsigned int len, u8 *out)
 {
         struct sha256_ce_state *sctx = shash_desc_ctx(desc);
-        bool finalize = !sctx->sst.count && !(len % SHA256_BLOCK_SIZE) && len;
-
-        if (!crypto_simd_usable()) {
-                if (len)
-                        sha256_base_do_update(desc, data, len,
-                                              sha256_arm64_transform);
-                sha256_base_do_finalize(desc, sha256_arm64_transform);
-                return sha256_base_finish(desc, out);
-        }
+        bool finalize = !(len % SHA256_BLOCK_SIZE) && len;

         /*
          * Allow the asm code to perform the finalization if there is no
@@ -97,23 +74,11 @@ static int sha256_ce_finup(struct shash_desc *desc, const u8 *data,
          */
         sctx->finalize = finalize;

-        sha256_base_do_update(desc, data, len, sha256_ce_transform);
-        if (!finalize)
-                sha256_base_do_finalize(desc, sha256_ce_transform);
-        return sha256_base_finish(desc, out);
-}
-
-static int sha256_ce_final(struct shash_desc *desc, u8 *out)
-{
-        struct sha256_ce_state *sctx = shash_desc_ctx(desc);
-
-        if (!crypto_simd_usable()) {
-                sha256_base_do_finalize(desc, sha256_arm64_transform);
-                return sha256_base_finish(desc, out);
-        }
-
-        sctx->finalize = 0;
-        sha256_base_do_finalize(desc, sha256_ce_transform);
+        if (finalize)
+                sha256_base_do_update_blocks(desc, data, len,
+                                             sha256_ce_transform);
+        else
+                sha256_base_do_finup(desc, data, len, sha256_ce_transform);
         return sha256_base_finish(desc, out);
 }

@@ -124,55 +89,36 @@ static int sha256_ce_digest(struct shash_desc *desc, const u8 *data,
         return sha256_ce_finup(desc, data, len, out);
 }

-static int sha256_ce_export(struct shash_desc *desc, void *out)
-{
-        struct sha256_ce_state *sctx = shash_desc_ctx(desc);
-
-        memcpy(out, &sctx->sst, sizeof(struct sha256_state));
-        return 0;
-}
-
-static int sha256_ce_import(struct shash_desc *desc, const void *in)
-{
-        struct sha256_ce_state *sctx = shash_desc_ctx(desc);
-
-        memcpy(&sctx->sst, in, sizeof(struct sha256_state));
-        sctx->finalize = 0;
-        return 0;
-}
-
 static struct shash_alg algs[] = { {
         .init = sha224_base_init,
         .update = sha256_ce_update,
-        .final = sha256_ce_final,
         .finup = sha256_ce_finup,
-        .export = sha256_ce_export,
-        .import = sha256_ce_import,
         .descsize = sizeof(struct sha256_ce_state),
-        .statesize = sizeof(struct sha256_state),
+        .statesize = sizeof(struct crypto_sha256_state),
         .digestsize = SHA224_DIGEST_SIZE,
         .base = {
                 .cra_name = "sha224",
                 .cra_driver_name = "sha224-ce",
                 .cra_priority = 200,
+                .cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY |
+                             CRYPTO_AHASH_ALG_FINUP_MAX,
                 .cra_blocksize = SHA256_BLOCK_SIZE,
                 .cra_module = THIS_MODULE,
         }
 }, {
         .init = sha256_base_init,
         .update = sha256_ce_update,
-        .final = sha256_ce_final,
         .finup = sha256_ce_finup,
         .digest = sha256_ce_digest,
-        .export = sha256_ce_export,
-        .import = sha256_ce_import,
         .descsize = sizeof(struct sha256_ce_state),
-        .statesize = sizeof(struct sha256_state),
+        .statesize = sizeof(struct crypto_sha256_state),
         .digestsize = SHA256_DIGEST_SIZE,
         .base = {
                 .cra_name = "sha256",
                 .cra_driver_name = "sha256-ce",
                 .cra_priority = 200,
+                .cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY |
+                             CRYPTO_AHASH_ALG_FINUP_MAX,
                 .cra_blocksize = SHA256_BLOCK_SIZE,
                 .cra_module = THIS_MODULE,
         }

From patchwork Wed Apr 16 06:44:05 2025
Date: Wed, 16 Apr 2025 14:44:05 +0800
From: Herbert Xu
Subject: [PATCH 36/67] crypto: powerpc/sha256-spe - Use API partial block handling
To: Linux Crypto Mailing List
X-Mailing-List: linux-crypto@vger.kernel.org

Use the Crypto API partial block handling.

Signed-off-by: Herbert Xu
---
 arch/powerpc/crypto/sha256-spe-glue.c | 167 +++++---------------------
 1 file changed, 30 insertions(+), 137 deletions(-)

diff --git a/arch/powerpc/crypto/sha256-spe-glue.c b/arch/powerpc/crypto/sha256-spe-glue.c
index 2997d13236e0..42c76bf8062d 100644
--- a/arch/powerpc/crypto/sha256-spe-glue.c
+++ b/arch/powerpc/crypto/sha256-spe-glue.c
@@ -8,16 +8,13 @@
  * Copyright (c) 2015 Markus Stockhausen
  */

+#include
 #include
-#include
-#include
-#include
-#include
 #include
 #include
-#include
-#include
-#include
+#include
+#include
+#include

 /*
  * MAX_BYTES defines the number of bytes that are allowed to be processed
@@ -47,151 +44,48 @@ static void spe_end(void)
         preempt_enable();
 }

-static inline void ppc_sha256_clear_context(struct sha256_state *sctx)
+static void ppc_spe_sha256_block(struct crypto_sha256_state *sctx,
+                                 const u8 *src, int blocks)
 {
-        int count = sizeof(struct sha256_state) >> 2;
-        u32 *ptr = (u32 *)sctx;
+        do {
+                /* cut input data into smaller blocks */
+                int unit = min(blocks, MAX_BYTES / SHA256_BLOCK_SIZE);

-        /* make sure we can clear the fast way */
-        BUILD_BUG_ON(sizeof(struct sha256_state) % 4);
-        do { *ptr++ = 0; } while (--count);
+                spe_begin();
+                ppc_spe_sha256_transform(sctx->state, src, unit);
+                spe_end();
+
+                src += unit * SHA256_BLOCK_SIZE;
+                blocks -= unit;
+        } while (blocks);
 }

 static int ppc_spe_sha256_update(struct shash_desc *desc, const u8 *data,
                                  unsigned int len)
 {
-        struct sha256_state *sctx = shash_desc_ctx(desc);
-        const unsigned int offset = sctx->count & 0x3f;
-        const unsigned int avail = 64 - offset;
-        unsigned int bytes;
-        const u8 *src = data;
-
-        if (avail > len) {
-                sctx->count += len;
-                memcpy((char *)sctx->buf + offset, src, len);
-                return 0;
-        }
-
-        sctx->count += len;
-
-        if (offset) {
-                memcpy((char *)sctx->buf + offset, src, avail);
-
-                spe_begin();
-                ppc_spe_sha256_transform(sctx->state, (const u8 *)sctx->buf, 1);
-                spe_end();
-
-                len -= avail;
-                src += avail;
-        }
-
-        while (len > 63) {
-                /* cut input data into smaller blocks */
-                bytes = (len > MAX_BYTES) ? MAX_BYTES : len;
-                bytes = bytes & ~0x3f;
-
-                spe_begin();
-                ppc_spe_sha256_transform(sctx->state, src, bytes >> 6);
-                spe_end();
-
-                src += bytes;
-                len -= bytes;
-        }
-
-        memcpy((char *)sctx->buf, src, len);
-        return 0;
+        return sha256_base_do_update_blocks(desc, data, len,
+                                            ppc_spe_sha256_block);
 }

-static int ppc_spe_sha256_final(struct shash_desc *desc, u8 *out)
+static int ppc_spe_sha256_finup(struct shash_desc *desc, const u8 *src,
+                                unsigned int len, u8 *out)
 {
-        struct sha256_state *sctx = shash_desc_ctx(desc);
-        const unsigned int offset = sctx->count & 0x3f;
-        char *p = (char *)sctx->buf + offset;
-        int padlen;
-        __be64 *pbits = (__be64 *)(((char *)&sctx->buf) + 56);
-        __be32 *dst = (__be32 *)out;
-
-        padlen = 55 - offset;
-        *p++ = 0x80;
-
-        spe_begin();
-
-        if (padlen < 0) {
-                memset(p, 0x00, padlen + sizeof (u64));
-                ppc_spe_sha256_transform(sctx->state, sctx->buf, 1);
-                p = (char *)sctx->buf;
-                padlen = 56;
-        }
-
-        memset(p, 0, padlen);
-        *pbits = cpu_to_be64(sctx->count << 3);
-        ppc_spe_sha256_transform(sctx->state, sctx->buf, 1);
-
-        spe_end();
-
-        dst[0] = cpu_to_be32(sctx->state[0]);
-        dst[1] = cpu_to_be32(sctx->state[1]);
-        dst[2] = cpu_to_be32(sctx->state[2]);
-        dst[3] = cpu_to_be32(sctx->state[3]);
-        dst[4] = cpu_to_be32(sctx->state[4]);
-        dst[5] = cpu_to_be32(sctx->state[5]);
-        dst[6] = cpu_to_be32(sctx->state[6]);
-        dst[7] = cpu_to_be32(sctx->state[7]);
-
-        ppc_sha256_clear_context(sctx);
-        return 0;
-}
-
-static int ppc_spe_sha224_final(struct shash_desc *desc, u8 *out)
-{
-        __be32 D[SHA256_DIGEST_SIZE >> 2];
-        __be32 *dst = (__be32 *)out;
-
-        ppc_spe_sha256_final(desc, (u8 *)D);
-
-        /* avoid bytewise memcpy */
-        dst[0] = D[0];
-        dst[1] = D[1];
-        dst[2] = D[2];
-        dst[3] = D[3];
-        dst[4] = D[4];
-        dst[5] = D[5];
-        dst[6] = D[6];
-
-        /* clear sensitive data */
-        memzero_explicit(D, SHA256_DIGEST_SIZE);
-        return 0;
-}
-
-static int ppc_spe_sha256_export(struct shash_desc *desc, void *out)
-{
-        struct sha256_state *sctx = shash_desc_ctx(desc);
-
-        memcpy(out, sctx, sizeof(*sctx));
-        return 0;
-}
-
-static int ppc_spe_sha256_import(struct shash_desc *desc, const void *in)
-{
-        struct sha256_state *sctx = shash_desc_ctx(desc);
-
-        memcpy(sctx, in, sizeof(*sctx));
-        return 0;
+        sha256_base_do_finup(desc, src, len, ppc_spe_sha256_block);
+        return sha256_base_finish(desc, out);
 }

 static struct shash_alg algs[2] = { {
         .digestsize = SHA256_DIGEST_SIZE,
         .init = sha256_base_init,
         .update = ppc_spe_sha256_update,
-        .final = ppc_spe_sha256_final,
-        .export = ppc_spe_sha256_export,
-        .import = ppc_spe_sha256_import,
-        .descsize = sizeof(struct sha256_state),
-        .statesize = sizeof(struct sha256_state),
+        .finup = ppc_spe_sha256_finup,
+        .descsize = sizeof(struct crypto_sha256_state),
         .base = {
                 .cra_name = "sha256",
                 .cra_driver_name= "sha256-ppc-spe",
                 .cra_priority = 300,
+                .cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY |
+                             CRYPTO_AHASH_ALG_FINUP_MAX,
                 .cra_blocksize = SHA256_BLOCK_SIZE,
                 .cra_module = THIS_MODULE,
         }
@@ -199,15 +93,14 @@ static struct shash_alg algs[2] = { {
         .digestsize = SHA224_DIGEST_SIZE,
         .init = sha224_base_init,
         .update = ppc_spe_sha256_update,
-        .final = ppc_spe_sha224_final,
-        .export = ppc_spe_sha256_export,
-        .import = ppc_spe_sha256_import,
-        .descsize = sizeof(struct sha256_state),
-        .statesize = sizeof(struct sha256_state),
+        .finup = ppc_spe_sha256_finup,
+        .descsize = sizeof(struct crypto_sha256_state),
         .base = {
                 .cra_name = "sha224",
                 .cra_driver_name= "sha224-ppc-spe",
                 .cra_priority = 300,
+                .cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY |
+                             CRYPTO_AHASH_ALG_FINUP_MAX,
                 .cra_blocksize = SHA224_BLOCK_SIZE,
                 .cra_module = THIS_MODULE,
         }

From patchwork Wed Apr 16 06:44:10 2025
Date: Wed, 16 Apr 2025 14:44:10 +0800
From: Herbert Xu
Subject: [PATCH 38/67] crypto: sparc/sha256 - Use API partial block handling
To: Linux Crypto Mailing List
X-Mailing-List: linux-crypto@vger.kernel.org

Use the Crypto API partial block handling.
Signed-off-by: Herbert Xu
---
 arch/sparc/crypto/sha256_glue.c | 121 ++++++--------------------
 1 file changed, 20 insertions(+), 101 deletions(-)

diff --git a/arch/sparc/crypto/sha256_glue.c b/arch/sparc/crypto/sha256_glue.c
index 285561a1cde5..ddb250242faf 100644
--- a/arch/sparc/crypto/sha256_glue.c
+++ b/arch/sparc/crypto/sha256_glue.c
@@ -11,133 +11,50 @@

 #define pr_fmt(fmt)     KBUILD_MODNAME ": " fmt

+#include
+#include
 #include
-#include
-#include
-#include
-#include
 #include
 #include
-
-#include
-#include
+#include
+#include

 #include "opcodes.h"

 asmlinkage void sha256_sparc64_transform(u32 *digest, const char *data,
                                          unsigned int rounds);

-static void __sha256_sparc64_update(struct sha256_state *sctx, const u8 *data,
-                                    unsigned int len, unsigned int partial)
+static void sha256_block(struct crypto_sha256_state *sctx, const u8 *src,
+                         int blocks)
 {
-        unsigned int done = 0;
-
-        sctx->count += len;
-        if (partial) {
-                done = SHA256_BLOCK_SIZE - partial;
-                memcpy(sctx->buf + partial, data, done);
-                sha256_sparc64_transform(sctx->state, sctx->buf, 1);
-        }
-        if (len - done >= SHA256_BLOCK_SIZE) {
-                const unsigned int rounds = (len - done) / SHA256_BLOCK_SIZE;
-
-                sha256_sparc64_transform(sctx->state, data + done, rounds);
-                done += rounds * SHA256_BLOCK_SIZE;
-        }
-
-        memcpy(sctx->buf, data + done, len - done);
+        sha256_sparc64_transform(sctx->state, src, blocks);
 }

 static int sha256_sparc64_update(struct shash_desc *desc, const u8 *data,
                                  unsigned int len)
 {
-        struct sha256_state *sctx = shash_desc_ctx(desc);
-        unsigned int partial = sctx->count % SHA256_BLOCK_SIZE;
-
-        /* Handle the fast case right here */
-        if (partial + len < SHA256_BLOCK_SIZE) {
-                sctx->count += len;
-                memcpy(sctx->buf + partial, data, len);
-        } else
-                __sha256_sparc64_update(sctx, data, len, partial);
-
-        return 0;
+        return sha256_base_do_update_blocks(desc, data, len, sha256_block);
 }

-static int sha256_sparc64_final(struct shash_desc *desc, u8 *out)
+static int sha256_sparc64_finup(struct shash_desc *desc, const u8 *src,
+                                unsigned int len, u8 *out)
 {
-        struct sha256_state *sctx = shash_desc_ctx(desc);
-        unsigned int i, index, padlen;
-        __be32 *dst = (__be32 *)out;
-        __be64 bits;
-        static const u8 padding[SHA256_BLOCK_SIZE] = { 0x80, };
-
-        bits = cpu_to_be64(sctx->count << 3);
-
-        /* Pad out to 56 mod 64 and append length */
-        index = sctx->count % SHA256_BLOCK_SIZE;
-        padlen = (index < 56) ? (56 - index) : ((SHA256_BLOCK_SIZE+56) - index);
-
-        /* We need to fill a whole block for __sha256_sparc64_update() */
-        if (padlen <= 56) {
-                sctx->count += padlen;
-                memcpy(sctx->buf + index, padding, padlen);
-        } else {
-                __sha256_sparc64_update(sctx, padding, padlen, index);
-        }
-        __sha256_sparc64_update(sctx, (const u8 *)&bits, sizeof(bits), 56);
-
-        /* Store state in digest */
-        for (i = 0; i < 8; i++)
-                dst[i] = cpu_to_be32(sctx->state[i]);
-
-        /* Wipe context */
-        memset(sctx, 0, sizeof(*sctx));
-
-        return 0;
-}
-
-static int sha224_sparc64_final(struct shash_desc *desc, u8 *hash)
-{
-        u8 D[SHA256_DIGEST_SIZE];
-
-        sha256_sparc64_final(desc, D);
-
-        memcpy(hash, D, SHA224_DIGEST_SIZE);
-        memzero_explicit(D, SHA256_DIGEST_SIZE);
-
-        return 0;
-}
-
-static int sha256_sparc64_export(struct shash_desc *desc, void *out)
-{
-        struct sha256_state *sctx = shash_desc_ctx(desc);
-
-        memcpy(out, sctx, sizeof(*sctx));
-        return 0;
-}
-
-static int sha256_sparc64_import(struct shash_desc *desc, const void *in)
-{
-        struct sha256_state *sctx = shash_desc_ctx(desc);
-
-        memcpy(sctx, in, sizeof(*sctx));
-        return 0;
+        sha256_base_do_finup(desc, src, len, sha256_block);
+        return sha256_base_finish(desc, out);
 }

 static struct shash_alg sha256_alg = {
         .digestsize = SHA256_DIGEST_SIZE,
         .init = sha256_base_init,
         .update = sha256_sparc64_update,
-        .final = sha256_sparc64_final,
-        .export = sha256_sparc64_export,
-        .import = sha256_sparc64_import,
-        .descsize = sizeof(struct sha256_state),
-        .statesize = sizeof(struct sha256_state),
+        .finup = sha256_sparc64_finup,
+        .descsize = sizeof(struct crypto_sha256_state),
         .base = {
                 .cra_name = "sha256",
                 .cra_driver_name= "sha256-sparc64",
                 .cra_priority = SPARC_CR_OPCODE_PRIORITY,
+                .cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY |
+                             CRYPTO_AHASH_ALG_FINUP_MAX,
                 .cra_blocksize = SHA256_BLOCK_SIZE,
                 .cra_module = THIS_MODULE,
         }
@@ -147,12 +64,14 @@ static struct shash_alg sha224_alg = {
         .digestsize = SHA224_DIGEST_SIZE,
         .init = sha224_base_init,
         .update = sha256_sparc64_update,
-        .final = sha224_sparc64_final,
-        .descsize = sizeof(struct sha256_state),
+        .finup = sha256_sparc64_finup,
+        .descsize = sizeof(struct crypto_sha256_state),
         .base = {
                 .cra_name = "sha224",
                 .cra_driver_name= "sha224-sparc64",
                 .cra_priority = SPARC_CR_OPCODE_PRIORITY,
+                .cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY |
+                             CRYPTO_AHASH_ALG_FINUP_MAX,
                 .cra_blocksize = SHA224_BLOCK_SIZE,
                 .cra_module = THIS_MODULE,
         }

From patchwork Wed Apr 16 06:44:15 2025
Date: Wed, 16 Apr 2025 14:44:15 +0800
Message-Id: <6f7f7dac1f3867ddd7bb572000b6b198e2781b5d.1744784515.git.herbert@gondor.apana.org.au>
From: Herbert Xu
Subject: [PATCH 40/67] crypto: arm64/sha3-ce - Use API partial block handling
To: Linux Crypto Mailing List
X-Mailing-List: linux-crypto@vger.kernel.org

Use the Crypto API partial block handling. Also remove the unnecessary
SIMD fallback path.

Signed-off-by: Herbert Xu
---
 arch/arm64/crypto/sha3-ce-glue.c | 107 +++++++++++++------------------
 arch/s390/crypto/sha.h           |   1 -
 include/crypto/sha3.h            |   4 +-
 3 files changed, 49 insertions(+), 63 deletions(-)

diff --git a/arch/arm64/crypto/sha3-ce-glue.c b/arch/arm64/crypto/sha3-ce-glue.c
index 5662c3ac49e9..b4f1001046c9 100644
--- a/arch/arm64/crypto/sha3-ce-glue.c
+++ b/arch/arm64/crypto/sha3-ce-glue.c
@@ -12,13 +12,13 @@
 #include
 #include
 #include
-#include
 #include
-#include
 #include
 #include
-#include
+#include
 #include
+#include
+#include

 MODULE_DESCRIPTION("SHA3 secure hash using ARMv8 Crypto Extensions");
 MODULE_AUTHOR("Ard Biesheuvel ");
@@ -35,74 +35,55 @@ static int sha3_update(struct shash_desc *desc, const u8 *data,
                        unsigned int len)
 {
         struct sha3_state *sctx = shash_desc_ctx(desc);
-        unsigned int digest_size = crypto_shash_digestsize(desc->tfm);
+        struct crypto_shash *tfm = desc->tfm;
+        unsigned int bs, ds;
+        int blocks;

-        if (!crypto_simd_usable())
-                return crypto_sha3_update(desc, data, len);
+        ds = crypto_shash_digestsize(tfm);
+        bs = crypto_shash_blocksize(tfm);
+        blocks = len / bs;
+        len -= blocks * bs;
+        do {
+                int rem;

-        if ((sctx->partial + len) >= sctx->rsiz) {
-                int blocks;
-
-                if (sctx->partial) {
-                        int p = sctx->rsiz - sctx->partial;
-
-                        memcpy(sctx->buf + sctx->partial, data, p);
-                        kernel_neon_begin();
-                        sha3_ce_transform(sctx->st, sctx->buf, 1, digest_size);
-                        kernel_neon_end();
-
-                        data += p;
-                        len -= p;
-                        sctx->partial = 0;
-                }
-
-                blocks = len / sctx->rsiz;
-                len %= sctx->rsiz;
-
-                while (blocks) {
-                        int rem;
-
-                        kernel_neon_begin();
-                        rem = sha3_ce_transform(sctx->st, data, blocks,
-                                                digest_size);
-                        kernel_neon_end();
-                        data += (blocks - rem) * sctx->rsiz;
-                        blocks = rem;
-                }
-        }
-
-        if (len) {
-                memcpy(sctx->buf + sctx->partial, data, len);
-                sctx->partial += len;
-        }
-        return 0;
+                kernel_neon_begin();
+                rem = sha3_ce_transform(sctx->st, data, blocks, ds);
+                kernel_neon_end();
+                data += (blocks - rem) * bs;
+                blocks = rem;
+        } while (blocks);
+        return len;
 }

-static int sha3_final(struct shash_desc *desc, u8 *out)
+static int sha3_finup(struct shash_desc *desc, const u8 *src, unsigned int len,
+                      u8 *out)
 {
         struct sha3_state *sctx = shash_desc_ctx(desc);
-        unsigned int digest_size = crypto_shash_digestsize(desc->tfm);
+        struct crypto_shash *tfm = desc->tfm;
         __le64 *digest = (__le64 *)out;
+        u8 block[SHA3_224_BLOCK_SIZE];
+        unsigned int bs, ds;
         int i;

-        if (!crypto_simd_usable())
-                return crypto_sha3_final(desc, out);
+        ds = crypto_shash_digestsize(tfm);
+        bs = crypto_shash_blocksize(tfm);
+        memcpy(block, src, len);

-        sctx->buf[sctx->partial++] = 0x06;
-        memset(sctx->buf + sctx->partial, 0, sctx->rsiz - sctx->partial);
-        sctx->buf[sctx->rsiz - 1] |= 0x80;
+        block[len++] = 0x06;
+        memset(block + len, 0, bs - len);
+        block[bs - 1] |= 0x80;

         kernel_neon_begin();
-        sha3_ce_transform(sctx->st, sctx->buf, 1, digest_size);
+        sha3_ce_transform(sctx->st, block, 1, ds);
         kernel_neon_end();
+        memzero_explicit(block , sizeof(block));

-        for (i = 0; i < digest_size / 8; i++)
+        for (i = 0; i < ds / 8; i++)
                 put_unaligned_le64(sctx->st[i], digest++);

-        if (digest_size & 4)
+        if (ds & 4)
                 put_unaligned_le32(sctx->st[i], (__le32 *)digest);

-        memzero_explicit(sctx, sizeof(*sctx));
         return 0;
 }

@@ -110,10 +91,11 @@ static struct shash_alg algs[] = { {
         .digestsize = SHA3_224_DIGEST_SIZE,
         .init = crypto_sha3_init,
         .update = sha3_update,
-        .final = sha3_final,
-        .descsize = sizeof(struct sha3_state),
+        .finup = sha3_finup,
+        .descsize = SHA3_STATE_SIZE,
         .base.cra_name = "sha3-224",
         .base.cra_driver_name = "sha3-224-ce",
+        .base.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY,
         .base.cra_blocksize = SHA3_224_BLOCK_SIZE,
         .base.cra_module = THIS_MODULE,
         .base.cra_priority = 200,
@@ -121,10 +103,11 @@ static struct shash_alg algs[] = { {
         .digestsize = SHA3_256_DIGEST_SIZE,
         .init = crypto_sha3_init,
         .update = sha3_update,
-        .final = sha3_final,
-        .descsize = sizeof(struct sha3_state),
+        .finup = sha3_finup,
+        .descsize = SHA3_STATE_SIZE,
         .base.cra_name = "sha3-256",
         .base.cra_driver_name = "sha3-256-ce",
+        .base.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY,
         .base.cra_blocksize = SHA3_256_BLOCK_SIZE,
         .base.cra_module = THIS_MODULE,
         .base.cra_priority = 200,
@@ -132,10 +115,11 @@ static struct shash_alg algs[] = { {
         .digestsize = SHA3_384_DIGEST_SIZE,
         .init = crypto_sha3_init,
         .update = sha3_update,
-        .final = sha3_final,
-        .descsize = sizeof(struct sha3_state),
+        .finup = sha3_finup,
+        .descsize = SHA3_STATE_SIZE,
         .base.cra_name = "sha3-384",
         .base.cra_driver_name = "sha3-384-ce",
+        .base.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY,
         .base.cra_blocksize = SHA3_384_BLOCK_SIZE,
         .base.cra_module = THIS_MODULE,
         .base.cra_priority = 200,
@@ -143,10 +127,11 @@ static struct shash_alg algs[] = { {
         .digestsize = SHA3_512_DIGEST_SIZE,
         .init = crypto_sha3_init,
         .update = sha3_update,
-        .final = sha3_final,
-        .descsize = sizeof(struct sha3_state),
+        .finup = sha3_finup,
+        .descsize = SHA3_STATE_SIZE,
         .base.cra_name = "sha3-512",
         .base.cra_driver_name = "sha3-512-ce",
+        .base.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY,
         .base.cra_blocksize = SHA3_512_BLOCK_SIZE,
         .base.cra_module = THIS_MODULE,
         .base.cra_priority = 200,

diff --git a/arch/s390/crypto/sha.h b/arch/s390/crypto/sha.h
index b8aeb51b2f3d..d95437ebe1ca 100644
--- a/arch/s390/crypto/sha.h
+++ b/arch/s390/crypto/sha.h
@@ -14,7 +14,6 @@
 #include

 /* must be big enough for the largest SHA variant */
-#define SHA3_STATE_SIZE 200
 #define CPACF_MAX_PARMBLOCK_SIZE SHA3_STATE_SIZE
 #define SHA_MAX_BLOCK_SIZE SHA3_224_BLOCK_SIZE
 #define S390_SHA_CTX_SIZE offsetof(struct s390_sha_ctx, buf)

diff --git a/include/crypto/sha3.h b/include/crypto/sha3.h
index 080f60c2e6b1..661f196193cf 100644
--- a/include/crypto/sha3.h
+++ b/include/crypto/sha3.h
@@ -17,8 +17,10 @@
 #define SHA3_512_DIGEST_SIZE (512 / 8)
 #define SHA3_512_BLOCK_SIZE (200 - 2 * SHA3_512_DIGEST_SIZE)

+#define SHA3_STATE_SIZE 200
+
 struct sha3_state {
-        u64 st[25];
+        u64 st[SHA3_STATE_SIZE / 8];
         unsigned int rsiz;
         unsigned int rsizw;

From patchwork Wed Apr 16 06:44:19 2025
smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=gondor.apana.org.au Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=gondor.apana.org.au Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=hmeau.com header.i=@hmeau.com header.b="nk1upeBJ" DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=hmeau.com; s=formenos; h=To:Subject:From:References:In-Reply-To:Message-Id:Date:Sender: Reply-To:Cc:MIME-Version:Content-Type:Content-Transfer-Encoding:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:List-Subscribe: List-Post:List-Owner:List-Archive; bh=9yFqIiMvCsKvvzpu6cwAs+/+cq2XCDzg1Y4o9KGxuqk=; b=nk1upeBJh1IW6gIiYEKLaN0uyR +cvzeHzuFl65A4nENo++SSGI4gAh9GX6pC25BHt320WToNLsqcxVXq7JaJERXYQ0h5i14YtOUxCg/ WfegMC0ysfvDnQCQ1ZFvFx7pWtn9bHNOcQV7ZXQp3KGYLVQxRdlGVk4gd0zuoiEAEcPVsZIadZEzS IsMOQziqmLwrD7CfnZi1ne26ZL8GqCQHfkQZQowRgtTKYqOefFIdcrFNtBcCoaWtJE3kG7xdxfqKl JTz6tg1ZJ/mgO3VCJhGUJfbpGoQ+blhL3fDtiD6V48IiuqQ/uL/WMFJZXMH5O7pT7hjn1R7J9T9xG ijpzBdfg==; Received: from loth.rohan.me.apana.org.au ([192.168.167.2]) by formenos.hmeau.com with smtp (Exim 4.96 #2 (Debian)) id 1u4wVP-00G6Og-2H; Wed, 16 Apr 2025 14:44:20 +0800 Received: by loth.rohan.me.apana.org.au (sSMTP sendmail emulation); Wed, 16 Apr 2025 14:44:19 +0800 Date: Wed, 16 Apr 2025 14:44:19 +0800 Message-Id: In-Reply-To: References: From: Herbert Xu Subject: [PATCH 42/67] crypto: sha3-generic - Use API partial block handling To: Linux Crypto Mailing List Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Use the Crypto API partial block handling. 
Signed-off-by: Herbert Xu --- crypto/sha3_generic.c | 101 ++++++++++++++++++------------------------ include/crypto/sha3.h | 7 +-- 2 files changed, 47 insertions(+), 61 deletions(-) diff --git a/crypto/sha3_generic.c b/crypto/sha3_generic.c index b103642b56ea..41d1e506e6de 100644 --- a/crypto/sha3_generic.c +++ b/crypto/sha3_generic.c @@ -9,10 +9,10 @@ * Ard Biesheuvel */ #include -#include -#include -#include #include +#include +#include +#include #include /* @@ -161,68 +161,51 @@ static void keccakf(u64 st[25]) int crypto_sha3_init(struct shash_desc *desc) { struct sha3_state *sctx = shash_desc_ctx(desc); - unsigned int digest_size = crypto_shash_digestsize(desc->tfm); - - sctx->rsiz = 200 - 2 * digest_size; - sctx->rsizw = sctx->rsiz / 8; - sctx->partial = 0; memset(sctx->st, 0, sizeof(sctx->st)); return 0; } EXPORT_SYMBOL(crypto_sha3_init); -int crypto_sha3_update(struct shash_desc *desc, const u8 *data, - unsigned int len) +static int crypto_sha3_update(struct shash_desc *desc, const u8 *data, + unsigned int len) { + unsigned int rsiz = crypto_shash_blocksize(desc->tfm); struct sha3_state *sctx = shash_desc_ctx(desc); - unsigned int done; - const u8 *src; + unsigned int rsizw = rsiz / 8; - done = 0; - src = data; + do { + int i; - if ((sctx->partial + len) > (sctx->rsiz - 1)) { - if (sctx->partial) { - done = -sctx->partial; - memcpy(sctx->buf + sctx->partial, data, - done + sctx->rsiz); - src = sctx->buf; - } + for (i = 0; i < rsizw; i++) + sctx->st[i] ^= get_unaligned_le64(data + 8 * i); + keccakf(sctx->st); - do { - unsigned int i; - - for (i = 0; i < sctx->rsizw; i++) - sctx->st[i] ^= get_unaligned_le64(src + 8 * i); - keccakf(sctx->st); - - done += sctx->rsiz; - src = data + done; - } while (done + (sctx->rsiz - 1) < len); - - sctx->partial = 0; - } - memcpy(sctx->buf + sctx->partial, src, len - done); - sctx->partial += (len - done); - - return 0; + data += rsiz; + len -= rsiz; + } while (len >= rsiz); + return len; } -EXPORT_SYMBOL(crypto_sha3_update); 
-int crypto_sha3_final(struct shash_desc *desc, u8 *out) +static int crypto_sha3_finup(struct shash_desc *desc, const u8 *src, + unsigned int len, u8 *out) { - struct sha3_state *sctx = shash_desc_ctx(desc); - unsigned int i, inlen = sctx->partial; unsigned int digest_size = crypto_shash_digestsize(desc->tfm); + unsigned int rsiz = crypto_shash_blocksize(desc->tfm); + struct sha3_state *sctx = shash_desc_ctx(desc); + __le64 block[SHA3_224_BLOCK_SIZE / 8] = {}; __le64 *digest = (__le64 *)out; + unsigned int rsizw = rsiz / 8; + u8 *p; + int i; - sctx->buf[inlen++] = 0x06; - memset(sctx->buf + inlen, 0, sctx->rsiz - inlen); - sctx->buf[sctx->rsiz - 1] |= 0x80; + p = memcpy(block, src, len); + p[len++] = 0x06; + p[rsiz - 1] |= 0x80; - for (i = 0; i < sctx->rsizw; i++) - sctx->st[i] ^= get_unaligned_le64(sctx->buf + 8 * i); + for (i = 0; i < rsizw; i++) + sctx->st[i] ^= le64_to_cpu(block[i]); + memzero_explicit(block, sizeof(block)); keccakf(sctx->st); @@ -232,49 +215,51 @@ int crypto_sha3_final(struct shash_desc *desc, u8 *out) if (digest_size & 4) put_unaligned_le32(sctx->st[i], (__le32 *)digest); - memset(sctx, 0, sizeof(*sctx)); return 0; } -EXPORT_SYMBOL(crypto_sha3_final); static struct shash_alg algs[] = { { .digestsize = SHA3_224_DIGEST_SIZE, .init = crypto_sha3_init, .update = crypto_sha3_update, - .final = crypto_sha3_final, - .descsize = sizeof(struct sha3_state), + .finup = crypto_sha3_finup, + .descsize = SHA3_STATE_SIZE, .base.cra_name = "sha3-224", .base.cra_driver_name = "sha3-224-generic", + .base.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY, .base.cra_blocksize = SHA3_224_BLOCK_SIZE, .base.cra_module = THIS_MODULE, }, { .digestsize = SHA3_256_DIGEST_SIZE, .init = crypto_sha3_init, .update = crypto_sha3_update, - .final = crypto_sha3_final, - .descsize = sizeof(struct sha3_state), + .finup = crypto_sha3_finup, + .descsize = SHA3_STATE_SIZE, .base.cra_name = "sha3-256", .base.cra_driver_name = "sha3-256-generic", + .base.cra_flags = 
CRYPTO_AHASH_ALG_BLOCK_ONLY, .base.cra_blocksize = SHA3_256_BLOCK_SIZE, .base.cra_module = THIS_MODULE, }, { .digestsize = SHA3_384_DIGEST_SIZE, .init = crypto_sha3_init, .update = crypto_sha3_update, - .final = crypto_sha3_final, - .descsize = sizeof(struct sha3_state), + .finup = crypto_sha3_finup, + .descsize = SHA3_STATE_SIZE, .base.cra_name = "sha3-384", .base.cra_driver_name = "sha3-384-generic", + .base.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY, .base.cra_blocksize = SHA3_384_BLOCK_SIZE, .base.cra_module = THIS_MODULE, }, { .digestsize = SHA3_512_DIGEST_SIZE, .init = crypto_sha3_init, .update = crypto_sha3_update, - .final = crypto_sha3_final, - .descsize = sizeof(struct sha3_state), + .finup = crypto_sha3_finup, + .descsize = SHA3_STATE_SIZE, .base.cra_name = "sha3-512", .base.cra_driver_name = "sha3-512-generic", + .base.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY, .base.cra_blocksize = SHA3_512_BLOCK_SIZE, .base.cra_module = THIS_MODULE, } }; @@ -289,7 +274,7 @@ static void __exit sha3_generic_mod_fini(void) crypto_unregister_shashes(algs, ARRAY_SIZE(algs)); } -subsys_initcall(sha3_generic_mod_init); +module_init(sha3_generic_mod_init); module_exit(sha3_generic_mod_fini); MODULE_LICENSE("GPL"); diff --git a/include/crypto/sha3.h b/include/crypto/sha3.h index 661f196193cf..420b90c5f08a 100644 --- a/include/crypto/sha3.h +++ b/include/crypto/sha3.h @@ -5,6 +5,8 @@ #ifndef __CRYPTO_SHA3_H__ #define __CRYPTO_SHA3_H__ +#include + #define SHA3_224_DIGEST_SIZE (224 / 8) #define SHA3_224_BLOCK_SIZE (200 - 2 * SHA3_224_DIGEST_SIZE) @@ -19,6 +21,8 @@ #define SHA3_STATE_SIZE 200 +struct shash_desc; + struct sha3_state { u64 st[SHA3_STATE_SIZE / 8]; unsigned int rsiz; @@ -29,8 +33,5 @@ struct sha3_state { }; int crypto_sha3_init(struct shash_desc *desc); -int crypto_sha3_update(struct shash_desc *desc, const u8 *data, - unsigned int len); -int crypto_sha3_final(struct shash_desc *desc, u8 *out); #endif From patchwork Wed Apr 16 06:44:24 2025 Content-Type: text/plain; 
charset="utf-8"
X-Patchwork-Submitter: Herbert Xu
X-Patchwork-Id: 882017
Date: Wed, 16 Apr 2025 14:44:24 +0800
Message-Id:
In-Reply-To:
References:
From: Herbert Xu
Subject: [PATCH 44/67] crypto: x86/sha512 - Use API partial block handling
To: Linux Crypto Mailing List

Use the Crypto API partial block handling. Also remove the unnecessary SIMD fallback path.
Signed-off-by: Herbert Xu --- arch/x86/crypto/sha512_ssse3_glue.c | 79 ++++++++++------------------- include/crypto/sha2.h | 1 + include/crypto/sha512_base.h | 54 ++++++++++++++++++-- 3 files changed, 77 insertions(+), 57 deletions(-) diff --git a/arch/x86/crypto/sha512_ssse3_glue.c b/arch/x86/crypto/sha512_ssse3_glue.c index 6d3b85e53d0e..067684c54395 100644 --- a/arch/x86/crypto/sha512_ssse3_glue.c +++ b/arch/x86/crypto/sha512_ssse3_glue.c @@ -27,17 +27,13 @@ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt -#include -#include -#include -#include -#include -#include -#include -#include -#include #include #include +#include +#include +#include +#include +#include asmlinkage void sha512_transform_ssse3(struct sha512_state *state, const u8 *data, int blocks); @@ -45,11 +41,7 @@ asmlinkage void sha512_transform_ssse3(struct sha512_state *state, static int sha512_update(struct shash_desc *desc, const u8 *data, unsigned int len, sha512_block_fn *sha512_xform) { - struct sha512_state *sctx = shash_desc_ctx(desc); - - if (!crypto_simd_usable() || - (sctx->count[0] % SHA512_BLOCK_SIZE) + len < SHA512_BLOCK_SIZE) - return crypto_sha512_update(desc, data, len); + int remain; /* * Make sure struct sha512_state begins directly with the SHA512 @@ -58,22 +50,17 @@ static int sha512_update(struct shash_desc *desc, const u8 *data, BUILD_BUG_ON(offsetof(struct sha512_state, state) != 0); kernel_fpu_begin(); - sha512_base_do_update(desc, data, len, sha512_xform); + remain = sha512_base_do_update_blocks(desc, data, len, sha512_xform); kernel_fpu_end(); - return 0; + return remain; } static int sha512_finup(struct shash_desc *desc, const u8 *data, unsigned int len, u8 *out, sha512_block_fn *sha512_xform) { - if (!crypto_simd_usable()) - return crypto_sha512_finup(desc, data, len, out); - kernel_fpu_begin(); - if (len) - sha512_base_do_update(desc, data, len, sha512_xform); - sha512_base_do_finalize(desc, sha512_xform); + sha512_base_do_finup(desc, data, len, sha512_xform); 
kernel_fpu_end(); return sha512_base_finish(desc, out); @@ -91,23 +78,18 @@ static int sha512_ssse3_finup(struct shash_desc *desc, const u8 *data, return sha512_finup(desc, data, len, out, sha512_transform_ssse3); } -/* Add padding and return the message digest. */ -static int sha512_ssse3_final(struct shash_desc *desc, u8 *out) -{ - return sha512_ssse3_finup(desc, NULL, 0, out); -} - static struct shash_alg sha512_ssse3_algs[] = { { .digestsize = SHA512_DIGEST_SIZE, .init = sha512_base_init, .update = sha512_ssse3_update, - .final = sha512_ssse3_final, .finup = sha512_ssse3_finup, - .descsize = sizeof(struct sha512_state), + .descsize = SHA512_STATE_SIZE, .base = { .cra_name = "sha512", .cra_driver_name = "sha512-ssse3", .cra_priority = 150, + .cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY | + CRYPTO_AHASH_ALG_FINUP_MAX, .cra_blocksize = SHA512_BLOCK_SIZE, .cra_module = THIS_MODULE, } @@ -115,13 +97,14 @@ static struct shash_alg sha512_ssse3_algs[] = { { .digestsize = SHA384_DIGEST_SIZE, .init = sha384_base_init, .update = sha512_ssse3_update, - .final = sha512_ssse3_final, .finup = sha512_ssse3_finup, - .descsize = sizeof(struct sha512_state), + .descsize = SHA512_STATE_SIZE, .base = { .cra_name = "sha384", .cra_driver_name = "sha384-ssse3", .cra_priority = 150, + .cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY | + CRYPTO_AHASH_ALG_FINUP_MAX, .cra_blocksize = SHA384_BLOCK_SIZE, .cra_module = THIS_MODULE, } @@ -167,23 +150,18 @@ static int sha512_avx_finup(struct shash_desc *desc, const u8 *data, return sha512_finup(desc, data, len, out, sha512_transform_avx); } -/* Add padding and return the message digest. 
*/ -static int sha512_avx_final(struct shash_desc *desc, u8 *out) -{ - return sha512_avx_finup(desc, NULL, 0, out); -} - static struct shash_alg sha512_avx_algs[] = { { .digestsize = SHA512_DIGEST_SIZE, .init = sha512_base_init, .update = sha512_avx_update, - .final = sha512_avx_final, .finup = sha512_avx_finup, - .descsize = sizeof(struct sha512_state), + .descsize = SHA512_STATE_SIZE, .base = { .cra_name = "sha512", .cra_driver_name = "sha512-avx", .cra_priority = 160, + .cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY | + CRYPTO_AHASH_ALG_FINUP_MAX, .cra_blocksize = SHA512_BLOCK_SIZE, .cra_module = THIS_MODULE, } @@ -191,13 +169,14 @@ static struct shash_alg sha512_avx_algs[] = { { .digestsize = SHA384_DIGEST_SIZE, .init = sha384_base_init, .update = sha512_avx_update, - .final = sha512_avx_final, .finup = sha512_avx_finup, - .descsize = sizeof(struct sha512_state), + .descsize = SHA512_STATE_SIZE, .base = { .cra_name = "sha384", .cra_driver_name = "sha384-avx", .cra_priority = 160, + .cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY | + CRYPTO_AHASH_ALG_FINUP_MAX, .cra_blocksize = SHA384_BLOCK_SIZE, .cra_module = THIS_MODULE, } @@ -233,23 +212,18 @@ static int sha512_avx2_finup(struct shash_desc *desc, const u8 *data, return sha512_finup(desc, data, len, out, sha512_transform_rorx); } -/* Add padding and return the message digest. 
*/ -static int sha512_avx2_final(struct shash_desc *desc, u8 *out) -{ - return sha512_avx2_finup(desc, NULL, 0, out); -} - static struct shash_alg sha512_avx2_algs[] = { { .digestsize = SHA512_DIGEST_SIZE, .init = sha512_base_init, .update = sha512_avx2_update, - .final = sha512_avx2_final, .finup = sha512_avx2_finup, - .descsize = sizeof(struct sha512_state), + .descsize = SHA512_STATE_SIZE, .base = { .cra_name = "sha512", .cra_driver_name = "sha512-avx2", .cra_priority = 170, + .cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY | + CRYPTO_AHASH_ALG_FINUP_MAX, .cra_blocksize = SHA512_BLOCK_SIZE, .cra_module = THIS_MODULE, } @@ -257,13 +231,14 @@ static struct shash_alg sha512_avx2_algs[] = { { .digestsize = SHA384_DIGEST_SIZE, .init = sha384_base_init, .update = sha512_avx2_update, - .final = sha512_avx2_final, .finup = sha512_avx2_finup, - .descsize = sizeof(struct sha512_state), + .descsize = SHA512_STATE_SIZE, .base = { .cra_name = "sha384", .cra_driver_name = "sha384-avx2", .cra_priority = 170, + .cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY | + CRYPTO_AHASH_ALG_FINUP_MAX, .cra_blocksize = SHA384_BLOCK_SIZE, .cra_module = THIS_MODULE, } diff --git a/include/crypto/sha2.h b/include/crypto/sha2.h index a913bad5dd3b..e9ad7ab955aa 100644 --- a/include/crypto/sha2.h +++ b/include/crypto/sha2.h @@ -19,6 +19,7 @@ #define SHA512_DIGEST_SIZE 64 #define SHA512_BLOCK_SIZE 128 +#define SHA512_STATE_SIZE 80 #define SHA224_H0 0xc1059ed8UL #define SHA224_H1 0x367cd507UL diff --git a/include/crypto/sha512_base.h b/include/crypto/sha512_base.h index 679916a84cb2..8cb172e52dc0 100644 --- a/include/crypto/sha512_base.h +++ b/include/crypto/sha512_base.h @@ -10,10 +10,7 @@ #include #include -#include -#include #include - #include typedef void (sha512_block_fn)(struct sha512_state *sst, u8 const *src, @@ -93,6 +90,55 @@ static inline int sha512_base_do_update(struct shash_desc *desc, return 0; } +static inline int sha512_base_do_update_blocks(struct shash_desc *desc, + const u8 *data, + unsigned 
int len, + sha512_block_fn *block_fn) +{ + unsigned int remain = len - round_down(len, SHA512_BLOCK_SIZE); + struct sha512_state *sctx = shash_desc_ctx(desc); + + len -= remain; + sctx->count[0] += len; + if (sctx->count[0] < len) + sctx->count[1]++; + block_fn(sctx, data, len / SHA512_BLOCK_SIZE); + return remain; +} + +static inline int sha512_base_do_finup(struct shash_desc *desc, const u8 *src, + unsigned int len, + sha512_block_fn *block_fn) +{ + unsigned int bit_offset = SHA512_BLOCK_SIZE / 8 - 2; + struct sha512_state *sctx = shash_desc_ctx(desc); + union { + __be64 b64[SHA512_BLOCK_SIZE / 4]; + u8 u8[SHA512_BLOCK_SIZE * 2]; + } block = {}; + + if (len >= SHA512_BLOCK_SIZE) { + int remain; + + remain = sha512_base_do_update_blocks(desc, src, len, block_fn); + src += len - remain; + len = remain; + } + + if (len >= bit_offset * 8) + bit_offset += SHA512_BLOCK_SIZE / 8; + memcpy(&block, src, len); + block.u8[len] = 0x80; + sctx->count[0] += len; + block.b64[bit_offset] = cpu_to_be64(sctx->count[1] << 3 | + sctx->count[0] >> 61); + block.b64[bit_offset + 1] = cpu_to_be64(sctx->count[0] << 3); + block_fn(sctx, block.u8, (bit_offset + 2) * 8 / SHA512_BLOCK_SIZE); + memzero_explicit(&block, sizeof(block)); + + return 0; +} + static inline int sha512_base_do_finalize(struct shash_desc *desc, sha512_block_fn *block_fn) { @@ -126,8 +172,6 @@ static inline int sha512_base_finish(struct shash_desc *desc, u8 *out) for (i = 0; digest_size > 0; i++, digest_size -= sizeof(__be64)) put_unaligned_be64(sctx->state[i], digest++); - - memzero_explicit(sctx, sizeof(*sctx)); return 0; } From patchwork Wed Apr 16 06:44:28 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 882016 Received: from abb.hmeau.com (abb.hmeau.com [144.6.53.87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with 
ESMTPS id 86DFA22CBF7 for ; Wed, 16 Apr 2025 06:44:32 +0000 (UTC)
Date: Wed, 16 Apr 2025 14:44:28 +0800
Message-Id: <95e06d5e7d8139a2e8dc89bb6c7ad0eb4112ad43.1744784515.git.herbert@gondor.apana.org.au>
In-Reply-To:
References:
From: Herbert Xu
Subject: [PATCH 46/67] crypto: riscv/sha512 - Use API partial block handling
To: Linux Crypto Mailing List

Use the Crypto API partial block handling.
Signed-off-by: Herbert Xu --- arch/riscv/crypto/sha512-riscv64-glue.c | 47 ++++++++++--------------- crypto/sha512_generic.c | 5 +-- include/crypto/sha512_base.h | 6 ++++ 3 files changed, 28 insertions(+), 30 deletions(-) diff --git a/arch/riscv/crypto/sha512-riscv64-glue.c b/arch/riscv/crypto/sha512-riscv64-glue.c index 43b56a08aeb5..4634fca78ae2 100644 --- a/arch/riscv/crypto/sha512-riscv64-glue.c +++ b/arch/riscv/crypto/sha512-riscv64-glue.c @@ -14,7 +14,7 @@ #include #include #include -#include +#include #include /* @@ -24,8 +24,8 @@ asmlinkage void sha512_transform_zvknhb_zvkb( struct sha512_state *state, const u8 *data, int num_blocks); -static int riscv64_sha512_update(struct shash_desc *desc, const u8 *data, - unsigned int len) +static void sha512_block(struct sha512_state *state, const u8 *data, + int num_blocks) { /* * Ensure struct sha512_state begins directly with the SHA-512 @@ -35,35 +35,24 @@ static int riscv64_sha512_update(struct shash_desc *desc, const u8 *data, if (crypto_simd_usable()) { kernel_vector_begin(); - sha512_base_do_update(desc, data, len, - sha512_transform_zvknhb_zvkb); + sha512_transform_zvknhb_zvkb(state, data, num_blocks); kernel_vector_end(); } else { - crypto_sha512_update(desc, data, len); + sha512_generic_block_fn(state, data, num_blocks); } - return 0; +} + +static int riscv64_sha512_update(struct shash_desc *desc, const u8 *data, + unsigned int len) +{ + return sha512_base_do_update_blocks(desc, data, len, sha512_block); } static int riscv64_sha512_finup(struct shash_desc *desc, const u8 *data, unsigned int len, u8 *out) { - if (crypto_simd_usable()) { - kernel_vector_begin(); - if (len) - sha512_base_do_update(desc, data, len, - sha512_transform_zvknhb_zvkb); - sha512_base_do_finalize(desc, sha512_transform_zvknhb_zvkb); - kernel_vector_end(); - - return sha512_base_finish(desc, out); - } - - return crypto_sha512_finup(desc, data, len, out); -} - -static int riscv64_sha512_final(struct shash_desc *desc, u8 *out) -{ - 
return riscv64_sha512_finup(desc, NULL, 0, out); + sha512_base_do_finup(desc, data, len, sha512_block); + return sha512_base_finish(desc, out); } static int riscv64_sha512_digest(struct shash_desc *desc, const u8 *data, @@ -77,14 +66,15 @@ static struct shash_alg riscv64_sha512_algs[] = { { .init = sha512_base_init, .update = riscv64_sha512_update, - .final = riscv64_sha512_final, .finup = riscv64_sha512_finup, .digest = riscv64_sha512_digest, - .descsize = sizeof(struct sha512_state), + .descsize = SHA512_STATE_SIZE, .digestsize = SHA512_DIGEST_SIZE, .base = { .cra_blocksize = SHA512_BLOCK_SIZE, .cra_priority = 300, + .cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY | + CRYPTO_AHASH_ALG_FINUP_MAX, .cra_name = "sha512", .cra_driver_name = "sha512-riscv64-zvknhb-zvkb", .cra_module = THIS_MODULE, @@ -92,13 +82,14 @@ static struct shash_alg riscv64_sha512_algs[] = { }, { .init = sha384_base_init, .update = riscv64_sha512_update, - .final = riscv64_sha512_final, .finup = riscv64_sha512_finup, - .descsize = sizeof(struct sha512_state), + .descsize = SHA512_STATE_SIZE, .digestsize = SHA384_DIGEST_SIZE, .base = { .cra_blocksize = SHA384_BLOCK_SIZE, .cra_priority = 300, + .cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY | + CRYPTO_AHASH_ALG_FINUP_MAX, .cra_name = "sha384", .cra_driver_name = "sha384-riscv64-zvknhb-zvkb", .cra_module = THIS_MODULE, diff --git a/crypto/sha512_generic.c b/crypto/sha512_generic.c index ed81813bd420..27a7346ff143 100644 --- a/crypto/sha512_generic.c +++ b/crypto/sha512_generic.c @@ -145,14 +145,15 @@ sha512_transform(u64 *state, const u8 *input) state[4] += e; state[5] += f; state[6] += g; state[7] += h; } -static void sha512_generic_block_fn(struct sha512_state *sst, u8 const *src, - int blocks) +void sha512_generic_block_fn(struct sha512_state *sst, u8 const *src, + int blocks) { while (blocks--) { sha512_transform(sst->state, src); src += SHA512_BLOCK_SIZE; } } +EXPORT_SYMBOL_GPL(sha512_generic_block_fn); int crypto_sha512_update(struct shash_desc *desc, 
const u8 *data, unsigned int len) diff --git a/include/crypto/sha512_base.h b/include/crypto/sha512_base.h index 8cb172e52dc0..e9f302ec3ede 100644 --- a/include/crypto/sha512_base.h +++ b/include/crypto/sha512_base.h @@ -10,7 +10,10 @@ #include #include +#include +#include #include +#include #include typedef void (sha512_block_fn)(struct sha512_state *sst, u8 const *src, @@ -175,4 +178,7 @@ static inline int sha512_base_finish(struct shash_desc *desc, u8 *out) return 0; } +void sha512_generic_block_fn(struct sha512_state *sst, u8 const *src, + int blocks); + #endif /* _CRYPTO_SHA512_BASE_H */
From patchwork Wed Apr 16 06:44:33 2025
X-Patchwork-Submitter: Herbert Xu
X-Patchwork-Id: 882015
Date: Wed, 16 Apr 2025 14:44:33 +0800
Message-Id:
In-Reply-To:
References:
From: Herbert Xu
Subject: [PATCH 48/67] crypto: arm/sha512-neon - Use API partial block handling
To: Linux Crypto Mailing List

Use the Crypto API partial block handling. Also remove the unnecessary SIMD fallback path.
Signed-off-by: Herbert Xu --- arch/arm/crypto/sha512-neon-glue.c | 43 +++++++++--------------------- 1 file changed, 13 insertions(+), 30 deletions(-) diff --git a/arch/arm/crypto/sha512-neon-glue.c b/arch/arm/crypto/sha512-neon-glue.c index c6e58fe475ac..bd528077fefb 100644 --- a/arch/arm/crypto/sha512-neon-glue.c +++ b/arch/arm/crypto/sha512-neon-glue.c @@ -5,16 +5,13 @@ * Copyright (C) 2015 Linaro Ltd */ +#include #include -#include #include #include -#include +#include #include -#include -#include - #include "sha512.h" MODULE_ALIAS_CRYPTO("sha384-neon"); @@ -26,51 +23,36 @@ asmlinkage void sha512_block_data_order_neon(struct sha512_state *state, static int sha512_neon_update(struct shash_desc *desc, const u8 *data, unsigned int len) { - struct sha512_state *sctx = shash_desc_ctx(desc); - - if (!crypto_simd_usable() || - (sctx->count[0] % SHA512_BLOCK_SIZE) + len < SHA512_BLOCK_SIZE) - return sha512_arm_update(desc, data, len); + int remain; kernel_neon_begin(); - sha512_base_do_update(desc, data, len, sha512_block_data_order_neon); + remain = sha512_base_do_update_blocks(desc, data, len, + sha512_block_data_order_neon); kernel_neon_end(); - - return 0; + return remain; } static int sha512_neon_finup(struct shash_desc *desc, const u8 *data, unsigned int len, u8 *out) { - if (!crypto_simd_usable()) - return sha512_arm_finup(desc, data, len, out); - kernel_neon_begin(); - if (len) - sha512_base_do_update(desc, data, len, - sha512_block_data_order_neon); - sha512_base_do_finalize(desc, sha512_block_data_order_neon); + sha512_base_do_finup(desc, data, len, sha512_block_data_order_neon); kernel_neon_end(); - return sha512_base_finish(desc, out); } -static int sha512_neon_final(struct shash_desc *desc, u8 *out) -{ - return sha512_neon_finup(desc, NULL, 0, out); -} - struct shash_alg sha512_neon_algs[] = { { .init = sha384_base_init, .update = sha512_neon_update, - .final = sha512_neon_final, .finup = sha512_neon_finup, - .descsize = sizeof(struct sha512_state), + 
.descsize = SHA512_STATE_SIZE, .digestsize = SHA384_DIGEST_SIZE, .base = { .cra_name = "sha384", .cra_driver_name = "sha384-neon", .cra_priority = 300, + .cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY | + CRYPTO_AHASH_ALG_FINUP_MAX, .cra_blocksize = SHA384_BLOCK_SIZE, .cra_module = THIS_MODULE, @@ -78,14 +60,15 @@ struct shash_alg sha512_neon_algs[] = { { }, { .init = sha512_base_init, .update = sha512_neon_update, - .final = sha512_neon_final, .finup = sha512_neon_finup, - .descsize = sizeof(struct sha512_state), + .descsize = SHA512_STATE_SIZE, .digestsize = SHA512_DIGEST_SIZE, .base = { .cra_name = "sha512", .cra_driver_name = "sha512-neon", .cra_priority = 300, + .cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY | + CRYPTO_AHASH_ALG_FINUP_MAX, .cra_blocksize = SHA512_BLOCK_SIZE, .cra_module = THIS_MODULE, } From patchwork Wed Apr 16 06:44:38 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 882014
Date: Wed, 16 Apr 2025 14:44:38 +0800 Message-Id:
<40ba9848bba723f73096ae178e8ed235d3fcd6dc.1744784515.git.herbert@gondor.apana.org.au> In-Reply-To: References: From: Herbert Xu Subject: [PATCH 50/67] crypto: arm64/sha512-ce - Use API partial block handling To: Linux Crypto Mailing List Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Use the Crypto API partial block handling. Also remove the unnecessary SIMD fallback path. Signed-off-by: Herbert Xu --- arch/arm64/crypto/sha512-ce-glue.c | 49 ++++++++---------------------- 1 file changed, 12 insertions(+), 37 deletions(-) diff --git a/arch/arm64/crypto/sha512-ce-glue.c b/arch/arm64/crypto/sha512-ce-glue.c index 071f64293227..6fb3001fa2c9 100644 --- a/arch/arm64/crypto/sha512-ce-glue.c +++ b/arch/arm64/crypto/sha512-ce-glue.c @@ -10,14 +10,11 @@ */ #include -#include -#include #include -#include #include #include #include -#include +#include #include MODULE_DESCRIPTION("SHA-384/SHA-512 secure hash using ARMv8 Crypto Extensions"); @@ -29,12 +26,10 @@ MODULE_ALIAS_CRYPTO("sha512"); asmlinkage int __sha512_ce_transform(struct sha512_state *sst, u8 const *src, int blocks); -asmlinkage void sha512_block_data_order(u64 *digest, u8 const *src, int blocks); - static void sha512_ce_transform(struct sha512_state *sst, u8 const *src, int blocks) { - while (blocks) { + do { int rem; kernel_neon_begin(); @@ -42,67 +37,47 @@ static void sha512_ce_transform(struct sha512_state *sst, u8 const *src, kernel_neon_end(); src += (blocks - rem) * SHA512_BLOCK_SIZE; blocks = rem; - } -} - -static void sha512_arm64_transform(struct sha512_state *sst, u8 const *src, - int blocks) -{ - sha512_block_data_order(sst->state, src, blocks); + } while (blocks); } static int sha512_ce_update(struct shash_desc *desc, const u8 *data, unsigned int len) { - sha512_block_fn *fn = crypto_simd_usable() ? 
sha512_ce_transform - : sha512_arm64_transform; - - sha512_base_do_update(desc, data, len, fn); - return 0; + return sha512_base_do_update_blocks(desc, data, len, + sha512_ce_transform); } static int sha512_ce_finup(struct shash_desc *desc, const u8 *data, unsigned int len, u8 *out) { - sha512_block_fn *fn = crypto_simd_usable() ? sha512_ce_transform - : sha512_arm64_transform; - - sha512_base_do_update(desc, data, len, fn); - sha512_base_do_finalize(desc, fn); - return sha512_base_finish(desc, out); -} - -static int sha512_ce_final(struct shash_desc *desc, u8 *out) -{ - sha512_block_fn *fn = crypto_simd_usable() ? sha512_ce_transform - : sha512_arm64_transform; - - sha512_base_do_finalize(desc, fn); + sha512_base_do_finup(desc, data, len, sha512_ce_transform); return sha512_base_finish(desc, out); } static struct shash_alg algs[] = { { .init = sha384_base_init, .update = sha512_ce_update, - .final = sha512_ce_final, .finup = sha512_ce_finup, - .descsize = sizeof(struct sha512_state), + .descsize = SHA512_STATE_SIZE, .digestsize = SHA384_DIGEST_SIZE, .base.cra_name = "sha384", .base.cra_driver_name = "sha384-ce", .base.cra_priority = 200, + .base.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY | + CRYPTO_AHASH_ALG_FINUP_MAX, .base.cra_blocksize = SHA512_BLOCK_SIZE, .base.cra_module = THIS_MODULE, }, { .init = sha512_base_init, .update = sha512_ce_update, - .final = sha512_ce_final, .finup = sha512_ce_finup, - .descsize = sizeof(struct sha512_state), + .descsize = SHA512_STATE_SIZE, .digestsize = SHA512_DIGEST_SIZE, .base.cra_name = "sha512", .base.cra_driver_name = "sha512-ce", .base.cra_priority = 200, + .base.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY | + CRYPTO_AHASH_ALG_FINUP_MAX, .base.cra_blocksize = SHA512_BLOCK_SIZE, .base.cra_module = THIS_MODULE, } }; From patchwork Wed Apr 16 06:44:42 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 882013 Received: from abb.hmeau.com 
Date: Wed, 16 Apr 2025 14:44:42 +0800 Message-Id: <48a7acfd12df9a7ae1fb2ad1c407084e20283d33.1744784515.git.herbert@gondor.apana.org.au> In-Reply-To: References: From: Herbert Xu Subject: [PATCH 52/67] crypto: s390/sha512 - Use API partial block handling To: Linux Crypto Mailing List Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Use the Crypto API partial block handling.
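The interesting part of this conversion is the 128-bit message length: the byte count is now split across `count` and `count_hi`, and finup converts the pair from bytes to bits with a three-bit left shift across both words. A stand-alone sketch of that arithmetic (illustrative, not the kernel code):

```c
#include <stdint.h>

/*
 * Convert a 128-bit byte count (count_hi:count_lo) into the 128-bit
 * bit count expected by the SHA-512 parmblock's mbl field. The three
 * high-order bits of count_lo carry into the high word, mirroring the
 * shift/or sequence used in the patched s390_sha_finup().
 */
static void sha512_mbl(uint64_t count_lo, uint64_t count_hi,
		       uint64_t *bits_hi, uint64_t *bits_lo)
{
	*bits_hi = (count_hi << 3) | (count_lo >> 61);
	*bits_lo = count_lo << 3;
}
```

For example, a byte count of exactly 2^61 (which overflows a 64-bit bit counter) yields a bit count of 2^64, i.e. high word 1, low word 0.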
Signed-off-by: Herbert Xu --- arch/s390/crypto/sha.h | 14 ++-- arch/s390/crypto/sha512_s390.c | 45 ++++++------ arch/s390/crypto/sha_common.c | 124 ++++----------------------------- 3 files changed, 47 insertions(+), 136 deletions(-) diff --git a/arch/s390/crypto/sha.h b/arch/s390/crypto/sha.h index d95437ebe1ca..0a3cc1739144 100644 --- a/arch/s390/crypto/sha.h +++ b/arch/s390/crypto/sha.h @@ -10,28 +10,32 @@ #ifndef _CRYPTO_ARCH_S390_SHA_H #define _CRYPTO_ARCH_S390_SHA_H +#include #include #include /* must be big enough for the largest SHA variant */ #define CPACF_MAX_PARMBLOCK_SIZE SHA3_STATE_SIZE #define SHA_MAX_BLOCK_SIZE SHA3_224_BLOCK_SIZE -#define S390_SHA_CTX_SIZE offsetof(struct s390_sha_ctx, buf) +#define S390_SHA_CTX_SIZE sizeof(struct s390_sha_ctx) struct s390_sha_ctx { u64 count; /* message length in bytes */ - u32 state[CPACF_MAX_PARMBLOCK_SIZE / sizeof(u32)]; + union { + u32 state[CPACF_MAX_PARMBLOCK_SIZE / sizeof(u32)]; + struct { + u64 state[SHA512_DIGEST_SIZE]; + u64 count_hi; + } sha512; + }; int func; /* KIMD function to use */ bool first_message_part; - u8 buf[SHA_MAX_BLOCK_SIZE]; }; struct shash_desc; -int s390_sha_update(struct shash_desc *desc, const u8 *data, unsigned int len); int s390_sha_update_blocks(struct shash_desc *desc, const u8 *data, unsigned int len); -int s390_sha_final(struct shash_desc *desc, u8 *out); int s390_sha_finup(struct shash_desc *desc, const u8 *src, unsigned int len, u8 *out); diff --git a/arch/s390/crypto/sha512_s390.c b/arch/s390/crypto/sha512_s390.c index 04f11c407763..14818fcc9cd4 100644 --- a/arch/s390/crypto/sha512_s390.c +++ b/arch/s390/crypto/sha512_s390.c @@ -7,14 +7,13 @@ * Copyright IBM Corp. 
2007 * Author(s): Jan Glauber (jang@de.ibm.com) */ +#include #include #include +#include #include -#include #include #include -#include -#include #include "sha.h" @@ -22,15 +21,16 @@ static int sha512_init(struct shash_desc *desc) { struct s390_sha_ctx *ctx = shash_desc_ctx(desc); - *(__u64 *)&ctx->state[0] = SHA512_H0; - *(__u64 *)&ctx->state[2] = SHA512_H1; - *(__u64 *)&ctx->state[4] = SHA512_H2; - *(__u64 *)&ctx->state[6] = SHA512_H3; - *(__u64 *)&ctx->state[8] = SHA512_H4; - *(__u64 *)&ctx->state[10] = SHA512_H5; - *(__u64 *)&ctx->state[12] = SHA512_H6; - *(__u64 *)&ctx->state[14] = SHA512_H7; + ctx->sha512.state[0] = SHA512_H0; + ctx->sha512.state[2] = SHA512_H1; + ctx->sha512.state[4] = SHA512_H2; + ctx->sha512.state[6] = SHA512_H3; + ctx->sha512.state[8] = SHA512_H4; + ctx->sha512.state[10] = SHA512_H5; + ctx->sha512.state[12] = SHA512_H6; + ctx->sha512.state[14] = SHA512_H7; ctx->count = 0; + ctx->sha512.count_hi = 0; ctx->func = CPACF_KIMD_SHA_512; return 0; @@ -42,9 +42,8 @@ static int sha512_export(struct shash_desc *desc, void *out) struct sha512_state *octx = out; octx->count[0] = sctx->count; - octx->count[1] = 0; + octx->count[1] = sctx->sha512.count_hi; memcpy(octx->state, sctx->state, sizeof(octx->state)); - memcpy(octx->buf, sctx->buf, sizeof(octx->buf)); return 0; } @@ -53,12 +52,10 @@ static int sha512_import(struct shash_desc *desc, const void *in) struct s390_sha_ctx *sctx = shash_desc_ctx(desc); const struct sha512_state *ictx = in; - if (unlikely(ictx->count[1])) - return -ERANGE; sctx->count = ictx->count[0]; + sctx->sha512.count_hi = ictx->count[1]; memcpy(sctx->state, ictx->state, sizeof(ictx->state)); - memcpy(sctx->buf, ictx->buf, sizeof(ictx->buf)); sctx->func = CPACF_KIMD_SHA_512; return 0; } @@ -66,16 +63,18 @@ static int sha512_import(struct shash_desc *desc, const void *in) static struct shash_alg sha512_alg = { .digestsize = SHA512_DIGEST_SIZE, .init = sha512_init, - .update = s390_sha_update, - .final = s390_sha_final, + .update 
= s390_sha_update_blocks, + .finup = s390_sha_finup, .export = sha512_export, .import = sha512_import, .descsize = sizeof(struct s390_sha_ctx), - .statesize = sizeof(struct sha512_state), + .statesize = SHA512_STATE_SIZE, .base = { .cra_name = "sha512", .cra_driver_name= "sha512-s390", .cra_priority = 300, + .cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY | + CRYPTO_AHASH_ALG_FINUP_MAX, .cra_blocksize = SHA512_BLOCK_SIZE, .cra_module = THIS_MODULE, } @@ -104,17 +103,19 @@ static int sha384_init(struct shash_desc *desc) static struct shash_alg sha384_alg = { .digestsize = SHA384_DIGEST_SIZE, .init = sha384_init, - .update = s390_sha_update, - .final = s390_sha_final, + .update = s390_sha_update_blocks, + .finup = s390_sha_finup, .export = sha512_export, .import = sha512_import, .descsize = sizeof(struct s390_sha_ctx), - .statesize = sizeof(struct sha512_state), + .statesize = SHA512_STATE_SIZE, .base = { .cra_name = "sha384", .cra_driver_name= "sha384-s390", .cra_priority = 300, .cra_blocksize = SHA384_BLOCK_SIZE, + .cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY | + CRYPTO_AHASH_ALG_FINUP_MAX, .cra_ctxsize = sizeof(struct s390_sha_ctx), .cra_module = THIS_MODULE, } diff --git a/arch/s390/crypto/sha_common.c b/arch/s390/crypto/sha_common.c index 69e23e0c5394..b5e2c365ea05 100644 --- a/arch/s390/crypto/sha_common.c +++ b/arch/s390/crypto/sha_common.c @@ -13,51 +13,6 @@ #include #include "sha.h" -int s390_sha_update(struct shash_desc *desc, const u8 *data, unsigned int len) -{ - struct s390_sha_ctx *ctx = shash_desc_ctx(desc); - unsigned int bsize = crypto_shash_blocksize(desc->tfm); - unsigned int index, n; - int fc; - - /* how much is already in the buffer? 
*/ - index = ctx->count % bsize; - ctx->count += len; - - if ((index + len) < bsize) - goto store; - - fc = ctx->func; - if (ctx->first_message_part) - fc |= CPACF_KIMD_NIP; - - /* process one stored block */ - if (index) { - memcpy(ctx->buf + index, data, bsize - index); - cpacf_kimd(fc, ctx->state, ctx->buf, bsize); - ctx->first_message_part = 0; - fc &= ~CPACF_KIMD_NIP; - data += bsize - index; - len -= bsize - index; - index = 0; - } - - /* process as many blocks as possible */ - if (len >= bsize) { - n = (len / bsize) * bsize; - cpacf_kimd(fc, ctx->state, data, n); - ctx->first_message_part = 0; - data += n; - len -= n; - } -store: - if (len) - memcpy(ctx->buf + index , data, len); - - return 0; -} -EXPORT_SYMBOL_GPL(s390_sha_update); - int s390_sha_update_blocks(struct shash_desc *desc, const u8 *data, unsigned int len) { @@ -73,6 +28,13 @@ int s390_sha_update_blocks(struct shash_desc *desc, const u8 *data, /* process as many blocks as possible */ n = (len / bsize) * bsize; ctx->count += n; + switch (ctx->func) { + case CPACF_KLMD_SHA_512: + case CPACF_KLMD_SHA3_384: + if (ctx->count < n) + ctx->sha512.count_hi++; + break; + } cpacf_kimd(fc, ctx->state, data, n); ctx->first_message_part = 0; return len - n; @@ -98,61 +60,6 @@ static int s390_crypto_shash_parmsize(int func) } } -int s390_sha_final(struct shash_desc *desc, u8 *out) -{ - struct s390_sha_ctx *ctx = shash_desc_ctx(desc); - unsigned int bsize = crypto_shash_blocksize(desc->tfm); - u64 bits; - unsigned int n; - int mbl_offset, fc; - - n = ctx->count % bsize; - bits = ctx->count * 8; - mbl_offset = s390_crypto_shash_parmsize(ctx->func); - if (mbl_offset < 0) - return -EINVAL; - - mbl_offset = mbl_offset / sizeof(u32); - - /* set total msg bit length (mbl) in CPACF parmblock */ - switch (ctx->func) { - case CPACF_KLMD_SHA_1: - case CPACF_KLMD_SHA_256: - memcpy(ctx->state + mbl_offset, &bits, sizeof(bits)); - break; - case CPACF_KLMD_SHA_512: - /* - * the SHA512 parmblock has a 128-bit mbl field, clear 
- * high-order u64 field, copy bits to low-order u64 field - */ - memset(ctx->state + mbl_offset, 0x00, sizeof(bits)); - mbl_offset += sizeof(u64) / sizeof(u32); - memcpy(ctx->state + mbl_offset, &bits, sizeof(bits)); - break; - case CPACF_KLMD_SHA3_224: - case CPACF_KLMD_SHA3_256: - case CPACF_KLMD_SHA3_384: - case CPACF_KLMD_SHA3_512: - break; - default: - return -EINVAL; - } - - fc = ctx->func; - fc |= test_facility(86) ? CPACF_KLMD_DUFOP : 0; - if (ctx->first_message_part) - fc |= CPACF_KLMD_NIP; - cpacf_klmd(fc, ctx->state, ctx->buf, n); - - /* copy digest to out */ - memcpy(out, ctx->state, crypto_shash_digestsize(desc->tfm)); - /* wipe context */ - memset(ctx, 0, sizeof *ctx); - - return 0; -} -EXPORT_SYMBOL_GPL(s390_sha_final); - int s390_sha_finup(struct shash_desc *desc, const u8 *src, unsigned int len, u8 *out) { @@ -171,19 +78,18 @@ int s390_sha_finup(struct shash_desc *desc, const u8 *src, unsigned int len, /* set total msg bit length (mbl) in CPACF parmblock */ switch (ctx->func) { + case CPACF_KLMD_SHA_512: + /* The SHA512 parmblock has a 128-bit mbl field. 
*/ + if (ctx->count < len) + ctx->sha512.count_hi++; + ctx->sha512.count_hi <<= 3; + ctx->sha512.count_hi |= ctx->count >> 61; + mbl_offset += sizeof(u64) / sizeof(u32); + fallthrough; case CPACF_KLMD_SHA_1: case CPACF_KLMD_SHA_256: memcpy(ctx->state + mbl_offset, &bits, sizeof(bits)); break; - case CPACF_KLMD_SHA_512: - /* - * the SHA512 parmblock has a 128-bit mbl field, clear - * high-order u64 field, copy bits to low-order u64 field - */ - memset(ctx->state + mbl_offset, 0x00, sizeof(bits)); - mbl_offset += sizeof(u64) / sizeof(u32); - memcpy(ctx->state + mbl_offset, &bits, sizeof(bits)); - break; case CPACF_KLMD_SHA3_224: case CPACF_KLMD_SHA3_256: case CPACF_KLMD_SHA3_384: From patchwork Wed Apr 16 06:44:47 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 882012
Date: Wed, 16 Apr 2025 14:44:47 +0800 Message-Id: In-Reply-To: References: From: Herbert Xu Subject: [PATCH 54/67] crypto: sha512_base - Remove partial block helpers To: Linux Crypto Mailing List Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Now that all sha512_base users have been converted to use the API partial
block handling, remove the partial block helpers. Signed-off-by: Herbert Xu --- include/crypto/sha512_base.h | 64 ------------------------------------ 1 file changed, 64 deletions(-) diff --git a/include/crypto/sha512_base.h b/include/crypto/sha512_base.h index e9f302ec3ede..aa814bab442d 100644 --- a/include/crypto/sha512_base.h +++ b/include/crypto/sha512_base.h @@ -53,46 +53,6 @@ static inline int sha512_base_init(struct shash_desc *desc) return 0; } -static inline int sha512_base_do_update(struct shash_desc *desc, - const u8 *data, - unsigned int len, - sha512_block_fn *block_fn) -{ - struct sha512_state *sctx = shash_desc_ctx(desc); - unsigned int partial = sctx->count[0] % SHA512_BLOCK_SIZE; - - sctx->count[0] += len; - if (sctx->count[0] < len) - sctx->count[1]++; - - if (unlikely((partial + len) >= SHA512_BLOCK_SIZE)) { - int blocks; - - if (partial) { - int p = SHA512_BLOCK_SIZE - partial; - - memcpy(sctx->buf + partial, data, p); - data += p; - len -= p; - - block_fn(sctx, sctx->buf, 1); - } - - blocks = len / SHA512_BLOCK_SIZE; - len %= SHA512_BLOCK_SIZE; - - if (blocks) { - block_fn(sctx, data, blocks); - data += blocks * SHA512_BLOCK_SIZE; - } - partial = 0; - } - if (len) - memcpy(sctx->buf + partial, data, len); - - return 0; -} - static inline int sha512_base_do_update_blocks(struct shash_desc *desc, const u8 *data, unsigned int len, @@ -142,30 +102,6 @@ static inline int sha512_base_do_finup(struct shash_desc *desc, const u8 *src, return 0; } -static inline int sha512_base_do_finalize(struct shash_desc *desc, - sha512_block_fn *block_fn) -{ - const int bit_offset = SHA512_BLOCK_SIZE - sizeof(__be64[2]); - struct sha512_state *sctx = shash_desc_ctx(desc); - __be64 *bits = (__be64 *)(sctx->buf + bit_offset); - unsigned int partial = sctx->count[0] % SHA512_BLOCK_SIZE; - - sctx->buf[partial++] = 0x80; - if (partial > bit_offset) { - memset(sctx->buf + partial, 0x0, SHA512_BLOCK_SIZE - partial); - partial = 0; - - block_fn(sctx, sctx->buf, 1); - } - - 
memset(sctx->buf + partial, 0x0, bit_offset - partial); - bits[0] = cpu_to_be64(sctx->count[1] << 3 | sctx->count[0] >> 61); - bits[1] = cpu_to_be64(sctx->count[0] << 3); - block_fn(sctx, sctx->buf, 1); - - return 0; -} - static inline int sha512_base_finish(struct shash_desc *desc, u8 *out) { unsigned int digest_size = crypto_shash_digestsize(desc->tfm); From patchwork Wed Apr 16 06:44:52 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 882011
Date: Wed, 16 Apr 2025 14:44:52 +0800 Message-Id: In-Reply-To: References: From: Herbert Xu Subject: [PATCH 56/67] crypto: arm64/sm3-ce - Use API partial block handling To: Linux Crypto Mailing List Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Use the Crypto API partial block handling. Also remove the unnecessary SIMD fallback path.
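With CRYPTO_AHASH_ALG_FINUP_MAX the API hands the driver the whole trailing fragment at once, so the driver pads and finalizes in one pass. A small sketch of the block-count arithmetic behind that for SM3 (64-byte blocks, 64-bit length field; illustrative only, not the kernel helper):

```c
/*
 * How many final blocks does finup need for a trailing fragment of
 * `partial` bytes (partial < 64)? The 0x80 terminator byte plus the
 * 64-bit bit-length field must fit after the data; otherwise the
 * padding spills into a second block.
 */
static unsigned int sm3_finup_blocks(unsigned int partial)
{
	const unsigned int block_size = 64;
	const unsigned int bit_offset = block_size - 8; /* length lives in last 8 bytes */

	return (partial + 1 > bit_offset) ? 2 : 1;
}
```

So fragments of up to 55 bytes finish in one block, and 56-63 bytes need two, which is exactly the case split the removed open-coded finalize logic handled with buffering.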
Signed-off-by: Herbert Xu --- arch/arm64/crypto/sm3-ce-glue.c | 48 ++++++--------------------------- 1 file changed, 8 insertions(+), 40 deletions(-) diff --git a/arch/arm64/crypto/sm3-ce-glue.c b/arch/arm64/crypto/sm3-ce-glue.c index 1a71788c4cda..eac6f5fa0abe 100644 --- a/arch/arm64/crypto/sm3-ce-glue.c +++ b/arch/arm64/crypto/sm3-ce-glue.c @@ -6,14 +6,11 @@ */ #include -#include -#include #include -#include #include #include #include -#include +#include #include MODULE_DESCRIPTION("SM3 secure hash using ARMv8 Crypto Extensions"); @@ -26,50 +23,20 @@ asmlinkage void sm3_ce_transform(struct sm3_state *sst, u8 const *src, static int sm3_ce_update(struct shash_desc *desc, const u8 *data, unsigned int len) { - if (!crypto_simd_usable()) { - sm3_update(shash_desc_ctx(desc), data, len); - return 0; - } + int remain; kernel_neon_begin(); - sm3_base_do_update(desc, data, len, sm3_ce_transform); + remain = sm3_base_do_update_blocks(desc, data, len, sm3_ce_transform); kernel_neon_end(); - - return 0; -} - -static int sm3_ce_final(struct shash_desc *desc, u8 *out) -{ - if (!crypto_simd_usable()) { - sm3_final(shash_desc_ctx(desc), out); - return 0; - } - - kernel_neon_begin(); - sm3_base_do_finalize(desc, sm3_ce_transform); - kernel_neon_end(); - - return sm3_base_finish(desc, out); + return remain; } static int sm3_ce_finup(struct shash_desc *desc, const u8 *data, unsigned int len, u8 *out) { - if (!crypto_simd_usable()) { - struct sm3_state *sctx = shash_desc_ctx(desc); - - if (len) - sm3_update(sctx, data, len); - sm3_final(sctx, out); - return 0; - } - kernel_neon_begin(); - if (len) - sm3_base_do_update(desc, data, len, sm3_ce_transform); - sm3_base_do_finalize(desc, sm3_ce_transform); + sm3_base_do_finup(desc, data, len, sm3_ce_transform); kernel_neon_end(); - return sm3_base_finish(desc, out); } @@ -77,11 +44,12 @@ static struct shash_alg sm3_alg = { .digestsize = SM3_DIGEST_SIZE, .init = sm3_base_init, .update = sm3_ce_update, - .final = sm3_ce_final, .finup = 
sm3_ce_finup, - .descsize = sizeof(struct sm3_state), + .descsize = SM3_STATE_SIZE, .base.cra_name = "sm3", .base.cra_driver_name = "sm3-ce", + .base.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY | + CRYPTO_AHASH_ALG_FINUP_MAX, .base.cra_blocksize = SM3_BLOCK_SIZE, .base.cra_module = THIS_MODULE, .base.cra_priority = 400, From patchwork Wed Apr 16 06:44:57 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 882010 Received: from abb.hmeau.com (abb.hmeau.com [144.6.53.87]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id F34F1230985 for ; Wed, 16 Apr 2025 06:45:00 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=144.6.53.87 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1744785902; cv=none; b=fXH9CcCzgceIQraBcZQcqcjIjRhFbxcxRUXM/6B2KbSLSQAX0myzWg6fz7gZQe6JTm0auchOVnqMOdexvHYW7cD8z6J/T6NMHN/+1p22J4MO8660EEa/JzJ24K/InFJ/57skZbJAN9F4yusBSkWZ3CU+y7bJlo9vbjmVYfLMTQI= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1744785902; c=relaxed/simple; bh=s+n0rZWN9OHNb9jSxTXJswwTYy3YvIl9k49PjkssynY=; h=Date:Message-Id:In-Reply-To:References:From:Subject:To; b=gprT7RWWEv9CZwga7ND5emB03obc/VxOWYZXxLsOVwp8Brb/5cLE4ZG4fpQrnm11h7mw9ykFhyA1yprPxe4jjVgcF7oxSqTROEKuCgK7XGt9uC1t2DLCxEN6wIUCqiTHSNVWE6BXJF1R3j+GQQVhYPeV52SY7SeAAoZdreUEaWQ= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=gondor.apana.org.au; spf=pass smtp.mailfrom=gondor.apana.org.au; dkim=pass (2048-bit key) header.d=hmeau.com header.i=@hmeau.com header.b=E8d1D+7y; arc=none smtp.client-ip=144.6.53.87 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=gondor.apana.org.au Authentication-Results: smtp.subspace.kernel.org; spf=pass 
From: Herbert Xu
Date: Wed, 16 Apr 2025 14:44:57 +0800
Subject: [PATCH 58/67] crypto: riscv/sm3 - Use API partial block handling
To: Linux Crypto Mailing List

Use the Crypto API partial block handling.
Signed-off-by: Herbert Xu --- arch/riscv/crypto/sm3-riscv64-glue.c | 49 ++++++++++------------------ 1 file changed, 17 insertions(+), 32 deletions(-) diff --git a/arch/riscv/crypto/sm3-riscv64-glue.c b/arch/riscv/crypto/sm3-riscv64-glue.c index e1737a970c7c..abdfe4a63a27 100644 --- a/arch/riscv/crypto/sm3-riscv64-glue.c +++ b/arch/riscv/crypto/sm3-riscv64-glue.c @@ -13,8 +13,9 @@ #include #include #include +#include #include -#include +#include #include /* @@ -24,8 +25,8 @@ asmlinkage void sm3_transform_zvksh_zvkb( struct sm3_state *state, const u8 *data, int num_blocks); -static int riscv64_sm3_update(struct shash_desc *desc, const u8 *data, - unsigned int len) +static void sm3_block(struct sm3_state *state, const u8 *data, + int num_blocks) { /* * Ensure struct sm3_state begins directly with the SM3 @@ -35,52 +36,36 @@ static int riscv64_sm3_update(struct shash_desc *desc, const u8 *data, if (crypto_simd_usable()) { kernel_vector_begin(); - sm3_base_do_update(desc, data, len, sm3_transform_zvksh_zvkb); + sm3_transform_zvksh_zvkb(state, data, num_blocks); kernel_vector_end(); } else { - sm3_update(shash_desc_ctx(desc), data, len); + sm3_block_generic(state, data, num_blocks); } - return 0; +} + +static int riscv64_sm3_update(struct shash_desc *desc, const u8 *data, + unsigned int len) +{ + return sm3_base_do_update_blocks(desc, data, len, sm3_block); } static int riscv64_sm3_finup(struct shash_desc *desc, const u8 *data, unsigned int len, u8 *out) { - struct sm3_state *ctx; - - if (crypto_simd_usable()) { - kernel_vector_begin(); - if (len) - sm3_base_do_update(desc, data, len, - sm3_transform_zvksh_zvkb); - sm3_base_do_finalize(desc, sm3_transform_zvksh_zvkb); - kernel_vector_end(); - - return sm3_base_finish(desc, out); - } - - ctx = shash_desc_ctx(desc); - if (len) - sm3_update(ctx, data, len); - sm3_final(ctx, out); - - return 0; -} - -static int riscv64_sm3_final(struct shash_desc *desc, u8 *out) -{ - return riscv64_sm3_finup(desc, NULL, 0, out); + 
sm3_base_do_finup(desc, data, len, sm3_block); + return sm3_base_finish(desc, out); } static struct shash_alg riscv64_sm3_alg = { .init = sm3_base_init, .update = riscv64_sm3_update, - .final = riscv64_sm3_final, .finup = riscv64_sm3_finup, - .descsize = sizeof(struct sm3_state), + .descsize = SM3_STATE_SIZE, .digestsize = SM3_DIGEST_SIZE, .base = { .cra_blocksize = SM3_BLOCK_SIZE, + .cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY | + CRYPTO_AHASH_ALG_FINUP_MAX, .cra_priority = 300, .cra_name = "sm3", .cra_driver_name = "sm3-riscv64-zvksh-zvkb",

From patchwork Wed Apr 16 06:45:01 2025
From: Herbert Xu
Date: Wed, 16 Apr 2025 14:45:01 +0800
Subject: [PATCH 60/67] crypto: lib/sm3 - Remove partial block helpers
To: Linux Crypto Mailing List

Now that all sm3_base users have been converted to use the API partial
block handling, remove the partial block helpers as well as the
lib/crypto functions.
Signed-off-by: Herbert Xu --- include/crypto/sm3.h | 2 -- include/crypto/sm3_base.h | 64 ++---------------------------------- lib/crypto/sm3.c | 68 +++------------------------------------ 3 files changed, 6 insertions(+), 128 deletions(-) diff --git a/include/crypto/sm3.h b/include/crypto/sm3.h index 6dc95264a836..c8d02c86c298 100644 --- a/include/crypto/sm3.h +++ b/include/crypto/sm3.h @@ -60,7 +60,5 @@ static inline void sm3_init(struct sm3_state *sctx) } void sm3_block_generic(struct sm3_state *sctx, u8 const *data, int blocks); -void sm3_update(struct sm3_state *sctx, const u8 *data, unsigned int len); -void sm3_final(struct sm3_state *sctx, u8 *out); #endif diff --git a/include/crypto/sm3_base.h b/include/crypto/sm3_base.h index 9460589c8cb8..7c53570bc05e 100644 --- a/include/crypto/sm3_base.h +++ b/include/crypto/sm3_base.h @@ -11,9 +11,10 @@ #include #include -#include +#include #include #include +#include #include typedef void (sm3_block_fn)(struct sm3_state *sst, u8 const *src, int blocks); @@ -24,44 +25,6 @@ static inline int sm3_base_init(struct shash_desc *desc) return 0; } -static inline int sm3_base_do_update(struct shash_desc *desc, - const u8 *data, - unsigned int len, - sm3_block_fn *block_fn) -{ - struct sm3_state *sctx = shash_desc_ctx(desc); - unsigned int partial = sctx->count % SM3_BLOCK_SIZE; - - sctx->count += len; - - if (unlikely((partial + len) >= SM3_BLOCK_SIZE)) { - int blocks; - - if (partial) { - int p = SM3_BLOCK_SIZE - partial; - - memcpy(sctx->buffer + partial, data, p); - data += p; - len -= p; - - block_fn(sctx, sctx->buffer, 1); - } - - blocks = len / SM3_BLOCK_SIZE; - len %= SM3_BLOCK_SIZE; - - if (blocks) { - block_fn(sctx, data, blocks); - data += blocks * SM3_BLOCK_SIZE; - } - partial = 0; - } - if (len) - memcpy(sctx->buffer + partial, data, len); - - return 0; -} - static inline int sm3_base_do_update_blocks(struct shash_desc *desc, const u8 *data, unsigned int len, sm3_block_fn *block_fn) @@ -105,29 +68,6 @@ static inline 
int sm3_base_do_finup(struct shash_desc *desc, return 0; } -static inline int sm3_base_do_finalize(struct shash_desc *desc, - sm3_block_fn *block_fn) -{ - const int bit_offset = SM3_BLOCK_SIZE - sizeof(__be64); - struct sm3_state *sctx = shash_desc_ctx(desc); - __be64 *bits = (__be64 *)(sctx->buffer + bit_offset); - unsigned int partial = sctx->count % SM3_BLOCK_SIZE; - - sctx->buffer[partial++] = 0x80; - if (partial > bit_offset) { - memset(sctx->buffer + partial, 0x0, SM3_BLOCK_SIZE - partial); - partial = 0; - - block_fn(sctx, sctx->buffer, 1); - } - - memset(sctx->buffer + partial, 0x0, bit_offset - partial); - *bits = cpu_to_be64(sctx->count << 3); - block_fn(sctx, sctx->buffer, 1); - - return 0; -} - static inline int sm3_base_finish(struct shash_desc *desc, u8 *out) { struct sm3_state *sctx = shash_desc_ctx(desc); diff --git a/lib/crypto/sm3.c b/lib/crypto/sm3.c index de64aa913280..efff0e267d84 100644 --- a/lib/crypto/sm3.c +++ b/lib/crypto/sm3.c @@ -8,9 +8,11 @@ * Copyright (C) 2021 Tianjia Zhang */ -#include -#include #include +#include +#include +#include +#include static const u32 ____cacheline_aligned K[64] = { 0x79cc4519, 0xf3988a32, 0xe7311465, 0xce6228cb, @@ -179,67 +181,5 @@ void sm3_block_generic(struct sm3_state *sctx, u8 const *data, int blocks) } EXPORT_SYMBOL_GPL(sm3_block_generic); -void sm3_update(struct sm3_state *sctx, const u8 *data, unsigned int len) -{ - unsigned int partial = sctx->count % SM3_BLOCK_SIZE; - - sctx->count += len; - - if ((partial + len) >= SM3_BLOCK_SIZE) { - int blocks; - - if (partial) { - int p = SM3_BLOCK_SIZE - partial; - - memcpy(sctx->buffer + partial, data, p); - data += p; - len -= p; - - sm3_block_generic(sctx, sctx->buffer, 1); - } - - blocks = len / SM3_BLOCK_SIZE; - len %= SM3_BLOCK_SIZE; - - if (blocks) { - sm3_block_generic(sctx, data, blocks); - data += blocks * SM3_BLOCK_SIZE; - } - - partial = 0; - } - if (len) - memcpy(sctx->buffer + partial, data, len); -} -EXPORT_SYMBOL_GPL(sm3_update); - -void 
sm3_final(struct sm3_state *sctx, u8 *out) -{ - const int bit_offset = SM3_BLOCK_SIZE - sizeof(u64); - __be64 *bits = (__be64 *)(sctx->buffer + bit_offset); - __be32 *digest = (__be32 *)out; - unsigned int partial = sctx->count % SM3_BLOCK_SIZE; - int i; - - sctx->buffer[partial++] = 0x80; - if (partial > bit_offset) { - memset(sctx->buffer + partial, 0, SM3_BLOCK_SIZE - partial); - partial = 0; - - sm3_block_generic(sctx, sctx->buffer, 1); - } - - memset(sctx->buffer + partial, 0, bit_offset - partial); - *bits = cpu_to_be64(sctx->count << 3); - sm3_block_generic(sctx, sctx->buffer, 1); - - for (i = 0; i < 8; i++) - put_unaligned_be32(sctx->state[i], digest++); - - /* Zeroize sensitive information. */ - memzero_explicit(sctx, sizeof(*sctx)); -} -EXPORT_SYMBOL_GPL(sm3_final); - MODULE_DESCRIPTION("Generic SM3 library"); MODULE_LICENSE("GPL v2");

From patchwork Wed Apr 16 06:45:06 2025
Date: Wed, 16 Apr 2025 14:45:06 +0800
Message-Id:
<7b5fa584e845a66571ee78a7bb4e85573bd70b33.1744784515.git.herbert@gondor.apana.org.au> In-Reply-To: References: From: Herbert Xu Subject: [PATCH 62/67] crypto: cmac - Use API partial block handling To: Linux Crypto Mailing List Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Use the Crypto API partial block handling. Signed-off-by: Herbert Xu --- crypto/cmac.c | 92 ++++++++++----------------------------------------- 1 file changed, 18 insertions(+), 74 deletions(-) diff --git a/crypto/cmac.c b/crypto/cmac.c index c66a0f4d8808..f297042a324b 100644 --- a/crypto/cmac.c +++ b/crypto/cmac.c @@ -13,9 +13,12 @@ #include #include +#include #include #include #include +#include +#include /* * +------------------------ @@ -31,22 +34,6 @@ struct cmac_tfm_ctx { __be64 consts[]; }; -/* - * +------------------------ - * | - * +------------------------ - * | cmac_desc_ctx - * +------------------------ - * | odds (block size) - * +------------------------ - * | prev (block size) - * +------------------------ - */ -struct cmac_desc_ctx { - unsigned int len; - u8 odds[]; -}; - static int crypto_cmac_digest_setkey(struct crypto_shash *parent, const u8 *inkey, unsigned int keylen) { @@ -102,13 +89,10 @@ static int crypto_cmac_digest_setkey(struct crypto_shash *parent, static int crypto_cmac_digest_init(struct shash_desc *pdesc) { - struct cmac_desc_ctx *ctx = shash_desc_ctx(pdesc); int bs = crypto_shash_blocksize(pdesc->tfm); - u8 *prev = &ctx->odds[bs]; + u8 *prev = shash_desc_ctx(pdesc); - ctx->len = 0; memset(prev, 0, bs); - return 0; } @@ -117,77 +101,36 @@ static int crypto_cmac_digest_update(struct shash_desc *pdesc, const u8 *p, { struct crypto_shash *parent = pdesc->tfm; struct cmac_tfm_ctx *tctx = crypto_shash_ctx(parent); - struct cmac_desc_ctx *ctx = shash_desc_ctx(pdesc); struct crypto_cipher *tfm = tctx->child; int bs = crypto_shash_blocksize(parent); - u8 *odds = ctx->odds; - u8 *prev = odds + bs; + u8 *prev = 
shash_desc_ctx(pdesc); - /* checking the data can fill the block */ - if ((ctx->len + len) <= bs) { - memcpy(odds + ctx->len, p, len); - ctx->len += len; - return 0; - } - - /* filling odds with new data and encrypting it */ - memcpy(odds + ctx->len, p, bs - ctx->len); - len -= bs - ctx->len; - p += bs - ctx->len; - - crypto_xor(prev, odds, bs); - crypto_cipher_encrypt_one(tfm, prev, prev); - - /* clearing the length */ - ctx->len = 0; - - /* encrypting the rest of data */ - while (len > bs) { + do { crypto_xor(prev, p, bs); crypto_cipher_encrypt_one(tfm, prev, prev); p += bs; len -= bs; - } - - /* keeping the surplus of blocksize */ - if (len) { - memcpy(odds, p, len); - ctx->len = len; - } - - return 0; + } while (len >= bs); + return len; } -static int crypto_cmac_digest_final(struct shash_desc *pdesc, u8 *out) +static int crypto_cmac_digest_finup(struct shash_desc *pdesc, const u8 *src, + unsigned int len, u8 *out) { struct crypto_shash *parent = pdesc->tfm; struct cmac_tfm_ctx *tctx = crypto_shash_ctx(parent); - struct cmac_desc_ctx *ctx = shash_desc_ctx(pdesc); struct crypto_cipher *tfm = tctx->child; int bs = crypto_shash_blocksize(parent); - u8 *odds = ctx->odds; - u8 *prev = odds + bs; + u8 *prev = shash_desc_ctx(pdesc); unsigned int offset = 0; - if (ctx->len != bs) { - unsigned int rlen; - u8 *p = odds + ctx->len; - - *p = 0x80; - p++; - - rlen = bs - ctx->len - 1; - if (rlen) - memset(p, 0, rlen); - + crypto_xor(prev, src, len); + if (len != bs) { + prev[len] ^= 0x80; offset += bs; } - - crypto_xor(prev, odds, bs); crypto_xor(prev, (const u8 *)tctx->consts + offset, bs); - crypto_cipher_encrypt_one(tfm, out, prev); - return 0; } @@ -269,13 +212,14 @@ static int cmac_create(struct crypto_template *tmpl, struct rtattr **tb) inst->alg.base.cra_blocksize = alg->cra_blocksize; inst->alg.base.cra_ctxsize = sizeof(struct cmac_tfm_ctx) + alg->cra_blocksize * 2; + inst->alg.base.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY | + CRYPTO_AHASH_ALG_FINAL_NONZERO; 
inst->alg.digestsize = alg->cra_blocksize; - inst->alg.descsize = sizeof(struct cmac_desc_ctx) + - alg->cra_blocksize * 2; + inst->alg.descsize = alg->cra_blocksize; inst->alg.init = crypto_cmac_digest_init; inst->alg.update = crypto_cmac_digest_update; - inst->alg.final = crypto_cmac_digest_final; + inst->alg.finup = crypto_cmac_digest_finup; inst->alg.setkey = crypto_cmac_digest_setkey; inst->alg.init_tfm = cmac_init_tfm; inst->alg.clone_tfm = cmac_clone_tfm;

From patchwork Wed Apr 16 06:45:11 2025
From: Herbert Xu
Date: Wed, 16 Apr 2025 14:45:11 +0800
Subject: [PATCH 64/67] crypto: arm64/aes - Use API partial block handling
To: Linux Crypto Mailing List

Use the Crypto API partial block handling. Also remove the unnecessary
SIMD fallback path.
Signed-off-by: Herbert Xu --- arch/arm64/crypto/aes-glue.c | 122 ++++++++++++----------------------- 1 file changed, 41 insertions(+), 81 deletions(-) diff --git a/arch/arm64/crypto/aes-glue.c b/arch/arm64/crypto/aes-glue.c index 5ca3b5661749..81560f722b9d 100644 --- a/arch/arm64/crypto/aes-glue.c +++ b/arch/arm64/crypto/aes-glue.c @@ -5,19 +5,20 @@ * Copyright (C) 2013 - 2017 Linaro Ltd */ -#include #include -#include +#include #include #include -#include #include -#include #include #include -#include -#include +#include +#include #include +#include +#include +#include +#include #include "aes-ce-setkey.h" @@ -130,7 +131,6 @@ struct mac_tfm_ctx { }; struct mac_desc_ctx { - unsigned int len; u8 dg[AES_BLOCK_SIZE]; }; @@ -869,109 +869,64 @@ static int mac_init(struct shash_desc *desc) struct mac_desc_ctx *ctx = shash_desc_ctx(desc); memset(ctx->dg, 0, AES_BLOCK_SIZE); - ctx->len = 0; - return 0; } static void mac_do_update(struct crypto_aes_ctx *ctx, u8 const in[], int blocks, - u8 dg[], int enc_before, int enc_after) + u8 dg[], int enc_before) { int rounds = 6 + ctx->key_length / 4; + int rem; - if (crypto_simd_usable()) { - int rem; - - do { - kernel_neon_begin(); - rem = aes_mac_update(in, ctx->key_enc, rounds, blocks, - dg, enc_before, enc_after); - kernel_neon_end(); - in += (blocks - rem) * AES_BLOCK_SIZE; - blocks = rem; - enc_before = 0; - } while (blocks); - } else { - if (enc_before) - aes_encrypt(ctx, dg, dg); - - while (blocks--) { - crypto_xor(dg, in, AES_BLOCK_SIZE); - in += AES_BLOCK_SIZE; - - if (blocks || enc_after) - aes_encrypt(ctx, dg, dg); - } - } + do { + kernel_neon_begin(); + rem = aes_mac_update(in, ctx->key_enc, rounds, blocks, + dg, enc_before, !enc_before); + kernel_neon_end(); + in += (blocks - rem) * AES_BLOCK_SIZE; + blocks = rem; + } while (blocks); } static int mac_update(struct shash_desc *desc, const u8 *p, unsigned int len) { struct mac_tfm_ctx *tctx = crypto_shash_ctx(desc->tfm); struct mac_desc_ctx *ctx = shash_desc_ctx(desc); + 
int blocks = len / AES_BLOCK_SIZE; - while (len > 0) { - unsigned int l; - - if ((ctx->len % AES_BLOCK_SIZE) == 0 && - (ctx->len + len) > AES_BLOCK_SIZE) { - - int blocks = len / AES_BLOCK_SIZE; - - len %= AES_BLOCK_SIZE; - - mac_do_update(&tctx->key, p, blocks, ctx->dg, - (ctx->len != 0), (len != 0)); - - p += blocks * AES_BLOCK_SIZE; - - if (!len) { - ctx->len = AES_BLOCK_SIZE; - break; - } - ctx->len = 0; - } - - l = min(len, AES_BLOCK_SIZE - ctx->len); - - if (l <= AES_BLOCK_SIZE) { - crypto_xor(ctx->dg + ctx->len, p, l); - ctx->len += l; - len -= l; - p += l; - } - } - - return 0; + len %= AES_BLOCK_SIZE; + mac_do_update(&tctx->key, p, blocks, ctx->dg, 0); + return len; } -static int cbcmac_final(struct shash_desc *desc, u8 *out) +static int cbcmac_finup(struct shash_desc *desc, const u8 *src, + unsigned int len, u8 *out) { struct mac_tfm_ctx *tctx = crypto_shash_ctx(desc->tfm); struct mac_desc_ctx *ctx = shash_desc_ctx(desc); - mac_do_update(&tctx->key, NULL, 0, ctx->dg, (ctx->len != 0), 0); - + if (len) { + crypto_xor(ctx->dg, src, len); + mac_do_update(&tctx->key, NULL, 0, ctx->dg, 1); + } memcpy(out, ctx->dg, AES_BLOCK_SIZE); - return 0; } -static int cmac_final(struct shash_desc *desc, u8 *out) +static int cmac_finup(struct shash_desc *desc, const u8 *src, unsigned int len, + u8 *out) { struct mac_tfm_ctx *tctx = crypto_shash_ctx(desc->tfm); struct mac_desc_ctx *ctx = shash_desc_ctx(desc); u8 *consts = tctx->consts; - if (ctx->len != AES_BLOCK_SIZE) { - ctx->dg[ctx->len] ^= 0x80; + crypto_xor(ctx->dg, src, len); + if (len != AES_BLOCK_SIZE) { + ctx->dg[len] ^= 0x80; consts += AES_BLOCK_SIZE; } - - mac_do_update(&tctx->key, consts, 1, ctx->dg, 0, 1); - + mac_do_update(&tctx->key, consts, 1, ctx->dg, 0); memcpy(out, ctx->dg, AES_BLOCK_SIZE); - return 0; } @@ -979,6 +934,8 @@ static struct shash_alg mac_algs[] = { { .base.cra_name = "cmac(aes)", .base.cra_driver_name = "cmac-aes-" MODE, .base.cra_priority = PRIO, + .base.cra_flags = 
CRYPTO_AHASH_ALG_BLOCK_ONLY | + CRYPTO_AHASH_ALG_FINAL_NONZERO, .base.cra_blocksize = AES_BLOCK_SIZE, .base.cra_ctxsize = sizeof(struct mac_tfm_ctx) + 2 * AES_BLOCK_SIZE, @@ -987,13 +944,15 @@ static struct shash_alg mac_algs[] = { { .digestsize = AES_BLOCK_SIZE, .init = mac_init, .update = mac_update, - .final = cmac_final, + .finup = cmac_finup, .setkey = cmac_setkey, .descsize = sizeof(struct mac_desc_ctx), }, { .base.cra_name = "xcbc(aes)", .base.cra_driver_name = "xcbc-aes-" MODE, .base.cra_priority = PRIO, + .base.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY | + CRYPTO_AHASH_ALG_FINAL_NONZERO, .base.cra_blocksize = AES_BLOCK_SIZE, .base.cra_ctxsize = sizeof(struct mac_tfm_ctx) + 2 * AES_BLOCK_SIZE, @@ -1002,13 +961,14 @@ static struct shash_alg mac_algs[] = { { .digestsize = AES_BLOCK_SIZE, .init = mac_init, .update = mac_update, - .final = cmac_final, + .finup = cmac_finup, .setkey = xcbc_setkey, .descsize = sizeof(struct mac_desc_ctx), }, { .base.cra_name = "cbcmac(aes)", .base.cra_driver_name = "cbcmac-aes-" MODE, .base.cra_priority = PRIO, + .base.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY, .base.cra_blocksize = AES_BLOCK_SIZE, .base.cra_ctxsize = sizeof(struct mac_tfm_ctx), .base.cra_module = THIS_MODULE, @@ -1016,7 +976,7 @@ static struct shash_alg mac_algs[] = { { .digestsize = AES_BLOCK_SIZE, .init = mac_init, .update = mac_update, - .final = cbcmac_final, + .finup = cbcmac_finup, .setkey = cbcmac_setkey, .descsize = sizeof(struct mac_desc_ctx), } };

From patchwork Wed Apr 16 06:45:16 2025
From: Herbert Xu
Date: Wed, 16 Apr 2025 14:45:16 +0800
Subject: [PATCH 66/67] crypto: nx - Use API partial block handling
To: Linux Crypto Mailing List

Use the Crypto API partial block handling. Also switch to the generic
export format.
crypto_shash_ctx(desc->tfm); struct nx_csbcpb *csbcpb = nx_ctx->csbcpb; struct nx_sg *in_sg, *out_sg; u8 keys[2][AES_BLOCK_SIZE]; @@ -135,9 +134,9 @@ static int nx_xcbc_empty(struct shash_desc *desc, u8 *out) return rc; } -static int nx_crypto_ctx_aes_xcbc_init2(struct crypto_tfm *tfm) +static int nx_crypto_ctx_aes_xcbc_init2(struct crypto_shash *tfm) { - struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(tfm); + struct nx_crypto_ctx *nx_ctx = crypto_shash_ctx(tfm); struct nx_csbcpb *csbcpb = nx_ctx->csbcpb; int err; @@ -166,31 +165,24 @@ static int nx_xcbc_update(struct shash_desc *desc, const u8 *data, unsigned int len) { + struct nx_crypto_ctx *nx_ctx = crypto_shash_ctx(desc->tfm); struct xcbc_state *sctx = shash_desc_ctx(desc); - struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(&desc->tfm->base); struct nx_csbcpb *csbcpb = nx_ctx->csbcpb; struct nx_sg *in_sg; struct nx_sg *out_sg; - u32 to_process = 0, leftover, total; unsigned int max_sg_len; unsigned long irq_flags; + u32 to_process, total; int rc = 0; int data_len; spin_lock_irqsave(&nx_ctx->lock, irq_flags); + memcpy(csbcpb->cpb.aes_xcbc.out_cv_mac, sctx->state, AES_BLOCK_SIZE); + NX_CPB_FDM(csbcpb) |= NX_FDM_INTERMEDIATE; + NX_CPB_FDM(csbcpb) |= NX_FDM_CONTINUATION; - total = sctx->count + len; - - /* 2 cases for total data len: - * 1: <= AES_BLOCK_SIZE: copy into state, return 0 - * 2: > AES_BLOCK_SIZE: process X blocks, copy in leftover - */ - if (total <= AES_BLOCK_SIZE) { - memcpy(sctx->buffer + sctx->count, data, len); - sctx->count += len; - goto out; - } + total = len; in_sg = nx_ctx->in_sg; max_sg_len = min_t(u64, nx_driver.of.max_sg_len/sizeof(struct nx_sg), @@ -200,7 +192,7 @@ static int nx_xcbc_update(struct shash_desc *desc, data_len = AES_BLOCK_SIZE; out_sg = nx_build_sg_list(nx_ctx->out_sg, (u8 *)sctx->state, - &len, nx_ctx->ap->sglen); + &data_len, nx_ctx->ap->sglen); if (data_len != AES_BLOCK_SIZE) { rc = -EINVAL; @@ -210,56 +202,21 @@ static int nx_xcbc_update(struct shash_desc *desc, 
nx_ctx->op.outlen = (nx_ctx->out_sg - out_sg) * sizeof(struct nx_sg); do { - to_process = total - to_process; - to_process = to_process & ~(AES_BLOCK_SIZE - 1); + to_process = total & ~(AES_BLOCK_SIZE - 1); - leftover = total - to_process; - - /* the hardware will not accept a 0 byte operation for this - * algorithm and the operation MUST be finalized to be correct. - * So if we happen to get an update that falls on a block sized - * boundary, we must save off the last block to finalize with - * later. */ - if (!leftover) { - to_process -= AES_BLOCK_SIZE; - leftover = AES_BLOCK_SIZE; - } - - if (sctx->count) { - data_len = sctx->count; - in_sg = nx_build_sg_list(nx_ctx->in_sg, - (u8 *) sctx->buffer, - &data_len, - max_sg_len); - if (data_len != sctx->count) { - rc = -EINVAL; - goto out; - } - } - - data_len = to_process - sctx->count; in_sg = nx_build_sg_list(in_sg, (u8 *) data, - &data_len, + &to_process, max_sg_len); - if (data_len != to_process - sctx->count) { - rc = -EINVAL; - goto out; - } - nx_ctx->op.inlen = (nx_ctx->in_sg - in_sg) * sizeof(struct nx_sg); /* we've hit the nx chip previously and we're updating again, * so copy over the partial digest */ - if (NX_CPB_FDM(csbcpb) & NX_FDM_CONTINUATION) { - memcpy(csbcpb->cpb.aes_xcbc.cv, - csbcpb->cpb.aes_xcbc.out_cv_mac, - AES_BLOCK_SIZE); - } + memcpy(csbcpb->cpb.aes_xcbc.cv, + csbcpb->cpb.aes_xcbc.out_cv_mac, AES_BLOCK_SIZE); - NX_CPB_FDM(csbcpb) |= NX_FDM_INTERMEDIATE; if (!nx_ctx->op.inlen || !nx_ctx->op.outlen) { rc = -EINVAL; goto out; @@ -271,28 +228,24 @@ static int nx_xcbc_update(struct shash_desc *desc, atomic_inc(&(nx_ctx->stats->aes_ops)); - /* everything after the first update is continuation */ - NX_CPB_FDM(csbcpb) |= NX_FDM_CONTINUATION; - total -= to_process; - data += to_process - sctx->count; - sctx->count = 0; + data += to_process; in_sg = nx_ctx->in_sg; - } while (leftover > AES_BLOCK_SIZE); + } while (total >= AES_BLOCK_SIZE); - /* copy the leftover back into the state struct */ - 
memcpy(sctx->buffer, data, leftover); - sctx->count = leftover; + rc = total; + memcpy(sctx->state, csbcpb->cpb.aes_xcbc.out_cv_mac, AES_BLOCK_SIZE); out: spin_unlock_irqrestore(&nx_ctx->lock, irq_flags); return rc; } -static int nx_xcbc_final(struct shash_desc *desc, u8 *out) +static int nx_xcbc_finup(struct shash_desc *desc, const u8 *src, + unsigned int nbytes, u8 *out) { + struct nx_crypto_ctx *nx_ctx = crypto_shash_ctx(desc->tfm); struct xcbc_state *sctx = shash_desc_ctx(desc); - struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(&desc->tfm->base); struct nx_csbcpb *csbcpb = nx_ctx->csbcpb; struct nx_sg *in_sg, *out_sg; unsigned long irq_flags; @@ -301,12 +254,10 @@ static int nx_xcbc_final(struct shash_desc *desc, u8 *out) spin_lock_irqsave(&nx_ctx->lock, irq_flags); - if (NX_CPB_FDM(csbcpb) & NX_FDM_CONTINUATION) { - /* we've hit the nx chip previously, now we're finalizing, - * so copy over the partial digest */ - memcpy(csbcpb->cpb.aes_xcbc.cv, - csbcpb->cpb.aes_xcbc.out_cv_mac, AES_BLOCK_SIZE); - } else if (sctx->count == 0) { + if (nbytes) { + /* non-zero final, so copy over the partial digest */ + memcpy(csbcpb->cpb.aes_xcbc.cv, sctx->state, AES_BLOCK_SIZE); + } else { /* * we've never seen an update, so this is a 0 byte op. 
The * hardware cannot handle a 0 byte op, so just ECB to @@ -320,11 +271,11 @@ static int nx_xcbc_final(struct shash_desc *desc, u8 *out) * this is not an intermediate operation */ NX_CPB_FDM(csbcpb) &= ~NX_FDM_INTERMEDIATE; - len = sctx->count; - in_sg = nx_build_sg_list(nx_ctx->in_sg, (u8 *)sctx->buffer, - &len, nx_ctx->ap->sglen); + len = nbytes; + in_sg = nx_build_sg_list(nx_ctx->in_sg, (u8 *)src, &len, + nx_ctx->ap->sglen); - if (len != sctx->count) { + if (len != nbytes) { rc = -EINVAL; goto out; } @@ -362,18 +313,19 @@ struct shash_alg nx_shash_aes_xcbc_alg = { .digestsize = AES_BLOCK_SIZE, .init = nx_xcbc_init, .update = nx_xcbc_update, - .final = nx_xcbc_final, + .finup = nx_xcbc_finup, .setkey = nx_xcbc_set_key, .descsize = sizeof(struct xcbc_state), - .statesize = sizeof(struct xcbc_state), + .init_tfm = nx_crypto_ctx_aes_xcbc_init2, + .exit_tfm = nx_crypto_ctx_shash_exit, .base = { .cra_name = "xcbc(aes)", .cra_driver_name = "xcbc-aes-nx", .cra_priority = 300, + .cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY | + CRYPTO_AHASH_ALG_FINAL_NONZERO, .cra_blocksize = AES_BLOCK_SIZE, .cra_module = THIS_MODULE, .cra_ctxsize = sizeof(struct nx_crypto_ctx), - .cra_init = nx_crypto_ctx_aes_xcbc_init2, - .cra_exit = nx_crypto_ctx_exit, } }; diff --git a/drivers/crypto/nx/nx-sha256.c b/drivers/crypto/nx/nx-sha256.c index c3bebf0feabe..5b29dd026df2 100644 --- a/drivers/crypto/nx/nx-sha256.c +++ b/drivers/crypto/nx/nx-sha256.c @@ -9,9 +9,12 @@ #include #include +#include +#include #include -#include -#include +#include +#include +#include #include "nx_csbcpb.h" #include "nx.h" @@ -19,12 +22,11 @@ struct sha256_state_be { __be32 state[SHA256_DIGEST_SIZE / 4]; u64 count; - u8 buf[SHA256_BLOCK_SIZE]; }; -static int nx_crypto_ctx_sha256_init(struct crypto_tfm *tfm) +static int nx_crypto_ctx_sha256_init(struct crypto_shash *tfm) { - struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(tfm); + struct nx_crypto_ctx *nx_ctx = crypto_shash_ctx(tfm); int err; err = 
nx_crypto_ctx_sha_init(tfm); @@ -40,11 +42,10 @@ static int nx_crypto_ctx_sha256_init(struct crypto_tfm *tfm) return 0; } -static int nx_sha256_init(struct shash_desc *desc) { +static int nx_sha256_init(struct shash_desc *desc) +{ struct sha256_state_be *sctx = shash_desc_ctx(desc); - memset(sctx, 0, sizeof *sctx); - sctx->state[0] = __cpu_to_be32(SHA256_H0); sctx->state[1] = __cpu_to_be32(SHA256_H1); sctx->state[2] = __cpu_to_be32(SHA256_H2); @@ -61,30 +62,18 @@ static int nx_sha256_init(struct shash_desc *desc) { static int nx_sha256_update(struct shash_desc *desc, const u8 *data, unsigned int len) { + struct nx_crypto_ctx *nx_ctx = crypto_shash_ctx(desc->tfm); struct sha256_state_be *sctx = shash_desc_ctx(desc); - struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(&desc->tfm->base); struct nx_csbcpb *csbcpb = (struct nx_csbcpb *)nx_ctx->csbcpb; + u64 to_process, leftover, total = len; struct nx_sg *out_sg; - u64 to_process = 0, leftover, total; unsigned long irq_flags; int rc = 0; int data_len; u32 max_sg_len; - u64 buf_len = (sctx->count % SHA256_BLOCK_SIZE); spin_lock_irqsave(&nx_ctx->lock, irq_flags); - /* 2 cases for total data len: - * 1: < SHA256_BLOCK_SIZE: copy into state, return 0 - * 2: >= SHA256_BLOCK_SIZE: process X blocks, copy in leftover - */ - total = (sctx->count % SHA256_BLOCK_SIZE) + len; - if (total < SHA256_BLOCK_SIZE) { - memcpy(sctx->buf + buf_len, data, len); - sctx->count += len; - goto out; - } - memcpy(csbcpb->cpb.sha256.message_digest, sctx->state, SHA256_DIGEST_SIZE); NX_CPB_FDM(csbcpb) |= NX_FDM_INTERMEDIATE; NX_CPB_FDM(csbcpb) |= NX_FDM_CONTINUATION; @@ -105,41 +94,17 @@ static int nx_sha256_update(struct shash_desc *desc, const u8 *data, } do { - int used_sgs = 0; struct nx_sg *in_sg = nx_ctx->in_sg; - if (buf_len) { - data_len = buf_len; - in_sg = nx_build_sg_list(in_sg, - (u8 *) sctx->buf, - &data_len, - max_sg_len); + to_process = total & ~(SHA256_BLOCK_SIZE - 1); - if (data_len != buf_len) { - rc = -EINVAL; - goto out; - } - 
used_sgs = in_sg - nx_ctx->in_sg; - } - - /* to_process: SHA256_BLOCK_SIZE aligned chunk to be - * processed in this iteration. This value is restricted - * by sg list limits and number of sgs we already used - * for leftover data. (see above) - * In ideal case, we could allow NX_PAGE_SIZE * max_sg_len, - * but because data may not be aligned, we need to account - * for that too. */ - to_process = min_t(u64, total, - (max_sg_len - 1 - used_sgs) * NX_PAGE_SIZE); - to_process = to_process & ~(SHA256_BLOCK_SIZE - 1); - - data_len = to_process - buf_len; + data_len = to_process; in_sg = nx_build_sg_list(in_sg, (u8 *) data, &data_len, max_sg_len); nx_ctx->op.inlen = (nx_ctx->in_sg - in_sg) * sizeof(struct nx_sg); - to_process = data_len + buf_len; + to_process = data_len; leftover = total - to_process; /* @@ -162,26 +127,22 @@ static int nx_sha256_update(struct shash_desc *desc, const u8 *data, atomic_inc(&(nx_ctx->stats->sha256_ops)); total -= to_process; - data += to_process - buf_len; - buf_len = 0; - + data += to_process; + sctx->count += to_process; } while (leftover >= SHA256_BLOCK_SIZE); - /* copy the leftover back into the state struct */ - if (leftover) - memcpy(sctx->buf, data, leftover); - - sctx->count += len; + rc = leftover; memcpy(sctx->state, csbcpb->cpb.sha256.message_digest, SHA256_DIGEST_SIZE); out: spin_unlock_irqrestore(&nx_ctx->lock, irq_flags); return rc; } -static int nx_sha256_final(struct shash_desc *desc, u8 *out) +static int nx_sha256_finup(struct shash_desc *desc, const u8 *src, + unsigned int nbytes, u8 *out) { + struct nx_crypto_ctx *nx_ctx = crypto_shash_ctx(desc->tfm); struct sha256_state_be *sctx = shash_desc_ctx(desc); - struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(&desc->tfm->base); struct nx_csbcpb *csbcpb = (struct nx_csbcpb *)nx_ctx->csbcpb; struct nx_sg *in_sg, *out_sg; unsigned long irq_flags; @@ -197,25 +158,19 @@ static int nx_sha256_final(struct shash_desc *desc, u8 *out) nx_ctx->ap->databytelen/NX_PAGE_SIZE); /* final is 
represented by continuing the operation and indicating that - * this is not an intermediate operation */ - if (sctx->count >= SHA256_BLOCK_SIZE) { - /* we've hit the nx chip previously, now we're finalizing, - * so copy over the partial digest */ - memcpy(csbcpb->cpb.sha256.input_partial_digest, sctx->state, SHA256_DIGEST_SIZE); - NX_CPB_FDM(csbcpb) &= ~NX_FDM_INTERMEDIATE; - NX_CPB_FDM(csbcpb) |= NX_FDM_CONTINUATION; - } else { - NX_CPB_FDM(csbcpb) &= ~NX_FDM_INTERMEDIATE; - NX_CPB_FDM(csbcpb) &= ~NX_FDM_CONTINUATION; - } + * this is not an intermediate operation + * copy over the partial digest */ + memcpy(csbcpb->cpb.sha256.input_partial_digest, sctx->state, SHA256_DIGEST_SIZE); + NX_CPB_FDM(csbcpb) &= ~NX_FDM_INTERMEDIATE; + NX_CPB_FDM(csbcpb) |= NX_FDM_CONTINUATION; + sctx->count += nbytes; csbcpb->cpb.sha256.message_bit_length = (u64) (sctx->count * 8); - len = sctx->count & (SHA256_BLOCK_SIZE - 1); - in_sg = nx_build_sg_list(nx_ctx->in_sg, (u8 *) sctx->buf, - &len, max_sg_len); + len = nbytes; + in_sg = nx_build_sg_list(nx_ctx->in_sg, (u8 *)src, &len, max_sg_len); - if (len != (sctx->count & (SHA256_BLOCK_SIZE - 1))) { + if (len != nbytes) { rc = -EINVAL; goto out; } @@ -251,18 +206,34 @@ static int nx_sha256_final(struct shash_desc *desc, u8 *out) static int nx_sha256_export(struct shash_desc *desc, void *out) { struct sha256_state_be *sctx = shash_desc_ctx(desc); + union { + u8 *u8; + u32 *u32; + u64 *u64; + } p = { .u8 = out }; + int i; - memcpy(out, sctx, sizeof(*sctx)); + for (i = 0; i < SHA256_DIGEST_SIZE / sizeof(*p.u32); i++) + put_unaligned(be32_to_cpu(sctx->state[i]), p.u32++); + put_unaligned(sctx->count, p.u64++); return 0; } static int nx_sha256_import(struct shash_desc *desc, const void *in) { struct sha256_state_be *sctx = shash_desc_ctx(desc); + union { + const u8 *u8; + const u32 *u32; + const u64 *u64; + } p = { .u8 = in }; + int i; - memcpy(sctx, in, sizeof(*sctx)); + for (i = 0; i < SHA256_DIGEST_SIZE / sizeof(*p.u32); i++) + 
sctx->state[i] = cpu_to_be32(get_unaligned(p.u32++)); + sctx->count = get_unaligned(p.u64++); return 0; } @@ -270,19 +241,20 @@ struct shash_alg nx_shash_sha256_alg = { .digestsize = SHA256_DIGEST_SIZE, .init = nx_sha256_init, .update = nx_sha256_update, - .final = nx_sha256_final, + .finup = nx_sha256_finup, .export = nx_sha256_export, .import = nx_sha256_import, + .init_tfm = nx_crypto_ctx_sha256_init, + .exit_tfm = nx_crypto_ctx_shash_exit, .descsize = sizeof(struct sha256_state_be), .statesize = sizeof(struct sha256_state_be), .base = { .cra_name = "sha256", .cra_driver_name = "sha256-nx", .cra_priority = 300, + .cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY, .cra_blocksize = SHA256_BLOCK_SIZE, .cra_module = THIS_MODULE, .cra_ctxsize = sizeof(struct nx_crypto_ctx), - .cra_init = nx_crypto_ctx_sha256_init, - .cra_exit = nx_crypto_ctx_exit, } }; diff --git a/drivers/crypto/nx/nx-sha512.c b/drivers/crypto/nx/nx-sha512.c index 1ffb40d2c324..f74776b7d7d7 100644 --- a/drivers/crypto/nx/nx-sha512.c +++ b/drivers/crypto/nx/nx-sha512.c @@ -9,8 +9,12 @@ #include #include +#include +#include #include -#include +#include +#include +#include #include "nx_csbcpb.h" #include "nx.h" @@ -18,12 +22,11 @@ struct sha512_state_be { __be64 state[SHA512_DIGEST_SIZE / 8]; u64 count[2]; - u8 buf[SHA512_BLOCK_SIZE]; }; -static int nx_crypto_ctx_sha512_init(struct crypto_tfm *tfm) +static int nx_crypto_ctx_sha512_init(struct crypto_shash *tfm) { - struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(tfm); + struct nx_crypto_ctx *nx_ctx = crypto_shash_ctx(tfm); int err; err = nx_crypto_ctx_sha_init(tfm); @@ -43,8 +46,6 @@ static int nx_sha512_init(struct shash_desc *desc) { struct sha512_state_be *sctx = shash_desc_ctx(desc); - memset(sctx, 0, sizeof *sctx); - sctx->state[0] = __cpu_to_be64(SHA512_H0); sctx->state[1] = __cpu_to_be64(SHA512_H1); sctx->state[2] = __cpu_to_be64(SHA512_H2); @@ -54,6 +55,7 @@ static int nx_sha512_init(struct shash_desc *desc) sctx->state[6] = __cpu_to_be64(SHA512_H6); 
sctx->state[7] = __cpu_to_be64(SHA512_H7); sctx->count[0] = 0; + sctx->count[1] = 0; return 0; } @@ -61,30 +63,18 @@ static int nx_sha512_init(struct shash_desc *desc) static int nx_sha512_update(struct shash_desc *desc, const u8 *data, unsigned int len) { + struct nx_crypto_ctx *nx_ctx = crypto_shash_ctx(desc->tfm); struct sha512_state_be *sctx = shash_desc_ctx(desc); - struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(&desc->tfm->base); struct nx_csbcpb *csbcpb = (struct nx_csbcpb *)nx_ctx->csbcpb; + u64 to_process, leftover, total = len; struct nx_sg *out_sg; - u64 to_process, leftover = 0, total; unsigned long irq_flags; int rc = 0; int data_len; u32 max_sg_len; - u64 buf_len = (sctx->count[0] % SHA512_BLOCK_SIZE); spin_lock_irqsave(&nx_ctx->lock, irq_flags); - /* 2 cases for total data len: - * 1: < SHA512_BLOCK_SIZE: copy into state, return 0 - * 2: >= SHA512_BLOCK_SIZE: process X blocks, copy in leftover - */ - total = (sctx->count[0] % SHA512_BLOCK_SIZE) + len; - if (total < SHA512_BLOCK_SIZE) { - memcpy(sctx->buf + buf_len, data, len); - sctx->count[0] += len; - goto out; - } - memcpy(csbcpb->cpb.sha512.message_digest, sctx->state, SHA512_DIGEST_SIZE); NX_CPB_FDM(csbcpb) |= NX_FDM_INTERMEDIATE; NX_CPB_FDM(csbcpb) |= NX_FDM_CONTINUATION; @@ -105,45 +95,17 @@ static int nx_sha512_update(struct shash_desc *desc, const u8 *data, } do { - int used_sgs = 0; struct nx_sg *in_sg = nx_ctx->in_sg; - if (buf_len) { - data_len = buf_len; - in_sg = nx_build_sg_list(in_sg, - (u8 *) sctx->buf, - &data_len, max_sg_len); + to_process = total & ~(SHA512_BLOCK_SIZE - 1); - if (data_len != buf_len) { - rc = -EINVAL; - goto out; - } - used_sgs = in_sg - nx_ctx->in_sg; - } - - /* to_process: SHA512_BLOCK_SIZE aligned chunk to be - * processed in this iteration. This value is restricted - * by sg list limits and number of sgs we already used - * for leftover data. 
(see above) - * In ideal case, we could allow NX_PAGE_SIZE * max_sg_len, - * but because data may not be aligned, we need to account - * for that too. */ - to_process = min_t(u64, total, - (max_sg_len - 1 - used_sgs) * NX_PAGE_SIZE); - to_process = to_process & ~(SHA512_BLOCK_SIZE - 1); - - data_len = to_process - buf_len; + data_len = to_process; in_sg = nx_build_sg_list(in_sg, (u8 *) data, &data_len, max_sg_len); nx_ctx->op.inlen = (nx_ctx->in_sg - in_sg) * sizeof(struct nx_sg); - if (data_len != (to_process - buf_len)) { - rc = -EINVAL; - goto out; - } - - to_process = data_len + buf_len; + to_process = data_len; leftover = total - to_process; /* @@ -166,30 +128,29 @@ static int nx_sha512_update(struct shash_desc *desc, const u8 *data, atomic_inc(&(nx_ctx->stats->sha512_ops)); total -= to_process; - data += to_process - buf_len; - buf_len = 0; - + data += to_process; + sctx->count[0] += to_process; + if (sctx->count[0] < to_process) + sctx->count[1]++; } while (leftover >= SHA512_BLOCK_SIZE); - /* copy the leftover back into the state struct */ - if (leftover) - memcpy(sctx->buf, data, leftover); - sctx->count[0] += len; + rc = leftover; memcpy(sctx->state, csbcpb->cpb.sha512.message_digest, SHA512_DIGEST_SIZE); out: spin_unlock_irqrestore(&nx_ctx->lock, irq_flags); return rc; } -static int nx_sha512_final(struct shash_desc *desc, u8 *out) +static int nx_sha512_finup(struct shash_desc *desc, const u8 *src, + unsigned int nbytes, u8 *out) { struct sha512_state_be *sctx = shash_desc_ctx(desc); - struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(&desc->tfm->base); + struct nx_crypto_ctx *nx_ctx = crypto_shash_ctx(desc->tfm); struct nx_csbcpb *csbcpb = (struct nx_csbcpb *)nx_ctx->csbcpb; struct nx_sg *in_sg, *out_sg; u32 max_sg_len; - u64 count0; unsigned long irq_flags; + u64 count0, count1; int rc = 0; int len; @@ -201,30 +162,23 @@ static int nx_sha512_final(struct shash_desc *desc, u8 *out) nx_ctx->ap->databytelen/NX_PAGE_SIZE); /* final is represented by continuing 
the operation and indicating that - * this is not an intermediate operation */ - if (sctx->count[0] >= SHA512_BLOCK_SIZE) { - /* we've hit the nx chip previously, now we're finalizing, - * so copy over the partial digest */ - memcpy(csbcpb->cpb.sha512.input_partial_digest, sctx->state, - SHA512_DIGEST_SIZE); - NX_CPB_FDM(csbcpb) &= ~NX_FDM_INTERMEDIATE; - NX_CPB_FDM(csbcpb) |= NX_FDM_CONTINUATION; - } else { - NX_CPB_FDM(csbcpb) &= ~NX_FDM_INTERMEDIATE; - NX_CPB_FDM(csbcpb) &= ~NX_FDM_CONTINUATION; - } - + * this is not an intermediate operation + * copy over the partial digest */ + memcpy(csbcpb->cpb.sha512.input_partial_digest, sctx->state, SHA512_DIGEST_SIZE); NX_CPB_FDM(csbcpb) &= ~NX_FDM_INTERMEDIATE; + NX_CPB_FDM(csbcpb) |= NX_FDM_CONTINUATION; - count0 = sctx->count[0] * 8; + count0 = sctx->count[0] + nbytes; + count1 = sctx->count[1]; - csbcpb->cpb.sha512.message_bit_length_lo = count0; + csbcpb->cpb.sha512.message_bit_length_lo = count0 << 3; + csbcpb->cpb.sha512.message_bit_length_hi = (count1 << 3) | + (count0 >> 61); - len = sctx->count[0] & (SHA512_BLOCK_SIZE - 1); - in_sg = nx_build_sg_list(nx_ctx->in_sg, sctx->buf, &len, - max_sg_len); + len = nbytes; + in_sg = nx_build_sg_list(nx_ctx->in_sg, (u8 *)src, &len, max_sg_len); - if (len != (sctx->count[0] & (SHA512_BLOCK_SIZE - 1))) { + if (len != nbytes) { rc = -EINVAL; goto out; } @@ -246,7 +200,7 @@ static int nx_sha512_final(struct shash_desc *desc, u8 *out) goto out; atomic_inc(&(nx_ctx->stats->sha512_ops)); - atomic64_add(sctx->count[0], &(nx_ctx->stats->sha512_bytes)); + atomic64_add(count0, &(nx_ctx->stats->sha512_bytes)); memcpy(out, csbcpb->cpb.sha512.message_digest, SHA512_DIGEST_SIZE); out: @@ -257,18 +211,34 @@ static int nx_sha512_final(struct shash_desc *desc, u8 *out) static int nx_sha512_export(struct shash_desc *desc, void *out) { struct sha512_state_be *sctx = shash_desc_ctx(desc); + union { + u8 *u8; + u64 *u64; + } p = { .u8 = out }; + int i; - memcpy(out, sctx, sizeof(*sctx)); + for 
(i = 0; i < SHA512_DIGEST_SIZE / sizeof(*p.u64); i++) + put_unaligned(be64_to_cpu(sctx->state[i]), p.u64++); + put_unaligned(sctx->count[0], p.u64++); + put_unaligned(sctx->count[1], p.u64++); return 0; } static int nx_sha512_import(struct shash_desc *desc, const void *in) { struct sha512_state_be *sctx = shash_desc_ctx(desc); + union { + const u8 *u8; + const u64 *u64; + } p = { .u8 = in }; + int i; - memcpy(sctx, in, sizeof(*sctx)); + for (i = 0; i < SHA512_DIGEST_SIZE / sizeof(*p.u64); i++) + sctx->state[i] = cpu_to_be64(get_unaligned(p.u64++)); + sctx->count[0] = get_unaligned(p.u64++); + sctx->count[1] = get_unaligned(p.u64++); return 0; } @@ -276,19 +246,20 @@ struct shash_alg nx_shash_sha512_alg = { .digestsize = SHA512_DIGEST_SIZE, .init = nx_sha512_init, .update = nx_sha512_update, - .final = nx_sha512_final, + .finup = nx_sha512_finup, .export = nx_sha512_export, .import = nx_sha512_import, + .init_tfm = nx_crypto_ctx_sha512_init, + .exit_tfm = nx_crypto_ctx_shash_exit, .descsize = sizeof(struct sha512_state_be), .statesize = sizeof(struct sha512_state_be), .base = { .cra_name = "sha512", .cra_driver_name = "sha512-nx", .cra_priority = 300, + .cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY, .cra_blocksize = SHA512_BLOCK_SIZE, .cra_module = THIS_MODULE, .cra_ctxsize = sizeof(struct nx_crypto_ctx), - .cra_init = nx_crypto_ctx_sha512_init, - .cra_exit = nx_crypto_ctx_exit, } }; diff --git a/drivers/crypto/nx/nx.c b/drivers/crypto/nx/nx.c index 4e4a371ba390..78135fb13f5c 100644 --- a/drivers/crypto/nx/nx.c +++ b/drivers/crypto/nx/nx.c @@ -124,8 +124,6 @@ struct nx_sg *nx_build_sg_list(struct nx_sg *sg_head, } if ((sg - sg_head) == sgmax) { - pr_err("nx: scatter/gather list overflow, pid: %d\n", - current->pid); sg++; break; } @@ -702,14 +700,14 @@ int nx_crypto_ctx_aes_ecb_init(struct crypto_skcipher *tfm) NX_MODE_AES_ECB); } -int nx_crypto_ctx_sha_init(struct crypto_tfm *tfm) +int nx_crypto_ctx_sha_init(struct crypto_shash *tfm) { - return 
nx_crypto_ctx_init(crypto_tfm_ctx(tfm), NX_FC_SHA, NX_MODE_SHA); + return nx_crypto_ctx_init(crypto_shash_ctx(tfm), NX_FC_SHA, NX_MODE_SHA); } -int nx_crypto_ctx_aes_xcbc_init(struct crypto_tfm *tfm) +int nx_crypto_ctx_aes_xcbc_init(struct crypto_shash *tfm) { - return nx_crypto_ctx_init(crypto_tfm_ctx(tfm), NX_FC_AES, + return nx_crypto_ctx_init(crypto_shash_ctx(tfm), NX_FC_AES, NX_MODE_AES_XCBC_MAC); } @@ -744,6 +742,11 @@ void nx_crypto_ctx_aead_exit(struct crypto_aead *tfm) kfree_sensitive(nx_ctx->kmem); } +void nx_crypto_ctx_shash_exit(struct crypto_shash *tfm) +{ + nx_crypto_ctx_exit(crypto_shash_ctx(tfm)); +} + static int nx_probe(struct vio_dev *viodev, const struct vio_device_id *id) { dev_dbg(&viodev->dev, "driver probed: %s resource id: 0x%x\n", diff --git a/drivers/crypto/nx/nx.h b/drivers/crypto/nx/nx.h index b1f6634a1644..36974f08490a 100644 --- a/drivers/crypto/nx/nx.h +++ b/drivers/crypto/nx/nx.h @@ -3,6 +3,7 @@ #ifndef __NX_H__ #define __NX_H__ +#include #include #include #include @@ -147,14 +148,15 @@ struct scatterlist; /* prototypes */ int nx_crypto_ctx_aes_ccm_init(struct crypto_aead *tfm); int nx_crypto_ctx_aes_gcm_init(struct crypto_aead *tfm); -int nx_crypto_ctx_aes_xcbc_init(struct crypto_tfm *tfm); +int nx_crypto_ctx_aes_xcbc_init(struct crypto_shash *tfm); int nx_crypto_ctx_aes_ctr_init(struct crypto_skcipher *tfm); int nx_crypto_ctx_aes_cbc_init(struct crypto_skcipher *tfm); int nx_crypto_ctx_aes_ecb_init(struct crypto_skcipher *tfm); -int nx_crypto_ctx_sha_init(struct crypto_tfm *tfm); +int nx_crypto_ctx_sha_init(struct crypto_shash *tfm); void nx_crypto_ctx_exit(struct crypto_tfm *tfm); void nx_crypto_ctx_skcipher_exit(struct crypto_skcipher *tfm); void nx_crypto_ctx_aead_exit(struct crypto_aead *tfm); +void nx_crypto_ctx_shash_exit(struct crypto_shash *tfm); void nx_ctx_init(struct nx_crypto_ctx *nx_ctx, unsigned int function); int nx_hcall_sync(struct nx_crypto_ctx *ctx, struct vio_pfo_op *op, u32 may_sleep);