Message ID | 20200318234518.83906-1-Jason@zx2c4.com |
---|---|
State | New |
Series | [URGENT,crypto] crypto: arm64/chacha - correctly walk through blocks |
diff --git a/arch/arm64/crypto/chacha-neon-glue.c b/arch/arm64/crypto/chacha-neon-glue.c
index c1f9660d104c..debb1de0d3dd 100644
--- a/arch/arm64/crypto/chacha-neon-glue.c
+++ b/arch/arm64/crypto/chacha-neon-glue.c
@@ -55,10 +55,10 @@ static void chacha_doneon(u32 *state, u8 *dst, const u8 *src,
 			break;
 		}
 		chacha_4block_xor_neon(state, dst, src, nrounds, l);
-		bytes -= CHACHA_BLOCK_SIZE * 5;
-		src += CHACHA_BLOCK_SIZE * 5;
-		dst += CHACHA_BLOCK_SIZE * 5;
-		state[12] += 5;
+		bytes -= l;
+		src += l;
+		dst += l;
+		state[12] += round_up(l, CHACHA_BLOCK_SIZE) / CHACHA_BLOCK_SIZE;
 	}
 }
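In the removed lines, each loop iteration unconditionally advanced the
buffers and the block counter by five blocks, even though
chacha_4block_xor_neon() may have consumed only l bytes (anything from
just over one block up to five). The replacement advances by l and bumps
state[12] by round_up(l, CHACHA_BLOCK_SIZE) / CHACHA_BLOCK_SIZE, i.e. one
per block, counting a trailing partial block as a whole block. A minimal
user-space sketch (hypothetical names, not kernel code) of how the two
advancement rules diverge when a stream is fed in two-block chunks:

#include <stdio.h>

#define CHACHA_BLOCK_SIZE 64

/* Ceiling division: blocks consumed by an l-byte chunk, counting a
 * trailing partial block as a whole block, as the fixed code does. */
static unsigned int blocks_consumed(unsigned int l)
{
	return (l + CHACHA_BLOCK_SIZE - 1) / CHACHA_BLOCK_SIZE;
}

int main(void)
{
	unsigned int counter_old = 0, counter_new = 0;
	unsigned int chunk = 2 * CHACHA_BLOCK_SIZE;
	int i;

	for (i = 0; i < 2; i++) {
		counter_old += 5;                      /* old: fixed 5 blocks */
		counter_new += blocks_consumed(chunk); /* new: actual usage   */
	}

	/* Prints old=10 new=4: after only four blocks of data, the old
	 * logic believes ten blocks of keystream have been consumed. */
	printf("old=%u new=%u\n", counter_old, counter_new);
	return 0;
}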
Prior, passing in chunks of 2, 3, or 4 blocks, followed by any additional
chunks, would cause the ChaCha state counter to fall out of sync, resulting
in incorrect encryption/decryption, which is a pretty nasty crypto
vulnerability dating back to 2018. WireGuard users never experienced this
before, because out of tree we have always used a different crypto library,
until the recent Frankenzinc addition. This commit fixes the issue by
advancing the pointers and the state counter by the number of bytes
actually processed.

Fixes: f2ca1cbd0fb5 ("crypto: arm64/chacha - optimize for arbitrary length inputs")
Reported-and-tested-by: Emil Renner Berthing <kernel@esmil.dk>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: stable@vger.kernel.org
---
 arch/arm64/crypto/chacha-neon-glue.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)
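For readers unfamiliar with the library interface, below is a hedged
sketch of the streaming caller pattern that exposes the bug, assuming the
lib/crypto ChaCha API (chacha_init()/chacha_crypt()) that the Frankenzinc
port builds on; the chunk sizes are illustrative only:

#include <crypto/chacha.h>

/* Streaming users keep one state across calls, so each call must advance
 * the block counter in state[12] by exactly the number of blocks it
 * consumed. Counter positions below assume the IV's counter word is 0. */
static void stream_in_chunks(const u32 key[8], const u8 iv[16],
			     u8 *dst, const u8 *src)
{
	u32 state[16];

	chacha_init(state, key, iv);

	/* Three blocks: the buggy NEON glue bumped the counter by 5. */
	chacha_crypt(state, dst, src, 3 * CHACHA_BLOCK_SIZE, 20);

	/* This chunk was then keystreamed starting at block 5 instead of
	 * block 3, producing ciphertext no correct peer can decrypt. */
	chacha_crypt(state, dst + 3 * CHACHA_BLOCK_SIZE,
		     src + 3 * CHACHA_BLOCK_SIZE, 2 * CHACHA_BLOCK_SIZE, 20);
}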