Message ID: ZWWHFeOPcW30OYo1@gondor.apana.org.au
State:      Accepted
Commit:     d07f951903fa9922c375b8ab1ce81b18a0034e3b
Series:     crypto: s390/aes - Fix buffer overread in CTR mode
On 2023-11-28 07:22, Herbert Xu wrote:
> When processing the last block, the s390 ctr code will always read
> a whole block, even if there isn't a whole block of data left. Fix
> this by using the actual length left and copy it into a buffer first
> for processing.
>
> Fixes: 0200f3ecc196 ("crypto: s390 - add System z hardware support for CTR mode")
> Cc: <stable@vger.kernel.org>
> Reported-by: Guangwu Zhang <guazhang@redhat.com>
> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
>
> diff --git a/arch/s390/crypto/aes_s390.c b/arch/s390/crypto/aes_s390.c
> index c773820e4af9..c6fe5405de4a 100644
> --- a/arch/s390/crypto/aes_s390.c
> +++ b/arch/s390/crypto/aes_s390.c
> @@ -597,7 +597,9 @@ static int ctr_aes_crypt(struct skcipher_request *req)
>  	 * final block may be < AES_BLOCK_SIZE, copy only nbytes
>  	 */
>  	if (nbytes) {
> -		cpacf_kmctr(sctx->fc, sctx->key, buf, walk.src.virt.addr,
> +		memset(buf, 0, AES_BLOCK_SIZE);
> +		memcpy(buf, walk.src.virt.addr, nbytes);
> +		cpacf_kmctr(sctx->fc, sctx->key, buf, buf,
>  			    AES_BLOCK_SIZE, walk.iv);
>  		memcpy(walk.dst.virt.addr, buf, nbytes);
>  		crypto_inc(walk.iv, AES_BLOCK_SIZE);

Here is a similar fix for the s390 paes ctr cipher. Compiles and is tested.
You may merge this with your patch for the s390 aes cipher.

--------------------------------------------------------------------------------
diff --git a/arch/s390/crypto/paes_s390.c b/arch/s390/crypto/paes_s390.c
index 8b541e44151d..55ee5567a5ea 100644
--- a/arch/s390/crypto/paes_s390.c
+++ b/arch/s390/crypto/paes_s390.c
@@ -693,9 +693,11 @@ static int ctr_paes_crypt(struct skcipher_request *req)
 	 * final block may be < AES_BLOCK_SIZE, copy only nbytes
 	 */
 	if (nbytes) {
+		memset(buf, 0, AES_BLOCK_SIZE);
+		memcpy(buf, walk.src.virt.addr, nbytes);
 		while (1) {
 			if (cpacf_kmctr(ctx->fc, &param, buf,
-					walk.src.virt.addr, AES_BLOCK_SIZE,
+					buf, AES_BLOCK_SIZE,
 					walk.iv) == AES_BLOCK_SIZE)
 				break;
 			if (__paes_convert_key(ctx))
On Tue, Nov 28, 2023 at 02:18:02PM +0100, Harald Freudenberger wrote:
>
> Here is a similar fix for the s390 paes ctr cipher. Compiles and is
> tested. You may merge this with your patch for the s390 aes cipher.

Thank you. I had to apply this by hand so please check the result
which I've just pushed out to cryptodev.

Cheers,
diff --git a/arch/s390/crypto/aes_s390.c b/arch/s390/crypto/aes_s390.c
index c773820e4af9..c6fe5405de4a 100644
--- a/arch/s390/crypto/aes_s390.c
+++ b/arch/s390/crypto/aes_s390.c
@@ -597,7 +597,9 @@ static int ctr_aes_crypt(struct skcipher_request *req)
 	 * final block may be < AES_BLOCK_SIZE, copy only nbytes
 	 */
 	if (nbytes) {
-		cpacf_kmctr(sctx->fc, sctx->key, buf, walk.src.virt.addr,
+		memset(buf, 0, AES_BLOCK_SIZE);
+		memcpy(buf, walk.src.virt.addr, nbytes);
+		cpacf_kmctr(sctx->fc, sctx->key, buf, buf,
 			    AES_BLOCK_SIZE, walk.iv);
 		memcpy(walk.dst.virt.addr, buf, nbytes);
 		crypto_inc(walk.iv, AES_BLOCK_SIZE);
When processing the last block, the s390 ctr code will always read
a whole block, even if there isn't a whole block of data left. Fix
this by using the actual length left and copy it into a buffer first
for processing.

Fixes: 0200f3ecc196 ("crypto: s390 - add System z hardware support for CTR mode")
Cc: <stable@vger.kernel.org>
Reported-by: Guangwu Zhang <guazhang@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>