Message ID: 20230524165717.14062-1-chang.seok.bae@intel.com
Series: x86: Support Key Locker
On Sat, Jun 03, 2023 at 08:22:24AM -0700, Chang S. Bae wrote:
> +static __init int x86_nokeylocker_setup(char *arg)
> +{
> +	/* Expect an exact match without trailing characters. */
> +	if (strlen(arg))
> +		return 0;
> +
> +	if (!cpu_feature_enabled(X86_FEATURE_KEYLOCKER))
> +		return 1;
> +
> +	setup_clear_cpu_cap(X86_FEATURE_KEYLOCKER);
> +	pr_info("x86/keylocker: Disabled by kernel command line.\n");
> +	return 1;
> +}
> +__setup("nokeylocker", x86_nokeylocker_setup);

Can we stop adding those just to remove them at some point later but
simply do:

	clearcpuid=keylocker

?
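As I read __setup()'s prefix matching, the handler receives only the text
after the registered "nokeylocker" string, so a non-empty arg means the user
typed trailing characters. A minimal user-space sketch of that exact-match
check (an illustration only, not kernel code; nokeylocker_setup() here just
mimics the handler's accept/reject logic):

	#include <stdio.h>
	#include <string.h>

	/* Mimics x86_nokeylocker_setup(): accept only an exact match. */
	static int nokeylocker_setup(const char *arg)
	{
		if (strlen(arg))
			return 0;	/* trailing characters: reject */
		return 1;		/* exact "nokeylocker": handled */
	}

	int main(void)
	{
		/* __setup() would pass "" for "nokeylocker" ... */
		printf("%d\n", nokeylocker_setup(""));		/* 1 */
		/* ... and "foo" for "nokeylockerfoo". */
		printf("%d\n", nokeylocker_setup("foo"));	/* 0 */
		return 0;
	}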
On Sat, Jun 03, 2023 at 08:22:25AM -0700, Chang S. Bae wrote:
> Every field in struct aesni_xts_ctx is a pointer to a byte array.

A byte array. Not a pointer to a byte array.

> Then, the field can be redefined as that struct type instead of the obscure
> pointer.

There's no pointer.

>  static inline struct
>  aesni_rfc4106_gcm_ctx *aesni_rfc4106_gcm_ctx_get(struct crypto_aead *tfm)
>  {
> -	unsigned long align = AESNI_ALIGN;
> -
> -	if (align <= crypto_tfm_ctx_alignment())
> -		align = 1;
> -	return PTR_ALIGN(crypto_aead_ctx(tfm), align);
> +	return (struct aesni_rfc4106_gcm_ctx *)aes_align_addr(crypto_aead_ctx(tfm));
>  }

Explicit casts from 'void *' are unnecessary.

>  static int xts_aesni_setkey(struct crypto_skcipher *tfm, const u8 *key,
>  			    unsigned int keylen)
>  {
> -	struct aesni_xts_ctx *ctx = crypto_skcipher_ctx(tfm);
> +	struct aesni_xts_ctx *ctx = aes_xts_ctx(tfm);
>  	int err;
>
>  	err = xts_verify_key(tfm, key, keylen);
> @@ -893,20 +892,20 @@ static int xts_aesni_setkey(struct crypto_skcipher *tfm, const u8 *key,
>  	keylen /= 2;
>
>  	/* first half of xts-key is for crypt */
> -	err = aes_set_key_common(crypto_skcipher_tfm(tfm), ctx->raw_crypt_ctx,
> +	err = aes_set_key_common(crypto_skcipher_tfm(tfm), &ctx->crypt_ctx,
>  				 key, keylen);
>  	if (err)
>  		return err;
>
>  	/* second half of xts-key is for tweak */
> -	return aes_set_key_common(crypto_skcipher_tfm(tfm), ctx->raw_tweak_ctx,
> +	return aes_set_key_common(crypto_skcipher_tfm(tfm), &ctx->tweak_ctx,
>  				  key + keylen, keylen);
>  }

To re-iterate what I said on v6, the runtime alignment to a 16-byte boundary
should happen when translating the raw crypto_skcipher_ctx() into the pointer
to the aes_xts_ctx. It should not happen when accessing each individual field
in the aes_xts_ctx.

Yet, this code is still doing runtime alignment when accessing each individual
field, as the second argument to aes_set_key_common() is 'void *raw_ctx' which
aes_set_key_common() runtime-aligns to crypto_aes_ctx.

We should keep everything consistent, which means making aes_set_key_common()
take a pointer to crypto_aes_ctx and not do the runtime alignment.

- Eric
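To make the up-front strategy concrete, here is a minimal stand-alone sketch
with demo types. This is user-space C, not the actual aesni-intel_glue.c code:
aes_align_addr() below is a stand-in that rounds the raw ctx address up to an
assumed 16-byte (AESNI_ALIGN) boundary once, after which the field addresses
need no further PTR_ALIGN:

	#include <stdint.h>
	#include <stdio.h>

	#define AESNI_ALIGN	16

	/* Demo stand-ins; the real structs live in aesni-intel_glue.c. */
	struct crypto_aes_ctx_demo { uint8_t key[32]; };

	struct aesni_xts_ctx_demo {
		struct crypto_aes_ctx_demo crypt_ctx;
		struct crypto_aes_ctx_demo tweak_ctx;
	};

	/* Round up to the next 16-byte boundary, like PTR_ALIGN(p, 16). */
	static void *aes_align_addr(void *p)
	{
		return (void *)(((uintptr_t)p + AESNI_ALIGN - 1) &
				~(uintptr_t)(AESNI_ALIGN - 1));
	}

	int main(void)
	{
		uint8_t raw[sizeof(struct aesni_xts_ctx_demo) + AESNI_ALIGN];
		struct aesni_xts_ctx_demo *ctx = aes_align_addr(raw);

		/*
		 * Aligned once at translation time; &ctx->crypt_ctx and
		 * &ctx->tweak_ctx can be passed as typed pointers to a
		 * setkey helper without re-aligning per field.
		 */
		printf("ctx=%p crypt=%p tweak=%p\n", (void *)ctx,
		       (void *)&ctx->crypt_ctx, (void *)&ctx->tweak_ctx);
		return 0;
	}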
On 6/4/2023 8:34 AM, Eric Biggers wrote:
>
> To re-iterate what I said on v6, the runtime alignment to a 16-byte boundary
> should happen when translating the raw crypto_skcipher_ctx() into the pointer
> to the aes_xts_ctx. It should not happen when accessing each individual field
> in the aes_xts_ctx.
>
> Yet, this code is still doing runtime alignment when accessing each
> individual field, as the second argument to aes_set_key_common() is
> 'void *raw_ctx' which aes_set_key_common() runtime-aligns to crypto_aes_ctx.
>
> We should keep everything consistent, which means making aes_set_key_common()
> take a pointer to crypto_aes_ctx and not do the runtime alignment.

Let me clarify the problem this patch tries to solve here. The current
struct aesni_xts_ctx is ugly. So the main story is: let's fix it before
using the code for AES-KL.

Then, the rework part may be applicable for code re-usability. That seems
okay to do here.

Fixing the runtime alignment entirely would touch code other than AES-XTS.
Yes, that's the ideal cleanup for consistency, but it seems less relevant
to this series. I'd be happy to follow up on that improvement though.

Thanks,
Chang
On 6/3/2023 9:37 AM, Borislav Petkov wrote:
> On Sat, Jun 03, 2023 at 08:22:24AM -0700, Chang S. Bae wrote:
>> +static __init int x86_nokeylocker_setup(char *arg)
>> +{
>> +	/* Expect an exact match without trailing characters. */
>> +	if (strlen(arg))
>> +		return 0;
>> +
>> +	if (!cpu_feature_enabled(X86_FEATURE_KEYLOCKER))
>> +		return 1;
>> +
>> +	setup_clear_cpu_cap(X86_FEATURE_KEYLOCKER);
>> +	pr_info("x86/keylocker: Disabled by kernel command line.\n");
>> +	return 1;
>> +}
>> +__setup("nokeylocker", x86_nokeylocker_setup);
>
> Can we stop adding those just to remove them at some point later but
> simply do:
>
> 	clearcpuid=keylocker
>
> ?

Oh, I was not sure about this policy. Thanks, now I'm confident about
removing this.

Chang
On Sun, Jun 04, 2023 at 03:02:32PM -0700, Chang S. Bae wrote:
> On 6/4/2023 8:34 AM, Eric Biggers wrote:
> >
> > To re-iterate what I said on v6, the runtime alignment to a 16-byte
> > boundary should happen when translating the raw crypto_skcipher_ctx()
> > into the pointer to the aes_xts_ctx. It should not happen when accessing
> > each individual field in the aes_xts_ctx.
> >
> > Yet, this code is still doing runtime alignment when accessing each
> > individual field, as the second argument to aes_set_key_common() is
> > 'void *raw_ctx' which aes_set_key_common() runtime-aligns to
> > crypto_aes_ctx.
> >
> > We should keep everything consistent, which means making
> > aes_set_key_common() take a pointer to crypto_aes_ctx and not do the
> > runtime alignment.
>
> Let me clarify the problem this patch tries to solve here. The current
> struct aesni_xts_ctx is ugly. So the main story is: let's fix it before
> using the code for AES-KL.
>
> Then, the rework part may be applicable for code re-usability. That seems
> okay to do here.
>
> Fixing the runtime alignment entirely would touch code other than AES-XTS.
> Yes, that's the ideal cleanup for consistency, but it seems less relevant
> to this series. I'd be happy to follow up on that improvement though.

IMO the issue is that your patch makes the code (including the XTS code)
inconsistent because it makes it use a mix of both approaches: it aligns each
field individually, *and* it aligns the ctx up-front.

I was hoping to switch fully from the former approach to the latter approach,
instead of switching from the former approach to a mix of the two approaches
as you are proposing.

The following on top of this patch is what I am asking for. I think it would
be appropriate to fold into this patch.

diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c
index 589648142c173..ad1ae7a88b59d 100644
--- a/arch/x86/crypto/aesni-intel_glue.c
+++ b/arch/x86/crypto/aesni-intel_glue.c
@@ -228,10 +228,10 @@ static inline struct aesni_xts_ctx *aes_xts_ctx(struct crypto_skcipher *tfm)
 	return (struct aesni_xts_ctx *)aes_align_addr(crypto_skcipher_ctx(tfm));
 }
 
-static int aes_set_key_common(struct crypto_tfm *tfm, void *raw_ctx,
+static int aes_set_key_common(struct crypto_tfm *tfm,
+			      struct crypto_aes_ctx *ctx,
 			      const u8 *in_key, unsigned int key_len)
 {
-	struct crypto_aes_ctx *ctx = aes_ctx(raw_ctx);
 	int err;
 
 	if (key_len != AES_KEYSIZE_128 && key_len != AES_KEYSIZE_192 &&
@@ -252,7 +252,8 @@ static int aes_set_key_common(struct crypto_tfm *tfm, void *raw_ctx,
 static int aes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
 		       unsigned int key_len)
 {
-	return aes_set_key_common(tfm, crypto_tfm_ctx(tfm), in_key, key_len);
+	return aes_set_key_common(tfm, aes_ctx(crypto_tfm_ctx(tfm)),
+				  in_key, key_len);
 }
 
 static void aesni_encrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
@@ -285,7 +286,7 @@ static int aesni_skcipher_setkey(struct crypto_skcipher *tfm, const u8 *key,
 				 unsigned int len)
 {
 	return aes_set_key_common(crypto_skcipher_tfm(tfm),
-				  crypto_skcipher_ctx(tfm), key, len);
+				  aes_ctx(crypto_skcipher_ctx(tfm)), key, len);
 }
 
 static int ecb_encrypt(struct skcipher_request *req)
On 6/4/2023 7:46 PM, Eric Biggers wrote:
>
> IMO the issue is that your patch makes the code (including the XTS code)
> inconsistent because it makes it use a mix of both approaches: it aligns
> each field individually, *and* it aligns the ctx up-front.

But the code already did this:

  common_rfc4106_set_key()
  -> aesni_rfc4106_gcm_ctx_get(aead)	// aligns ->ctx
  -> aes_set_key_common()
     -> aes_ctx()			// here, this aligns again

And generic_gcmaes_set_key() does as well.

Hmm, maybe a separate patch can spell out this issue more clearly for the
record, no?

Thanks,
Chang
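For the record, the double alignment in that call chain is redundant rather
than harmful: rounding up to a 16-byte boundary is idempotent, so aes_ctx()
re-aligning a pointer that aesni_rfc4106_gcm_ctx_get() already aligned
returns the same address. A quick stand-alone check (ptr_align16() below
mirrors PTR_ALIGN(p, 16); it is not the kernel macro itself):

	#include <assert.h>
	#include <stdint.h>
	#include <stdio.h>

	/* Mirrors PTR_ALIGN(p, 16): round up to a 16-byte boundary. */
	static void *ptr_align16(void *p)
	{
		return (void *)(((uintptr_t)p + 15) & ~(uintptr_t)15);
	}

	int main(void)
	{
		uint8_t buf[64];
		void *once = ptr_align16(buf);		/* ctx_get() aligns */
		void *twice = ptr_align16(once);	/* aes_ctx() aligns again */

		assert(once == twice);	/* redundant, not a mis-alignment */
		printf("once=%p twice=%p\n", once, twice);
		return 0;
	}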