From patchwork Mon Jan 25 18:38:05 2021
X-Patchwork-Submitter: Eric Biggers
X-Patchwork-Id: 370857
From: Eric Biggers
To: linux-mmc@vger.kernel.org
Cc: linux-arm-msm@vger.kernel.org, devicetree@vger.kernel.org,
 linux-fscrypt@vger.kernel.org, Satya Tangirala, Ulf Hansson, Andy Gross,
 Bjorn Andersson, Adrian Hunter, Asutosh Das, Rob Herring, Neeraj Soni,
 Barani Muthukumaran, Peng Zhou, Stanley Chu, Konrad Dybcio
Subject: [PATCH v6 4/9] mmc: cqhci: add support for inline encryption
Date: Mon, 25 Jan 2021 10:38:05 -0800
Message-Id: <20210125183810.198008-5-ebiggers@kernel.org>
X-Mailer: git-send-email 2.30.0
In-Reply-To: <20210125183810.198008-1-ebiggers@kernel.org>
References: <20210125183810.198008-1-ebiggers@kernel.org>

From: Eric Biggers

Add support for eMMC inline encryption using the blk-crypto framework
(Documentation/block/inline-encryption.rst).

eMMC inline encryption support is specified by the upcoming JEDEC eMMC v5.2
specification.  It is only specified for the CQ interface, not the non-CQ
interface.  Although the eMMC v5.2 specification hasn't been officially
released yet, the crypto support was already agreed on several years ago, and
it was already implemented by at least two major hardware vendors.  Lots of
hardware in the field already supports and uses it, e.g. Snapdragon 630 to
give one example.

eMMC inline encryption support is very similar to the UFS inline encryption
support, which was standardized in the UFS v2.1 specification and was already
upstreamed.  The only major difference is that eMMC limits data unit numbers
to 32 bits, unlike UFS's 64 bits.
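[Illustrative aside, not part of the patch itself: with CQHCI, the crypto
parameters travel in the upper 64 bits of a 128-bit task descriptor, and the
data unit number (DUN) only occupies the low 32 bits of that word, which is
why the keyslot manager below advertises max_dun_bytes_supported = 4.  The
sketch mirrors cqhci_crypto_prep_task_desc() and the cqhci.h defines added by
this patch; the helper name is made up for illustration.]

/*
 * Sketch only: packing the crypto fields into bits 64-127 of a 128-bit
 * CQHCI task descriptor.  Note the DUN is limited to 32 bits.
 */
#include <linux/types.h>

#define CQHCI_CRYPTO_ENABLE_BIT		(1ULL << 47)	/* from cqhci.h below */
#define CQHCI_CRYPTO_KEYSLOT(x)		((u64)(x) << 32)	/* from cqhci.h below */

static u64 example_crypto_task_desc_word(unsigned int keyslot, u32 dun)
{
	/* bit 47: crypto enable; bits 32 and up: keyslot; bits 31:0: 32-bit DUN */
	return CQHCI_CRYPTO_ENABLE_BIT | CQHCI_CRYPTO_KEYSLOT(keyslot) | dun;
}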
Like we did with UFS, make the crypto support opt-in by individual drivers; don't enable it automatically whenever the hardware declares crypto support. This is necessary because in every case we've seen, some extra vendor-specific logic is needed to use the crypto support. Co-developed-by: Satya Tangirala Signed-off-by: Satya Tangirala Acked-by: Adrian Hunter Reviewed-by: Satya Tangirala Reviewed-and-tested-by: Peng Zhou Signed-off-by: Eric Biggers --- drivers/mmc/host/Makefile | 1 + drivers/mmc/host/cqhci-core.c | 41 +++++- drivers/mmc/host/cqhci-crypto.c | 238 ++++++++++++++++++++++++++++++++ drivers/mmc/host/cqhci-crypto.h | 47 +++++++ drivers/mmc/host/cqhci.h | 80 ++++++++++- 5 files changed, 404 insertions(+), 3 deletions(-) create mode 100644 drivers/mmc/host/cqhci-crypto.c create mode 100644 drivers/mmc/host/cqhci-crypto.h diff --git a/drivers/mmc/host/Makefile b/drivers/mmc/host/Makefile index 19687ad42c6b4..9ea86a18f0127 100644 --- a/drivers/mmc/host/Makefile +++ b/drivers/mmc/host/Makefile @@ -103,6 +103,7 @@ obj-$(CONFIG_MMC_SDHCI_OMAP) += sdhci-omap.o obj-$(CONFIG_MMC_SDHCI_SPRD) += sdhci-sprd.o obj-$(CONFIG_MMC_CQHCI) += cqhci.o cqhci-y += cqhci-core.o +cqhci-$(CONFIG_MMC_CRYPTO) += cqhci-crypto.o obj-$(CONFIG_MMC_HSQ) += mmc_hsq.o ifeq ($(CONFIG_CB710_DEBUG),y) diff --git a/drivers/mmc/host/cqhci-core.c b/drivers/mmc/host/cqhci-core.c index ad7c9acff1728..93b0432bb6011 100644 --- a/drivers/mmc/host/cqhci-core.c +++ b/drivers/mmc/host/cqhci-core.c @@ -18,6 +18,7 @@ #include #include "cqhci.h" +#include "cqhci-crypto.h" #define DCMD_SLOT 31 #define NUM_SLOTS 32 @@ -258,6 +259,9 @@ static void __cqhci_enable(struct cqhci_host *cq_host) if (cq_host->caps & CQHCI_TASK_DESC_SZ_128) cqcfg |= CQHCI_TASK_DESC_SZ; + if (mmc->caps2 & MMC_CAP2_CRYPTO) + cqcfg |= CQHCI_CRYPTO_GENERAL_ENABLE; + cqhci_writel(cq_host, cqcfg, CQHCI_CFG); cqhci_writel(cq_host, lower_32_bits(cq_host->desc_dma_base), @@ -430,7 +434,7 @@ static void cqhci_prep_task_desc(struct mmc_request *mrq, task_desc[0] = cpu_to_le64(desc0); if (cq_host->caps & CQHCI_TASK_DESC_SZ_128) { - u64 desc1 = 0; + u64 desc1 = cqhci_crypto_prep_task_desc(mrq); task_desc[1] = cpu_to_le64(desc1); @@ -681,6 +685,7 @@ static void cqhci_error_irq(struct mmc_host *mmc, u32 status, int cmd_error, struct cqhci_host *cq_host = mmc->cqe_private; struct cqhci_slot *slot; u32 terri; + u32 tdpe; int tag; spin_lock(&cq_host->lock); @@ -719,6 +724,30 @@ static void cqhci_error_irq(struct mmc_host *mmc, u32 status, int cmd_error, } } + /* + * Handle ICCE ("Invalid Crypto Configuration Error"). This should + * never happen, since the block layer ensures that all crypto-enabled + * I/O requests have a valid keyslot before they reach the driver. + * + * Note that GCE ("General Crypto Error") is different; it already got + * handled above by checking TERRI. + */ + if (status & CQHCI_IS_ICCE) { + tdpe = cqhci_readl(cq_host, CQHCI_TDPE); + WARN_ONCE(1, + "%s: cqhci: invalid crypto configuration error. 
IRQ status: 0x%08x TDPE: 0x%08x\n", + mmc_hostname(mmc), status, tdpe); + while (tdpe != 0) { + tag = __ffs(tdpe); + tdpe &= ~(1 << tag); + slot = &cq_host->slot[tag]; + if (!slot->mrq) + continue; + slot->flags = cqhci_error_flags(data_error, cmd_error); + cqhci_recovery_needed(mmc, slot->mrq, true); + } + } + if (!cq_host->recovery_halt) { /* * The only way to guarantee forward progress is to mark at @@ -784,7 +813,8 @@ irqreturn_t cqhci_irq(struct mmc_host *mmc, u32 intmask, int cmd_error, pr_debug("%s: cqhci: IRQ status: 0x%08x\n", mmc_hostname(mmc), status); - if ((status & CQHCI_IS_RED) || cmd_error || data_error) + if ((status & (CQHCI_IS_RED | CQHCI_IS_GCE | CQHCI_IS_ICCE)) || + cmd_error || data_error) cqhci_error_irq(mmc, status, cmd_error, data_error); if (status & CQHCI_IS_TCC) { @@ -1151,6 +1181,13 @@ int cqhci_init(struct cqhci_host *cq_host, struct mmc_host *mmc, goto out_err; } + err = cqhci_crypto_init(cq_host); + if (err) { + pr_err("%s: CQHCI crypto initialization failed\n", + mmc_hostname(mmc)); + goto out_err; + } + spin_lock_init(&cq_host->lock); init_completion(&cq_host->halt_comp); diff --git a/drivers/mmc/host/cqhci-crypto.c b/drivers/mmc/host/cqhci-crypto.c new file mode 100644 index 0000000000000..0e2a9dcac6308 --- /dev/null +++ b/drivers/mmc/host/cqhci-crypto.c @@ -0,0 +1,238 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * CQHCI crypto engine (inline encryption) support + * + * Copyright 2020 Google LLC + */ + +#include +#include +#include + +#include "cqhci-crypto.h" + +/* Map from blk-crypto modes to CQHCI crypto algorithm IDs and key sizes */ +static const struct cqhci_crypto_alg_entry { + enum cqhci_crypto_alg alg; + enum cqhci_crypto_key_size key_size; +} cqhci_crypto_algs[BLK_ENCRYPTION_MODE_MAX] = { + [BLK_ENCRYPTION_MODE_AES_256_XTS] = { + .alg = CQHCI_CRYPTO_ALG_AES_XTS, + .key_size = CQHCI_CRYPTO_KEY_SIZE_256, + }, +}; + +static inline struct cqhci_host * +cqhci_host_from_ksm(struct blk_keyslot_manager *ksm) +{ + struct mmc_host *mmc = container_of(ksm, struct mmc_host, ksm); + + return mmc->cqe_private; +} + +static void cqhci_crypto_program_key(struct cqhci_host *cq_host, + const union cqhci_crypto_cfg_entry *cfg, + int slot) +{ + u32 slot_offset = cq_host->crypto_cfg_register + slot * sizeof(*cfg); + int i; + + /* Clear CFGE */ + cqhci_writel(cq_host, 0, slot_offset + 16 * sizeof(cfg->reg_val[0])); + + /* Write the key */ + for (i = 0; i < 16; i++) { + cqhci_writel(cq_host, le32_to_cpu(cfg->reg_val[i]), + slot_offset + i * sizeof(cfg->reg_val[0])); + } + /* Write dword 17 */ + cqhci_writel(cq_host, le32_to_cpu(cfg->reg_val[17]), + slot_offset + 17 * sizeof(cfg->reg_val[0])); + /* Write dword 16, which includes the new value of CFGE */ + cqhci_writel(cq_host, le32_to_cpu(cfg->reg_val[16]), + slot_offset + 16 * sizeof(cfg->reg_val[0])); +} + +static int cqhci_crypto_keyslot_program(struct blk_keyslot_manager *ksm, + const struct blk_crypto_key *key, + unsigned int slot) + +{ + struct cqhci_host *cq_host = cqhci_host_from_ksm(ksm); + const union cqhci_crypto_cap_entry *ccap_array = + cq_host->crypto_cap_array; + const struct cqhci_crypto_alg_entry *alg = + &cqhci_crypto_algs[key->crypto_cfg.crypto_mode]; + u8 data_unit_mask = key->crypto_cfg.data_unit_size / 512; + int i; + int cap_idx = -1; + union cqhci_crypto_cfg_entry cfg = {}; + + BUILD_BUG_ON(CQHCI_CRYPTO_KEY_SIZE_INVALID != 0); + for (i = 0; i < cq_host->crypto_capabilities.num_crypto_cap; i++) { + if (ccap_array[i].algorithm_id == alg->alg && + ccap_array[i].key_size == alg->key_size && + 
(ccap_array[i].sdus_mask & data_unit_mask)) { + cap_idx = i; + break; + } + } + if (WARN_ON(cap_idx < 0)) + return -EOPNOTSUPP; + + cfg.data_unit_size = data_unit_mask; + cfg.crypto_cap_idx = cap_idx; + cfg.config_enable = CQHCI_CRYPTO_CONFIGURATION_ENABLE; + + if (ccap_array[cap_idx].algorithm_id == CQHCI_CRYPTO_ALG_AES_XTS) { + /* In XTS mode, the blk_crypto_key's size is already doubled */ + memcpy(cfg.crypto_key, key->raw, key->size/2); + memcpy(cfg.crypto_key + CQHCI_CRYPTO_KEY_MAX_SIZE/2, + key->raw + key->size/2, key->size/2); + } else { + memcpy(cfg.crypto_key, key->raw, key->size); + } + + cqhci_crypto_program_key(cq_host, &cfg, slot); + + memzero_explicit(&cfg, sizeof(cfg)); + return 0; +} + +static void cqhci_crypto_clear_keyslot(struct cqhci_host *cq_host, int slot) +{ + /* + * Clear the crypto cfg on the device. Clearing CFGE + * might not be sufficient, so just clear the entire cfg. + */ + union cqhci_crypto_cfg_entry cfg = {}; + + cqhci_crypto_program_key(cq_host, &cfg, slot); +} + +static int cqhci_crypto_keyslot_evict(struct blk_keyslot_manager *ksm, + const struct blk_crypto_key *key, + unsigned int slot) +{ + struct cqhci_host *cq_host = cqhci_host_from_ksm(ksm); + + cqhci_crypto_clear_keyslot(cq_host, slot); + return 0; +} + +/* + * The keyslot management operations for CQHCI crypto. + * + * Note that the block layer ensures that these are never called while the host + * controller is runtime-suspended. However, the CQE won't necessarily be + * "enabled" when these are called, i.e. CQHCI_ENABLE might not be set in the + * CQHCI_CFG register. But the hardware allows that. + */ +static const struct blk_ksm_ll_ops cqhci_ksm_ops = { + .keyslot_program = cqhci_crypto_keyslot_program, + .keyslot_evict = cqhci_crypto_keyslot_evict, +}; + +static enum blk_crypto_mode_num +cqhci_find_blk_crypto_mode(union cqhci_crypto_cap_entry cap) +{ + int i; + + for (i = 0; i < ARRAY_SIZE(cqhci_crypto_algs); i++) { + BUILD_BUG_ON(CQHCI_CRYPTO_KEY_SIZE_INVALID != 0); + if (cqhci_crypto_algs[i].alg == cap.algorithm_id && + cqhci_crypto_algs[i].key_size == cap.key_size) + return i; + } + return BLK_ENCRYPTION_MODE_INVALID; +} + +/** + * cqhci_crypto_init - initialize CQHCI crypto support + * @cq_host: a cqhci host + * + * If the driver previously set MMC_CAP2_CRYPTO and the CQE declares + * CQHCI_CAP_CS, initialize the crypto support. This involves reading the + * crypto capability registers, initializing the keyslot manager, clearing all + * keyslots, and enabling 128-bit task descriptors. + * + * Return: 0 if crypto was initialized or isn't supported; whether + * MMC_CAP2_CRYPTO remains set indicates which one of those cases it is. + * Also can return a negative errno value on unexpected error. 
+ */ +int cqhci_crypto_init(struct cqhci_host *cq_host) +{ + struct mmc_host *mmc = cq_host->mmc; + struct device *dev = mmc_dev(mmc); + struct blk_keyslot_manager *ksm = &mmc->ksm; + unsigned int num_keyslots; + unsigned int cap_idx; + enum blk_crypto_mode_num blk_mode_num; + unsigned int slot; + int err = 0; + + if (!(mmc->caps2 & MMC_CAP2_CRYPTO) || + !(cqhci_readl(cq_host, CQHCI_CAP) & CQHCI_CAP_CS)) + goto out; + + cq_host->crypto_capabilities.reg_val = + cpu_to_le32(cqhci_readl(cq_host, CQHCI_CCAP)); + + cq_host->crypto_cfg_register = + (u32)cq_host->crypto_capabilities.config_array_ptr * 0x100; + + cq_host->crypto_cap_array = + devm_kcalloc(dev, cq_host->crypto_capabilities.num_crypto_cap, + sizeof(cq_host->crypto_cap_array[0]), GFP_KERNEL); + if (!cq_host->crypto_cap_array) { + err = -ENOMEM; + goto out; + } + + /* + * CCAP.CFGC is off by one, so the actual number of crypto + * configurations (a.k.a. keyslots) is CCAP.CFGC + 1. + */ + num_keyslots = cq_host->crypto_capabilities.config_count + 1; + + err = devm_blk_ksm_init(dev, ksm, num_keyslots); + if (err) + goto out; + + ksm->ksm_ll_ops = cqhci_ksm_ops; + ksm->dev = dev; + + /* Unfortunately, CQHCI crypto only supports 32 DUN bits. */ + ksm->max_dun_bytes_supported = 4; + + /* + * Cache all the crypto capabilities and advertise the supported crypto + * modes and data unit sizes to the block layer. + */ + for (cap_idx = 0; cap_idx < cq_host->crypto_capabilities.num_crypto_cap; + cap_idx++) { + cq_host->crypto_cap_array[cap_idx].reg_val = + cpu_to_le32(cqhci_readl(cq_host, + CQHCI_CRYPTOCAP + + cap_idx * sizeof(__le32))); + blk_mode_num = cqhci_find_blk_crypto_mode( + cq_host->crypto_cap_array[cap_idx]); + if (blk_mode_num == BLK_ENCRYPTION_MODE_INVALID) + continue; + ksm->crypto_modes_supported[blk_mode_num] |= + cq_host->crypto_cap_array[cap_idx].sdus_mask * 512; + } + + /* Clear all the keyslots so that we start in a known state. */ + for (slot = 0; slot < num_keyslots; slot++) + cqhci_crypto_clear_keyslot(cq_host, slot); + + /* CQHCI crypto requires the use of 128-bit task descriptors. */ + cq_host->caps |= CQHCI_TASK_DESC_SZ_128; + + return 0; + +out: + mmc->caps2 &= ~MMC_CAP2_CRYPTO; + return err; +} diff --git a/drivers/mmc/host/cqhci-crypto.h b/drivers/mmc/host/cqhci-crypto.h new file mode 100644 index 0000000000000..60b58ee0e6256 --- /dev/null +++ b/drivers/mmc/host/cqhci-crypto.h @@ -0,0 +1,47 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * CQHCI crypto engine (inline encryption) support + * + * Copyright 2020 Google LLC + */ + +#ifndef LINUX_MMC_CQHCI_CRYPTO_H +#define LINUX_MMC_CQHCI_CRYPTO_H + +#include + +#include "cqhci.h" + +#ifdef CONFIG_MMC_CRYPTO + +int cqhci_crypto_init(struct cqhci_host *host); + +/* + * Returns the crypto bits that should be set in bits 64-127 of the + * task descriptor. 
+ */ +static inline u64 cqhci_crypto_prep_task_desc(struct mmc_request *mrq) +{ + if (!mrq->crypto_enabled) + return 0; + + return CQHCI_CRYPTO_ENABLE_BIT | + CQHCI_CRYPTO_KEYSLOT(mrq->crypto_key_slot) | + mrq->data_unit_num; +} + +#else /* CONFIG_MMC_CRYPTO */ + +static inline int cqhci_crypto_init(struct cqhci_host *host) +{ + return 0; +} + +static inline u64 cqhci_crypto_prep_task_desc(struct mmc_request *mrq) +{ + return 0; +} + +#endif /* !CONFIG_MMC_CRYPTO */ + +#endif /* LINUX_MMC_CQHCI_CRYPTO_H */ diff --git a/drivers/mmc/host/cqhci.h b/drivers/mmc/host/cqhci.h index 89bf6adbce8ca..8e9e8f5db5bcc 100644 --- a/drivers/mmc/host/cqhci.h +++ b/drivers/mmc/host/cqhci.h @@ -22,10 +22,13 @@ /* capabilities */ #define CQHCI_CAP 0x04 +#define CQHCI_CAP_CS 0x10000000 /* Crypto Support */ + /* configuration */ #define CQHCI_CFG 0x08 #define CQHCI_DCMD 0x00001000 #define CQHCI_TASK_DESC_SZ 0x00000100 +#define CQHCI_CRYPTO_GENERAL_ENABLE 0x00000002 #define CQHCI_ENABLE 0x00000001 /* control */ @@ -39,8 +42,11 @@ #define CQHCI_IS_TCC BIT(1) #define CQHCI_IS_RED BIT(2) #define CQHCI_IS_TCL BIT(3) +#define CQHCI_IS_GCE BIT(4) /* General Crypto Error */ +#define CQHCI_IS_ICCE BIT(5) /* Invalid Crypto Config Error */ -#define CQHCI_IS_MASK (CQHCI_IS_TCC | CQHCI_IS_RED) +#define CQHCI_IS_MASK (CQHCI_IS_TCC | CQHCI_IS_RED | \ + CQHCI_IS_GCE | CQHCI_IS_ICCE) /* interrupt status enable */ #define CQHCI_ISTE 0x14 @@ -78,6 +84,9 @@ /* task clear */ #define CQHCI_TCLR 0x38 +/* task descriptor processing error */ +#define CQHCI_TDPE 0x3c + /* send status config 1 */ #define CQHCI_SSC1 0x40 #define CQHCI_SSC1_CBC_MASK GENMASK(19, 16) @@ -107,6 +116,10 @@ /* command response argument */ #define CQHCI_CRA 0x5C +/* crypto capabilities */ +#define CQHCI_CCAP 0x100 +#define CQHCI_CRYPTOCAP 0x104 + #define CQHCI_INT_ALL 0xF #define CQHCI_IC_DEFAULT_ICCTH 31 #define CQHCI_IC_DEFAULT_ICTOVAL 1 @@ -133,11 +146,70 @@ #define CQHCI_CMD_TIMING(x) (((x) & 1) << 22) #define CQHCI_RESP_TYPE(x) (((x) & 0x3) << 23) +/* crypto task descriptor fields (for bits 64-127 of task descriptor) */ +#define CQHCI_CRYPTO_ENABLE_BIT (1ULL << 47) +#define CQHCI_CRYPTO_KEYSLOT(x) ((u64)(x) << 32) + /* transfer descriptor fields */ #define CQHCI_DAT_LENGTH(x) (((x) & 0xFFFF) << 16) #define CQHCI_DAT_ADDR_LO(x) (((x) & 0xFFFFFFFF) << 32) #define CQHCI_DAT_ADDR_HI(x) (((x) & 0xFFFFFFFF) << 0) +/* CCAP - Crypto Capability 100h */ +union cqhci_crypto_capabilities { + __le32 reg_val; + struct { + u8 num_crypto_cap; + u8 config_count; + u8 reserved; + u8 config_array_ptr; + }; +}; + +enum cqhci_crypto_key_size { + CQHCI_CRYPTO_KEY_SIZE_INVALID = 0, + CQHCI_CRYPTO_KEY_SIZE_128 = 1, + CQHCI_CRYPTO_KEY_SIZE_192 = 2, + CQHCI_CRYPTO_KEY_SIZE_256 = 3, + CQHCI_CRYPTO_KEY_SIZE_512 = 4, +}; + +enum cqhci_crypto_alg { + CQHCI_CRYPTO_ALG_AES_XTS = 0, + CQHCI_CRYPTO_ALG_BITLOCKER_AES_CBC = 1, + CQHCI_CRYPTO_ALG_AES_ECB = 2, + CQHCI_CRYPTO_ALG_ESSIV_AES_CBC = 3, +}; + +/* x-CRYPTOCAP - Crypto Capability X */ +union cqhci_crypto_cap_entry { + __le32 reg_val; + struct { + u8 algorithm_id; + u8 sdus_mask; /* Supported data unit size mask */ + u8 key_size; + u8 reserved; + }; +}; + +#define CQHCI_CRYPTO_CONFIGURATION_ENABLE (1 << 7) +#define CQHCI_CRYPTO_KEY_MAX_SIZE 64 +/* x-CRYPTOCFG - Crypto Configuration X */ +union cqhci_crypto_cfg_entry { + __le32 reg_val[32]; + struct { + u8 crypto_key[CQHCI_CRYPTO_KEY_MAX_SIZE]; + u8 data_unit_size; + u8 crypto_cap_idx; + u8 reserved_1; + u8 config_enable; + u8 reserved_multi_host; + u8 reserved_2; + u8 vsb[2]; + u8 
reserved_3[56];
+	};
+};
+
 struct cqhci_host_ops;
 struct mmc_host;
 struct mmc_request;
@@ -196,6 +268,12 @@ struct cqhci_host {
 	struct completion halt_comp;
 	wait_queue_head_t wait_queue;
 	struct cqhci_slot *slot;
+
+#ifdef CONFIG_MMC_CRYPTO
+	union cqhci_crypto_capabilities crypto_capabilities;
+	union cqhci_crypto_cap_entry *crypto_cap_array;
+	u32 crypto_cfg_register;
+#endif
 };
 
 struct cqhci_host_ops {

From patchwork Mon Jan 25 18:38:06 2021
X-Patchwork-Submitter: Eric Biggers
X-Patchwork-Id: 370764
From: Eric Biggers
To: linux-mmc@vger.kernel.org
Cc: linux-arm-msm@vger.kernel.org, devicetree@vger.kernel.org,
 linux-fscrypt@vger.kernel.org, Satya Tangirala, Ulf Hansson, Andy Gross,
 Bjorn Andersson, Adrian Hunter, Asutosh Das, Rob Herring, Neeraj Soni,
 Barani Muthukumaran, Peng Zhou, Stanley Chu, Konrad Dybcio
Subject: [PATCH v6 5/9] mmc: cqhci: add cqhci_host_ops::program_key
Date: Mon, 25 Jan 2021 10:38:06 -0800
Message-Id: <20210125183810.198008-6-ebiggers@kernel.org>
X-Mailer: git-send-email 2.30.0
In-Reply-To: <20210125183810.198008-1-ebiggers@kernel.org>
References: <20210125183810.198008-1-ebiggers@kernel.org>

From: Eric Biggers

On Snapdragon SoCs, the Linux kernel isn't permitted to directly access the
standard CQHCI crypto configuration registers.  Instead, programming and
evicting keys must be done through vendor-specific SMC calls.

To support this hardware, add a ->program_key() method to
'struct cqhci_host_ops'.  This allows overriding the standard CQHCI crypto
key programming / eviction procedure.

This is inspired by the corresponding UFS crypto support, which uses these
same SMC calls.
See commit 1bc726e26ef3 ("scsi: ufs: Add program_key() variant op"). Acked-by: Adrian Hunter Reviewed-by: Satya Tangirala Reviewed-and-tested-by: Peng Zhou Signed-off-by: Eric Biggers --- drivers/mmc/host/cqhci-crypto.c | 22 +++++++++++++--------- drivers/mmc/host/cqhci.h | 4 ++++ 2 files changed, 17 insertions(+), 9 deletions(-) diff --git a/drivers/mmc/host/cqhci-crypto.c b/drivers/mmc/host/cqhci-crypto.c index 0e2a9dcac6308..6419cfbb4ab78 100644 --- a/drivers/mmc/host/cqhci-crypto.c +++ b/drivers/mmc/host/cqhci-crypto.c @@ -30,13 +30,16 @@ cqhci_host_from_ksm(struct blk_keyslot_manager *ksm) return mmc->cqe_private; } -static void cqhci_crypto_program_key(struct cqhci_host *cq_host, - const union cqhci_crypto_cfg_entry *cfg, - int slot) +static int cqhci_crypto_program_key(struct cqhci_host *cq_host, + const union cqhci_crypto_cfg_entry *cfg, + int slot) { u32 slot_offset = cq_host->crypto_cfg_register + slot * sizeof(*cfg); int i; + if (cq_host->ops->program_key) + return cq_host->ops->program_key(cq_host, cfg, slot); + /* Clear CFGE */ cqhci_writel(cq_host, 0, slot_offset + 16 * sizeof(cfg->reg_val[0])); @@ -51,6 +54,7 @@ static void cqhci_crypto_program_key(struct cqhci_host *cq_host, /* Write dword 16, which includes the new value of CFGE */ cqhci_writel(cq_host, le32_to_cpu(cfg->reg_val[16]), slot_offset + 16 * sizeof(cfg->reg_val[0])); + return 0; } static int cqhci_crypto_keyslot_program(struct blk_keyslot_manager *ksm, @@ -67,6 +71,7 @@ static int cqhci_crypto_keyslot_program(struct blk_keyslot_manager *ksm, int i; int cap_idx = -1; union cqhci_crypto_cfg_entry cfg = {}; + int err; BUILD_BUG_ON(CQHCI_CRYPTO_KEY_SIZE_INVALID != 0); for (i = 0; i < cq_host->crypto_capabilities.num_crypto_cap; i++) { @@ -93,13 +98,13 @@ static int cqhci_crypto_keyslot_program(struct blk_keyslot_manager *ksm, memcpy(cfg.crypto_key, key->raw, key->size); } - cqhci_crypto_program_key(cq_host, &cfg, slot); + err = cqhci_crypto_program_key(cq_host, &cfg, slot); memzero_explicit(&cfg, sizeof(cfg)); - return 0; + return err; } -static void cqhci_crypto_clear_keyslot(struct cqhci_host *cq_host, int slot) +static int cqhci_crypto_clear_keyslot(struct cqhci_host *cq_host, int slot) { /* * Clear the crypto cfg on the device. 
Clearing CFGE
@@ -107,7 +112,7 @@ static void cqhci_crypto_clear_keyslot(struct cqhci_host *cq_host, int slot)
 	 */
 	union cqhci_crypto_cfg_entry cfg = {};
 
-	cqhci_crypto_program_key(cq_host, &cfg, slot);
+	return cqhci_crypto_program_key(cq_host, &cfg, slot);
 }
 
 static int cqhci_crypto_keyslot_evict(struct blk_keyslot_manager *ksm,
@@ -116,8 +121,7 @@
 {
 	struct cqhci_host *cq_host = cqhci_host_from_ksm(ksm);
 
-	cqhci_crypto_clear_keyslot(cq_host, slot);
-	return 0;
+	return cqhci_crypto_clear_keyslot(cq_host, slot);
 }
 
 /*
diff --git a/drivers/mmc/host/cqhci.h b/drivers/mmc/host/cqhci.h
index 8e9e8f5db5bcc..ba9387ed90eb6 100644
--- a/drivers/mmc/host/cqhci.h
+++ b/drivers/mmc/host/cqhci.h
@@ -286,6 +286,10 @@ struct cqhci_host_ops {
 		     u64 *data);
 	void (*pre_enable)(struct mmc_host *mmc);
 	void (*post_disable)(struct mmc_host *mmc);
+#ifdef CONFIG_MMC_CRYPTO
+	int (*program_key)(struct cqhci_host *cq_host,
+			   const union cqhci_crypto_cfg_entry *cfg, int slot);
+#endif
 };
 
 static inline void cqhci_writel(struct cqhci_host *host, u32 val, int reg)

From patchwork Mon Jan 25 18:38:09 2021
X-Patchwork-Submitter: Eric Biggers
X-Patchwork-Id: 370306
From: Eric Biggers
To: linux-mmc@vger.kernel.org
Cc: linux-arm-msm@vger.kernel.org, devicetree@vger.kernel.org,
 linux-fscrypt@vger.kernel.org, Satya Tangirala, Ulf Hansson, Andy Gross,
 Bjorn Andersson, Adrian Hunter, Asutosh Das, Rob Herring, Neeraj Soni,
 Barani Muthukumaran, Peng Zhou, Stanley Chu, Konrad Dybcio
Subject: [PATCH v6 8/9] mmc: sdhci-msm: add Inline Crypto Engine support
Date: Mon, 25 Jan 2021 10:38:09 -0800
Message-Id: <20210125183810.198008-9-ebiggers@kernel.org>
X-Mailer: git-send-email 2.30.0
In-Reply-To: <20210125183810.198008-1-ebiggers@kernel.org>
References: <20210125183810.198008-1-ebiggers@kernel.org>

From: Eric Biggers

Add support for Qualcomm Inline Crypto Engine (ICE) to sdhci-msm.

The standard-compliant parts, such as querying the crypto capabilities and
enabling crypto for individual MMC requests, are already handled by
cqhci-crypto.c, which itself is wired into the blk-crypto framework.  However,
ICE requires vendor-specific init, enable, and resume logic, and it requires
that keys be programmed and evicted by vendor-specific SMC calls.  Make the
sdhci-msm driver handle these details.

This is heavily inspired by the similar changes made for UFS, since the UFS
and eMMC ICE instances are very similar.  See commit df4ec2fa7a4d
("scsi: ufs-qcom: Add Inline Crypto Engine support").

I tested this on a Sony Xperia 10, which uses the Snapdragon 630 SoC, which
has basic upstream support.  Mainly, I used android-xfstests
(https://github.com/tytso/xfstests-bld/blob/master/Documentation/android-xfstests.md)
to run the ext4 and f2fs encryption tests in a Debian chroot:

	android-xfstests -c ext4,f2fs -g encrypt -m inlinecrypt

These tests included tests which verify that the on-disk ciphertext is
identical to that produced by a software implementation.  I also verified
that ICE was actually being used.

Acked-by: Adrian Hunter
Reviewed-by: Satya Tangirala
Signed-off-by: Eric Biggers
---
 drivers/mmc/host/Kconfig     |   1 +
 drivers/mmc/host/sdhci-msm.c | 276 ++++++++++++++++++++++++++++++++++-
 2 files changed, 273 insertions(+), 4 deletions(-)

diff --git a/drivers/mmc/host/Kconfig b/drivers/mmc/host/Kconfig
index bbf6989e36386..a29411ca626f5 100644
--- a/drivers/mmc/host/Kconfig
+++ b/drivers/mmc/host/Kconfig
@@ -546,6 +546,7 @@ config MMC_SDHCI_MSM
 	depends on MMC_SDHCI_PLTFM
 	select MMC_SDHCI_IO_ACCESSORS
 	select MMC_CQHCI
+	select QCOM_SCM if MMC_CRYPTO && ARCH_QCOM
 	help
 	  This selects the Secure Digital Host Controller Interface (SDHCI)
 	  support present in Qualcomm SOCs.
The controller supports diff --git a/drivers/mmc/host/sdhci-msm.c b/drivers/mmc/host/sdhci-msm.c index 97902616a695e..5e1da4df096f6 100644 --- a/drivers/mmc/host/sdhci-msm.c +++ b/drivers/mmc/host/sdhci-msm.c @@ -13,6 +13,7 @@ #include #include #include +#include #include #include #include @@ -255,10 +256,12 @@ struct sdhci_msm_variant_info { struct sdhci_msm_host { struct platform_device *pdev; void __iomem *core_mem; /* MSM SDCC mapped address */ + void __iomem *ice_mem; /* MSM ICE mapped address (if available) */ int pwr_irq; /* power irq */ struct clk *bus_clk; /* SDHC bus voter clock */ struct clk *xo_clk; /* TCXO clk needed for FLL feature of cm_dll*/ - struct clk_bulk_data bulk_clks[4]; /* core, iface, cal, sleep clocks */ + /* core, iface, cal, sleep, and ice clocks */ + struct clk_bulk_data bulk_clks[5]; unsigned long clk_rate; struct mmc_host *mmc; struct opp_table *opp_table; @@ -1792,6 +1795,246 @@ static void sdhci_msm_set_clock(struct sdhci_host *host, unsigned int clock) __sdhci_msm_set_clock(host, clock); } +/*****************************************************************************\ + * * + * Inline Crypto Engine (ICE) support * + * * +\*****************************************************************************/ + +#ifdef CONFIG_MMC_CRYPTO + +#define AES_256_XTS_KEY_SIZE 64 + +/* QCOM ICE registers */ + +#define QCOM_ICE_REG_VERSION 0x0008 + +#define QCOM_ICE_REG_FUSE_SETTING 0x0010 +#define QCOM_ICE_FUSE_SETTING_MASK 0x1 +#define QCOM_ICE_FORCE_HW_KEY0_SETTING_MASK 0x2 +#define QCOM_ICE_FORCE_HW_KEY1_SETTING_MASK 0x4 + +#define QCOM_ICE_REG_BIST_STATUS 0x0070 +#define QCOM_ICE_BIST_STATUS_MASK 0xF0000000 + +#define QCOM_ICE_REG_ADVANCED_CONTROL 0x1000 + +#define sdhci_msm_ice_writel(host, val, reg) \ + writel((val), (host)->ice_mem + (reg)) +#define sdhci_msm_ice_readl(host, reg) \ + readl((host)->ice_mem + (reg)) + +static bool sdhci_msm_ice_supported(struct sdhci_msm_host *msm_host) +{ + struct device *dev = mmc_dev(msm_host->mmc); + u32 regval = sdhci_msm_ice_readl(msm_host, QCOM_ICE_REG_VERSION); + int major = regval >> 24; + int minor = (regval >> 16) & 0xFF; + int step = regval & 0xFFFF; + + /* For now this driver only supports ICE version 3. */ + if (major != 3) { + dev_warn(dev, "Unsupported ICE version: v%d.%d.%d\n", + major, minor, step); + return false; + } + + dev_info(dev, "Found QC Inline Crypto Engine (ICE) v%d.%d.%d\n", + major, minor, step); + + /* If fuses are blown, ICE might not work in the standard way. 
*/ + regval = sdhci_msm_ice_readl(msm_host, QCOM_ICE_REG_FUSE_SETTING); + if (regval & (QCOM_ICE_FUSE_SETTING_MASK | + QCOM_ICE_FORCE_HW_KEY0_SETTING_MASK | + QCOM_ICE_FORCE_HW_KEY1_SETTING_MASK)) { + dev_warn(dev, "Fuses are blown; ICE is unusable!\n"); + return false; + } + return true; +} + +static inline struct clk *sdhci_msm_ice_get_clk(struct device *dev) +{ + return devm_clk_get(dev, "ice"); +} + +static int sdhci_msm_ice_init(struct sdhci_msm_host *msm_host, + struct cqhci_host *cq_host) +{ + struct mmc_host *mmc = msm_host->mmc; + struct device *dev = mmc_dev(mmc); + struct resource *res; + int err; + + if (!(cqhci_readl(cq_host, CQHCI_CAP) & CQHCI_CAP_CS)) + return 0; + + res = platform_get_resource_byname(msm_host->pdev, IORESOURCE_MEM, + "ice"); + if (!res) { + dev_warn(dev, "ICE registers not found\n"); + goto disable; + } + + if (!qcom_scm_ice_available()) { + dev_warn(dev, "ICE SCM interface not found\n"); + goto disable; + } + + msm_host->ice_mem = devm_ioremap_resource(dev, res); + if (IS_ERR(msm_host->ice_mem)) { + err = PTR_ERR(msm_host->ice_mem); + dev_err(dev, "Failed to map ICE registers; err=%d\n", err); + return err; + } + + if (!sdhci_msm_ice_supported(msm_host)) + goto disable; + + mmc->caps2 |= MMC_CAP2_CRYPTO; + return 0; + +disable: + dev_warn(dev, "Disabling inline encryption support\n"); + return 0; +} + +static void sdhci_msm_ice_low_power_mode_enable(struct sdhci_msm_host *msm_host) +{ + u32 regval; + + regval = sdhci_msm_ice_readl(msm_host, QCOM_ICE_REG_ADVANCED_CONTROL); + /* + * Enable low power mode sequence + * [0]-0, [1]-0, [2]-0, [3]-E, [4]-0, [5]-0, [6]-0, [7]-0 + */ + regval |= 0x7000; + sdhci_msm_ice_writel(msm_host, regval, QCOM_ICE_REG_ADVANCED_CONTROL); +} + +static void sdhci_msm_ice_optimization_enable(struct sdhci_msm_host *msm_host) +{ + u32 regval; + + /* ICE Optimizations Enable Sequence */ + regval = sdhci_msm_ice_readl(msm_host, QCOM_ICE_REG_ADVANCED_CONTROL); + regval |= 0xD807100; + /* ICE HPG requires delay before writing */ + udelay(5); + sdhci_msm_ice_writel(msm_host, regval, QCOM_ICE_REG_ADVANCED_CONTROL); + udelay(5); +} + +/* + * Wait until the ICE BIST (built-in self-test) has completed. + * + * This may be necessary before ICE can be used. + * + * Note that we don't really care whether the BIST passed or failed; we really + * just want to make sure that it isn't still running. This is because (a) the + * BIST is a FIPS compliance thing that never fails in practice, (b) ICE is + * documented to reject crypto requests if the BIST fails, so we needn't do it + * in software too, and (c) properly testing storage encryption requires testing + * the full storage stack anyway, and not relying on hardware-level self-tests. 
+ */ +static int sdhci_msm_ice_wait_bist_status(struct sdhci_msm_host *msm_host) +{ + u32 regval; + int err; + + err = readl_poll_timeout(msm_host->ice_mem + QCOM_ICE_REG_BIST_STATUS, + regval, !(regval & QCOM_ICE_BIST_STATUS_MASK), + 50, 5000); + if (err) + dev_err(mmc_dev(msm_host->mmc), + "Timed out waiting for ICE self-test to complete\n"); + return err; +} + +static void sdhci_msm_ice_enable(struct sdhci_msm_host *msm_host) +{ + if (!(msm_host->mmc->caps2 & MMC_CAP2_CRYPTO)) + return; + sdhci_msm_ice_low_power_mode_enable(msm_host); + sdhci_msm_ice_optimization_enable(msm_host); + sdhci_msm_ice_wait_bist_status(msm_host); +} + +static int __maybe_unused sdhci_msm_ice_resume(struct sdhci_msm_host *msm_host) +{ + if (!(msm_host->mmc->caps2 & MMC_CAP2_CRYPTO)) + return 0; + return sdhci_msm_ice_wait_bist_status(msm_host); +} + +/* + * Program a key into a QC ICE keyslot, or evict a keyslot. QC ICE requires + * vendor-specific SCM calls for this; it doesn't support the standard way. + */ +static int sdhci_msm_program_key(struct cqhci_host *cq_host, + const union cqhci_crypto_cfg_entry *cfg, + int slot) +{ + struct device *dev = mmc_dev(cq_host->mmc); + union cqhci_crypto_cap_entry cap; + union { + u8 bytes[AES_256_XTS_KEY_SIZE]; + u32 words[AES_256_XTS_KEY_SIZE / sizeof(u32)]; + } key; + int i; + int err; + + if (!(cfg->config_enable & CQHCI_CRYPTO_CONFIGURATION_ENABLE)) + return qcom_scm_ice_invalidate_key(slot); + + /* Only AES-256-XTS has been tested so far. */ + cap = cq_host->crypto_cap_array[cfg->crypto_cap_idx]; + if (cap.algorithm_id != CQHCI_CRYPTO_ALG_AES_XTS || + cap.key_size != CQHCI_CRYPTO_KEY_SIZE_256) { + dev_err_ratelimited(dev, + "Unhandled crypto capability; algorithm_id=%d, key_size=%d\n", + cap.algorithm_id, cap.key_size); + return -EINVAL; + } + + memcpy(key.bytes, cfg->crypto_key, AES_256_XTS_KEY_SIZE); + + /* + * The SCM call byte-swaps the 32-bit words of the key. So we have to + * do the same, in order for the final key be correct. 
+ */ + for (i = 0; i < ARRAY_SIZE(key.words); i++) + __cpu_to_be32s(&key.words[i]); + + err = qcom_scm_ice_set_key(slot, key.bytes, AES_256_XTS_KEY_SIZE, + QCOM_SCM_ICE_CIPHER_AES_256_XTS, + cfg->data_unit_size); + memzero_explicit(&key, sizeof(key)); + return err; +} +#else /* CONFIG_MMC_CRYPTO */ +static inline struct clk *sdhci_msm_ice_get_clk(struct device *dev) +{ + return NULL; +} + +static inline int sdhci_msm_ice_init(struct sdhci_msm_host *msm_host, + struct cqhci_host *cq_host) +{ + return 0; +} + +static inline void sdhci_msm_ice_enable(struct sdhci_msm_host *msm_host) +{ +} + +static inline int __maybe_unused +sdhci_msm_ice_resume(struct sdhci_msm_host *msm_host) +{ + return 0; +} +#endif /* !CONFIG_MMC_CRYPTO */ + /*****************************************************************************\ * * * MSM Command Queue Engine (CQE) * @@ -1810,6 +2053,16 @@ static u32 sdhci_msm_cqe_irq(struct sdhci_host *host, u32 intmask) return 0; } +static void sdhci_msm_cqe_enable(struct mmc_host *mmc) +{ + struct sdhci_host *host = mmc_priv(mmc); + struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host); + struct sdhci_msm_host *msm_host = sdhci_pltfm_priv(pltfm_host); + + sdhci_cqe_enable(mmc); + sdhci_msm_ice_enable(msm_host); +} + static void sdhci_msm_cqe_disable(struct mmc_host *mmc, bool recovery) { struct sdhci_host *host = mmc_priv(mmc); @@ -1842,8 +2095,11 @@ static void sdhci_msm_cqe_disable(struct mmc_host *mmc, bool recovery) } static const struct cqhci_host_ops sdhci_msm_cqhci_ops = { - .enable = sdhci_cqe_enable, + .enable = sdhci_msm_cqe_enable, .disable = sdhci_msm_cqe_disable, +#ifdef CONFIG_MMC_CRYPTO + .program_key = sdhci_msm_program_key, +#endif }; static int sdhci_msm_cqe_add_host(struct sdhci_host *host, @@ -1879,6 +2135,10 @@ static int sdhci_msm_cqe_add_host(struct sdhci_host *host, dma64 = host->flags & SDHCI_USE_64_BIT_DMA; + ret = sdhci_msm_ice_init(msm_host, cq_host); + if (ret) + goto cleanup; + ret = cqhci_init(cq_host, host->mmc, dma64); if (ret) { dev_err(&pdev->dev, "%s: CQE init: failed (%d)\n", @@ -2319,6 +2579,11 @@ static int sdhci_msm_probe(struct platform_device *pdev) clk = NULL; msm_host->bulk_clks[3].clk = clk; + clk = sdhci_msm_ice_get_clk(&pdev->dev); + if (IS_ERR(clk)) + clk = NULL; + msm_host->bulk_clks[4].clk = clk; + ret = clk_bulk_prepare_enable(ARRAY_SIZE(msm_host->bulk_clks), msm_host->bulk_clks); if (ret) @@ -2532,12 +2797,15 @@ static __maybe_unused int sdhci_msm_runtime_resume(struct device *dev) * Whenever core-clock is gated dynamically, it's needed to * restore the SDR DLL settings when the clock is ungated. 
 	 */
-	if (msm_host->restore_dll_config && msm_host->clk_rate)
+	if (msm_host->restore_dll_config && msm_host->clk_rate) {
 		ret = sdhci_msm_restore_sdr_dll_config(host);
+		if (ret)
+			return ret;
+	}
 
 	dev_pm_opp_set_rate(dev, msm_host->clk_rate);
 
-	return ret;
+	return sdhci_msm_ice_resume(msm_host);
 }
 
 static const struct dev_pm_ops sdhci_msm_pm_ops = {

From patchwork Mon Jan 25 18:38:10 2021
X-Patchwork-Submitter: Eric Biggers
X-Patchwork-Id: 370856
From: Eric Biggers
To: linux-mmc@vger.kernel.org
Cc: linux-arm-msm@vger.kernel.org, devicetree@vger.kernel.org,
 linux-fscrypt@vger.kernel.org, Satya Tangirala, Ulf Hansson, Andy Gross,
 Bjorn Andersson, Adrian Hunter, Asutosh Das, Rob Herring, Neeraj Soni,
 Barani Muthukumaran, Peng Zhou, Stanley Chu, Konrad Dybcio
Subject: [PATCH v6 9/9] arm64: dts: qcom: sdm630: add ICE registers and clocks
Date: Mon, 25 Jan 2021 10:38:10 -0800
Message-Id: <20210125183810.198008-10-ebiggers@kernel.org>
X-Mailer: git-send-email 2.30.0
In-Reply-To: <20210125183810.198008-1-ebiggers@kernel.org>
References: <20210125183810.198008-1-ebiggers@kernel.org>

From: Eric Biggers

Add the registers and clock for the Inline Crypto Engine (ICE) to the device
tree node for the sdhci-msm host controller on sdm630.  This allows sdhci-msm
to support inline encryption on sdm630.
Signed-off-by: Eric Biggers
---
 arch/arm64/boot/dts/qcom/sdm630.dtsi | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/boot/dts/qcom/sdm630.dtsi b/arch/arm64/boot/dts/qcom/sdm630.dtsi
index 37d5cc32f6b62..afb3d20c31fa0 100644
--- a/arch/arm64/boot/dts/qcom/sdm630.dtsi
+++ b/arch/arm64/boot/dts/qcom/sdm630.dtsi
@@ -808,8 +808,9 @@ spmi_bus: spmi@800f000 {
 		sdhc_1: sdhci@c0c4000 {
 			compatible = "qcom,sdm630-sdhci", "qcom,sdhci-msm-v5";
 			reg = <0x0c0c4000 0x1000>,
-			      <0x0c0c5000 0x1000>;
-			reg-names = "hc", "cqhci";
+			      <0x0c0c5000 0x1000>,
+			      <0x0c0c8000 0x8000>;
+			reg-names = "hc", "cqhci", "ice";
 
 			interrupts = ,
 				     ;
@@ -817,8 +818,9 @@ sdhc_1: sdhci@c0c4000 {
 
 			clocks = <&gcc GCC_SDCC1_APPS_CLK>,
 				 <&gcc GCC_SDCC1_AHB_CLK>,
-				 <&xo_board>;
-			clock-names = "core", "iface", "xo";
+				 <&xo_board>,
+				 <&gcc GCC_SDCC1_ICE_CORE_CLK>;
+			clock-names = "core", "iface", "xo", "ice";
 
 			pinctrl-names = "default", "sleep";
 			pinctrl-0 = <&sdc1_clk_on &sdc1_cmd_on &sdc1_data_on &sdc1_rclk_on>;
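[Illustrative aside, not part of the series: the "ice" register range and
clock added to the sdhc_1 node above are looked up by name in sdhci-msm.  The
sketch below condenses what sdhci_msm_ice_init() and sdhci_msm_ice_get_clk()
in patch 8/9 already do; example_ice_lookup() is a made-up helper name.  On
boards whose device tree lacks the "ice" entries, the lookup simply fails and
inline crypto support stays disabled.]

/*
 * Sketch only: resolving the DT resources named "ice" from a platform
 * device, the way the sdhci-msm driver consumes this binding.
 */
#include <linux/clk.h>
#include <linux/err.h>
#include <linux/io.h>
#include <linux/platform_device.h>

static int example_ice_lookup(struct platform_device *pdev,
			      void __iomem **ice_mem, struct clk **ice_clk)
{
	struct resource *res;

	/* Matches reg-names = "hc", "cqhci", "ice" in the DT node */
	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "ice");
	if (!res)
		return -ENODEV;	/* no ICE described; leave crypto disabled */

	*ice_mem = devm_ioremap_resource(&pdev->dev, res);
	if (IS_ERR(*ice_mem))
		return PTR_ERR(*ice_mem);

	/* Matches clock-names = "core", "iface", "xo", "ice" in the DT node */
	*ice_clk = devm_clk_get(&pdev->dev, "ice");
	if (IS_ERR(*ice_clk))
		return PTR_ERR(*ice_clk);

	return 0;
}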