From patchwork Wed Jun  6 13:17:37 2012
X-Patchwork-Submitter: Marek Szyprowski
X-Patchwork-Id: 9142
From: Marek Szyprowski
To: linux-arm-kernel@lists.infradead.org, linaro-mm-sig@lists.linaro.org,
    linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Abhinav Kochhar, Russell King - ARM Linux, Arnd Bergmann,
    Konrad Rzeszutek Wilk, Benjamin Herrenschmidt, Kyungmin Park, Subash Patel
Date: Wed, 06 Jun 2012 15:17:37 +0200
Message-id: <1338988657-20770-3-git-send-email-m.szyprowski@samsung.com>
In-reply-to: <1338988657-20770-1-git-send-email-m.szyprowski@samsung.com>
References: <1338988657-20770-1-git-send-email-m.szyprowski@samsung.com>
Subject: [Linaro-mm-sig] [PATCH 2/2] ARM: dma-mapping: add support for
    DMA_ATTR_SKIP_CPU_SYNC attribute
This patch adds support for the DMA_ATTR_SKIP_CPU_SYNC attribute in the
dma_(un)map_(single,page,sg) family of functions. It lets DMA-mapping clients
create a mapping for a buffer for a given device without performing CPU cache
synchronization. Cache synchronization can be skipped for buffers that are
known to already be in the 'device' domain (the CPU caches have already been
synchronized, or only coherent mappings exist for the buffer). This is for
advanced users only; please use it with care.

Signed-off-by: Marek Szyprowski
---
 arch/arm/mm/dma-mapping.c |   20 +++++++++++---------
 1 files changed, 11 insertions(+), 9 deletions(-)

diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index b140440..62a0023 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -68,7 +68,7 @@ static dma_addr_t arm_dma_map_page(struct device *dev, struct page *page,
              unsigned long offset, size_t size, enum dma_data_direction dir,
              struct dma_attrs *attrs)
 {
-        if (!arch_is_coherent())
+        if (!arch_is_coherent() && !dma_get_attr(DMA_ATTR_SKIP_CPU_SYNC, attrs))
                 __dma_page_cpu_to_dev(page, offset, size, dir);
         return pfn_to_dma(dev, page_to_pfn(page)) + offset;
 }
@@ -91,7 +91,7 @@ static void arm_dma_unmap_page(struct device *dev, dma_addr_t handle,
                 size_t size, enum dma_data_direction dir,
                 struct dma_attrs *attrs)
 {
-        if (!arch_is_coherent())
+        if (!arch_is_coherent() && !dma_get_attr(DMA_ATTR_SKIP_CPU_SYNC, attrs))
                 __dma_page_dev_to_cpu(pfn_to_page(dma_to_pfn(dev, handle)),
                                       handle & ~PAGE_MASK, size, dir);
 }
@@ -1077,7 +1077,7 @@ static int arm_iommu_get_sgtable(struct device *dev, struct sg_table *sgt,
  */
 static int __map_sg_chunk(struct device *dev, struct scatterlist *sg,
                           size_t size, dma_addr_t *handle,
-                          enum dma_data_direction dir)
+                          enum dma_data_direction dir, struct dma_attrs *attrs)
 {
         struct dma_iommu_mapping *mapping = dev->archdata.mapping;
         dma_addr_t iova, iova_base;
@@ -1096,7 +1096,8 @@ static int __map_sg_chunk(struct device *dev, struct scatterlist *sg,
                 phys_addr_t phys = page_to_phys(sg_page(s));
                 unsigned int len = PAGE_ALIGN(s->offset + s->length);
 
-                if (!arch_is_coherent())
+                if (!arch_is_coherent() &&
+                    !dma_get_attr(DMA_ATTR_SKIP_CPU_SYNC, attrs))
                         __dma_page_cpu_to_dev(sg_page(s), s->offset, s->length, dir);
 
                 ret = iommu_map(mapping->domain, iova, phys, len, 0);
@@ -1143,7 +1144,7 @@ int arm_iommu_map_sg(struct device *dev, struct scatterlist *sg, int nents,
 
                 if (s->offset || (size & ~PAGE_MASK) || size + s->length > max) {
                         if (__map_sg_chunk(dev, start, size, &dma->dma_address,
-                            dir) < 0)
+                            dir, attrs) < 0)
                                 goto bad_mapping;
 
                         dma->dma_address += offset;
@@ -1156,7 +1157,7 @@ int arm_iommu_map_sg(struct device *dev, struct scatterlist *sg, int nents,
                 }
                 size += s->length;
         }
-        if (__map_sg_chunk(dev, start, size, &dma->dma_address, dir) < 0)
+        if (__map_sg_chunk(dev, start, size, &dma->dma_address, dir, attrs) < 0)
                 goto bad_mapping;
 
         dma->dma_address += offset;
@@ -1190,7 +1191,8 @@ void arm_iommu_unmap_sg(struct device *dev, struct scatterlist *sg, int nents,
                 if (sg_dma_len(s))
                         __iommu_remove_mapping(dev, sg_dma_address(s),
                                                sg_dma_len(s));
-                if (!arch_is_coherent())
+                if (!arch_is_coherent() &&
+                    !dma_get_attr(DMA_ATTR_SKIP_CPU_SYNC, attrs))
                         __dma_page_dev_to_cpu(sg_page(s), s->offset,
                                               s->length, dir);
         }
@@ -1252,7 +1254,7 @@ static dma_addr_t arm_iommu_map_page(struct device *dev, struct page *page,
         dma_addr_t dma_addr;
         int ret, len = PAGE_ALIGN(size + offset);
 
-        if (!arch_is_coherent())
+        if (!arch_is_coherent() && !dma_get_attr(DMA_ATTR_SKIP_CPU_SYNC, attrs))
                 __dma_page_cpu_to_dev(page, offset, size, dir);
 
         dma_addr = __alloc_iova(mapping, len);
@@ -1291,7 +1293,7 @@ static void arm_iommu_unmap_page(struct device *dev, dma_addr_t handle,
         if (!iova)
                 return;
 
-        if (!arch_is_coherent())
+        if (!arch_is_coherent() && !dma_get_attr(DMA_ATTR_SKIP_CPU_SYNC, attrs))
                 __dma_page_dev_to_cpu(page, offset, size, dir);
 
         iommu_unmap(mapping->domain, iova, len);
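
[Editor's note] As a usage sketch only (not part of the patch): a client that
knows its buffer is already in the 'device' domain could pass the attribute
through the 2012-era struct dma_attrs interface when creating an additional
mapping. The function name and the assumption that the buffer was synchronized
by an earlier mapping are hypothetical; only DEFINE_DMA_ATTRS, dma_set_attr and
dma_map_sg_attrs are taken from the kernel API of that time.

#include <linux/dma-mapping.h>
#include <linux/dma-attrs.h>
#include <linux/scatterlist.h>

/* Hypothetical helper: map an sg_table whose pages were already cleaned
 * (or are only mapped coherently), so the CPU cache flush can be skipped. */
static int my_map_already_synced_buffer(struct device *dev,
                                        struct sg_table *sgt)
{
        DEFINE_DMA_ATTRS(attrs);
        int nents;

        /* Tell the DMA-mapping core not to touch the CPU caches. */
        dma_set_attr(DMA_ATTR_SKIP_CPU_SYNC, &attrs);

        nents = dma_map_sg_attrs(dev, sgt->sgl, sgt->orig_nents,
                                 DMA_BIDIRECTIONAL, &attrs);
        if (nents <= 0)
                return -ENOMEM;

        sgt->nents = nents;
        return 0;
}

The corresponding unmap would pass the same attrs to dma_unmap_sg_attrs() if
the CPU is still not going to read the data through a non-coherent mapping;
otherwise the attribute must be omitted so the caches are invalidated.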