From patchwork Tue Sep 4 08:22:14 2012
X-Patchwork-Submitter: Zhangfei Gao
X-Patchwork-Id: 11161
From: Zhangfei Gao
To: Rebecca Schultz Zavin, "linaro-mm-sig@lists.linaro.org", Haojian Zhuang
Date: Tue, 4 Sep 2012 16:22:14 +0800
Message-Id: <1346746935-1149-3-git-send-email-zhangfei.gao@marvell.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1346746935-1149-1-git-send-email-zhangfei.gao@marvell.com>
References: <1346746935-1149-1-git-send-email-zhangfei.gao@marvell.com>
Cc: Zhangfei Gao
Subject: [Linaro-mm-sig] [PATCH v2 2/3] gpu: ion: carveout_heap page-wise cache flush
List-Id: "Unified memory management interest group."

Extend the dirty bit to track each PAGE_SIZE chunk of the buffer.
Page-wise cache flush is supported and only takes effect for dirty
buffers.

Signed-off-by: Zhangfei Gao
---
 drivers/gpu/ion/ion_carveout_heap.c | 23 +++++++++++++++++------
 1 files changed, 17 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/ion/ion_carveout_heap.c b/drivers/gpu/ion/ion_carveout_heap.c
index 13f6e8d..60e97e5 100644
--- a/drivers/gpu/ion/ion_carveout_heap.c
+++ b/drivers/gpu/ion/ion_carveout_heap.c
@@ -88,25 +88,36 @@ struct sg_table *ion_carveout_heap_map_dma(struct ion_heap *heap,
 					      struct ion_buffer *buffer)
 {
 	struct sg_table *table;
-	int ret;
+	struct scatterlist *sg;
+	int ret, i;
+	int nents = PAGE_ALIGN(buffer->size) / PAGE_SIZE;
+	struct page *page = phys_to_page(buffer->priv_phys);
 
 	table = kzalloc(sizeof(struct sg_table), GFP_KERNEL);
 	if (!table)
 		return ERR_PTR(-ENOMEM);
-	ret = sg_alloc_table(table, 1, GFP_KERNEL);
+
+	ret = sg_alloc_table(table, nents, GFP_KERNEL);
 	if (ret) {
 		kfree(table);
 		return ERR_PTR(ret);
 	}
-	sg_set_page(table->sgl, phys_to_page(buffer->priv_phys), buffer->size,
-		    0);
+
+	sg = table->sgl;
+	for (i = 0; i < nents; i++) {
+		sg_set_page(sg, page + i, PAGE_SIZE, 0);
+		sg = sg_next(sg);
+	}
+
 	return table;
 }
 
 void ion_carveout_heap_unmap_dma(struct ion_heap *heap,
 				 struct ion_buffer *buffer)
 {
-	sg_free_table(buffer->sg_table);
+	if (buffer->sg_table)
+		sg_free_table(buffer->sg_table);
+	kfree(buffer->sg_table);
 }
 
 void *ion_carveout_heap_map_kernel(struct ion_heap *heap,
@@ -157,7 +168,7 @@ struct ion_heap *ion_carveout_heap_create(struct ion_platform_heap *heap_data)
 	if (!carveout_heap)
 		return ERR_PTR(-ENOMEM);
 
-	carveout_heap->pool = gen_pool_create(12, -1);
+	carveout_heap->pool = gen_pool_create(PAGE_SHIFT, -1);
 	if (!carveout_heap->pool) {
 		kfree(carveout_heap);
 		return ERR_PTR(-ENOMEM);