From patchwork Thu May 18 07:59:53 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Leizhen \(ThunderTown\)"
X-Patchwork-Id: 100053
From: Zhen Lei
To: Joerg Roedel, iommu, Robin Murphy, David Woodhouse, Sudeep Dutt,
	Ashutosh Dixit, linux-kernel
CC: Zefan Li, Xinwei Hu, Tianhong Ding, Hanjun Guo, Zhen Lei
Subject: [PATCH v3 2/6] iommu/iova: insert start_pfn boundary of dma32
Date: Thu, 18 May 2017 15:59:53 +0800
Message-ID: <1495094397-9132-3-git-send-email-thunder.leizhen@huawei.com>
In-Reply-To: <1495094397-9132-1-git-send-email-thunder.leizhen@huawei.com>
References: <1495094397-9132-1-git-send-email-thunder.leizhen@huawei.com>
X-Mailer: git-send-email 1.9.5.msysgit.0
X-Mailing-List: linux-kernel@vger.kernel.org

Reserve the first granule-sized chunk of IOVA space (starting at
start_pfn) as a boundary iova, so that iovad->cached32_node can never be
NULL from now on. Accordingly, change the assignment of
iovad->cached32_node in __cached_rbnode_delete_update() from rb_next to
rb_prev of &free->node.

Signed-off-by: Zhen Lei
---
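A toy userspace sketch of the boundary-node idea, not part of the patch:
struct range, reserve() and cached32 below are invented names that only
mimic the shape of reserve_iova() and iovad->cached32_node. One
permanently reserved entry at start_pfn means the cached pointer always
has a valid, never-freed target, so lookups need no NULL special case.

/*
 * Toy model: a sorted singly-linked list stands in for the rbtree.
 * The boundary entry is reserved once at init and never freed.
 */
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

struct range {
	unsigned long pfn_lo, pfn_hi;
	struct range *next;		/* sorted ascending by pfn_lo */
};

static struct range *cached32;		/* never NULL once init is done */

/* Insert [lo, hi] into the sorted list; no overlap checks in this toy. */
static struct range *reserve(struct range **head, unsigned long lo,
			     unsigned long hi)
{
	struct range *r = malloc(sizeof(*r));
	struct range **pp = head;

	if (!r)
		return NULL;
	r->pfn_lo = lo;
	r->pfn_hi = hi;
	while (*pp && (*pp)->pfn_lo < lo)
		pp = &(*pp)->next;
	r->next = *pp;
	*pp = r;
	return r;
}

int main(void)
{
	struct range *head = NULL;
	unsigned long start_pfn = 0x10;

	/* The boundary entry: reserved once, never freed. */
	cached32 = reserve(&head, start_pfn, start_pfn);
	assert(cached32);	/* the patch uses BUG_ON(!iova) at this point */

	/* Every later lookup may dereference cached32 unconditionally. */
	printf("cached: [%#lx, %#lx]\n", cached32->pfn_lo, cached32->pfn_hi);
	return 0;
}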
 drivers/iommu/iova.c | 63 ++++++++++++++++++++++++++++++----------------------
 1 file changed, 37 insertions(+), 26 deletions(-)

--
2.5.0

diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
index 333a9cc..d0c19ec 100644
--- a/drivers/iommu/iova.c
+++ b/drivers/iommu/iova.c
@@ -32,6 +32,17 @@ static unsigned long iova_rcache_get(struct iova_domain *iovad,
 static void init_iova_rcaches(struct iova_domain *iovad);
 static void free_iova_rcaches(struct iova_domain *iovad);
 
+static void
+insert_iova_boundary(struct iova_domain *iovad)
+{
+	struct iova *iova;
+	unsigned long start_pfn_32bit = iovad->start_pfn;
+
+	iova = reserve_iova(iovad, start_pfn_32bit, start_pfn_32bit);
+	BUG_ON(!iova);
+	iovad->cached32_node = &iova->node;
+}
+
 void
 init_iova_domain(struct iova_domain *iovad, unsigned long granule,
 	unsigned long start_pfn, unsigned long pfn_32bit)
@@ -45,27 +56,38 @@ init_iova_domain(struct iova_domain *iovad, unsigned long granule,
 
 	spin_lock_init(&iovad->iova_rbtree_lock);
 	iovad->rbroot = RB_ROOT;
-	iovad->cached32_node = NULL;
 	iovad->granule = granule;
 	iovad->start_pfn = start_pfn;
 	iovad->dma_32bit_pfn = pfn_32bit;
 	init_iova_rcaches(iovad);
+
+	/*
+	 * Insert boundary nodes for dma32. So cached32_node can not be NULL in
+	 * future.
+	 */
+	insert_iova_boundary(iovad);
 }
 EXPORT_SYMBOL_GPL(init_iova_domain);
 
 static struct rb_node *
 __get_cached_rbnode(struct iova_domain *iovad, unsigned long *limit_pfn)
 {
-	if ((*limit_pfn > iovad->dma_32bit_pfn) ||
-		(iovad->cached32_node == NULL))
+	struct rb_node *cached_node;
+	struct rb_node *next_node;
+
+	if (*limit_pfn > iovad->dma_32bit_pfn)
 		return rb_last(&iovad->rbroot);
-	else {
-		struct rb_node *prev_node = rb_prev(iovad->cached32_node);
-		struct iova *curr_iova =
-			rb_entry(iovad->cached32_node, struct iova, node);
-		*limit_pfn = curr_iova->pfn_lo - 1;
-		return prev_node;
+	else
+		cached_node = iovad->cached32_node;
+
+	next_node = rb_next(cached_node);
+	if (next_node) {
+		struct iova *next_iova = rb_entry(next_node, struct iova, node);
+
+		*limit_pfn = min(*limit_pfn, next_iova->pfn_lo - 1);
 	}
+
+	return cached_node;
 }
 
 static void
@@ -83,20 +105,13 @@ __cached_rbnode_delete_update(struct iova_domain *iovad, struct iova *free)
 	struct iova *cached_iova;
 	struct rb_node *curr;
 
-	if (!iovad->cached32_node)
-		return;
 	curr = iovad->cached32_node;
 	cached_iova = rb_entry(curr, struct iova, node);
 
 	if (free->pfn_lo >= cached_iova->pfn_lo) {
-		struct rb_node *node = rb_next(&free->node);
-		struct iova *iova = rb_entry(node, struct iova, node);
-
 		/* only cache if it's below 32bit pfn */
-		if (node && iova->pfn_lo < iovad->dma_32bit_pfn)
-			iovad->cached32_node = node;
-		else
-			iovad->cached32_node = NULL;
+		if (free->pfn_hi <= iovad->dma_32bit_pfn)
+			iovad->cached32_node = rb_prev(&free->node);
 	}
 }
 
@@ -142,7 +157,7 @@ static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
 		unsigned long size, unsigned long limit_pfn,
 			struct iova *new, bool size_aligned)
 {
-	struct rb_node *prev, *curr = NULL;
+	struct rb_node *prev, *curr;
 	unsigned long flags;
 	unsigned long saved_pfn;
 	unsigned int pad_size = 0;
@@ -172,13 +187,9 @@ static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
 		curr = rb_prev(curr);
 	}
 
-	if (!curr) {
-		if (size_aligned)
-			pad_size = iova_get_pad_size(size, limit_pfn);
-		if ((iovad->start_pfn + size + pad_size) > limit_pfn) {
-			spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
-			return -ENOMEM;
-		}
+	if (unlikely(!curr)) {
+		spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
+		return -ENOMEM;
 	}
 
 	/* pfn_lo will point to size aligned address if size_aligned is set */
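
The clamp in the new __get_cached_rbnode() is the subtle part: when the
cached node has a successor, the caller's limit_pfn is capped one pfn
below the successor's pfn_lo, so a top-down allocation cannot collide
with it. The standalone sketch below models just that line; min_ul() and
clamp_limit() are invented stand-ins for the kernel's min() and the
surrounding rbtree code, not part of the patch.

#include <stdio.h>

/* Behaves like the kernel's min() for plain unsigned longs. */
static unsigned long min_ul(unsigned long a, unsigned long b)
{
	return a < b ? a : b;
}

/*
 * Mirror of the clamp: if the cached node has a successor, the usable
 * upper bound is cut to one pfn below the successor's pfn_lo.
 */
static unsigned long clamp_limit(unsigned long limit_pfn,
				 int has_next, unsigned long next_pfn_lo)
{
	if (has_next)
		limit_pfn = min_ul(limit_pfn, next_pfn_lo - 1);
	return limit_pfn;
}

int main(void)
{
	/* Successor starts at 0x200: a caller limit of 0xfffff becomes 0x1ff. */
	printf("%#lx\n", clamp_limit(0xfffff, 1, 0x200));
	/* No successor: the caller's limit is used unchanged. */
	printf("%#lx\n", clamp_limit(0xfffff, 0, 0));
	return 0;
}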