From patchwork Fri May 3 18:27:12 2013
X-Patchwork-Submitter: John Stultz <john.stultz@linaro.org>
X-Patchwork-Id: 16702
From: John Stultz <john.stultz@linaro.org>
To: Minchan Kim
Cc: John Stultz <john.stultz@linaro.org>
Subject: [PATCH 08/12] vrange: Add LRU handling for victim vrange
Date: Fri, 3 May 2013 11:27:12 -0700
Message-Id: <1367605636-18284-9-git-send-email-john.stultz@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1367605636-18284-1-git-send-email-john.stultz@linaro.org>
References: <1367605636-18284-1-git-send-email-john.stultz@linaro.org>

From: Minchan Kim

This patch adds an LRU data structure for selecting a victim vrange when
memory pressure occurs. Basically, the VM selects the oldest vrange as the
victim, but if userspace has recently faulted on a purged page, the vrange
containing that page is moved back to the head of the LRU. A page fault on
a purged range means one of two things: either the user process will be
killed, or it handles SIGBUS, recovers, and continues its work. In the
latter case, we want to keep that vrange out of victim selection.

I admit an LRU may not be the best policy, but I could not think of a
better one and wanted to keep this simple. Given enough information, user
space can likely make better decisions, so I hope it manages this through
the mempressure notifier. If you have a better idea, it is welcome!
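For illustration only, here is a minimal sketch of what the reclaim-side
consumer of this list might look like. The name get_victim_vrange() is
borrowed from the comment in lru_move_vrange_to_head() below; the real
function is not part of this patch, so its actual shape may differ. The
sketch just shows the intended LRU discipline: list_add() pushes new and
recently-faulted vranges at the head, so the oldest candidate sits at the
tail, and a victim is unlinked with list_del_init() so that
lru_move_vrange_to_head() can tell it is temporarily off the list:

	/* Illustrative sketch, not part of this patch. */
	static struct vrange *get_victim_vrange(void)
	{
		struct vrange *vrange = NULL;

		spin_lock(&lru_lock);
		if (!list_empty(&lru_vrange)) {
			/* the tail of lru_vrange is the oldest entry */
			vrange = list_entry(lru_vrange.prev,
					    struct vrange, lru);
			/* unlink so list_empty(&vrange->lru) is true */
			list_del_init(&vrange->lru);
		}
		spin_unlock(&lru_lock);
		return vrange;
	}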
Signed-off-by: Minchan Kim
Signed-off-by: John Stultz
---
 include/linux/vrange.h       |  3 +++
 include/linux/vrange_types.h |  1 +
 mm/memory.c                  |  1 +
 mm/vrange.c                  | 49 ++++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 54 insertions(+)

diff --git a/include/linux/vrange.h b/include/linux/vrange.h
index 25bcd92..ff301b2 100644
--- a/include/linux/vrange.h
+++ b/include/linux/vrange.h
@@ -43,6 +43,9 @@ bool vrange_address(struct mm_struct *mm, unsigned long start,
 
 extern bool is_purged_vrange(struct mm_struct *mm, unsigned long address);
 
+unsigned int discard_vrange_pages(struct zone *zone, int nr_to_discard);
+void lru_move_vrange_to_head(struct mm_struct *mm, unsigned long address);
+
 #else
 
 static inline void vrange_init(void) {};
diff --git a/include/linux/vrange_types.h b/include/linux/vrange_types.h
index e46942c..d69b608 100644
--- a/include/linux/vrange_types.h
+++ b/include/linux/vrange_types.h
@@ -15,6 +15,7 @@ struct vrange {
 	struct interval_tree_node node;
 	struct vrange_root *owner;
 	bool purged;
+	struct list_head lru;		/* protected by lru_lock */
 };
 
 #endif
diff --git a/mm/memory.c b/mm/memory.c
index 010fc42..b22fa63 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3719,6 +3719,7 @@ anon:
 
 	if (unlikely(pte_vrange(entry))) {
 		if (!is_purged_vrange(mm, address)) {
+			lru_move_vrange_to_head(mm, address);
 			/* zap pte */
 			ptl = pte_lockptr(mm, pmd);
 			spin_lock(ptl);
diff --git a/mm/vrange.c b/mm/vrange.c
index 1fce20e..8e66c41 100644
--- a/mm/vrange.c
+++ b/mm/vrange.c
@@ -15,8 +15,53 @@
 #include
 #include
 
+static LIST_HEAD(lru_vrange);
+static DEFINE_SPINLOCK(lru_lock);
+
 static struct kmem_cache *vrange_cachep;
 
+
+void lru_add_vrange(struct vrange *vrange)
+{
+	spin_lock(&lru_lock);
+	WARN_ON(!list_empty(&vrange->lru));
+	list_add(&vrange->lru, &lru_vrange);
+	spin_unlock(&lru_lock);
+}
+
+void lru_remove_vrange(struct vrange *vrange)
+{
+	spin_lock(&lru_lock);
+	if (!list_empty(&vrange->lru))
+		list_del_init(&vrange->lru);
+	spin_unlock(&lru_lock);
+}
+
+void lru_move_vrange_to_head(struct mm_struct *mm, unsigned long address)
+{
+	struct vrange_root *vroot = &mm->vroot;
+	struct interval_tree_node *node;
+	struct vrange *vrange;
+
+	vrange_lock(vroot);
+	node = interval_tree_iter_first(&vroot->v_rb, address,
+					address + PAGE_SIZE - 1);
+	if (node) {
+		vrange = container_of(node, struct vrange, node);
+		spin_lock(&lru_lock);
+		/*
+		 * This can race with get_victim_vrange(), which may have
+		 * taken the vrange off the LRU for purging; it re-adds the
+		 * vrange at the head once purging finishes, so skipping
+		 * the move here is not a problem.
+		 */
+		if (!list_empty(&vrange->lru))
+			list_move(&vrange->lru, &lru_vrange);
+		spin_unlock(&lru_lock);
+	}
+	vrange_unlock(vroot);
+}
+
 void __init vrange_init(void)
 {
 	vrange_cachep = KMEM_CACHE(vrange, SLAB_PANIC);
@@ -28,24 +73,28 @@ static struct vrange *__vrange_alloc(void)
 	if (!vrange)
 		return vrange;
 	vrange->owner = NULL;
+	INIT_LIST_HEAD(&vrange->lru);
 	return vrange;
 }
 
 static void __vrange_free(struct vrange *range)
 {
 	WARN_ON(range->owner);
+	lru_remove_vrange(range);
 	kmem_cache_free(vrange_cachep, range);
 }
 
 static void __vrange_add(struct vrange *range, struct vrange_root *vroot)
 {
 	range->owner = vroot;
+	lru_add_vrange(range);
 	interval_tree_insert(&range->node, &vroot->v_rb);
 }
 
 static void __vrange_remove(struct vrange *range)
 {
 	interval_tree_remove(&range->node, &range->owner->v_rb);
+	lru_remove_vrange(range);
 	range->owner = NULL;
 }
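
A closing note on the list handling: the patch relies throughout on the
list_del_init()/list_empty() idiom from <linux/list.h>. list_del_init()
leaves the removed node self-linked, so list_empty() applied to the node
itself answers "is this vrange currently on the LRU?", which is exactly
how lru_move_vrange_to_head() detects a vrange that a purger has
temporarily claimed. A self-contained demonstration of the idiom (the
helper name is hypothetical, for illustration only):

	#include <linux/bug.h>
	#include <linux/list.h>

	/* Hypothetical demo function, not part of the patch. */
	static void lru_idiom_demo(void)
	{
		LIST_HEAD(head);
		struct list_head node;

		INIT_LIST_HEAD(&node);
		WARN_ON(!list_empty(&node));	/* self-linked: not on any list */

		list_add(&node, &head);
		WARN_ON(list_empty(&node));	/* linked in: node.next != &node */

		list_del_init(&node);
		WARN_ON(!list_empty(&node));	/* self-linked again */
	}

Note also the lock ordering in lru_move_vrange_to_head(): vrange_lock(vroot)
is taken before lru_lock, while lru_add_vrange() and lru_remove_vrange()
take lru_lock alone.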