From patchwork Fri May  3 18:27:08 2013
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 16698
From: John Stultz <john.stultz@linaro.org>
To: Minchan Kim
Cc: John Stultz <john.stultz@linaro.org>
Subject: [PATCH 04/12] vrange: Add proper fork/exec semantics for volatile ranges
Date: Fri, 3 May 2013 11:27:08 -0700
Message-Id: <1367605636-18284-5-git-send-email-john.stultz@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1367605636-18284-1-git-send-email-john.stultz@linaro.org>
References: <1367605636-18284-1-git-send-email-john.stultz@linaro.org>

Volatile ranges should be copied on fork and cleared on exec. This patch
adds those semantics.

Duplicating the vranges on fork is a little awkward: we cannot allocate
while holding the vrange_root lock, since the allocation could trigger
reclaim, which may itself try to take the vrange_root lock and deadlock.
Thus we have to drop all the vrange_root locks for each allocation and
restart.

Ideas for a better approach would be appreciated!

Signed-off-by: John Stultz <john.stultz@linaro.org>
---
 fs/exec.c              |  1 +
 include/linux/vrange.h |  5 ++++-
 kernel/fork.c          |  6 ++++++
 mm/vrange.c            | 54 ++++++++++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 65 insertions(+), 1 deletion(-)

diff --git a/fs/exec.c b/fs/exec.c
index a96a488..417218d 100644
--- a/fs/exec.c
+++ b/fs/exec.c
@@ -55,6 +55,7 @@
 #include 
 #include 
 #include 
+#include <linux/vrange.h>
 #include 
 #include 

diff --git a/include/linux/vrange.h b/include/linux/vrange.h
index 2c1c58a..4424b8d 100644
--- a/include/linux/vrange.h
+++ b/include/linux/vrange.h
@@ -34,12 +34,15 @@ static inline int vrange_type(struct vrange *vrange)
 
 void vrange_init(void);
 extern void vrange_root_cleanup(struct vrange_root *vroot);
-
+extern int vrange_root_duplicate(struct vrange_root *orig,
+				 struct vrange_root *new);
 #else
 
 static inline void vrange_init(void) {};
 static inline void vrange_root_init(struct vrange_root *vroot, int type) {};
 static inline void vrange_root_cleanup(struct vrange_root *vroot) {};
+static inline int vrange_root_duplicate(struct vrange_root *orig,
+					struct vrange_root *new) { return 0; }
 #endif
 
 #endif /* _LINIUX_VRANGE_H */

diff --git a/kernel/fork.c b/kernel/fork.c
index 360ad65..80d5bab 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -847,6 +847,12 @@ struct mm_struct *dup_mm(struct task_struct *tsk)
 	if (mm->binfmt && !try_module_get(mm->binfmt->module))
 		goto free_pt;
 
+	/* XXX - Shouldn't this be already done in mm_init? */
+	vrange_root_init(&mm->vroot, VRANGE_MM);
+
+	if (vrange_root_duplicate(&oldmm->vroot, &mm->vroot))
+		goto free_pt;
+
 	return mm;
 
 free_pt:

diff --git a/mm/vrange.c b/mm/vrange.c
index 537e3d5..4949152 100644
--- a/mm/vrange.c
+++ b/mm/vrange.c
@@ -148,6 +148,60 @@ static int vrange_remove(struct vrange_root *vroot,
 	return 0;
 }
 
+int vrange_root_duplicate(struct vrange_root *old, struct vrange_root *new)
+{
+	struct vrange *old_range, *new_range, *alloc_range;
+	struct rb_node *old_next, *new_next;
+	int ret = 0;
+
+	/*
+	 * This is awkward: if we call __vrange_alloc() while holding the
+	 * vrange_lock, the allocation could trigger direct reclaim, which
+	 * could then try to take the vrange_lock() and deadlock.
+	 *
+	 * So instead, dance around this by dropping the locks and
+	 * restarting whenever we have to allocate.
+	 */
+again:
+	alloc_range = __vrange_alloc();
+	if (!alloc_range)
+		return -ENOMEM;
+
+	mutex_lock_nested(&old->v_lock, I_MUTEX_PARENT);
+	mutex_lock_nested(&new->v_lock, I_MUTEX_CHILD);
+
+	old_next = rb_first(&old->v_rb);
+	new_next = rb_first(&new->v_rb);
+	while (old_next) {
+		old_range = vrange_entry(old_next);
+		if (!new_next) {
+			new_range = alloc_range;
+			alloc_range = NULL;
+		} else {
+			new_range = vrange_entry(new_next);
+			__vrange_remove(new_range);
+		}
+		__vrange_set(new_range, old_range->node.start,
+				old_range->node.last, old_range->purged);
+		__vrange_add(new_range, new);
+
+		if (!alloc_range) {
+			vrange_unlock(new);
+			vrange_unlock(old);
+			goto again;
+		}
+
+		old_next = rb_next(old_next);
+		new_next = rb_next(new_next);
+	}
+	vrange_unlock(new);
+	vrange_unlock(old);
+
+	__vrange_free(alloc_range);
+
+	return ret;
+}
+
 void vrange_root_cleanup(struct vrange_root *vroot)
 {
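
For reference, here is a minimal userspace sketch of the "allocate outside
the lock, then retry" dance the changelog describes, reduced to a pthread
mutex and a singly linked list standing in for the vrange_lock and the
vrange rb-tree. It is illustrative only and not part of the patch; every
name in it (list_duplicate, spare, etc.) is made up for the example.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct node { int val; struct node *next; };
struct list { pthread_mutex_t lock; struct node *head; };

static struct list old_list = { PTHREAD_MUTEX_INITIALIZER, NULL };
static struct list new_list = { PTHREAD_MUTEX_INITIALIZER, NULL };

/*
 * Copy 'old' into 'new' without ever allocating while either lock is
 * held: preallocate one spare node per pass, take both locks, and if
 * the spare gets consumed, drop the locks and start another pass with
 * a fresh spare.  Assumes 'new' starts out no longer than 'old'.
 */
static int list_duplicate(struct list *old, struct list *new)
{
	struct node *spare, *src, *dst, **link;

again:
	spare = malloc(sizeof(*spare));		/* allocation done unlocked */
	if (!spare)
		return -1;

	pthread_mutex_lock(&old->lock);
	pthread_mutex_lock(&new->lock);

	src = old->head;
	dst = new->head;
	link = &new->head;
	while (src) {
		if (!dst) {			/* nothing to reuse: consume the spare */
			spare->next = NULL;
			*link = spare;
			dst = spare;
			spare = NULL;
		}
		dst->val = src->val;		/* copy the payload */

		if (!spare) {			/* spare used up: drop locks, retry */
			pthread_mutex_unlock(&new->lock);
			pthread_mutex_unlock(&old->lock);
			goto again;
		}
		link = &dst->next;
		src = src->next;
		dst = dst->next;
	}

	pthread_mutex_unlock(&new->lock);
	pthread_mutex_unlock(&old->lock);
	free(spare);				/* last preallocation went unused */
	return 0;
}

int main(void)
{
	struct node a = { 1, NULL }, b = { 2, &a };
	struct node *n;

	old_list.head = &b;			/* old list: 2 -> 1 */

	if (list_duplicate(&old_list, &new_list))
		return 1;
	for (n = new_list.head; n; n = n->next)
		printf("%d\n", n->val);		/* prints 2 then 1 */
	return 0;
}

Build with "cc -pthread". The same structural awkwardness as in
vrange_root_duplicate() shows up here: each pass can consume at most one
preallocated node before it has to drop the locks and start over, which is
why a cleaner approach would be welcome.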