From patchwork Thu May 23 17:07:48 2013
X-Patchwork-Submitter: Steve Capper <steve.capper@linaro.org>
X-Patchwork-Id: 17157
From: Steve Capper <steve.capper@linaro.org>
To: linux-mm@kvack.org, x86@kernel.org, linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Cc: Michal Hocko, Ken Chen, Mel Gorman, Catalin Marinas, Will Deacon,
	patches@linaro.org, Steve Capper
Subject: [PATCH 01/11] mm: hugetlb: Copy huge_pmd_share from x86 to mm.
Date: Thu, 23 May 2013 18:07:48 +0100
Message-Id: <1369328878-11706-2-git-send-email-steve.capper@linaro.org>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1369328878-11706-1-git-send-email-steve.capper@linaro.org>
References: <1369328878-11706-1-git-send-email-steve.capper@linaro.org>

Under x86, multiple puds can be made to reference the same bank of huge
pmds provided that they represent a full PUD_SIZE of shared huge memory
that is aligned to a PUD_SIZE boundary.

The code to share pmds does not require any architecture-specific
knowledge other than the fact that pmds can be indexed, so it can
benefit other architectures too.

This patch copies the huge pmd sharing (and unsharing) logic from x86/
to mm/ and introduces a new config option to activate it:
CONFIG_ARCH_WANT_HUGE_PMD_SHARE

Signed-off-by: Steve Capper <steve.capper@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
---
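Note (illustration only, not part of the patch): the vma_shareable()
test below reduces to simple PUD arithmetic. Here is a minimal
userspace sketch of that condition; PUD_SHIFT, struct toy_vma and
toy_vma_shareable() are hypothetical stand-ins for the kernel
definitions, and the values assume x86_64 with 4KB pages (so PUD_SIZE
is 1GB):

#include <stdio.h>
#include <stdbool.h>

#define PUD_SHIFT	30	/* assumed: x86_64 with 4KB pages */
#define PUD_SIZE	(1UL << PUD_SHIFT)
#define PUD_MASK	(~(PUD_SIZE - 1))

struct toy_vma {	/* hypothetical stand-in for struct vm_area_struct */
	unsigned long vm_start;
	unsigned long vm_end;
	bool may_share;	/* stands in for vm_flags & VM_MAYSHARE */
};

/*
 * Mirrors vma_shareable(): a faulting address may use a shared pmd
 * page only if the whole PUD_SIZE-aligned region around it lies
 * inside a shared mapping.
 */
static bool toy_vma_shareable(const struct toy_vma *vma, unsigned long addr)
{
	unsigned long base = addr & PUD_MASK;
	unsigned long end = base + PUD_SIZE;

	return vma->may_share && vma->vm_start <= base && end <= vma->vm_end;
}

int main(void)
{
	/* a shared mapping covering two PUD-aligned, PUD-sized regions */
	struct toy_vma vma = {
		.vm_start = 4 * PUD_SIZE,
		.vm_end = 6 * PUD_SIZE,
		.may_share = true,
	};

	printf("inside:  %d\n", toy_vma_shareable(&vma, 4 * PUD_SIZE + 4096)); /* 1 */
	printf("outside: %d\n", toy_vma_shareable(&vma, 7 * PUD_SIZE));        /* 0 */
	return 0;
}

Two mappings that both pass this test for the same file range cover
identical PUD_SIZE regions, so their pmd pages are interchangeable and
can be refcounted and shared rather than duplicated.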
 include/linux/hugetlb.h |   4 ++
 mm/hugetlb.c            | 122 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 126 insertions(+)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 6b4890f..981546a 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -69,6 +69,10 @@ void hugetlb_unreserve_pages(struct inode *inode, long offset, long freed);
 int dequeue_hwpoisoned_huge_page(struct page *page);
 void copy_huge_page(struct page *dst, struct page *src);
 
+#ifdef CONFIG_ARCH_WANT_HUGE_PMD_SHARE
+pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud);
+#endif
+
 extern unsigned long hugepages_treat_as_movable;
 extern const unsigned long hugetlb_zero, hugetlb_infinity;
 extern int sysctl_hugetlb_shm_group;
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index f8feeec..b0bfb29 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3169,6 +3169,128 @@ void hugetlb_unreserve_pages(struct inode *inode, long offset, long freed)
 	hugetlb_acct_memory(h, -(chg - freed));
 }
 
+#ifdef CONFIG_ARCH_WANT_HUGE_PMD_SHARE
+static unsigned long page_table_shareable(struct vm_area_struct *svma,
+				struct vm_area_struct *vma,
+				unsigned long addr, pgoff_t idx)
+{
+	unsigned long saddr = ((idx - svma->vm_pgoff) << PAGE_SHIFT) +
+				svma->vm_start;
+	unsigned long sbase = saddr & PUD_MASK;
+	unsigned long s_end = sbase + PUD_SIZE;
+
+	/* Allow segments to share if only one is marked locked */
+	unsigned long vm_flags = vma->vm_flags & ~VM_LOCKED;
+	unsigned long svm_flags = svma->vm_flags & ~VM_LOCKED;
+
+	/*
+	 * match the virtual addresses, permission and the alignment of the
+	 * page table page.
+	 */
+	if (pmd_index(addr) != pmd_index(saddr) ||
+	    vm_flags != svm_flags ||
+	    sbase < svma->vm_start || svma->vm_end < s_end)
+		return 0;
+
+	return saddr;
+}
+
+static int vma_shareable(struct vm_area_struct *vma, unsigned long addr)
+{
+	unsigned long base = addr & PUD_MASK;
+	unsigned long end = base + PUD_SIZE;
+
+	/*
+	 * check on proper vm_flags and page table alignment
+	 */
+	if (vma->vm_flags & VM_MAYSHARE &&
+	    vma->vm_start <= base && end <= vma->vm_end)
+		return 1;
+	return 0;
+}
+
+/*
+ * Search for a shareable pmd page for hugetlb. In any case calls pmd_alloc()
+ * and returns the corresponding pte. While this is not necessary for the
+ * !shared pmd case because we can allocate the pmd later as well, it makes the
+ * code much cleaner. pmd allocation is essential for the shared case because
+ * pud has to be populated inside the same i_mmap_mutex section - otherwise
+ * racing tasks could either miss the sharing (see huge_pte_offset) or select a
+ * bad pmd for sharing.
+ */
+pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud)
+{
+	struct vm_area_struct *vma = find_vma(mm, addr);
+	struct address_space *mapping = vma->vm_file->f_mapping;
+	pgoff_t idx = ((addr - vma->vm_start) >> PAGE_SHIFT) +
+			vma->vm_pgoff;
+	struct vm_area_struct *svma;
+	unsigned long saddr;
+	pte_t *spte = NULL;
+	pte_t *pte;
+
+	if (!vma_shareable(vma, addr))
+		return (pte_t *)pmd_alloc(mm, pud, addr);
+
+	mutex_lock(&mapping->i_mmap_mutex);
+	vma_interval_tree_foreach(svma, &mapping->i_mmap, idx, idx) {
+		if (svma == vma)
+			continue;
+
+		saddr = page_table_shareable(svma, vma, addr, idx);
+		if (saddr) {
+			spte = huge_pte_offset(svma->vm_mm, saddr);
+			if (spte) {
+				get_page(virt_to_page(spte));
+				break;
+			}
+		}
+	}
+
+	if (!spte)
+		goto out;
+
+	spin_lock(&mm->page_table_lock);
+	if (pud_none(*pud))
+		pud_populate(mm, pud,
+				(pmd_t *)((unsigned long)spte & PAGE_MASK));
+	else
+		put_page(virt_to_page(spte));
+	spin_unlock(&mm->page_table_lock);
+out:
+	pte = (pte_t *)pmd_alloc(mm, pud, addr);
+	mutex_unlock(&mapping->i_mmap_mutex);
+	return pte;
+}
+
+/*
+ * unmap huge page backed by shared pte.
+ *
+ * Hugetlb pte page is ref counted at the time of mapping. If pte is shared
+ * indicated by page_count > 1, unmap is achieved by clearing pud and
+ * decrementing the ref count. If count == 1, the pte page is not shared.
+ *
+ * called with vma->vm_mm->page_table_lock held.
+ *
+ * returns: 1 successfully unmapped a shared pte page
+ *	    0 the underlying pte page is not shared, or it is the last user
+ */
+int huge_pmd_unshare(struct mm_struct *mm, unsigned long *addr, pte_t *ptep)
+{
+	pgd_t *pgd = pgd_offset(mm, *addr);
+	pud_t *pud = pud_offset(pgd, *addr);
+
+	BUG_ON(page_count(virt_to_page(ptep)) == 0);
+	if (page_count(virt_to_page(ptep)) == 1)
+		return 0;
+
+	pud_clear(pud);
+	put_page(virt_to_page(ptep));
+	*addr = ALIGN(*addr, HPAGE_SIZE * PTRS_PER_PTE) - HPAGE_SIZE;
+	return 1;
+}
+#endif /* CONFIG_ARCH_WANT_HUGE_PMD_SHARE */
+
 #ifdef CONFIG_MEMORY_FAILURE
 
 /* Should be called in hugetlb_lock */