| Message ID | 1459349164-27175-10-git-send-email-ard.biesheuvel@linaro.org |
|---|---|
| State | Accepted |
| Commit | 1dff8083a024650c75a9c961c38082473ceae8cf |
On Wed, Mar 30, 2016 at 04:46:04PM +0200, Ard Biesheuvel wrote:
> The open coded conversion from struct page address to virtual address in
> lowmem_page_address() involves an intermediate conversion step to pfn
> number/physical address. Since the placement of the struct page array
> relative to the linear mapping may be completely independent from the
> placement of physical RAM (as is the case for arm64 after commit
> dfd55ad85e 'arm64: vmemmap: use virtual projection of linear region'),
> the conversion to physical address and back again should factor out of
> the equation. Unfortunately, the shifting and pointer arithmetic
> involved prevent this from happening: the resulting calculation
> essentially subtracts the address of the start of physical memory and
> adds it back again, in a way that prevents the compiler from optimizing
> it away.
>
> Since the start of physical memory is not a build-time constant on arm64,
> the resulting conversion involves an unnecessary memory access, which
> we would like to get rid of. So replace the open coded conversion with
> a call to page_to_virt(), and use the open coded conversion as its
> default definition, to be overridden by the architecture, if desired.
> The existing arch-specific definitions of page_to_virt are all equivalent
> to this default definition, so by itself this patch is a no-op.
>
> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>

Acked-by: Will Deacon <will.deacon@arm.com>

I assume you'll post this patch (and the nios2/openrisc patches) as
individual patches targeting the relevant trees?

Will
On 14 April 2016 at 17:25, Will Deacon <will.deacon@arm.com> wrote:
> On Wed, Mar 30, 2016 at 04:46:04PM +0200, Ard Biesheuvel wrote:
>> [...]
>>
>> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
>
> Acked-by: Will Deacon <will.deacon@arm.com>
>
> I assume you'll post this patch (and the nios2/openrisc patches) as
> individual patches targeting the relevant trees?

Sure, as they are completely independent from the rest of the series.
diff --git a/include/linux/mm.h b/include/linux/mm.h
index ed6407d1b7b5..474c4625756e 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -72,6 +72,10 @@ extern int mmap_rnd_compat_bits __read_mostly;
 #define __pa_symbol(x)  __pa(RELOC_HIDE((unsigned long)(x), 0))
 #endif
 
+#ifndef page_to_virt
+#define page_to_virt(x)	__va(PFN_PHYS(page_to_pfn(x)))
+#endif
+
 /*
  * To prevent common memory management code establishing
  * a zero page mapping on a read fault.
@@ -948,7 +952,7 @@ static inline struct mem_cgroup *page_memcg(struct page *page)
 
 static __always_inline void *lowmem_page_address(const struct page *page)
 {
-	return __va(PFN_PHYS(page_to_pfn(page)));
+	return page_to_virt(page);
 }
 
 #if defined(CONFIG_HIGHMEM) && !defined(WANT_PAGE_VIRTUAL)
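Because the new default definition above is wrapped in #ifndef page_to_virt, an architecture can supersede it simply by defining the macro in one of its own headers that is pulled in ahead of include/linux/mm.h. As a rough illustration of the kind of override the commit message anticipates — an architecture whose struct page array sits at a fixed virtual offset from the linear map — consider the sketch below. The constants, names, and exact arithmetic are illustrative assumptions, not part of this patch or of any real architecture header:

/*
 * Hypothetical <asm/memory.h> excerpt -- a sketch, not real arch code.
 * Both base addresses are build-time constants on such an architecture.
 */
#define VMEMMAP_START	0xfffffc0000000000UL	/* base of the struct page array */
#define PAGE_OFFSET	0xffff800000000000UL	/* base of the linear mapping    */

/*
 * Turn the page pointer's offset into the vmemmap array directly into
 * an offset into the linear map: pure constant arithmetic, with no load
 * of the runtime "start of physical memory" variable that the generic
 * __va(PFN_PHYS(page_to_pfn(x))) round trip requires.
 */
#define page_to_virt(page)						\
	((void *)(((((u64)(page) - VMEMMAP_START) /			\
		    sizeof(struct page)) << PAGE_SHIFT) +		\
		  PAGE_OFFSET))

With such a definition in place before include/linux/mm.h is processed, the #ifndef guard skips the generic fallback and lowmem_page_address() picks up the cheaper arithmetic automatically.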
The open coded conversion from struct page address to virtual address in
lowmem_page_address() involves an intermediate conversion step to pfn
number/physical address. Since the placement of the struct page array
relative to the linear mapping may be completely independent from the
placement of physical RAM (as is the case for arm64 after commit
dfd55ad85e 'arm64: vmemmap: use virtual projection of linear region'),
the conversion to physical address and back again should factor out of
the equation. Unfortunately, the shifting and pointer arithmetic involved
prevent this from happening: the resulting calculation essentially
subtracts the address of the start of physical memory and adds it back
again, in a way that prevents the compiler from optimizing it away.

Since the start of physical memory is not a build-time constant on arm64,
the resulting conversion involves an unnecessary memory access, which
we would like to get rid of. So replace the open coded conversion with
a call to page_to_virt(), and use the open coded conversion as its
default definition, to be overridden by the architecture, if desired.
The existing arch-specific definitions of page_to_virt are all equivalent
to this default definition, so by itself this patch is a no-op.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 include/linux/mm.h | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

--
2.5.0
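To see concretely why the round trip through the physical address cannot be folded away while a direct computation can, here is a small user-space model of the two paths. All names (memstart_addr, vmemmap, PAGE_OFFSET, VMEMMAP_START) are local stand-ins chosen to mirror the kernel concepts in the commit message, and the constants are made-up values — this is a sketch of the arithmetic, not the real kernel code:

#include <stdio.h>
#include <stdint.h>

struct page { unsigned long flags; };	/* stand-in for the real struct page */

#define PAGE_SHIFT	12
#define PAGE_OFFSET	0xffff800000000000UL	/* base of the linear mapping    */
#define VMEMMAP_START	0xfffffc0000000000UL	/* base of the struct page array */

/* Not a build-time constant on arm64: it must be loaded from memory. */
static uint64_t memstart_addr = 0x80000000UL;

/* "Virtual projection": vmemmap is biased by the start-of-RAM pfn. */
#define vmemmap \
	((struct page *)VMEMMAP_START - (memstart_addr >> PAGE_SHIFT))

/*
 * Generic path: page -> pfn -> phys -> virt. memstart_addr is added in
 * by page_to_pfn() and subtracted again by __va(); the intervening
 * shifts and pointer arithmetic keep the compiler from cancelling it.
 */
static void *generic_page_to_virt(const struct page *page)
{
	uint64_t pfn  = (uint64_t)(page - vmemmap);		/* page_to_pfn() */
	uint64_t phys = pfn << PAGE_SHIFT;			/* PFN_PHYS()    */
	return (void *)(PAGE_OFFSET + (phys - memstart_addr));	/* __va()        */
}

/* Direct path: memstart_addr has factored out of the equation entirely. */
static void *direct_page_to_virt(const struct page *page)
{
	uint64_t off = ((uint64_t)page - VMEMMAP_START) / sizeof(struct page);
	return (void *)(PAGE_OFFSET + (off << PAGE_SHIFT));
}

int main(void)
{
	/* struct page for the 42nd page of the linear mapping */
	const struct page *page = (struct page *)VMEMMAP_START + 42;

	printf("generic: %p\n", generic_page_to_virt(page));
	printf("direct:  %p\n", direct_page_to_virt(page));
	return 0;
}

Compiled as a 64-bit binary, both functions print the same address; only the generic one needs the runtime memstart_addr value, which is the memory access the patch makes avoidable.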