Message ID | 20210130221035.4169-2-rppt@kernel.org |
---|---|
State | New |
Series | mm: fix initialization of struct page for holes in memory layout |
On 30.01.21 23:10, Mike Rapoport wrote:
> From: Mike Rapoport <rppt@linux.ibm.com>
> 
> The physical memory on an x86 system starts at address 0, but this is not
> always reflected in e820 map. For example, the BIOS can have e820 entries
> like
> 
> [    0.000000] BIOS-provided physical RAM map:
> [    0.000000] BIOS-e820: [mem 0x0000000000001000-0x000000000009ffff] usable
> 
> or
> 
> [    0.000000] BIOS-provided physical RAM map:
> [    0.000000] BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
> [    0.000000] BIOS-e820: [mem 0x0000000000001000-0x0000000000057fff] usable
> 
> In either case, e820__memblock_setup() won't add the range 0x0000 - 0x1000
> to memblock.memory and later during memory map initialization this range is
> left outside any zone.
> 
> With SPARSEMEM=y there is always a struct page for pfn 0 and this struct
> page will have it's zone link wrong no matter what value will be set there.
> 
> To avoid this inconsistency, add the beginning of RAM to memblock.memory.
> Limit the added chunk size to match the reserved memory to avoid
> registering memory that may be used by the firmware but never reserved at
> e820__memblock_setup() time.
> 
> Fixes: bde9cfa3afe4 ("x86/setup: don't remove E820_TYPE_RAM for pfn 0")
> Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
> Cc: stable@vger.kernel.org
> ---
>  arch/x86/kernel/setup.c | 8 ++++++++
>  1 file changed, 8 insertions(+)
> 
> diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
> index 3412c4595efd..67c77ed6eef8 100644
> --- a/arch/x86/kernel/setup.c
> +++ b/arch/x86/kernel/setup.c
> @@ -727,6 +727,14 @@ static void __init trim_low_memory_range(void)
>  	 * Kconfig help text for X86_RESERVE_LOW.
>  	 */
>  	memblock_reserve(0, ALIGN(reserve_low, PAGE_SIZE));
> +
> +	/*
> +	 * Even if the firmware does not report the memory at address 0 as
> +	 * usable, inform the generic memory management about its existence
> +	 * to ensure it is a part of ZONE_DMA and the memory map for it is
> +	 * properly initialized.
> +	 */
> +	memblock_add(0, ALIGN(reserve_low, PAGE_SIZE));
>  }
>  
>  /*
> 

I think, to make that code more robust, and to not rely on archs to do the
right thing, we should do something like

1) Make sure in free_area_init() that each PFN with a memmap (i.e., falls
into a partial present section) is spanned by a zone; that would include PFN
0 in this case.

2) In init_zone_unavailable_mem(), similar to round_up(max_pfn,
PAGES_PER_SECTION) handling, consider range
	[round_down(min_pfn, PAGES_PER_SECTION), min_pfn - 1]
which would handle in the x86-64 case [0..0] and, therefore, initialize PFN
0.

Also, I think the special-case of PFN 0 is analogous to the
round_up(max_pfn, PAGES_PER_SECTION) handling in
init_zone_unavailable_mem(): who guarantees that these PFN above the highest
present PFN are actually spanned by a zone?

I'd suggest going through all zone ranges in free_area_init() first, dealing
with zones that have "not section aligned start/end", clamping them up/down
if required such that no holes within a section are left uncovered by a
zone.

-- 
Thanks,

David / dhildenb
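To make suggestion (2) concrete, a rough, illustrative sketch of how the low
end could be handled the same way as the existing round_up(max_pfn,
PAGES_PER_SECTION) tail case, reusing the init_unavailable_range() helper
that existed in mm/page_alloc.c at the time. The helper name and the min_pfn
parameter below are assumptions for illustration only and do not come from
the thread.

/*
 * Illustrative sketch only: initialize the struct pages below the lowest
 * present PFN (e.g. PFN 0 on x86-64 when the e820 map starts at 0x1000),
 * mirroring how the tail of the last section is handled.  "min_pfn" is
 * assumed to be the lowest PFN found in memblock.memory.
 */
static u64 __init init_unavailable_low_mem(unsigned long min_pfn)
{
	unsigned long spfn = round_down(min_pfn, PAGES_PER_SECTION);

	/* init_unavailable_range() covers [spfn, min_pfn) exclusive of min_pfn */
	if (spfn < min_pfn)
		return init_unavailable_range(spfn, min_pfn);

	return 0;
}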
On 02/01/21 at 10:32am, David Hildenbrand wrote:
> On 30.01.21 23:10, Mike Rapoport wrote:
> > From: Mike Rapoport <rppt@linux.ibm.com>
[...]
> I think, to make that code more robust, and to not rely on archs to do the
> right thing, we should do something like
> 
> 1) Make sure in free_area_init() that each PFN with a memmap (i.e., falls
> into a partial present section) is spanned by a zone; that would include PFN
> 0 in this case.
> 
> 2) In init_zone_unavailable_mem(), similar to round_up(max_pfn,
> PAGES_PER_SECTION) handling, consider range
> 	[round_down(min_pfn, PAGES_PER_SECTION), min_pfn - 1]
> which would handle in the x86-64 case [0..0] and, therefore, initialize PFN
> 0.

Sounds reasonable. Maybe we can change to get the real expected lowest
pfn from find_min_pfn_for_node() by iterating memblock.memory and
memblock.reserved and comparing.

> 
> Also, I think the special-case of PFN 0 is analogous to the
> round_up(max_pfn, PAGES_PER_SECTION) handling in
> init_zone_unavailable_mem(): who guarantees that these PFN above the highest
> present PFN are actually spanned by a zone?
> 
> I'd suggest going through all zone ranges in free_area_init() first, dealing
> with zones that have "not section aligned start/end", clamping them up/down
> if required such that no holes within a section are left uncovered by a
> zone.
> 
> -- 
> Thanks,
> 
> David / dhildenb
On Mon, Feb 01, 2021 at 10:32:44AM +0100, David Hildenbrand wrote:
> On 30.01.21 23:10, Mike Rapoport wrote:
> > From: Mike Rapoport <rppt@linux.ibm.com>
[...]
> I think, to make that code more robust, and to not rely on archs to do the
> right thing, we should do something like
> 
> 1) Make sure in free_area_init() that each PFN with a memmap (i.e., falls
> into a partial present section) is spanned by a zone; that would include PFN
> 0 in this case.
> 
> 2) In init_zone_unavailable_mem(), similar to round_up(max_pfn,
> PAGES_PER_SECTION) handling, consider range
> 	[round_down(min_pfn, PAGES_PER_SECTION), min_pfn - 1]
> which would handle in the x86-64 case [0..0] and, therefore, initialize PFN
> 0.
> 
> Also, I think the special-case of PFN 0 is analogous to the
> round_up(max_pfn, PAGES_PER_SECTION) handling in
> init_zone_unavailable_mem(): who guarantees that these PFN above the highest
> present PFN are actually spanned by a zone?
> 
> I'd suggest going through all zone ranges in free_area_init() first, dealing
> with zones that have "not section aligned start/end", clamping them up/down
> if required such that no holes within a section are left uncovered by a
> zone.

I thought about changing the way zone extents are calculated so that zone
start/end will be always on a section boundary, but zone->zone_start_pfn
depends on node->node_start_pfn which is defined by hardware and expanding
a node to make its start pfn aligned at the section boundary might violate
the HW addressing scheme.

Maybe this could never happen, or maybe it's not really important as the
pages there will be reserved anyway, but I'm not sure I can estimate all
the implications.

-- 
Sincerely yours,
Mike.
On 01.02.21 15:30, Mike Rapoport wrote:
> On Mon, Feb 01, 2021 at 10:32:44AM +0100, David Hildenbrand wrote:
>> On 30.01.21 23:10, Mike Rapoport wrote:
>>> From: Mike Rapoport <rppt@linux.ibm.com>
[...]
>> I'd suggest going through all zone ranges in free_area_init() first, dealing
>> with zones that have "not section aligned start/end", clamping them up/down
>> if required such that no holes within a section are left uncovered by a
>> zone.
> 
> I thought about changing the way zone extents are calculated so that zone
> start/end will be always on a section boundary, but zone->zone_start_pfn
> depends on node->node_start_pfn which is defined by hardware and expanding
> a node to make its start pfn aligned at the section boundary might violate
> the HW addressing scheme.
> 
> Maybe this could never happen, or maybe it's not really important as the
> pages there will be reserved anyway, but I'm not sure I can estimate all
> the implications.
> 
> 

I'm suggesting to let zone (+node?) ranges cover memory holes with a valid
memmap. Not to move actual memory between nodes/zones.

-- 
Thanks,

David / dhildenb
On Mon, Feb 01, 2021 at 07:26:05PM +0800, Baoquan He wrote:
> On 02/01/21 at 10:32am, David Hildenbrand wrote:
> > 
> > 2) In init_zone_unavailable_mem(), similar to round_up(max_pfn,
> > PAGES_PER_SECTION) handling, consider range
> > 	[round_down(min_pfn, PAGES_PER_SECTION), min_pfn - 1]
> > which would handle in the x86-64 case [0..0] and, therefore, initialize PFN
> > 0.
> 
> Sounds reasonable. Maybe we can change to get the real expected lowest
> pfn from find_min_pfn_for_node() by iterating memblock.memory and
> memblock.reserved and comparing.

As I've found out the hard way [1], reserved memory is not necessary present.

There could be a system that instead of reserving memory at 0xfe000000 like
in Guillaume's report, could have it reserved at 0x0 and populated only
from the first gigabyte...

[1] https://lore.kernel.org/lkml/127999c4-7d56-0c36-7f88-8e1a5c934cae@collabora.com

-- 
Sincerely yours,
Mike.
On 02/01/21 at 04:34pm, Mike Rapoport wrote:
> On Mon, Feb 01, 2021 at 07:26:05PM +0800, Baoquan He wrote:
> > On 02/01/21 at 10:32am, David Hildenbrand wrote:
> > > 
> > > 2) In init_zone_unavailable_mem(), similar to round_up(max_pfn,
> > > PAGES_PER_SECTION) handling, consider range
> > > 	[round_down(min_pfn, PAGES_PER_SECTION), min_pfn - 1]
> > > which would handle in the x86-64 case [0..0] and, therefore, initialize PFN
> > > 0.
> > 
> > Sounds reasonable. Maybe we can change to get the real expected lowest
> > pfn from find_min_pfn_for_node() by iterating memblock.memory and
> > memblock.reserved and comparing.
> 
> As I've found out the hard way [1], reserved memory is not necessary present.
> 
> There could be a system that instead of reserving memory at 0xfe000000 like
> in Guillaume's report, could have it reserved at 0x0 and populated only
> from the first gigabyte...

OK. I thought that we can even compare memblock.memory.regions[0].base
with memblock.reserved.regions[0].base and take the smaller one as the
lowest pfn and assign it to arch_zone_lowest_possible_pfn[0]. When we try
to get the present pages, we still check memblock.memory with
for_each_mem_pfn_range(). Since we will consider and take reserved memory
into zone anyway, arch_zone_lowest_possible_pfn[] only impact the boundary
of zone. Just rough thought, please ignore it if something is missed.

Thanks
Baoquan
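For reference, the comparison Baoquan describes could look roughly like the
snippet below. This is an illustrative sketch only, not code from the thread;
the function name is made up, and it assumes both memblock.memory and
memblock.reserved contain at least one region (memblock keeps regions sorted
by base address, so regions[0] holds the lowest one).

/*
 * Illustrative sketch only: pick the lower of the first memblock.memory
 * and memblock.reserved bases as the value used for
 * arch_zone_lowest_possible_pfn[0].
 */
static unsigned long __init lowest_possible_pfn(void)
{
	phys_addr_t mem_base = memblock.memory.regions[0].base;
	phys_addr_t rsv_base = memblock.reserved.regions[0].base;

	return PFN_DOWN(min(mem_base, rsv_base));
}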
On Mon, Feb 01, 2021 at 03:32:33PM +0100, David Hildenbrand wrote:
> On 01.02.21 15:30, Mike Rapoport wrote:
> > > 
> > > I'd suggest going through all zone ranges in free_area_init() first, dealing
> > > with zones that have "not section aligned start/end", clamping them up/down
> > > if required such that no holes within a section are left uncovered by a
> > > zone.
> > 
> > I thought about changing the way zone extents are calculated so that zone
> > start/end will be always on a section boundary, but zone->zone_start_pfn
> > depends on node->node_start_pfn which is defined by hardware and expanding
> > a node to make its start pfn aligned at the section boundary might violate
> > the HW addressing scheme.
> > 
> > Maybe this could never happen, or maybe it's not really important as the
> > pages there will be reserved anyway, but I'm not sure I can estimate all
> > the implications.
> > 
> 
> I'm suggesting to let zone (+node?) ranges cover memory holes with a valid
> memmap. Not to move actual memory between nodes/zones.

I didn't think you suggest to move actual memory :)

My concern was that extending node range might cause troubles, but TBH, I
cannot think of a memory layout that will be crazy enough to actually get us
into those troubles.

So something like the patch below might work. It'll need nice wrapping and
some comments, but generally it implements your suggestion to extend node's
range to include partial sections, and then interleave initialization of
struct pages representing unpopulated memory with the initialization of the
"real" memory map. Since zone's start/end are derived from node's start/end
we also get zones covering the holes.

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 519a60d5b6f7..179d1eb4a9bb 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6257,24 +6257,69 @@ static void __meminit zone_init_free_lists(struct zone *zone)
 	}
 }
 
+#if !defined(CONFIG_FLAT_NODE_MEM_MAP)
+static u64 __meminit init_unavailable_range(unsigned long spfn,
+					    unsigned long epfn,
+					    int zone, int node)
+{
+	unsigned long pfn;
+	u64 pgcnt = 0;
+
+	for (pfn = spfn; pfn < epfn; pfn++) {
+		if (!pfn_valid(ALIGN_DOWN(pfn, pageblock_nr_pages))) {
+			pfn = ALIGN_DOWN(pfn, pageblock_nr_pages)
+				+ pageblock_nr_pages - 1;
+			continue;
+		}
+		__init_single_page(pfn_to_page(pfn), pfn, zone, node);
+		__SetPageReserved(pfn_to_page(pfn));
+		pgcnt++;
+	}
+
+	return pgcnt;
+}
+#else
+static inline u64 init_unavailable_range(unsigned long spfn, unsigned long epfn,
+					 int zone, int node)
+{
+	return 0;
+}
+#endif
+
+
 void __meminit __weak memmap_init(unsigned long size, int nid,
 				  unsigned long zone,
 				  unsigned long range_start_pfn)
 {
-	unsigned long start_pfn, end_pfn;
+	unsigned long start_pfn, end_pfn, next_pfn = 0;
 	unsigned long range_end_pfn = range_start_pfn + size;
+	u64 pgcnt = 0;
 	int i;
 
 	for_each_mem_pfn_range(i, nid, &start_pfn, &end_pfn, NULL) {
 		start_pfn = clamp(start_pfn, range_start_pfn, range_end_pfn);
 		end_pfn = clamp(end_pfn, range_start_pfn, range_end_pfn);
+		next_pfn = clamp(next_pfn, range_start_pfn, range_end_pfn);
 
 		if (end_pfn > start_pfn) {
 			size = end_pfn - start_pfn;
 			memmap_init_zone(size, nid, zone, start_pfn, range_end_pfn,
 					 MEMINIT_EARLY, NULL, MIGRATE_MOVABLE);
 		}
+
+		if (next_pfn < start_pfn)
+			pgcnt += init_unavailable_range(next_pfn, start_pfn,
+							zone, nid);
+		next_pfn = end_pfn;
 	}
+
+	if (next_pfn < range_end_pfn)
+		pgcnt += init_unavailable_range(next_pfn, range_end_pfn,
+						zone, nid);
+
+	if (pgcnt)
+		pr_info("%s: Zeroed struct page in unavailable ranges: %lld\n",
+			zone_names[zone], pgcnt);
 }
 
 static int zone_batchsize(struct zone *zone)
@@ -6523,6 +6568,12 @@ void __init get_pfn_range_for_nid(unsigned int nid,
 
 	if (*start_pfn == -1UL)
 		*start_pfn = 0;
+	else {
+#ifdef CONFIG_SPARSEMEM
+		*start_pfn = round_down(*start_pfn, PAGES_PER_SECTION);
+		*end_pfn = round_up(*end_pfn, PAGES_PER_SECTION);
+#endif
+	}
 }
 
 /*
@@ -7075,88 +7126,6 @@ void __init free_area_init_memoryless_node(int nid)
 	free_area_init_node(nid);
 }
 
-#if !defined(CONFIG_FLAT_NODE_MEM_MAP)
-/*
- * Initialize all valid struct pages in the range [spfn, epfn) and mark them
- * PageReserved(). Return the number of struct pages that were initialized.
- */
-static u64 __init init_unavailable_range(unsigned long spfn, unsigned long epfn)
-{
-	unsigned long pfn;
-	u64 pgcnt = 0;
-
-	for (pfn = spfn; pfn < epfn; pfn++) {
-		if (!pfn_valid(ALIGN_DOWN(pfn, pageblock_nr_pages))) {
-			pfn = ALIGN_DOWN(pfn, pageblock_nr_pages)
-				+ pageblock_nr_pages - 1;
-			continue;
-		}
-		/*
-		 * Use a fake node/zone (0) for now. Some of these pages
-		 * (in memblock.reserved but not in memblock.memory) will
-		 * get re-initialized via reserve_bootmem_region() later.
-		 */
-		__init_single_page(pfn_to_page(pfn), pfn, 0, 0);
-		__SetPageReserved(pfn_to_page(pfn));
-		pgcnt++;
-	}
-
-	return pgcnt;
-}
-
-/*
- * Only struct pages that are backed by physical memory are zeroed and
- * initialized by going through __init_single_page(). But, there are some
- * struct pages which are reserved in memblock allocator and their fields
- * may be accessed (for example page_to_pfn() on some configuration accesses
- * flags). We must explicitly initialize those struct pages.
- *
- * This function also addresses a similar issue where struct pages are left
- * uninitialized because the physical address range is not covered by
- * memblock.memory or memblock.reserved. That could happen when memblock
- * layout is manually configured via memmap=, or when the highest physical
- * address (max_pfn) does not end on a section boundary.
- */
-static void __init init_unavailable_mem(void)
-{
-	phys_addr_t start, end;
-	u64 i, pgcnt;
-	phys_addr_t next = 0;
-
-	/*
-	 * Loop through unavailable ranges not covered by memblock.memory.
-	 */
-	pgcnt = 0;
-	for_each_mem_range(i, &start, &end) {
-		if (next < start)
-			pgcnt += init_unavailable_range(PFN_DOWN(next),
-							PFN_UP(start));
-		next = end;
-	}
-
-	/*
-	 * Early sections always have a fully populated memmap for the whole
-	 * section - see pfn_valid(). If the last section has holes at the
-	 * end and that section is marked "online", the memmap will be
-	 * considered initialized. Make sure that memmap has a well defined
-	 * state.
-	 */
-	pgcnt += init_unavailable_range(PFN_DOWN(next),
-					round_up(max_pfn, PAGES_PER_SECTION));
-
-	/*
-	 * Struct pages that do not have backing memory. This could be because
-	 * firmware is using some of this memory, or for some other reasons.
-	 */
-	if (pgcnt)
-		pr_info("Zeroed struct page in unavailable ranges: %lld pages", pgcnt);
-}
-#else
-static inline void __init init_unavailable_mem(void)
-{
-}
-#endif /* !CONFIG_FLAT_NODE_MEM_MAP */
-
 #if MAX_NUMNODES > 1
 /*
  * Figure out the number of possible node ids.
@@ -7516,7 +7485,7 @@ void __init free_area_init(unsigned long *max_zone_pfn)
 	memset(arch_zone_highest_possible_pfn, 0,
 				sizeof(arch_zone_highest_possible_pfn));
 
-	start_pfn = find_min_pfn_with_active_regions();
+	start_pfn = 0;
 	descending = arch_has_descending_max_zone_pfns();
 
 	for (i = 0; i < MAX_NR_ZONES; i++) {
@@ -7580,7 +7549,6 @@ void __init free_area_init(unsigned long *max_zone_pfn)
 	/* Initialise every node */
 	mminit_verify_pageflags_layout();
 	setup_nr_node_ids();
-	init_unavailable_mem();
 	for_each_online_node(nid) {
 		pg_data_t *pgdat = NODE_DATA(nid);
 		free_area_init_node(nid);

-- 
Sincerely yours,
Mike.
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 3412c4595efd..67c77ed6eef8 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -727,6 +727,14 @@ static void __init trim_low_memory_range(void)
 	 * Kconfig help text for X86_RESERVE_LOW.
 	 */
 	memblock_reserve(0, ALIGN(reserve_low, PAGE_SIZE));
+
+	/*
+	 * Even if the firmware does not report the memory at address 0 as
+	 * usable, inform the generic memory management about its existence
+	 * to ensure it is a part of ZONE_DMA and the memory map for it is
+	 * properly initialized.
+	 */
+	memblock_add(0, ALIGN(reserve_low, PAGE_SIZE));
 }
 
 /*