2024-01-26 16:53:45

by Steve Wahl

Subject: [PATCH v3] x86/mm/ident_map: Use gbpages only where full GB page should be mapped.

When ident_pud_init() uses only gbpages to create identity maps, large
ranges of addresses not actually requested can be included in the
resulting table; a 4K request will map a full GB. On UV systems, this
ends up including regions that will cause hardware to halt the system
if accessed (these are marked "reserved" by BIOS). Even though the
code never actually references these addresses, including them in an
active map allows processor speculation into these regions, which is
enough to trigger the system halt.

Instead of using gbpages for all memory regions, which can include
vast areas outside what's actually been requested, use them only when
map creation requests include the full GB page of space; descend to
using smaller 2M pages when only portions of a GB page are included in
the request.
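
To illustrate, here is a minimal, self-contained userspace sketch of
the per-pud-entry decision (illustration only, not kernel code:
PUD_SIZE/PUD_MASK mirror the x86_64 definitions, and the
direct_gbpages and pud_is_present parameters stand in for
info->direct_gbpages and pud_present(*pud) in the hunk below):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PUD_SHIFT       30
#define PUD_SIZE        (1UL << PUD_SHIFT)      /* 1 GB */
#define PUD_MASK        (~(PUD_SIZE - 1))

/* Sketch of the use_gbpage logic in the hunk below. */
static bool use_gbpage(uint64_t addr, uint64_t next,
                       bool direct_gbpages, bool pud_is_present)
{
        bool use = direct_gbpages;

        /* The request must cover the GB page from its beginning... */
        use &= ((addr & ~PUD_MASK) == 0);
        /* ... through its end ... */
        use &= ((next & ~PUD_MASK) == 0);
        /* ... and must not overwrite an existing mapping. */
        use &= !pud_is_present;

        return use;
}

int main(void)
{
        /* A 4K request inside a GB falls back to 2M (pmd) mappings. */
        printf("%d\n", use_gbpage(0x1000, 0x2000, true, false));         /* 0 */
        /* A full, aligned GB is mapped with a single gbpage. */
        printf("%d\n", use_gbpage(PUD_SIZE, 2 * PUD_SIZE, true, false)); /* 1 */
        return 0;
}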

No attempt is made to coalesce mapping requests. If a request requires
a map entry at the 2M (pmd) level, subsequent mapping requests within
the same 1G region will also be at the pmd level, even if adjacent or
overlapping such requests could have been combined to map a full
gbpage. Existing usage starts with larger regions and then adds
smaller regions, so this should not have any great consequence.
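
Concretely, consider a hypothetical pair of adjacent half-GB requests
that together span one aligned GB (self-contained sketch, illustrative
values only; full_gb() is just the two alignment tests from the hunk
below):

#include <stdint.h>
#include <stdio.h>

#define PUD_SIZE        (1UL << 30)     /* 1 GB */
#define PUD_MASK        (~(PUD_SIZE - 1))

/* True only when [addr, end) covers one full, aligned GB page. */
static int full_gb(uint64_t addr, uint64_t end)
{
        return ((addr & ~PUD_MASK) == 0) && ((end & ~PUD_MASK) == 0);
}

int main(void)
{
        /* Each half fails on its own, so both map at the pmd level. */
        printf("%d\n", full_gb(0, PUD_SIZE / 2));        /* 0 */
        printf("%d\n", full_gb(PUD_SIZE / 2, PUD_SIZE)); /* 0 */
        /* Only the combined request would have qualified... */
        printf("%d\n", full_gb(0, PUD_SIZE));            /* 1 */
        /* ... and once the first request has populated a pmd table,
         * pud_present() blocks gbpage use for that GB anyway. */
        return 0;
}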

The existing kernel option "nogbpages" would disallow use of
gbpages entirely and avoid this problem, but at the cost of a lot
of extra memory for page tables that are not really needed.
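
For scale (back-of-the-envelope arithmetic, standard x86_64 page
table geometry; the 64 TB figure is just an example): every GB mapped
with 2M pages needs its own 4K page of 512 pmd entries, so the cost
of "nogbpages" grows with the size of the identity map:

#include <stdio.h>

int main(void)
{
        /* Hypothetical: identity-map 64 TB with 2M pages throughout. */
        unsigned long gbs       = 64UL * 1024;          /* 64 TB, in GB */
        unsigned long pmd_pages = gbs;                  /* one pmd page per GB */
        unsigned long bytes     = pmd_pages * 4096UL;   /* 4K per pmd page */

        printf("%lu MB of pmd pages\n", bytes >> 20);   /* prints 256 */
        return 0;
}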

Signed-off-by: Steve Wahl <[email protected]>
---

v3: per Dave Hansen review, re-arrange changelog info,
refactor code to use bool variable and split out conditions.

v2: per Dave Hansen review: Additional changelog info,
moved pud_large() check earlier in the code, and
improved the comment describing the conditions
that restrict gbpage usage.

arch/x86/mm/ident_map.c | 23 ++++++++++++++++++-----
1 file changed, 18 insertions(+), 5 deletions(-)

diff --git a/arch/x86/mm/ident_map.c b/arch/x86/mm/ident_map.c
index 968d7005f4a7..f50cc210a981 100644
--- a/arch/x86/mm/ident_map.c
+++ b/arch/x86/mm/ident_map.c
@@ -26,18 +26,31 @@ static int ident_pud_init(struct x86_mapping_info *info, pud_t *pud_page,
for (; addr < end; addr = next) {
pud_t *pud = pud_page + pud_index(addr);
pmd_t *pmd;
+ bool use_gbpage;

next = (addr & PUD_MASK) + PUD_SIZE;
if (next > end)
next = end;

- if (info->direct_gbpages) {
- pud_t pudval;
+ /* if this is already a gbpage, this portion is already mapped */
+ if (pud_large(*pud))
+ continue;
+
+ /* Is using a gbpage allowed? */
+ use_gbpage = info->direct_gbpages;

- if (pud_present(*pud))
- continue;
+ /* Don't use gbpage if it maps more than the requested region. */
+ /* at the beginning: */
+ use_gbpage &= ((addr & ~PUD_MASK) == 0);
+ /* ... or at the end: */
+ use_gbpage &= ((next & ~PUD_MASK) == 0);
+
+ /* Never overwrite existing mappings */
+ use_gbpage &= !pud_present(*pud);
+
+ if (use_gbpage) {
+ pud_t pudval;

- addr &= PUD_MASK;
pudval = __pud((addr - info->offset) | info->page_flag);
set_pud(pud, pudval);
continue;
--
2.26.2



2024-02-12 16:05:54

by Steve Wahl

Subject: Re: [PATCH v3] x86/mm/ident_map: Use gbpages only where full GB page should be mapped.

Gentle Ping... Thanks

--
Steve Wahl

On Fri, Jan 26, 2024 at 10:48:41AM -0600, Steve Wahl wrote:
> [...]