2020-02-21 10:40:34

by Juergen Gross

Subject: [PATCH] x86/mm: fix dump_pagetables with Xen PV

Commit 2ae27137b2db89 ("x86: mm: convert dump_pagetables to use
walk_page_range") broke Xen PV guests as the hypervisor reserved hole
in the memory map was not taken into account.

Fix that by starting the kernel range only at GUARD_HOLE_END_ADDR.

Fixes: 2ae27137b2db89 ("x86: mm: convert dump_pagetables to use walk_page_range")
Reported-by: Julien Grall <[email protected]>
Signed-off-by: Juergen Gross <[email protected]>
---
arch/x86/mm/dump_pagetables.c | 7 +------
1 file changed, 1 insertion(+), 6 deletions(-)

diff --git a/arch/x86/mm/dump_pagetables.c b/arch/x86/mm/dump_pagetables.c
index 64229dad7eab..69309cd56fdf 100644
--- a/arch/x86/mm/dump_pagetables.c
+++ b/arch/x86/mm/dump_pagetables.c
@@ -363,13 +363,8 @@ static void ptdump_walk_pgd_level_core(struct seq_file *m,
 {
 	const struct ptdump_range ptdump_ranges[] = {
 #ifdef CONFIG_X86_64
-
-#define normalize_addr_shift (64 - (__VIRTUAL_MASK_SHIFT + 1))
-#define normalize_addr(u) ((signed long)((u) << normalize_addr_shift) >> \
-			   normalize_addr_shift)
-
 	{0, PTRS_PER_PGD * PGD_LEVEL_MULT / 2},
-	{normalize_addr(PTRS_PER_PGD * PGD_LEVEL_MULT / 2), ~0UL},
+	{GUARD_HOLE_END_ADDR, ~0UL},
 #else
 	{0, ~0UL},
 #endif
--
2.16.4
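
For context, here is a minimal standalone sketch (not part of the patch) of the address arithmetic behind this change. It assumes 4-level paging and the guard-hole constants as they appear in arch/x86/include/asm/pgtable_64_types.h for that case; treat the exact values as an illustration rather than something taken from this thread. The old table started the kernel half of the address space at normalize_addr(PTRS_PER_PGD * PGD_LEVEL_MULT / 2), which works out to the very base of the region reserved for the hypervisor on Xen PV; starting at GUARD_HOLE_END_ADDR instead begins just past that hole.

/*
 * sketch.c - illustration only, not part of the patch.
 * Assumes 4-level paging (PGDIR_SHIFT = 39, __VIRTUAL_MASK_SHIFT = 47)
 * and the guard-hole constants as defined for that configuration in
 * arch/x86/include/asm/pgtable_64_types.h.
 */
#include <stdio.h>

#define PGDIR_SHIFT		39
#define PTRS_PER_PGD		512UL
#define PGD_LEVEL_MULT		(1UL << PGDIR_SHIFT)
#define __VIRTUAL_MASK_SHIFT	47

#define GUARD_HOLE_PGD_ENTRY	-256UL
#define GUARD_HOLE_SIZE		(16UL << PGDIR_SHIFT)
#define GUARD_HOLE_BASE_ADDR	(GUARD_HOLE_PGD_ENTRY << PGDIR_SHIFT)
#define GUARD_HOLE_END_ADDR	(GUARD_HOLE_BASE_ADDR + GUARD_HOLE_SIZE)

/* Old start address: sign-extend bit 47 to make the address canonical. */
#define normalize_addr_shift (64 - (__VIRTUAL_MASK_SHIFT + 1))
#define normalize_addr(u) ((signed long)((u) << normalize_addr_shift) >> \
			   normalize_addr_shift)

int main(void)
{
	unsigned long old_start = normalize_addr(PTRS_PER_PGD * PGD_LEVEL_MULT / 2);

	/* The old start coincides with the base of the hypervisor guard hole. */
	printf("old kernel range start: %#lx\n", old_start);
	printf("guard hole:             %#lx - %#lx\n",
	       GUARD_HOLE_BASE_ADDR, GUARD_HOLE_END_ADDR);
	printf("new kernel range start: %#lx\n", GUARD_HOLE_END_ADDR);
	return 0;
}

With these assumed constants the sketch prints 0xffff800000000000 for the old start, i.e. exactly GUARD_HOLE_BASE_ADDR, while the new start is 0xffff880000000000, the first address past the reserved hole.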


2020-02-21 10:53:19

by Julien Grall

Subject: Re: [Xen-devel] [PATCH] x86/mm: fix dump_pagetables with Xen PV

Hi Juergen,

On 21/02/2020 10:38, Juergen Gross wrote:
> Commit 2ae27137b2db89 ("x86: mm: convert dump_pagetables to use
> walk_page_range") broke Xen PV guests as the hypervisor reserved hole
> in the memory map was not taken into account.
>
> Fix that by starting the kernel range only at GUARD_HOLE_END_ADDR.
>
> Fixes: 2ae27137b2db89 ("x86: mm: convert dump_pagetables to use walk_page_range")
> Reported-by: Julien Grall <[email protected]>
> Signed-off-by: Juergen Gross <[email protected]>

I can confirm the crash has now disappeared:

Tested-by: Julien Grall <[email protected]>

Cheers,

--
Julien Grall

2020-02-28 15:33:07

by Juergen Gross

Subject: Re: [PATCH] x86/mm: fix dump_pagetables with Xen PV

Friendly ping...

On 21.02.20 11:38, Juergen Gross wrote:
> Commit 2ae27137b2db89 ("x86: mm: convert dump_pagetables to use
> walk_page_range") broke Xen PV guests as the hypervisor reserved hole
> in the memory map was not taken into account.
>
> Fix that by starting the kernel range only at GUARD_HOLE_END_ADDR.
>
> Fixes: 2ae27137b2db89 ("x86: mm: convert dump_pagetables to use walk_page_range")
> Reported-by: Julien Grall <[email protected]>
> Signed-off-by: Juergen Gross <[email protected]>
> ---
> arch/x86/mm/dump_pagetables.c | 7 +------
> 1 file changed, 1 insertion(+), 6 deletions(-)
>
> diff --git a/arch/x86/mm/dump_pagetables.c b/arch/x86/mm/dump_pagetables.c
> index 64229dad7eab..69309cd56fdf 100644
> --- a/arch/x86/mm/dump_pagetables.c
> +++ b/arch/x86/mm/dump_pagetables.c
> @@ -363,13 +363,8 @@ static void ptdump_walk_pgd_level_core(struct seq_file *m,
>  {
>  	const struct ptdump_range ptdump_ranges[] = {
>  #ifdef CONFIG_X86_64
> -
> -#define normalize_addr_shift (64 - (__VIRTUAL_MASK_SHIFT + 1))
> -#define normalize_addr(u) ((signed long)((u) << normalize_addr_shift) >> \
> -			   normalize_addr_shift)
> -
>  	{0, PTRS_PER_PGD * PGD_LEVEL_MULT / 2},
> -	{normalize_addr(PTRS_PER_PGD * PGD_LEVEL_MULT / 2), ~0UL},
> +	{GUARD_HOLE_END_ADDR, ~0UL},
>  #else
>  	{0, ~0UL},
>  #endif
>

2020-02-29 11:50:13

by tip-bot2 for Juergen Gross

Subject: [tip: x86/urgent] x86/mm: Fix dump_pagetables with Xen PV

The following commit has been merged into the x86/urgent branch of tip:

Commit-ID: bba42affa732d6fd5bd5c9678e6deacde2de1547
Gitweb: https://git.kernel.org/tip/bba42affa732d6fd5bd5c9678e6deacde2de1547
Author: Juergen Gross <[email protected]>
AuthorDate: Fri, 21 Feb 2020 11:38:51 +01:00
Committer: Thomas Gleixner <[email protected]>
CommitterDate: Sat, 29 Feb 2020 12:43:10 +01:00

x86/mm: Fix dump_pagetables with Xen PV

Commit 2ae27137b2db89 ("x86: mm: convert dump_pagetables to use
walk_page_range") broke Xen PV guests as the hypervisor reserved hole in
the memory map was not taken into account.

Fix that by starting the kernel range only at GUARD_HOLE_END_ADDR.

Fixes: 2ae27137b2db89 ("x86: mm: convert dump_pagetables to use walk_page_range")
Reported-by: Julien Grall <[email protected]>
Signed-off-by: Juergen Gross <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Tested-by: Julien Grall <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]

---
arch/x86/mm/dump_pagetables.c | 7 +------
1 file changed, 1 insertion(+), 6 deletions(-)

diff --git a/arch/x86/mm/dump_pagetables.c b/arch/x86/mm/dump_pagetables.c
index 64229da..69309cd 100644
--- a/arch/x86/mm/dump_pagetables.c
+++ b/arch/x86/mm/dump_pagetables.c
@@ -363,13 +363,8 @@ static void ptdump_walk_pgd_level_core(struct seq_file *m,
 {
 	const struct ptdump_range ptdump_ranges[] = {
 #ifdef CONFIG_X86_64
-
-#define normalize_addr_shift (64 - (__VIRTUAL_MASK_SHIFT + 1))
-#define normalize_addr(u) ((signed long)((u) << normalize_addr_shift) >> \
-			   normalize_addr_shift)
-
 	{0, PTRS_PER_PGD * PGD_LEVEL_MULT / 2},
-	{normalize_addr(PTRS_PER_PGD * PGD_LEVEL_MULT / 2), ~0UL},
+	{GUARD_HOLE_END_ADDR, ~0UL},
 #else
 	{0, ~0UL},
 #endif