2022-11-10 20:41:10

by Sean Christopherson

Subject: [PATCH v2 1/5] x86/mm: Recompute physical address for every page of per-CPU CEA mapping

Recompute the physical address for each per-CPU page in the CPU entry
area; a recent commit inadvertently modified cea_map_percpu_pages() such
that every PTE is mapped to the physical address of the first page.

Fixes: 9fd429c28073 ("x86/kasan: Map shadow for percpu pages on demand")
Cc: Andrey Ryabinin <[email protected]>
Signed-off-by: Sean Christopherson <[email protected]>
---
arch/x86/mm/cpu_entry_area.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/mm/cpu_entry_area.c b/arch/x86/mm/cpu_entry_area.c
index dff9001e5e12..d831aae94b41 100644
--- a/arch/x86/mm/cpu_entry_area.c
+++ b/arch/x86/mm/cpu_entry_area.c
@@ -97,7 +97,7 @@ cea_map_percpu_pages(void *cea_vaddr, void *ptr, int pages, pgprot_t prot)
early_pfn_to_nid(PFN_DOWN(pa)));

for ( ; pages; pages--, cea_vaddr += PAGE_SIZE, ptr += PAGE_SIZE)
- cea_set_pte(cea_vaddr, pa, prot);
+ cea_set_pte(cea_vaddr, per_cpu_ptr_to_phys(ptr), prot);
}

static void __init percpu_setup_debug_store(unsigned int cpu)
--
2.38.1.431.g37b22c650d-goog
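
To make the one-line fix concrete, below is a minimal userspace sketch of
the loop before and after the change. It is an illustration only, not
kernel code: per_cpu_ptr_to_phys() is stubbed with a made-up
virtual-to-physical offset, and map_pages() mirrors only the loop
structure of cea_map_percpu_pages().

/*
 * Userspace sketch of the bug and the fix -- not kernel code.
 * per_cpu_ptr_to_phys() is a hypothetical stub; only the loop
 * structure mirrors cea_map_percpu_pages().
 */
#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE 4096UL

/* Stub, not the kernel helper: pretend phys = virt + a fixed offset. */
static uint64_t per_cpu_ptr_to_phys(void *ptr)
{
	return (uint64_t)(uintptr_t)ptr + 0x100000000ULL;
}

static void map_pages(char *cea_vaddr, char *ptr, int pages)
{
	uint64_t pa = per_cpu_ptr_to_phys(ptr);	/* hoisted: computed once */

	for ( ; pages; pages--, cea_vaddr += PAGE_SIZE, ptr += PAGE_SIZE)
		printf("vaddr %p -> buggy pa 0x%llx, fixed pa 0x%llx\n",
		       (void *)cea_vaddr, (unsigned long long)pa,
		       (unsigned long long)per_cpu_ptr_to_phys(ptr));
}

int main(void)
{
	/* Three fake pages; addresses are arbitrary and never dereferenced. */
	map_pages((char *)0x10000, (char *)0x4000, 3);
	return 0;
}

Compiled and run (e.g. gcc sketch.c && ./a.out), the buggy column stays
stuck at the first page's physical address on every iteration, while the
fixed column advances by PAGE_SIZE per page, which is the behavior the
patch restores.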



2022-11-14 14:25:23

by Andrey Ryabinin

Subject: Re: [PATCH v2 1/5] x86/mm: Recompute physical address for every page of per-CPU CEA mapping



On 11/10/22 23:35, Sean Christopherson wrote:
> Recompute the physical address for each per-CPU page in the CPU entry
> area; a recent commit inadvertently modified cea_map_percpu_pages() such
> that every PTE is mapped to the physical address of the first page.
>
> Fixes: 9fd429c28073 ("x86/kasan: Map shadow for percpu pages on demand")
> Cc: Andrey Ryabinin <[email protected]>
> Signed-off-by: Sean Christopherson <[email protected]>

Reviewed-by: Andrey Ryabinin <[email protected]>

2022-11-15 22:34:44

by tip-bot2 for Sean Christopherson

Subject: [tip: x86/mm] x86/mm: Recompute physical address for every page of per-CPU CEA mapping

The following commit has been merged into the x86/mm branch of tip:

Commit-ID: 991ab455645118e83fe9f2f9aea6ee383908ffd4
Gitweb: https://git.kernel.org/tip/991ab455645118e83fe9f2f9aea6ee383908ffd4
Author: Sean Christopherson <[email protected]>
AuthorDate: Thu, 10 Nov 2022 20:35:00
Committer: Peter Zijlstra <[email protected]>
CommitterDate: Tue, 15 Nov 2022 22:29:58 +01:00

x86/mm: Recompute physical address for every page of per-CPU CEA mapping

Recompute the physical address for each per-CPU page in the CPU entry
area; a recent commit inadvertently modified cea_map_percpu_pages() such
that every PTE is mapped to the physical address of the first page.

Fixes: 9fd429c28073 ("x86/kasan: Map shadow for percpu pages on demand")
Signed-off-by: Sean Christopherson <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Andrey Ryabinin <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
---
arch/x86/mm/cpu_entry_area.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/mm/cpu_entry_area.c b/arch/x86/mm/cpu_entry_area.c
index dff9001..d831aae 100644
--- a/arch/x86/mm/cpu_entry_area.c
+++ b/arch/x86/mm/cpu_entry_area.c
@@ -97,7 +97,7 @@ cea_map_percpu_pages(void *cea_vaddr, void *ptr, int pages, pgprot_t prot)
early_pfn_to_nid(PFN_DOWN(pa)));

for ( ; pages; pages--, cea_vaddr += PAGE_SIZE, ptr += PAGE_SIZE)
- cea_set_pte(cea_vaddr, pa, prot);
+ cea_set_pte(cea_vaddr, per_cpu_ptr_to_phys(ptr), prot);
}

static void __init percpu_setup_debug_store(unsigned int cpu)

2022-12-17 19:20:34

by tip-bot2 for Sean Christopherson

Subject: [tip: x86/mm] x86/mm: Recompute physical address for every page of per-CPU CEA mapping

The following commit has been merged into the x86/mm branch of tip:

Commit-ID: 80d72a8f76e8f3f0b5a70b8c7022578e17bde8e7
Gitweb: https://git.kernel.org/tip/80d72a8f76e8f3f0b5a70b8c7022578e17bde8e7
Author: Sean Christopherson <[email protected]>
AuthorDate: Thu, 10 Nov 2022 20:35:00
Committer: Dave Hansen <[email protected]>
CommitterDate: Thu, 15 Dec 2022 10:37:28 -08:00

x86/mm: Recompute physical address for every page of per-CPU CEA mapping

Recompute the physical address for each per-CPU page in the CPU entry
area; a recent commit inadvertently modified cea_map_percpu_pages() such
that every PTE is mapped to the physical address of the first page.

Fixes: 9fd429c28073 ("x86/kasan: Map shadow for percpu pages on demand")
Signed-off-by: Sean Christopherson <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Andrey Ryabinin <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
---
arch/x86/mm/cpu_entry_area.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/mm/cpu_entry_area.c b/arch/x86/mm/cpu_entry_area.c
index dff9001..d831aae 100644
--- a/arch/x86/mm/cpu_entry_area.c
+++ b/arch/x86/mm/cpu_entry_area.c
@@ -97,7 +97,7 @@ cea_map_percpu_pages(void *cea_vaddr, void *ptr, int pages, pgprot_t prot)
early_pfn_to_nid(PFN_DOWN(pa)));

for ( ; pages; pages--, cea_vaddr += PAGE_SIZE, ptr += PAGE_SIZE)
- cea_set_pte(cea_vaddr, pa, prot);
+ cea_set_pte(cea_vaddr, per_cpu_ptr_to_phys(ptr), prot);
}

static void __init percpu_setup_debug_store(unsigned int cpu)