2022-01-24 19:14:05

by Anshuman Khandual

Subject: [RFC V1 00/31] mm/mmap: Drop protection_map[] and platform's __SXXX/__PXXX requirements

protection_map[] is an array based construct that translates a given vm_flags
combination into a page protection value. The array is populated by each
platform via the exported [__S000 .. __S111] and [__P000 .. __P111] macros.
Its primary consumer is vm_get_page_prot(), which determines the page
protection value for a given vm_flags. The vm_get_page_prot() implementation
may in turn call the platform overrides arch_vm_get_page_prot() and
arch_filter_pgprot(). Some platforms also override the protection_map[]
entries originally built from __SXXX/__PXXX with different runtime values.

Currently there are multiple layers of abstraction, i.e. the __SXXX/__PXXX
macros, protection_map[], arch_vm_get_page_prot() and arch_filter_pgprot(),
built between the platform and generic MM, which together define
vm_get_page_prot().

Hence this series proposes to drop all these abstraction levels and instead
move the responsibility of defining vm_get_page_prot() to the platform
itself, making the interface clean and simple.

This first introduces ARCH_HAS_VM_GET_PAGE_PROT which enables the platforms
to define custom vm_get_page_prot(). This starts converting platforms that
either change protection_map[] or define the overrides arch_filter_pgprot()
or arch_vm_get_page_prot() which enables for those constructs to be dropped
off completely. This series then converts remaining platforms which enables
for __SXXX/__PXXX constructs to be dropped off completely. Finally it drops
the generic vm_get_page_prot() and then ARCH_HAS_VM_GET_PAGE_PROT as every
platform now defines their own vm_get_page_prot().

The last patch demonstrates how vm_flags combination indices can be defined
as macros and be replaced across all platforms (if required; not done yet).

The series has been inspired by an earlier discussion with Christoph Hellwig

https://lore.kernel.org/all/[email protected]/

This series applies on 5.17-rc1 after the following patch.

https://lore.kernel.org/all/[email protected]/

This has been cross-built for multiple platforms. I would like to get some
early feedback on this proposal. All reviews and suggestions are welcome.

Hello Christoph,

I have taken the liberty of preserving your authorship on the x86 patch,
which is borrowed almost as-is from our earlier discussion. I have also added
you as 'Suggested-by:' on the patch that adds config ARCH_HAS_VM_GET_PAGE_PROT.
Nonetheless, please feel free to point out any other authorship attributions
I should have added. Thank you.

- Anshuman

Cc: Christoph Hellwig <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: [email protected]
Cc: [email protected]

Anshuman Khandual (30):
mm/debug_vm_pgtable: Directly use vm_get_page_prot()
mm/mmap: Clarify protection_map[] indices
mm/mmap: Add new config ARCH_HAS_VM_GET_PAGE_PROT
powerpc/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
arm64/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
sparc/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
mips/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
m68k/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
arm/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
mm/mmap: Drop protection_map[]
mm/mmap: Drop arch_filter_pgprot()
mm/mmap: Drop arch_vm_get_page_prot()
s390/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
riscv/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
alpha/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
sh/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
arc/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
csky/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
xtensa/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
parisc/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
openrisc/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
um/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
microblaze/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
nios2/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
hexagon/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
nds32/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
ia64/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
mm/mmap: Drop generic vm_get_page_prot()
mm/mmap: Drop ARCH_HAS_VM_GET_PAGE_PROT
mm/mmap: Define macros for vm_flags access permission combinations

Christoph Hellwig (1):
x86/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT

arch/alpha/include/asm/pgtable.h | 17 -----
arch/alpha/mm/init.c | 41 +++++++++++
arch/arc/include/asm/pgtable-bits-arcv2.h | 17 -----
arch/arc/mm/mmap.c | 41 +++++++++++
arch/arm/include/asm/pgtable.h | 18 -----
arch/arm/mm/mmu.c | 50 +++++++++++--
arch/arm64/Kconfig | 1 -
arch/arm64/include/asm/mman.h | 3 +-
arch/arm64/include/asm/pgtable-prot.h | 18 -----
arch/arm64/include/asm/pgtable.h | 2 +-
arch/arm64/mm/mmap.c | 50 +++++++++++++
arch/csky/include/asm/pgtable.h | 18 -----
arch/csky/mm/init.c | 41 +++++++++++
arch/hexagon/include/asm/pgtable.h | 24 -------
arch/hexagon/mm/init.c | 42 +++++++++++
arch/ia64/include/asm/pgtable.h | 17 -----
arch/ia64/mm/init.c | 43 ++++++++++-
arch/m68k/include/asm/mcf_pgtable.h | 59 ---------------
arch/m68k/include/asm/motorola_pgtable.h | 22 ------
arch/m68k/include/asm/sun3_pgtable.h | 22 ------
arch/m68k/mm/init.c | 87 +++++++++++++++++++++++
arch/m68k/mm/motorola.c | 44 +++++++++++-
arch/microblaze/include/asm/pgtable.h | 17 -----
arch/microblaze/mm/init.c | 41 +++++++++++
arch/mips/include/asm/pgtable.h | 22 ------
arch/mips/mm/cache.c | 65 ++++++++++-------
arch/nds32/include/asm/pgtable.h | 17 -----
arch/nds32/mm/mmap.c | 41 +++++++++++
arch/nios2/include/asm/pgtable.h | 16 -----
arch/nios2/mm/init.c | 41 +++++++++++
arch/openrisc/include/asm/pgtable.h | 18 -----
arch/openrisc/mm/init.c | 41 +++++++++++
arch/parisc/include/asm/pgtable.h | 20 ------
arch/parisc/mm/init.c | 41 +++++++++++
arch/powerpc/include/asm/mman.h | 3 +-
arch/powerpc/include/asm/pgtable.h | 19 -----
arch/powerpc/mm/mmap.c | 47 ++++++++++++
arch/riscv/include/asm/pgtable.h | 16 -----
arch/riscv/mm/init.c | 41 +++++++++++
arch/s390/include/asm/pgtable.h | 17 -----
arch/s390/mm/mmap.c | 41 +++++++++++
arch/sh/include/asm/pgtable.h | 17 -----
arch/sh/mm/mmap.c | 43 +++++++++++
arch/sparc/include/asm/mman.h | 1 -
arch/sparc/include/asm/pgtable_32.h | 19 -----
arch/sparc/include/asm/pgtable_64.h | 19 -----
arch/sparc/mm/init_32.c | 41 +++++++++++
arch/sparc/mm/init_64.c | 71 +++++++++++++-----
arch/um/include/asm/pgtable.h | 17 -----
arch/um/kernel/mem.c | 41 +++++++++++
arch/x86/Kconfig | 1 -
arch/x86/include/asm/pgtable.h | 5 --
arch/x86/include/asm/pgtable_types.h | 19 -----
arch/x86/include/uapi/asm/mman.h | 14 ----
arch/x86/mm/Makefile | 2 +-
arch/x86/mm/mem_encrypt_amd.c | 4 --
arch/x86/mm/pgprot.c | 71 ++++++++++++++++++
arch/xtensa/include/asm/pgtable.h | 18 -----
arch/xtensa/mm/init.c | 41 +++++++++++
include/linux/mm.h | 45 ++++++++++--
include/linux/mman.h | 4 --
mm/Kconfig | 3 -
mm/debug_vm_pgtable.c | 27 +++----
mm/mmap.c | 22 ------
64 files changed, 1150 insertions(+), 636 deletions(-)
create mode 100644 arch/x86/mm/pgprot.c

--
2.25.1


2022-01-24 19:14:06

by Anshuman Khandual

Subject: [RFC V1 01/31] mm/debug_vm_pgtable: Directly use vm_get_page_prot()

Although protection_map[] contains the platform-defined page protection map,
vm_get_page_prot() is the right interface to use for fetching page protection
for a given vm_flags. Hence let's use it directly instead. This will also
reduce the dependency on protection_map[].

Cc: Andrew Morton <[email protected]>
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Anshuman Khandual <[email protected]>
---
mm/debug_vm_pgtable.c | 27 +++++++++++++++------------
1 file changed, 15 insertions(+), 12 deletions(-)

diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
index a7ac97c76762..07593eb79338 100644
--- a/mm/debug_vm_pgtable.c
+++ b/mm/debug_vm_pgtable.c
@@ -93,7 +93,7 @@ struct pgtable_debug_args {

static void __init pte_basic_tests(struct pgtable_debug_args *args, int idx)
{
- pgprot_t prot = protection_map[idx];
+ pgprot_t prot = vm_get_page_prot(idx);
pte_t pte = pfn_pte(args->fixed_pte_pfn, prot);
unsigned long val = idx, *ptr = &val;

@@ -101,7 +101,7 @@ static void __init pte_basic_tests(struct pgtable_debug_args *args, int idx)

/*
* This test needs to be executed after the given page table entry
- * is created with pfn_pte() to make sure that protection_map[idx]
+ * is created with pfn_pte() to make sure that vm_get_page_prot(idx)
* does not have the dirty bit enabled from the beginning. This is
* important for platforms like arm64 where (!PTE_RDONLY) indicate
* dirty bit being set.
@@ -188,7 +188,7 @@ static void __init pte_savedwrite_tests(struct pgtable_debug_args *args)
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
static void __init pmd_basic_tests(struct pgtable_debug_args *args, int idx)
{
- pgprot_t prot = protection_map[idx];
+ pgprot_t prot = vm_get_page_prot(idx);
unsigned long val = idx, *ptr = &val;
pmd_t pmd;

@@ -200,7 +200,7 @@ static void __init pmd_basic_tests(struct pgtable_debug_args *args, int idx)

/*
* This test needs to be executed after the given page table entry
- * is created with pfn_pmd() to make sure that protection_map[idx]
+ * is created with pfn_pmd() to make sure that vm_get_page_prot(idx)
* does not have the dirty bit enabled from the beginning. This is
* important for platforms like arm64 where (!PTE_RDONLY) indicate
* dirty bit being set.
@@ -323,7 +323,7 @@ static void __init pmd_savedwrite_tests(struct pgtable_debug_args *args)
#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
static void __init pud_basic_tests(struct pgtable_debug_args *args, int idx)
{
- pgprot_t prot = protection_map[idx];
+ pgprot_t prot = vm_get_page_prot(idx);
unsigned long val = idx, *ptr = &val;
pud_t pud;

@@ -335,7 +335,7 @@ static void __init pud_basic_tests(struct pgtable_debug_args *args, int idx)

/*
* This test needs to be executed after the given page table entry
- * is created with pfn_pud() to make sure that protection_map[idx]
+ * is created with pfn_pud() to make sure that vm_get_page_prot(idx)
* does not have the dirty bit enabled from the beginning. This is
* important for platforms like arm64 where (!PTE_RDONLY) indicate
* dirty bit being set.
@@ -1104,14 +1104,14 @@ static int __init init_args(struct pgtable_debug_args *args)
/*
* Initialize the debugging data.
*
- * protection_map[0] (or even protection_map[8]) will help create
- * page table entries with PROT_NONE permission as required for
- * pxx_protnone_tests().
+ * vm_get_page_prot(VM_NONE) or vm_get_page_prot(VM_SHARED|VM_NONE)
+ * will help create page table entries with PROT_NONE permission as
+ * required for pxx_protnone_tests().
*/
memset(args, 0, sizeof(*args));
args->vaddr = get_random_vaddr();
args->page_prot = vm_get_page_prot(VMFLAGS);
- args->page_prot_none = protection_map[0];
+ args->page_prot_none = vm_get_page_prot(VM_NONE);
args->is_contiguous_page = false;
args->pud_pfn = ULONG_MAX;
args->pmd_pfn = ULONG_MAX;
@@ -1246,12 +1246,15 @@ static int __init debug_vm_pgtable(void)
return ret;

/*
- * Iterate over the protection_map[] to make sure that all
+ * Iterate over each possible vm_flags to make sure that all
* the basic page table transformation validations just hold
* true irrespective of the starting protection value for a
* given page table entry.
+ *
+ * Protection based vm_flags combinations are always linear
+ * and increasing i.e. VM_NONE ..[VM_SHARED|READ|WRITE|EXEC].
*/
- for (idx = 0; idx < ARRAY_SIZE(protection_map); idx++) {
+ for (idx = VM_NONE; idx <= (VM_SHARED | VM_READ | VM_WRITE | VM_EXEC); idx++) {
pte_basic_tests(&args, idx);
pmd_basic_tests(&args, idx);
pud_basic_tests(&args, idx);
--
2.25.1

2022-01-24 19:14:11

by Anshuman Khandual

Subject: [RFC V1 03/31] mm/mmap: Add new config ARCH_HAS_VM_GET_PAGE_PROT

Add a new config ARCH_HAS_VM_GET_PAGE_PROT, which, when subscribed, enables a
given platform to define its own vm_get_page_prot(). This framework will
help remove the protection_map[] dependency going forward.

Cc: Andrew Morton <[email protected]>
Cc: [email protected]
Cc: [email protected]
Suggested-by: Christoph Hellwig <[email protected]>
Signed-off-by: Anshuman Khandual <[email protected]>
---
mm/Kconfig | 3 +++
mm/mmap.c | 2 ++
2 files changed, 5 insertions(+)

diff --git a/mm/Kconfig b/mm/Kconfig
index 257ed9c86de3..fa436478a94c 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -747,6 +747,9 @@ config ARCH_HAS_CACHE_LINE_SIZE
config ARCH_HAS_FILTER_PGPROT
bool

+config ARCH_HAS_VM_GET_PAGE_PROT
+ bool
+
config ARCH_HAS_PTE_DEVMAP
bool

diff --git a/mm/mmap.c b/mm/mmap.c
index 254d716220df..ec403de32dcb 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -81,6 +81,7 @@ static void unmap_region(struct mm_struct *mm,
struct vm_area_struct *vma, struct vm_area_struct *prev,
unsigned long start, unsigned long end);

+#ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
/* description of effects of mapping type and prot in current implementation.
* this is due to the limited x86 page protection hardware. The expected
* behavior is in parens:
@@ -136,6 +137,7 @@ pgprot_t vm_get_page_prot(unsigned long vm_flags)
return arch_filter_pgprot(ret);
}
EXPORT_SYMBOL(vm_get_page_prot);
+#endif /* CONFIG_ARCH_HAS_VM_GET_PAGE_PROT */

static pgprot_t vm_pgprot_modify(pgprot_t oldprot, unsigned long vm_flags)
{
--
2.25.1

2022-01-24 19:14:14

by Anshuman Khandual

Subject: [RFC V1 07/31] mips/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT

This defines and exports a platform-specific custom vm_get_page_prot() via
subscribing to ARCH_HAS_VM_GET_PAGE_PROT. Subsequently all the __SXXX and
__PXXX macros, which are no longer needed, can be dropped.

Cc: Thomas Bogendoerfer <[email protected]>
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Anshuman Khandual <[email protected]>
---
arch/mips/Kconfig | 1 +
arch/mips/include/asm/pgtable.h | 22 -----------
arch/mips/mm/cache.c | 65 ++++++++++++++++++++-------------
3 files changed, 41 insertions(+), 47 deletions(-)

diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
index 058446f01487..fcbfc52a1567 100644
--- a/arch/mips/Kconfig
+++ b/arch/mips/Kconfig
@@ -13,6 +13,7 @@ config MIPS
select ARCH_HAS_STRNLEN_USER
select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
select ARCH_HAS_UBSAN_SANITIZE_ALL
+ select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_HAS_GCOV_PROFILE_ALL
select ARCH_KEEP_MEMBLOCK
select ARCH_SUPPORTS_UPROBES
diff --git a/arch/mips/include/asm/pgtable.h b/arch/mips/include/asm/pgtable.h
index 7b8037f25d9e..bf193ad4f195 100644
--- a/arch/mips/include/asm/pgtable.h
+++ b/arch/mips/include/asm/pgtable.h
@@ -41,28 +41,6 @@ struct vm_area_struct;
* by reasonable means..
*/

-/*
- * Dummy values to fill the table in mmap.c
- * The real values will be generated at runtime
- */
-#define __P000 __pgprot(0)
-#define __P001 __pgprot(0)
-#define __P010 __pgprot(0)
-#define __P011 __pgprot(0)
-#define __P100 __pgprot(0)
-#define __P101 __pgprot(0)
-#define __P110 __pgprot(0)
-#define __P111 __pgprot(0)
-
-#define __S000 __pgprot(0)
-#define __S001 __pgprot(0)
-#define __S010 __pgprot(0)
-#define __S011 __pgprot(0)
-#define __S100 __pgprot(0)
-#define __S101 __pgprot(0)
-#define __S110 __pgprot(0)
-#define __S111 __pgprot(0)
-
extern unsigned long _page_cachable_default;
extern void __update_cache(unsigned long address, pte_t pte);

diff --git a/arch/mips/mm/cache.c b/arch/mips/mm/cache.c
index 830ab91e574f..06e29982965a 100644
--- a/arch/mips/mm/cache.c
+++ b/arch/mips/mm/cache.c
@@ -159,30 +159,6 @@ EXPORT_SYMBOL(_page_cachable_default);

#define PM(p) __pgprot(_page_cachable_default | (p))

-static inline void setup_protection_map(void)
-{
- protection_map[0] = PM(_PAGE_PRESENT | _PAGE_NO_EXEC | _PAGE_NO_READ);
- protection_map[1] = PM(_PAGE_PRESENT | _PAGE_NO_EXEC);
- protection_map[2] = PM(_PAGE_PRESENT | _PAGE_NO_EXEC | _PAGE_NO_READ);
- protection_map[3] = PM(_PAGE_PRESENT | _PAGE_NO_EXEC);
- protection_map[4] = PM(_PAGE_PRESENT);
- protection_map[5] = PM(_PAGE_PRESENT);
- protection_map[6] = PM(_PAGE_PRESENT);
- protection_map[7] = PM(_PAGE_PRESENT);
-
- protection_map[8] = PM(_PAGE_PRESENT | _PAGE_NO_EXEC | _PAGE_NO_READ);
- protection_map[9] = PM(_PAGE_PRESENT | _PAGE_NO_EXEC);
- protection_map[10] = PM(_PAGE_PRESENT | _PAGE_NO_EXEC | _PAGE_WRITE |
- _PAGE_NO_READ);
- protection_map[11] = PM(_PAGE_PRESENT | _PAGE_NO_EXEC | _PAGE_WRITE);
- protection_map[12] = PM(_PAGE_PRESENT);
- protection_map[13] = PM(_PAGE_PRESENT);
- protection_map[14] = PM(_PAGE_PRESENT | _PAGE_WRITE);
- protection_map[15] = PM(_PAGE_PRESENT | _PAGE_WRITE);
-}
-
-#undef PM
-
void cpu_cache_init(void)
{
if (cpu_has_3k_cache) {
@@ -206,6 +182,45 @@ void cpu_cache_init(void)

octeon_cache_init();
}
+}

- setup_protection_map();
+pgprot_t vm_get_page_prot(unsigned long vm_flags)
+{
+ switch (vm_flags & (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)) {
+ case VM_NONE:
+ return PM(_PAGE_PRESENT | _PAGE_NO_EXEC | _PAGE_NO_READ);
+ case VM_READ:
+ return PM(_PAGE_PRESENT | _PAGE_NO_EXEC);
+ case VM_WRITE:
+ return PM(_PAGE_PRESENT | _PAGE_NO_EXEC | _PAGE_NO_READ);
+ case VM_READ | VM_WRITE:
+ return PM(_PAGE_PRESENT | _PAGE_NO_EXEC);
+ case VM_EXEC:
+ return PM(_PAGE_PRESENT);
+ case VM_EXEC | VM_READ:
+ return PM(_PAGE_PRESENT);
+ case VM_EXEC | VM_WRITE:
+ return PM(_PAGE_PRESENT);
+ case VM_EXEC | VM_READ | VM_WRITE:
+ return PM(_PAGE_PRESENT);
+ case VM_SHARED:
+ return PM(_PAGE_PRESENT | _PAGE_NO_EXEC | _PAGE_NO_READ);
+ case VM_SHARED | VM_READ:
+ return PM(_PAGE_PRESENT | _PAGE_NO_EXEC);
+ case VM_SHARED | VM_WRITE:
+ return PM(_PAGE_PRESENT | _PAGE_NO_EXEC | _PAGE_WRITE | _PAGE_NO_READ);
+ case VM_SHARED | VM_READ | VM_WRITE:
+ return PM(_PAGE_PRESENT | _PAGE_NO_EXEC | _PAGE_WRITE);
+ case VM_SHARED | VM_EXEC:
+ return PM(_PAGE_PRESENT);
+ case VM_SHARED | VM_EXEC | VM_READ:
+ return PM(_PAGE_PRESENT);
+ case VM_SHARED | VM_EXEC | VM_WRITE:
+ return PM(_PAGE_PRESENT | _PAGE_WRITE);
+ case VM_SHARED | VM_EXEC | VM_READ | VM_WRITE:
+ return PM(_PAGE_PRESENT | _PAGE_WRITE);
+ default:
+ BUILD_BUG();
+ }
}
+EXPORT_SYMBOL(vm_get_page_prot);
--
2.25.1

2022-01-24 19:14:17

by Anshuman Khandual

Subject: [RFC V1 08/31] m68k/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT

This defines and exports a platform-specific custom vm_get_page_prot() via
subscribing to ARCH_HAS_VM_GET_PAGE_PROT. Subsequently all the __SXXX and
__PXXX macros, which are no longer needed, can be dropped.

Cc: Thomas Bogendoerfer <[email protected]>
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Anshuman Khandual <[email protected]>
---
arch/m68k/Kconfig | 1 +
arch/m68k/include/asm/mcf_pgtable.h | 59 ----------------
arch/m68k/include/asm/motorola_pgtable.h | 22 ------
arch/m68k/include/asm/sun3_pgtable.h | 22 ------
arch/m68k/mm/init.c | 87 ++++++++++++++++++++++++
arch/m68k/mm/motorola.c | 44 +++++++++++-
6 files changed, 129 insertions(+), 106 deletions(-)

diff --git a/arch/m68k/Kconfig b/arch/m68k/Kconfig
index 936e1803c7c7..114e65164692 100644
--- a/arch/m68k/Kconfig
+++ b/arch/m68k/Kconfig
@@ -11,6 +11,7 @@ config M68K
select ARCH_NO_PREEMPT if !COLDFIRE
select ARCH_USE_MEMTEST if MMU_MOTOROLA
select ARCH_WANT_IPC_PARSE_VERSION
+ select ARCH_HAS_VM_GET_PAGE_PROT
select BINFMT_FLAT_ARGVP_ENVP_ON_STACK
select DMA_DIRECT_REMAP if HAS_DMA && MMU && !COLDFIRE
select GENERIC_ATOMIC64
diff --git a/arch/m68k/include/asm/mcf_pgtable.h b/arch/m68k/include/asm/mcf_pgtable.h
index 6f2b87d7a50d..dc5c8ab6aa57 100644
--- a/arch/m68k/include/asm/mcf_pgtable.h
+++ b/arch/m68k/include/asm/mcf_pgtable.h
@@ -86,65 +86,6 @@
| CF_PAGE_READABLE \
| CF_PAGE_DIRTY)

-/*
- * Page protections for initialising protection_map. See mm/mmap.c
- * for use. In general, the bit positions are xwr, and P-items are
- * private, the S-items are shared.
- */
-#define __P000 PAGE_NONE
-#define __P001 __pgprot(CF_PAGE_VALID \
- | CF_PAGE_ACCESSED \
- | CF_PAGE_READABLE)
-#define __P010 __pgprot(CF_PAGE_VALID \
- | CF_PAGE_ACCESSED \
- | CF_PAGE_WRITABLE)
-#define __P011 __pgprot(CF_PAGE_VALID \
- | CF_PAGE_ACCESSED \
- | CF_PAGE_READABLE \
- | CF_PAGE_WRITABLE)
-#define __P100 __pgprot(CF_PAGE_VALID \
- | CF_PAGE_ACCESSED \
- | CF_PAGE_EXEC)
-#define __P101 __pgprot(CF_PAGE_VALID \
- | CF_PAGE_ACCESSED \
- | CF_PAGE_READABLE \
- | CF_PAGE_EXEC)
-#define __P110 __pgprot(CF_PAGE_VALID \
- | CF_PAGE_ACCESSED \
- | CF_PAGE_WRITABLE \
- | CF_PAGE_EXEC)
-#define __P111 __pgprot(CF_PAGE_VALID \
- | CF_PAGE_ACCESSED \
- | CF_PAGE_READABLE \
- | CF_PAGE_WRITABLE \
- | CF_PAGE_EXEC)
-
-#define __S000 PAGE_NONE
-#define __S001 __pgprot(CF_PAGE_VALID \
- | CF_PAGE_ACCESSED \
- | CF_PAGE_READABLE)
-#define __S010 PAGE_SHARED
-#define __S011 __pgprot(CF_PAGE_VALID \
- | CF_PAGE_ACCESSED \
- | CF_PAGE_SHARED \
- | CF_PAGE_READABLE)
-#define __S100 __pgprot(CF_PAGE_VALID \
- | CF_PAGE_ACCESSED \
- | CF_PAGE_EXEC)
-#define __S101 __pgprot(CF_PAGE_VALID \
- | CF_PAGE_ACCESSED \
- | CF_PAGE_READABLE \
- | CF_PAGE_EXEC)
-#define __S110 __pgprot(CF_PAGE_VALID \
- | CF_PAGE_ACCESSED \
- | CF_PAGE_SHARED \
- | CF_PAGE_EXEC)
-#define __S111 __pgprot(CF_PAGE_VALID \
- | CF_PAGE_ACCESSED \
- | CF_PAGE_SHARED \
- | CF_PAGE_READABLE \
- | CF_PAGE_EXEC)
-
#define PTE_MASK PAGE_MASK
#define CF_PAGE_CHG_MASK (PTE_MASK | CF_PAGE_ACCESSED | CF_PAGE_DIRTY)

diff --git a/arch/m68k/include/asm/motorola_pgtable.h b/arch/m68k/include/asm/motorola_pgtable.h
index 022c3abc280d..4ea1bb57deee 100644
--- a/arch/m68k/include/asm/motorola_pgtable.h
+++ b/arch/m68k/include/asm/motorola_pgtable.h
@@ -83,28 +83,6 @@ extern unsigned long mm_cachebits;
#define PAGE_COPY_C __pgprot(_PAGE_PRESENT | _PAGE_RONLY | _PAGE_ACCESSED)
#define PAGE_READONLY_C __pgprot(_PAGE_PRESENT | _PAGE_RONLY | _PAGE_ACCESSED)

-/*
- * The m68k can't do page protection for execute, and considers that the same are read.
- * Also, write permissions imply read permissions. This is the closest we can get..
- */
-#define __P000 PAGE_NONE_C
-#define __P001 PAGE_READONLY_C
-#define __P010 PAGE_COPY_C
-#define __P011 PAGE_COPY_C
-#define __P100 PAGE_READONLY_C
-#define __P101 PAGE_READONLY_C
-#define __P110 PAGE_COPY_C
-#define __P111 PAGE_COPY_C
-
-#define __S000 PAGE_NONE_C
-#define __S001 PAGE_READONLY_C
-#define __S010 PAGE_SHARED_C
-#define __S011 PAGE_SHARED_C
-#define __S100 PAGE_READONLY_C
-#define __S101 PAGE_READONLY_C
-#define __S110 PAGE_SHARED_C
-#define __S111 PAGE_SHARED_C
-
#define pmd_pgtable(pmd) ((pgtable_t)pmd_page_vaddr(pmd))

/*
diff --git a/arch/m68k/include/asm/sun3_pgtable.h b/arch/m68k/include/asm/sun3_pgtable.h
index 5b24283a0a42..086fabdd8d4c 100644
--- a/arch/m68k/include/asm/sun3_pgtable.h
+++ b/arch/m68k/include/asm/sun3_pgtable.h
@@ -66,28 +66,6 @@
| SUN3_PAGE_SYSTEM \
| SUN3_PAGE_NOCACHE)

-/*
- * Page protections for initialising protection_map. The sun3 has only two
- * protection settings, valid (implying read and execute) and writeable. These
- * are as close as we can get...
- */
-#define __P000 PAGE_NONE
-#define __P001 PAGE_READONLY
-#define __P010 PAGE_COPY
-#define __P011 PAGE_COPY
-#define __P100 PAGE_READONLY
-#define __P101 PAGE_READONLY
-#define __P110 PAGE_COPY
-#define __P111 PAGE_COPY
-
-#define __S000 PAGE_NONE
-#define __S001 PAGE_READONLY
-#define __S010 PAGE_SHARED
-#define __S011 PAGE_SHARED
-#define __S100 PAGE_READONLY
-#define __S101 PAGE_READONLY
-#define __S110 PAGE_SHARED
-#define __S111 PAGE_SHARED

/* Use these fake page-protections on PMDs. */
#define SUN3_PMD_VALID (0x00000001)
diff --git a/arch/m68k/mm/init.c b/arch/m68k/mm/init.c
index 1b47bec15832..6fcb35616189 100644
--- a/arch/m68k/mm/init.c
+++ b/arch/m68k/mm/init.c
@@ -128,3 +128,90 @@ void __init mem_init(void)
memblock_free_all();
init_pointer_tables();
}
+
+#ifdef CONFIG_COLDFIRE
+pgprot_t vm_get_page_prot(unsigned long vm_flags)
+{
+ switch (vm_flags & (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)) {
+ case VM_NONE:
+ return PAGE_NONE;
+ case VM_READ:
+ return __pgprot(CF_PAGE_VALID|CF_PAGE_ACCESSED|CF_PAGE_READABLE);
+ case VM_WRITE:
+ return __pgprot(CF_PAGE_VALID|CF_PAGE_ACCESSED|CF_PAGE_WRITABLE);
+ case VM_READ | VM_WRITE:
+ return __pgprot(CF_PAGE_VALID|CF_PAGE_ACCESSED|CF_PAGE_READABLE|CF_PAGE_WRITABLE);
+ case VM_EXEC:
+ return __pgprot(CF_PAGE_VALID|CF_PAGE_ACCESSED|CF_PAGE_EXEC);
+ case VM_EXEC | VM_READ:
+ return __pgprot(CF_PAGE_VALID|CF_PAGE_ACCESSED|CF_PAGE_READABLE|CF_PAGE_EXEC);
+ case VM_EXEC | VM_WRITE:
+ return __pgprot(CF_PAGE_VALID|CF_PAGE_ACCESSED|CF_PAGE_WRITABLE|CF_PAGE_EXEC);
+ case VM_EXEC | VM_READ | VM_WRITE:
+ return __pgprot(CF_PAGE_VALID|CF_PAGE_ACCESSED|CF_PAGE_READABLE|CF_PAGE_WRITABLE|
+ CF_PAGE_EXEC);
+ case VM_SHARED:
+ return PAGE_NONE;
+ case VM_SHARED | VM_READ:
+ return __pgprot(CF_PAGE_VALID|CF_PAGE_ACCESSED|CF_PAGE_READABLE);
+ case VM_SHARED | VM_WRITE:
+ return PAGE_SHARED;
+ case VM_SHARED | VM_READ | VM_WRITE:
+ return __pgprot(CF_PAGE_VALID|CF_PAGE_ACCESSED|CF_PAGE_READABLE|CF_PAGE_SHARED);
+ case VM_SHARED | VM_EXEC:
+ return __pgprot(CF_PAGE_VALID|CF_PAGE_ACCESSED|CF_PAGE_EXEC);
+ case VM_SHARED | VM_EXEC | VM_READ:
+ return __pgprot(CF_PAGE_VALID|CF_PAGE_ACCESSED|CF_PAGE_READABLE|CF_PAGE_EXEC);
+ case VM_SHARED | VM_EXEC | VM_WRITE:
+ return __pgprot(CF_PAGE_VALID|CF_PAGE_ACCESSED|CF_PAGE_SHARED|CF_PAGE_EXEC);
+ case VM_SHARED | VM_EXEC | VM_READ | VM_WRITE:
+ return __pgprot(CF_PAGE_VALID|CF_PAGE_ACCESSED|CF_PAGE_READABLE|CF_PAGE_SHARED|
+ CF_PAGE_EXEC);
+ default:
+ BUILD_BUG();
+ }
+}
+#endif
+
+#ifdef CONFIG_SUN3
+pgprot_t vm_get_page_prot(unsigned long vm_flags)
+{
+ switch (vm_flags & (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)) {
+ case VM_NONE:
+ return PAGE_NONE;
+ case VM_READ:
+ return PAGE_READONLY;
+ case VM_WRITE:
+ return PAGE_COPY;
+ case VM_READ | VM_WRITE:
+ return PAGE_COPY;
+ case VM_EXEC:
+ return PAGE_READONLY;
+ case VM_EXEC | VM_READ:
+ return PAGE_READONLY;
+ case VM_EXEC | VM_WRITE:
+ return PAGE_COPY;
+ case VM_EXEC | VM_READ | VM_WRITE:
+ return PAGE_COPY;
+ case VM_SHARED:
+ return PAGE_NONE;
+ case VM_SHARED | VM_READ:
+ return PAGE_READONLY;
+ case VM_SHARED | VM_WRITE:
+ return PAGE_SHARED;
+ case VM_SHARED | VM_READ | VM_WRITE:
+ return PAGE_SHARED;
+ case VM_SHARED | VM_EXEC:
+ return PAGE_READONLY;
+ case VM_SHARED | VM_EXEC | VM_READ:
+ return PAGE_READONLY;
+ case VM_SHARED | VM_EXEC | VM_WRITE:
+ return PAGE_SHARED;
+ case VM_SHARED | VM_EXEC | VM_READ | VM_WRITE:
+ return PAGE_SHARED;
+ default:
+ BUILD_BUG();
+ }
+}
+#endif
+EXPORT_SYMBOL(vm_get_page_prot);
diff --git a/arch/m68k/mm/motorola.c b/arch/m68k/mm/motorola.c
index ecbe948f4c1a..72fbe5e38045 100644
--- a/arch/m68k/mm/motorola.c
+++ b/arch/m68k/mm/motorola.c
@@ -400,12 +400,9 @@ void __init paging_init(void)

/* Fix the cache mode in the page descriptors for the 680[46]0. */
if (CPU_IS_040_OR_060) {
- int i;
#ifndef mm_cachebits
mm_cachebits = _PAGE_CACHE040;
#endif
- for (i = 0; i < 16; i++)
- pgprot_val(protection_map[i]) |= _PAGE_CACHE040;
}

min_addr = m68k_memory[0].addr;
@@ -483,3 +480,44 @@ void __init paging_init(void)
max_zone_pfn[ZONE_DMA] = memblock_end_of_DRAM();
free_area_init(max_zone_pfn);
}
+
+pgprot_t vm_get_page_prot(unsigned long vm_flags)
+{
+ switch (vm_flags & (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)) {
+ case VM_NONE:
+ return __pgprot(pgprot_val(PAGE_NONE_C)|_PAGE_CACHE040);
+ case VM_READ:
+ return __pgprot(pgprot_val(PAGE_READONLY_C)|_PAGE_CACHE040);
+ case VM_WRITE:
+ return __pgprot(pgprot_val(PAGE_COPY_C)|_PAGE_CACHE040);
+ case VM_READ | VM_WRITE:
+ return __pgprot(pgprot_val(PAGE_COPY_C)|_PAGE_CACHE040);
+ case VM_EXEC:
+ return __pgprot(pgprot_val(PAGE_READONLY_C)|_PAGE_CACHE040);
+ case VM_EXEC | VM_READ:
+ return __pgprot(pgprot_val(PAGE_READONLY_C)|_PAGE_CACHE040);
+ case VM_EXEC | VM_WRITE:
+ return __pgprot(pgprot_val(PAGE_COPY_C)|_PAGE_CACHE040);
+ case VM_EXEC | VM_READ | VM_WRITE:
+ return __pgprot(pgprot_val(PAGE_COPY_C)|_PAGE_CACHE040);
+ case VM_SHARED:
+ return __pgprot(pgprot_val(PAGE_NONE_C)|_PAGE_CACHE040);
+ case VM_SHARED | VM_READ:
+ return __pgprot(pgprot_val(PAGE_READONLY_C)|_PAGE_CACHE040);
+ case VM_SHARED | VM_WRITE:
+ return __pgprot(pgprot_val(PAGE_SHARED_C)|_PAGE_CACHE040);
+ case VM_SHARED | VM_READ | VM_WRITE:
+ return __pgprot(pgprot_val(PAGE_SHARED_C)|_PAGE_CACHE040);
+ case VM_SHARED | VM_EXEC:
+ return __pgprot(pgprot_val(PAGE_READONLY_C)|_PAGE_CACHE040);
+ case VM_SHARED | VM_EXEC | VM_READ:
+ return __pgprot(pgprot_val(PAGE_READONLY_C)|_PAGE_CACHE040);
+ case VM_SHARED | VM_EXEC | VM_WRITE:
+ return __pgprot(pgprot_val(PAGE_SHARED_C)|_PAGE_CACHE040);
+ case VM_SHARED | VM_EXEC | VM_READ | VM_WRITE:
+ return __pgprot(pgprot_val(PAGE_SHARED_C)|_PAGE_CACHE040);
+ default:
+ BUILD_BUG();
+ }
+}
+EXPORT_SYMBOL(vm_get_page_prot);
--
2.25.1

2022-01-24 19:14:18

by Anshuman Khandual

Subject: [RFC V1 09/31] arm/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT

This defines and exports a platform-specific custom vm_get_page_prot() via
subscribing to ARCH_HAS_VM_GET_PAGE_PROT. Subsequently all the __SXXX and
__PXXX macros, which are no longer needed, can be dropped.

Cc: Russell King <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Anshuman Khandual <[email protected]>
---
arch/arm/Kconfig | 1 +
arch/arm/include/asm/pgtable.h | 18 ------------
arch/arm/mm/mmu.c | 50 ++++++++++++++++++++++++++++++----
3 files changed, 45 insertions(+), 24 deletions(-)

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index fabe39169b12..c12362d20c44 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -23,6 +23,7 @@ config ARM
select ARCH_HAS_SYNC_DMA_FOR_CPU if SWIOTLB || !MMU
select ARCH_HAS_TEARDOWN_DMA_OPS if MMU
select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
+ select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_HAVE_CUSTOM_GPIO_H
select ARCH_HAVE_NMI_SAFE_CMPXCHG if CPU_V7 || CPU_V7M || CPU_V6K
select ARCH_HAS_GCOV_PROFILE_ALL
diff --git a/arch/arm/include/asm/pgtable.h b/arch/arm/include/asm/pgtable.h
index cd1f84bb40ae..ec062dd6082a 100644
--- a/arch/arm/include/asm/pgtable.h
+++ b/arch/arm/include/asm/pgtable.h
@@ -137,24 +137,6 @@ extern pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
* 2) If we could do execute protection, then read is implied
* 3) write implies read permissions
*/
-#define __P000 __PAGE_NONE
-#define __P001 __PAGE_READONLY
-#define __P010 __PAGE_COPY
-#define __P011 __PAGE_COPY
-#define __P100 __PAGE_READONLY_EXEC
-#define __P101 __PAGE_READONLY_EXEC
-#define __P110 __PAGE_COPY_EXEC
-#define __P111 __PAGE_COPY_EXEC
-
-#define __S000 __PAGE_NONE
-#define __S001 __PAGE_READONLY
-#define __S010 __PAGE_SHARED
-#define __S011 __PAGE_SHARED
-#define __S100 __PAGE_READONLY_EXEC
-#define __S101 __PAGE_READONLY_EXEC
-#define __S110 __PAGE_SHARED_EXEC
-#define __S111 __PAGE_SHARED_EXEC
-
#ifndef __ASSEMBLY__
/*
* ZERO_PAGE is a global shared page that is always zero: used
diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index 274e4f73fd33..3007d07bc0e7 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -403,6 +403,8 @@ void __set_fixmap(enum fixed_addresses idx, phys_addr_t phys, pgprot_t prot)
local_flush_tlb_kernel_range(vaddr, vaddr + PAGE_SIZE);
}

+static pteval_t user_pgprot;
+
/*
* Adjust the PMD section entries according to the CPU in use.
*/
@@ -410,7 +412,7 @@ static void __init build_mem_type_table(void)
{
struct cachepolicy *cp;
unsigned int cr = get_cr();
- pteval_t user_pgprot, kern_pgprot, vecs_pgprot;
+ pteval_t kern_pgprot, vecs_pgprot;
int cpu_arch = cpu_architecture();
int i;

@@ -627,11 +629,6 @@ static void __init build_mem_type_table(void)
user_pgprot |= PTE_EXT_PXN;
#endif

- for (i = 0; i < 16; i++) {
- pteval_t v = pgprot_val(protection_map[i]);
- protection_map[i] = __pgprot(v | user_pgprot);
- }
-
mem_types[MT_LOW_VECTORS].prot_pte |= vecs_pgprot;
mem_types[MT_HIGH_VECTORS].prot_pte |= vecs_pgprot;

@@ -670,6 +667,47 @@ static void __init build_mem_type_table(void)
}
}

+pgprot_t vm_get_page_prot(unsigned long vm_flags)
+{
+ switch (vm_flags & (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)) {
+ case VM_NONE:
+ return __pgprot(pgprot_val(__PAGE_NONE) | user_pgprot);
+ case VM_READ:
+ return __pgprot(pgprot_val(__PAGE_READONLY) | user_pgprot);
+ case VM_WRITE:
+ return __pgprot(pgprot_val(__PAGE_COPY) | user_pgprot);
+ case VM_READ | VM_WRITE:
+ return __pgprot(pgprot_val(__PAGE_COPY) | user_pgprot);
+ case VM_EXEC:
+ return __pgprot(pgprot_val(__PAGE_READONLY_EXEC) | user_pgprot);
+ case VM_EXEC | VM_READ:
+ return __pgprot(pgprot_val(__PAGE_READONLY_EXEC) | user_pgprot);
+ case VM_EXEC | VM_WRITE:
+ return __pgprot(pgprot_val(__PAGE_COPY_EXEC) | user_pgprot);
+ case VM_EXEC | VM_READ | VM_WRITE:
+ return __pgprot(pgprot_val(__PAGE_COPY_EXEC) | user_pgprot);
+ case VM_SHARED:
+ return __pgprot(pgprot_val(__PAGE_NONE) | user_pgprot);
+ case VM_SHARED | VM_READ:
+ return __pgprot(pgprot_val(__PAGE_READONLY) | user_pgprot);
+ case VM_SHARED | VM_WRITE:
+ return __pgprot(pgprot_val(__PAGE_SHARED) | user_pgprot);
+ case VM_SHARED | VM_READ | VM_WRITE:
+ return __pgprot(pgprot_val(__PAGE_SHARED) | user_pgprot);
+ case VM_SHARED | VM_EXEC:
+ return __pgprot(pgprot_val(__PAGE_READONLY_EXEC) | user_pgprot);
+ case VM_SHARED | VM_EXEC | VM_READ:
+ return __pgprot(pgprot_val(__PAGE_READONLY_EXEC) | user_pgprot);
+ case VM_SHARED | VM_EXEC | VM_WRITE:
+ return __pgprot(pgprot_val(__PAGE_SHARED_EXEC) | user_pgprot);
+ case VM_SHARED | VM_EXEC | VM_READ | VM_WRITE:
+ return __pgprot(pgprot_val(__PAGE_SHARED_EXEC) | user_pgprot);
+ default:
+ BUILD_BUG();
+ }
+}
+EXPORT_SYMBOL(vm_get_page_prot);
+
#ifdef CONFIG_ARM_DMA_MEM_BUFFERABLE
pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
unsigned long size, pgprot_t vma_prot)
--
2.25.1

2022-01-24 19:14:21

by Anshuman Khandual

Subject: [RFC V1 10/31] x86/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT

From: Christoph Hellwig <[email protected]>

This defines and exports a platform-specific custom vm_get_page_prot() by
subscribing to ARCH_HAS_VM_GET_PAGE_PROT. Subsequently all the __SXXX and
__PXXX macros, which are no longer needed, can be dropped. This also
unsubscribes from ARCH_HAS_FILTER_PGPROT after dropping arch_filter_pgprot()
and arch_vm_get_page_prot().

Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: [email protected]
Signed-off-by: Christoph Hellwig <[email protected]>
Signed-off-by: Anshuman Khandual <[email protected]>
---
arch/x86/Kconfig | 2 +-
arch/x86/include/asm/pgtable.h | 5 --
arch/x86/include/asm/pgtable_types.h | 19 --------
arch/x86/include/uapi/asm/mman.h | 14 ------
arch/x86/mm/Makefile | 2 +-
arch/x86/mm/mem_encrypt_amd.c | 4 --
arch/x86/mm/pgprot.c | 71 ++++++++++++++++++++++++++++
7 files changed, 73 insertions(+), 44 deletions(-)
create mode 100644 arch/x86/mm/pgprot.c

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 5f33a10d3dcc..00ab039044ba 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -75,7 +75,6 @@ config X86
select ARCH_HAS_EARLY_DEBUG if KGDB
select ARCH_HAS_ELF_RANDOMIZE
select ARCH_HAS_FAST_MULTIPLIER
- select ARCH_HAS_FILTER_PGPROT
select ARCH_HAS_FORTIFY_SOURCE
select ARCH_HAS_GCOV_PROFILE_ALL
select ARCH_HAS_KCOV if X86_64
@@ -94,6 +93,7 @@ config X86
select ARCH_HAS_SYNC_CORE_BEFORE_USERMODE
select ARCH_HAS_SYSCALL_WRAPPER
select ARCH_HAS_UBSAN_SANITIZE_ALL
+ select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_HAS_DEBUG_WX
select ARCH_HAS_ZONE_DMA_SET if EXPERT
select ARCH_HAVE_NMI_SAFE_CMPXCHG
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 8a9432fb3802..985e1b823691 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -648,11 +648,6 @@ static inline pgprot_t pgprot_modify(pgprot_t oldprot, pgprot_t newprot)

#define canon_pgprot(p) __pgprot(massage_pgprot(p))

-static inline pgprot_t arch_filter_pgprot(pgprot_t prot)
-{
- return canon_pgprot(prot);
-}
-
static inline int is_new_memtype_allowed(u64 paddr, unsigned long size,
enum page_cache_mode pcm,
enum page_cache_mode new_pcm)
diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index 40497a9020c6..1a9dd933088e 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -228,25 +228,6 @@ enum page_cache_mode {

#endif /* __ASSEMBLY__ */

-/* xwr */
-#define __P000 PAGE_NONE
-#define __P001 PAGE_READONLY
-#define __P010 PAGE_COPY
-#define __P011 PAGE_COPY
-#define __P100 PAGE_READONLY_EXEC
-#define __P101 PAGE_READONLY_EXEC
-#define __P110 PAGE_COPY_EXEC
-#define __P111 PAGE_COPY_EXEC
-
-#define __S000 PAGE_NONE
-#define __S001 PAGE_READONLY
-#define __S010 PAGE_SHARED
-#define __S011 PAGE_SHARED
-#define __S100 PAGE_READONLY_EXEC
-#define __S101 PAGE_READONLY_EXEC
-#define __S110 PAGE_SHARED_EXEC
-#define __S111 PAGE_SHARED_EXEC
-
/*
* early identity mapping pte attrib macros.
*/
diff --git a/arch/x86/include/uapi/asm/mman.h b/arch/x86/include/uapi/asm/mman.h
index d4a8d0424bfb..775dbd3aff73 100644
--- a/arch/x86/include/uapi/asm/mman.h
+++ b/arch/x86/include/uapi/asm/mman.h
@@ -5,20 +5,6 @@
#define MAP_32BIT 0x40 /* only give out 32bit addresses */

#ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
-/*
- * Take the 4 protection key bits out of the vma->vm_flags
- * value and turn them in to the bits that we can put in
- * to a pte.
- *
- * Only override these if Protection Keys are available
- * (which is only on 64-bit).
- */
-#define arch_vm_get_page_prot(vm_flags) __pgprot( \
- ((vm_flags) & VM_PKEY_BIT0 ? _PAGE_PKEY_BIT0 : 0) | \
- ((vm_flags) & VM_PKEY_BIT1 ? _PAGE_PKEY_BIT1 : 0) | \
- ((vm_flags) & VM_PKEY_BIT2 ? _PAGE_PKEY_BIT2 : 0) | \
- ((vm_flags) & VM_PKEY_BIT3 ? _PAGE_PKEY_BIT3 : 0))
-
#define arch_calc_vm_prot_bits(prot, key) ( \
((key) & 0x1 ? VM_PKEY_BIT0 : 0) | \
((key) & 0x2 ? VM_PKEY_BIT1 : 0) | \
diff --git a/arch/x86/mm/Makefile b/arch/x86/mm/Makefile
index fe3d3061fc11..fb6b41a48ae5 100644
--- a/arch/x86/mm/Makefile
+++ b/arch/x86/mm/Makefile
@@ -20,7 +20,7 @@ CFLAGS_REMOVE_mem_encrypt_identity.o = -pg
endif

obj-y := init.o init_$(BITS).o fault.o ioremap.o extable.o mmap.o \
- pgtable.o physaddr.o setup_nx.o tlb.o cpu_entry_area.o maccess.o
+ pgtable.o physaddr.o setup_nx.o tlb.o cpu_entry_area.o maccess.o pgprot.o

obj-y += pat/

diff --git a/arch/x86/mm/mem_encrypt_amd.c b/arch/x86/mm/mem_encrypt_amd.c
index 2b2d018ea345..e0ac16ee08f4 100644
--- a/arch/x86/mm/mem_encrypt_amd.c
+++ b/arch/x86/mm/mem_encrypt_amd.c
@@ -188,10 +188,6 @@ void __init sme_early_init(void)

__supported_pte_mask = __sme_set(__supported_pte_mask);

- /* Update the protection map with memory encryption mask */
- for (i = 0; i < ARRAY_SIZE(protection_map); i++)
- protection_map[i] = pgprot_encrypted(protection_map[i]);
-
if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT))
swiotlb_force = SWIOTLB_FORCE;
}
diff --git a/arch/x86/mm/pgprot.c b/arch/x86/mm/pgprot.c
new file mode 100644
index 000000000000..a813adabfe4f
--- /dev/null
+++ b/arch/x86/mm/pgprot.c
@@ -0,0 +1,71 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <linux/export.h>
+#include <linux/mm.h>
+#include <asm/pgtable.h>
+
+static inline pgprot_t __vm_get_page_prot(unsigned long vm_flags)
+{
+ switch (vm_flags & (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)) {
+ case 0:
+ return PAGE_NONE;
+ case VM_READ:
+ return PAGE_READONLY;
+ case VM_WRITE:
+ return PAGE_COPY;
+ case VM_READ | VM_WRITE:
+ return PAGE_COPY;
+ case VM_EXEC:
+ case VM_EXEC | VM_READ:
+ return PAGE_READONLY_EXEC;
+ case VM_EXEC | VM_WRITE:
+ case VM_EXEC | VM_READ | VM_WRITE:
+ return PAGE_COPY_EXEC;
+ case VM_SHARED:
+ return PAGE_NONE;
+ case VM_SHARED | VM_READ:
+ return PAGE_READONLY;
+ case VM_SHARED | VM_WRITE:
+ case VM_SHARED | VM_READ | VM_WRITE:
+ return PAGE_SHARED;
+ case VM_SHARED | VM_EXEC:
+ case VM_SHARED | VM_READ | VM_EXEC:
+ return PAGE_READONLY_EXEC;
+ case VM_SHARED | VM_WRITE | VM_EXEC:
+ case VM_SHARED | VM_READ | VM_WRITE | VM_EXEC:
+ return PAGE_SHARED_EXEC;
+ default:
+ BUILD_BUG();
+ return PAGE_NONE;
+ }
+}
+
+
+pgprot_t vm_get_page_prot(unsigned long vm_flags)
+{
+ unsigned long val = pgprot_val(__vm_get_page_prot(vm_flags));
+
+#ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
+ /*
+ * Take the 4 protection key bits out of the vma->vm_flags value and
+ * turn them in to the bits that we can put in to a pte.
+ *
+ * Only override these if Protection Keys are available (which is only
+ * on 64-bit).
+ */
+ if (vm_flags & VM_PKEY_BIT0)
+ val |= _PAGE_PKEY_BIT0;
+ if (vm_flags & VM_PKEY_BIT1)
+ val |= _PAGE_PKEY_BIT1;
+ if (vm_flags & VM_PKEY_BIT2)
+ val |= _PAGE_PKEY_BIT2;
+ if (vm_flags & VM_PKEY_BIT3)
+ val |= _PAGE_PKEY_BIT3;
+#endif
+
+ val = __sme_set(val);
+ if (val & _PAGE_PRESENT)
+ val &= __supported_pte_mask;
+ return __pgprot(val);
+}
+EXPORT_SYMBOL(vm_get_page_prot);
--
2.25.1

2022-01-24 19:14:22

by Anshuman Khandual

Subject: [RFC V1 02/31] mm/mmap: Clarify protection_map[] indices

protection_map[] maps vm_flags access combinations into page protection
values as defined by the platform via the __PXXX and __SXXX macros. The
array indices in protection_map[] represent vm_flags access combinations,
but that is not very intuitive to derive. This makes the mapping clear and
explicit.

Cc: Andrew Morton <[email protected]>
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Anshuman Khandual <[email protected]>
---
mm/mmap.c | 18 ++++++++++++++++--
1 file changed, 16 insertions(+), 2 deletions(-)

diff --git a/mm/mmap.c b/mm/mmap.c
index 1e8fdb0b51ed..254d716220df 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -102,8 +102,22 @@ static void unmap_region(struct mm_struct *mm,
* x: (yes) yes
*/
pgprot_t protection_map[16] __ro_after_init = {
- __P000, __P001, __P010, __P011, __P100, __P101, __P110, __P111,
- __S000, __S001, __S010, __S011, __S100, __S101, __S110, __S111
+ [VM_NONE] = __P000,
+ [VM_READ] = __P001,
+ [VM_WRITE] = __P010,
+ [VM_READ|VM_WRITE] = __P011,
+ [VM_EXEC] = __P100,
+ [VM_EXEC|VM_READ] = __P101,
+ [VM_EXEC|VM_WRITE] = __P110,
+ [VM_EXEC|VM_READ|VM_WRITE] = __P111,
+ [VM_SHARED] = __S000,
+ [VM_SHARED|VM_READ] = __S001,
+ [VM_SHARED|VM_WRITE] = __S010,
+ [VM_SHARED|VM_READ|VM_WRITE] = __S011,
+ [VM_SHARED|VM_EXEC] = __S100,
+ [VM_SHARED|VM_READ|VM_EXEC] = __S101,
+ [VM_SHARED|VM_WRITE|VM_EXEC] = __S110,
+ [VM_SHARED|VM_READ|VM_WRITE|VM_EXEC] = __S111
};

#ifndef CONFIG_ARCH_HAS_FILTER_PGPROT
--
2.25.1

2022-01-24 19:14:23

by Anshuman Khandual

Subject: [RFC V1 11/31] mm/mmap: Drop protection_map[]

There are no other users of protection_map[]. Hence just drop this array
construct and instead define __vm_get_page_prot(), which provides the page
protection value through a switch on the vm_flags combination.

Cc: Andrew Morton <[email protected]>
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Anshuman Khandual <[email protected]>
---
include/linux/mm.h | 6 -----
mm/mmap.c | 61 +++++++++++++++++++++++++++++++---------------
2 files changed, 41 insertions(+), 26 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index e1a84b1e6787..6c0844b99b3e 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -418,12 +418,6 @@ extern unsigned int kobjsize(const void *objp);
#endif
#define VM_FLAGS_CLEAR (ARCH_VM_PKEY_FLAGS | VM_ARCH_CLEAR)

-/*
- * mapping from the currently active vm_flags protection bits (the
- * low four bits) to a page protection mask..
- */
-extern pgprot_t protection_map[16];
-
/*
* The default fault flags that should be used by most of the
* arch-specific page fault handlers.
diff --git a/mm/mmap.c b/mm/mmap.c
index ec403de32dcb..f61f74a61f62 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -102,24 +102,6 @@ static void unmap_region(struct mm_struct *mm,
* w: (no) no
* x: (yes) yes
*/
-pgprot_t protection_map[16] __ro_after_init = {
- [VM_NONE] = __P000,
- [VM_READ] = __P001,
- [VM_WRITE] = __P010,
- [VM_READ|VM_WRITE] = __P011,
- [VM_EXEC] = __P100,
- [VM_EXEC|VM_READ] = __P101,
- [VM_EXEC|VM_WRITE] = __P110,
- [VM_EXEC|VM_READ|VM_WRITE] = __P111,
- [VM_SHARED] = __S000,
- [VM_SHARED|VM_READ] = __S001,
- [VM_SHARED|VM_WRITE] = __S010,
- [VM_SHARED|VM_READ|VM_WRITE] = __S011,
- [VM_SHARED|VM_EXEC] = __S100,
- [VM_SHARED|VM_READ|VM_EXEC] = __S101,
- [VM_SHARED|VM_WRITE|VM_EXEC] = __S110,
- [VM_SHARED|VM_READ|VM_WRITE|VM_EXEC] = __S111
-};

#ifndef CONFIG_ARCH_HAS_FILTER_PGPROT
static inline pgprot_t arch_filter_pgprot(pgprot_t prot)
@@ -128,10 +110,49 @@ static inline pgprot_t arch_filter_pgprot(pgprot_t prot)
}
#endif

+static inline pgprot_t __vm_get_page_prot(unsigned long vm_flags)
+{
+ switch (vm_flags & (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)) {
+ case VM_NONE:
+ return __P000;
+ case VM_READ:
+ return __P001;
+ case VM_WRITE:
+ return __P010;
+ case VM_READ | VM_WRITE:
+ return __P011;
+ case VM_EXEC:
+ return __P100;
+ case VM_EXEC | VM_READ:
+ return __P101;
+ case VM_EXEC | VM_WRITE:
+ return __P110;
+ case VM_EXEC | VM_READ | VM_WRITE:
+ return __P111;
+ case VM_SHARED:
+ return __S000;
+ case VM_SHARED | VM_READ:
+ return __S001;
+ case VM_SHARED | VM_WRITE:
+ return __S010;
+ case VM_SHARED | VM_READ | VM_WRITE:
+ return __S011;
+ case VM_SHARED | VM_EXEC:
+ return __S100;
+ case VM_SHARED | VM_EXEC | VM_READ:
+ return __S101;
+ case VM_SHARED | VM_EXEC | VM_WRITE:
+ return __S110;
+ case VM_SHARED | VM_EXEC | VM_READ | VM_WRITE:
+ return __S111;
+ default:
+ BUILD_BUG();
+ }
+}
+
pgprot_t vm_get_page_prot(unsigned long vm_flags)
{
- pgprot_t ret = __pgprot(pgprot_val(protection_map[vm_flags &
- (VM_READ|VM_WRITE|VM_EXEC|VM_SHARED)]) |
+ pgprot_t ret = __pgprot(pgprot_val(__vm_get_page_prot(vm_flags)) |
pgprot_val(arch_vm_get_page_prot(vm_flags)));

return arch_filter_pgprot(ret);
--
2.25.1

2022-01-24 19:14:25

by Anshuman Khandual

Subject: [RFC V1 12/31] mm/mmap: Drop arch_filter_pgprot()

There are no platforms left that subscribe to ARCH_HAS_FILTER_PGPROT. Hence
just drop arch_filter_pgprot() along with the config option
ARCH_HAS_FILTER_PGPROT.

Cc: Andrew Morton <[email protected]>
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Anshuman Khandual <[email protected]>
---
mm/Kconfig | 3 ---
mm/mmap.c | 10 +---------
2 files changed, 1 insertion(+), 12 deletions(-)

diff --git a/mm/Kconfig b/mm/Kconfig
index fa436478a94c..212fb6e1ddaa 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -744,9 +744,6 @@ config IDLE_PAGE_TRACKING
config ARCH_HAS_CACHE_LINE_SIZE
bool

-config ARCH_HAS_FILTER_PGPROT
- bool
-
config ARCH_HAS_VM_GET_PAGE_PROT
bool

diff --git a/mm/mmap.c b/mm/mmap.c
index f61f74a61f62..70a75ea91e94 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -102,14 +102,6 @@ static void unmap_region(struct mm_struct *mm,
* w: (no) no
* x: (yes) yes
*/
-
-#ifndef CONFIG_ARCH_HAS_FILTER_PGPROT
-static inline pgprot_t arch_filter_pgprot(pgprot_t prot)
-{
- return prot;
-}
-#endif
-
static inline pgprot_t __vm_get_page_prot(unsigned long vm_flags)
{
switch (vm_flags & (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)) {
@@ -155,7 +147,7 @@ pgprot_t vm_get_page_prot(unsigned long vm_flags)
pgprot_t ret = __pgprot(pgprot_val(__vm_get_page_prot(vm_flags)) |
pgprot_val(arch_vm_get_page_prot(vm_flags)));

- return arch_filter_pgprot(ret);
+ return ret;
}
EXPORT_SYMBOL(vm_get_page_prot);
#endif /* CONFIG_ARCH_HAS_VM_GET_PAGE_PROT */
--
2.25.1

2022-01-24 19:14:25

by Anshuman Khandual

Subject: [RFC V1 04/31] powerpc/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT

This defines and exports a platform-specific custom vm_get_page_prot() by
subscribing to ARCH_HAS_VM_GET_PAGE_PROT. Subsequently all the __SXXX and
__PXXX macros, which are no longer needed, can be dropped. While here, this
also localizes arch_vm_get_page_prot() as powerpc_vm_get_page_prot().

Cc: Michael Ellerman <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Anshuman Khandual <[email protected]>
---
arch/powerpc/Kconfig | 1 +
arch/powerpc/include/asm/mman.h | 3 +-
arch/powerpc/include/asm/pgtable.h | 19 ------------
arch/powerpc/mm/mmap.c | 47 ++++++++++++++++++++++++++++++
4 files changed, 49 insertions(+), 21 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index b779603978e1..ddb4a3687c05 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -135,6 +135,7 @@ config PPC
select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
select ARCH_HAS_UACCESS_FLUSHCACHE
select ARCH_HAS_UBSAN_SANITIZE_ALL
+ select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_HAVE_NMI_SAFE_CMPXCHG
select ARCH_KEEP_MEMBLOCK
select ARCH_MIGHT_HAVE_PC_PARPORT
diff --git a/arch/powerpc/include/asm/mman.h b/arch/powerpc/include/asm/mman.h
index 7cb6d18f5cd6..7b10c2031e82 100644
--- a/arch/powerpc/include/asm/mman.h
+++ b/arch/powerpc/include/asm/mman.h
@@ -24,7 +24,7 @@ static inline unsigned long arch_calc_vm_prot_bits(unsigned long prot,
}
#define arch_calc_vm_prot_bits(prot, pkey) arch_calc_vm_prot_bits(prot, pkey)

-static inline pgprot_t arch_vm_get_page_prot(unsigned long vm_flags)
+static inline pgprot_t powerpc_vm_get_page_prot(unsigned long vm_flags)
{
#ifdef CONFIG_PPC_MEM_KEYS
return (vm_flags & VM_SAO) ?
@@ -34,7 +34,6 @@ static inline pgprot_t arch_vm_get_page_prot(unsigned long vm_flags)
return (vm_flags & VM_SAO) ? __pgprot(_PAGE_SAO) : __pgprot(0);
#endif
}
-#define arch_vm_get_page_prot(vm_flags) arch_vm_get_page_prot(vm_flags)

static inline bool arch_validate_prot(unsigned long prot, unsigned long addr)
{
diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
index d564d0ecd4cd..3cbb6de20f9d 100644
--- a/arch/powerpc/include/asm/pgtable.h
+++ b/arch/powerpc/include/asm/pgtable.h
@@ -20,25 +20,6 @@ struct mm_struct;
#include <asm/nohash/pgtable.h>
#endif /* !CONFIG_PPC_BOOK3S */

-/* Note due to the way vm flags are laid out, the bits are XWR */
-#define __P000 PAGE_NONE
-#define __P001 PAGE_READONLY
-#define __P010 PAGE_COPY
-#define __P011 PAGE_COPY
-#define __P100 PAGE_READONLY_X
-#define __P101 PAGE_READONLY_X
-#define __P110 PAGE_COPY_X
-#define __P111 PAGE_COPY_X
-
-#define __S000 PAGE_NONE
-#define __S001 PAGE_READONLY
-#define __S010 PAGE_SHARED
-#define __S011 PAGE_SHARED
-#define __S100 PAGE_READONLY_X
-#define __S101 PAGE_READONLY_X
-#define __S110 PAGE_SHARED_X
-#define __S111 PAGE_SHARED_X
-
#ifndef __ASSEMBLY__

#ifndef MAX_PTRS_PER_PGD
diff --git a/arch/powerpc/mm/mmap.c b/arch/powerpc/mm/mmap.c
index c475cf810aa8..7f05e7903bd2 100644
--- a/arch/powerpc/mm/mmap.c
+++ b/arch/powerpc/mm/mmap.c
@@ -254,3 +254,50 @@ void arch_pick_mmap_layout(struct mm_struct *mm, struct rlimit *rlim_stack)
mm->get_unmapped_area = arch_get_unmapped_area_topdown;
}
}
+
+static inline pgprot_t __vm_get_page_prot(unsigned long vm_flags)
+{
+ switch (vm_flags & (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)) {
+ case VM_NONE:
+ return PAGE_NONE;
+ case VM_READ:
+ return PAGE_READONLY;
+ case VM_WRITE:
+ return PAGE_COPY;
+ case VM_READ | VM_WRITE:
+ return PAGE_COPY;
+ case VM_EXEC:
+ return PAGE_READONLY_X;
+ case VM_EXEC | VM_READ:
+ return PAGE_READONLY_X;
+ case VM_EXEC | VM_WRITE:
+ return PAGE_COPY_X;
+ case VM_EXEC | VM_READ | VM_WRITE:
+ return PAGE_COPY_X;
+ case VM_SHARED:
+ return PAGE_NONE;
+ case VM_SHARED | VM_READ:
+ return PAGE_READONLY;
+ case VM_SHARED | VM_WRITE:
+ return PAGE_SHARED;
+ case VM_SHARED | VM_READ | VM_WRITE:
+ return PAGE_SHARED;
+ case VM_SHARED | VM_EXEC:
+ return PAGE_READONLY_X;
+ case VM_SHARED | VM_EXEC | VM_READ:
+ return PAGE_READONLY_X;
+ case VM_SHARED | VM_EXEC | VM_WRITE:
+ return PAGE_SHARED_X;
+ case VM_SHARED | VM_EXEC | VM_READ | VM_WRITE:
+ return PAGE_SHARED_X;
+ default:
+ BUILD_BUG();
+ }
+}
+
+pgprot_t vm_get_page_prot(unsigned long vm_flags)
+{
+ return __pgprot(pgprot_val(__vm_get_page_prot(vm_flags)) |
+ pgprot_val(powerpc_vm_get_page_prot(vm_flags)));
+}
+EXPORT_SYMBOL(vm_get_page_prot);
--
2.25.1

2022-01-24 19:14:33

by Anshuman Khandual

Subject: [RFC V1 13/31] mm/mmap: Drop arch_vm_get_page_prot()

There are no platforms left that use arch_vm_get_page_prot(). Just drop the
arch_vm_get_page_prot() construct and simplify the remaining code.

Cc: Andrew Morton <[email protected]>
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Anshuman Khandual <[email protected]>
---
include/linux/mman.h | 4 ----
mm/mmap.c | 10 +---------
2 files changed, 1 insertion(+), 13 deletions(-)

diff --git a/include/linux/mman.h b/include/linux/mman.h
index b66e91b8176c..58b3abd457a3 100644
--- a/include/linux/mman.h
+++ b/include/linux/mman.h
@@ -93,10 +93,6 @@ static inline void vm_unacct_memory(long pages)
#define arch_calc_vm_flag_bits(flags) 0
#endif

-#ifndef arch_vm_get_page_prot
-#define arch_vm_get_page_prot(vm_flags) __pgprot(0)
-#endif
-
#ifndef arch_validate_prot
/*
* This is called from mprotect(). PROT_GROWSDOWN and PROT_GROWSUP have
diff --git a/mm/mmap.c b/mm/mmap.c
index 70a75ea91e94..2fc597cf8b8d 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -102,7 +102,7 @@ static void unmap_region(struct mm_struct *mm,
* w: (no) no
* x: (yes) yes
*/
-static inline pgprot_t __vm_get_page_prot(unsigned long vm_flags)
+pgprot_t vm_get_page_prot(unsigned long vm_flags)
{
switch (vm_flags & (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)) {
case VM_NONE:
@@ -141,14 +141,6 @@ static inline pgprot_t __vm_get_page_prot(unsigned long vm_flags)
BUILD_BUG();
}
}
-
-pgprot_t vm_get_page_prot(unsigned long vm_flags)
-{
- pgprot_t ret = __pgprot(pgprot_val(__vm_get_page_prot(vm_flags)) |
- pgprot_val(arch_vm_get_page_prot(vm_flags)));
-
- return ret;
-}
EXPORT_SYMBOL(vm_get_page_prot);
#endif /* CONFIG_ARCH_HAS_VM_GET_PAGE_PROT */

--
2.25.1

2022-01-24 19:14:35

by Anshuman Khandual

Subject: [RFC V1 15/31] riscv/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT

This defines and exports a platform-specific custom vm_get_page_prot() by
subscribing to ARCH_HAS_VM_GET_PAGE_PROT. Subsequently all the __SXXX and
__PXXX macros, which are no longer needed, can be dropped.

Cc: Paul Walmsley <[email protected]>
Cc: Palmer Dabbelt <[email protected]>
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Anshuman Khandual <[email protected]>
---
arch/riscv/Kconfig | 1 +
arch/riscv/include/asm/pgtable.h | 16 -------------
arch/riscv/mm/init.c | 41 ++++++++++++++++++++++++++++++++
3 files changed, 42 insertions(+), 16 deletions(-)

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 5adcbd9b5e88..9391742f9286 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -31,6 +31,7 @@ config RISCV
select ARCH_HAS_STRICT_MODULE_RWX if MMU && !XIP_KERNEL
select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
select ARCH_HAS_UBSAN_SANITIZE_ALL
+ select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_OPTIONAL_KERNEL_RWX if ARCH_HAS_STRICT_KERNEL_RWX
select ARCH_OPTIONAL_KERNEL_RWX_DEFAULT
select ARCH_STACKWALK
diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
index 7e949f25c933..d2bb14cac28b 100644
--- a/arch/riscv/include/asm/pgtable.h
+++ b/arch/riscv/include/asm/pgtable.h
@@ -183,24 +183,8 @@ extern struct pt_alloc_ops pt_ops __initdata;
extern pgd_t swapper_pg_dir[];

/* MAP_PRIVATE permissions: xwr (copy-on-write) */
-#define __P000 PAGE_NONE
-#define __P001 PAGE_READ
-#define __P010 PAGE_COPY
-#define __P011 PAGE_COPY
-#define __P100 PAGE_EXEC
-#define __P101 PAGE_READ_EXEC
-#define __P110 PAGE_COPY_EXEC
-#define __P111 PAGE_COPY_READ_EXEC

/* MAP_SHARED permissions: xwr */
-#define __S000 PAGE_NONE
-#define __S001 PAGE_READ
-#define __S010 PAGE_SHARED
-#define __S011 PAGE_SHARED
-#define __S100 PAGE_EXEC
-#define __S101 PAGE_READ_EXEC
-#define __S110 PAGE_SHARED_EXEC
-#define __S111 PAGE_SHARED_EXEC

#ifdef CONFIG_TRANSPARENT_HUGEPAGE
static inline int pmd_present(pmd_t pmd)
diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
index cf4d018b7d66..1cd96ba5398b 100644
--- a/arch/riscv/mm/init.c
+++ b/arch/riscv/mm/init.c
@@ -1048,3 +1048,44 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
return vmemmap_populate_basepages(start, end, node, NULL);
}
#endif
+
+pgprot_t vm_get_page_prot(unsigned long vm_flags)
+{
+ switch (vm_flags & (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)) {
+ case VM_NONE:
+ return PAGE_NONE;
+ case VM_READ:
+ return PAGE_READ;
+ case VM_WRITE:
+ return PAGE_COPY;
+ case VM_READ | VM_WRITE:
+ return PAGE_COPY;
+ case VM_EXEC:
+ return PAGE_EXEC;
+ case VM_EXEC | VM_READ:
+ return PAGE_READ_EXEC;
+ case VM_EXEC | VM_WRITE:
+ return PAGE_COPY_EXEC;
+ case VM_EXEC | VM_READ | VM_WRITE:
+ return PAGE_COPY_READ_EXEC;
+ case VM_SHARED:
+ return PAGE_NONE;
+ case VM_SHARED | VM_READ:
+ return PAGE_READ;
+ case VM_SHARED | VM_WRITE:
+ return PAGE_SHARED;
+ case VM_SHARED | VM_READ | VM_WRITE:
+ return PAGE_SHARED;
+ case VM_SHARED | VM_EXEC:
+ return PAGE_EXEC;
+ case VM_SHARED | VM_EXEC | VM_READ:
+ return PAGE_READ_EXEC;
+ case VM_SHARED | VM_EXEC | VM_WRITE:
+ return PAGE_SHARED_EXEC;
+ case VM_SHARED | VM_EXEC | VM_READ | VM_WRITE:
+ return PAGE_SHARED_EXEC;
+ default:
+ BUILD_BUG();
+ }
+}
+EXPORT_SYMBOL(vm_get_page_prot);
--
2.25.1

2022-01-24 19:15:37

by Anshuman Khandual

Subject: [RFC V1 18/31] arc/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT

This defines and exports a platform-specific custom vm_get_page_prot() by
subscribing to ARCH_HAS_VM_GET_PAGE_PROT. Subsequently all the __SXXX and
__PXXX macros, which are no longer needed, can be dropped.

Cc: Vineet Gupta <[email protected]>
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Anshuman Khandual <[email protected]>
---
arch/arc/Kconfig | 1 +
arch/arc/include/asm/pgtable-bits-arcv2.h | 17 ----------
arch/arc/mm/mmap.c | 41 +++++++++++++++++++++++
3 files changed, 42 insertions(+), 17 deletions(-)

diff --git a/arch/arc/Kconfig b/arch/arc/Kconfig
index 3c2a4753d09b..78ff0644b343 100644
--- a/arch/arc/Kconfig
+++ b/arch/arc/Kconfig
@@ -13,6 +13,7 @@ config ARC
select ARCH_HAS_SETUP_DMA_OPS
select ARCH_HAS_SYNC_DMA_FOR_CPU
select ARCH_HAS_SYNC_DMA_FOR_DEVICE
+ select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_SUPPORTS_ATOMIC_RMW if ARC_HAS_LLSC
select ARCH_32BIT_OFF_T
select BUILDTIME_TABLE_SORT
diff --git a/arch/arc/include/asm/pgtable-bits-arcv2.h b/arch/arc/include/asm/pgtable-bits-arcv2.h
index 183d23bc1e00..798308f4dbad 100644
--- a/arch/arc/include/asm/pgtable-bits-arcv2.h
+++ b/arch/arc/include/asm/pgtable-bits-arcv2.h
@@ -72,23 +72,6 @@
* This is to enable COW mechanism
*/
/* xwr */
-#define __P000 PAGE_U_NONE
-#define __P001 PAGE_U_R
-#define __P010 PAGE_U_R /* Pvt-W => !W */
-#define __P011 PAGE_U_R /* Pvt-W => !W */
-#define __P100 PAGE_U_X_R /* X => R */
-#define __P101 PAGE_U_X_R
-#define __P110 PAGE_U_X_R /* Pvt-W => !W and X => R */
-#define __P111 PAGE_U_X_R /* Pvt-W => !W */
-
-#define __S000 PAGE_U_NONE
-#define __S001 PAGE_U_R
-#define __S010 PAGE_U_W_R /* W => R */
-#define __S011 PAGE_U_W_R
-#define __S100 PAGE_U_X_R /* X => R */
-#define __S101 PAGE_U_X_R
-#define __S110 PAGE_U_X_W_R /* X => R */
-#define __S111 PAGE_U_X_W_R

#ifndef __ASSEMBLY__

diff --git a/arch/arc/mm/mmap.c b/arch/arc/mm/mmap.c
index 722d26b94307..860b2fc91f55 100644
--- a/arch/arc/mm/mmap.c
+++ b/arch/arc/mm/mmap.c
@@ -74,3 +74,44 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
info.align_offset = pgoff << PAGE_SHIFT;
return vm_unmapped_area(&info);
}
+
+pgprot_t vm_get_page_prot(unsigned long vm_flags)
+{
+ switch (vm_flags & (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)) {
+ case VM_NONE:
+ return PAGE_U_NONE;
+ case VM_READ:
+ return PAGE_U_R;
+ case VM_WRITE:
+ return PAGE_U_R;
+ case VM_READ | VM_WRITE:
+ return PAGE_U_R;
+ case VM_EXEC:
+ return PAGE_U_X_R;
+ case VM_EXEC | VM_READ:
+ return PAGE_U_X_R;
+ case VM_EXEC | VM_WRITE:
+ return PAGE_U_X_R;
+ case VM_EXEC | VM_READ | VM_WRITE:
+ return PAGE_U_X_R;
+ case VM_SHARED:
+ return PAGE_U_NONE;
+ case VM_SHARED | VM_READ:
+ return PAGE_U_R;
+ case VM_SHARED | VM_WRITE:
+ return PAGE_U_W_R;
+ case VM_SHARED | VM_READ | VM_WRITE:
+ return PAGE_U_W_R;
+ case VM_SHARED | VM_EXEC:
+ return PAGE_U_X_R;
+ case VM_SHARED | VM_EXEC | VM_READ:
+ return PAGE_U_X_R;
+ case VM_SHARED | VM_EXEC | VM_WRITE:
+ return PAGE_U_X_W_R;
+ case VM_SHARED | VM_EXEC | VM_READ | VM_WRITE:
+ return PAGE_U_X_W_R;
+ default:
+ BUILD_BUG();
+ }
+}
+EXPORT_SYMBOL(vm_get_page_prot);
--
2.25.1

2022-01-24 19:15:37

by Anshuman Khandual

Subject: [RFC V1 16/31] alpha/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT

This defines and exports a platform-specific custom vm_get_page_prot() by
subscribing to ARCH_HAS_VM_GET_PAGE_PROT. Subsequently all the __SXXX and
__PXXX macros, which are no longer needed, can be dropped.

Cc: Richard Henderson <[email protected]>
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Anshuman Khandual <[email protected]>
---
arch/alpha/Kconfig | 1 +
arch/alpha/include/asm/pgtable.h | 17 -------------
arch/alpha/mm/init.c | 41 ++++++++++++++++++++++++++++++++
3 files changed, 42 insertions(+), 17 deletions(-)

diff --git a/arch/alpha/Kconfig b/arch/alpha/Kconfig
index 4e87783c90ad..73e82fe5c770 100644
--- a/arch/alpha/Kconfig
+++ b/arch/alpha/Kconfig
@@ -2,6 +2,7 @@
config ALPHA
bool
default y
+ select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_32BIT_USTAT_F_TINODE
select ARCH_MIGHT_HAVE_PC_PARPORT
select ARCH_MIGHT_HAVE_PC_SERIO
diff --git a/arch/alpha/include/asm/pgtable.h b/arch/alpha/include/asm/pgtable.h
index 02f0429f1068..9fb5e9d10bb6 100644
--- a/arch/alpha/include/asm/pgtable.h
+++ b/arch/alpha/include/asm/pgtable.h
@@ -116,23 +116,6 @@ struct vm_area_struct;
* arch/alpha/mm/fault.c)
*/
/* xwr */
-#define __P000 _PAGE_P(_PAGE_FOE | _PAGE_FOW | _PAGE_FOR)
-#define __P001 _PAGE_P(_PAGE_FOE | _PAGE_FOW)
-#define __P010 _PAGE_P(_PAGE_FOE)
-#define __P011 _PAGE_P(_PAGE_FOE)
-#define __P100 _PAGE_P(_PAGE_FOW | _PAGE_FOR)
-#define __P101 _PAGE_P(_PAGE_FOW)
-#define __P110 _PAGE_P(0)
-#define __P111 _PAGE_P(0)
-
-#define __S000 _PAGE_S(_PAGE_FOE | _PAGE_FOW | _PAGE_FOR)
-#define __S001 _PAGE_S(_PAGE_FOE | _PAGE_FOW)
-#define __S010 _PAGE_S(_PAGE_FOE)
-#define __S011 _PAGE_S(_PAGE_FOE)
-#define __S100 _PAGE_S(_PAGE_FOW | _PAGE_FOR)
-#define __S101 _PAGE_S(_PAGE_FOW)
-#define __S110 _PAGE_S(0)
-#define __S111 _PAGE_S(0)

/*
* pgprot_noncached() is only for infiniband pci support, and a real
diff --git a/arch/alpha/mm/init.c b/arch/alpha/mm/init.c
index f6114d03357c..89e5e593194d 100644
--- a/arch/alpha/mm/init.c
+++ b/arch/alpha/mm/init.c
@@ -280,3 +280,44 @@ mem_init(void)
high_memory = (void *) __va(max_low_pfn * PAGE_SIZE);
memblock_free_all();
}
+
+pgprot_t vm_get_page_prot(unsigned long vm_flags)
+{
+ switch (vm_flags & (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)) {
+ case VM_NONE:
+ return _PAGE_P(_PAGE_FOE|_PAGE_FOW|_PAGE_FOR);
+ case VM_READ:
+ return _PAGE_P(_PAGE_FOE|_PAGE_FOW);
+ case VM_WRITE:
+ return _PAGE_P(_PAGE_FOE);
+ case VM_READ | VM_WRITE:
+ return _PAGE_P(_PAGE_FOE);
+ case VM_EXEC:
+ return _PAGE_P(_PAGE_FOW|_PAGE_FOR);
+ case VM_EXEC | VM_READ:
+ return _PAGE_P(_PAGE_FOW);
+ case VM_EXEC | VM_WRITE:
+ return _PAGE_P(0);
+ case VM_EXEC | VM_READ | VM_WRITE:
+ return _PAGE_P(0);
+ case VM_SHARED:
+ return _PAGE_S(_PAGE_FOE|_PAGE_FOW|_PAGE_FOR);
+ case VM_SHARED | VM_READ:
+ return _PAGE_S(_PAGE_FOE|_PAGE_FOW);
+ case VM_SHARED | VM_WRITE:
+ return _PAGE_S(_PAGE_FOE);
+ case VM_SHARED | VM_READ | VM_WRITE:
+ return _PAGE_S(_PAGE_FOE);
+ case VM_SHARED | VM_EXEC:
+ return _PAGE_S(_PAGE_FOW|_PAGE_FOR);
+ case VM_SHARED | VM_EXEC | VM_READ:
+ return _PAGE_S(_PAGE_FOW);
+ case VM_SHARED | VM_EXEC | VM_WRITE:
+ return _PAGE_S(0);
+ case VM_SHARED | VM_EXEC | VM_READ | VM_WRITE:
+ return _PAGE_S(0);
+ default:
+ BUILD_BUG();
+ }
+}
+EXPORT_SYMBOL(vm_get_page_prot);
--
2.25.1

2022-01-24 19:15:37

by Anshuman Khandual

Subject: [RFC V1 17/31] sh/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT

This defines and exports a platform-specific custom vm_get_page_prot() by
subscribing to ARCH_HAS_VM_GET_PAGE_PROT. Subsequently all the __SXXX and
__PXXX macros, which are no longer needed, can be dropped.

Cc: Yoshinori Sato <[email protected]>
Cc: Rich Felker <[email protected]>
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Anshuman Khandual <[email protected]>
---
arch/sh/Kconfig | 1 +
arch/sh/include/asm/pgtable.h | 17 --------------
arch/sh/mm/mmap.c | 43 +++++++++++++++++++++++++++++++++++
3 files changed, 44 insertions(+), 17 deletions(-)

diff --git a/arch/sh/Kconfig b/arch/sh/Kconfig
index 2474a04ceac4..f3fcd1c5e002 100644
--- a/arch/sh/Kconfig
+++ b/arch/sh/Kconfig
@@ -11,6 +11,7 @@ config SUPERH
select ARCH_HAS_GCOV_PROFILE_ALL
select ARCH_HAS_PTE_SPECIAL
select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
+ select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_HIBERNATION_POSSIBLE if MMU
select ARCH_MIGHT_HAVE_PC_PARPORT
select ARCH_WANT_IPC_PARSE_VERSION
diff --git a/arch/sh/include/asm/pgtable.h b/arch/sh/include/asm/pgtable.h
index d7ddb1ec86a0..6fb9ec54cf9b 100644
--- a/arch/sh/include/asm/pgtable.h
+++ b/arch/sh/include/asm/pgtable.h
@@ -89,23 +89,6 @@ static inline unsigned long phys_addr_mask(void)
* completely separate permission bits for user and kernel space.
*/
/*xwr*/
-#define __P000 PAGE_NONE
-#define __P001 PAGE_READONLY
-#define __P010 PAGE_COPY
-#define __P011 PAGE_COPY
-#define __P100 PAGE_EXECREAD
-#define __P101 PAGE_EXECREAD
-#define __P110 PAGE_COPY
-#define __P111 PAGE_COPY
-
-#define __S000 PAGE_NONE
-#define __S001 PAGE_READONLY
-#define __S010 PAGE_WRITEONLY
-#define __S011 PAGE_SHARED
-#define __S100 PAGE_EXECREAD
-#define __S101 PAGE_EXECREAD
-#define __S110 PAGE_RWX
-#define __S111 PAGE_RWX

typedef pte_t *pte_addr_t;

diff --git a/arch/sh/mm/mmap.c b/arch/sh/mm/mmap.c
index 6a1a1297baae..21b3fae77a4e 100644
--- a/arch/sh/mm/mmap.c
+++ b/arch/sh/mm/mmap.c
@@ -162,3 +162,46 @@ int valid_mmap_phys_addr_range(unsigned long pfn, size_t size)
{
return 1;
}
+
+#ifdef CONFIG_MMU
+pgprot_t vm_get_page_prot(unsigned long vm_flags)
+{
+ switch (vm_flags & (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)) {
+ case VM_NONE:
+ return PAGE_NONE;
+ case VM_READ:
+ return PAGE_READONLY;
+ case VM_WRITE:
+ return PAGE_COPY;
+ case VM_READ | VM_WRITE:
+ return PAGE_COPY;
+ case VM_EXEC:
+ return PAGE_EXECREAD;
+ case VM_EXEC | VM_READ:
+ return PAGE_EXECREAD;
+ case VM_EXEC | VM_WRITE:
+ return PAGE_COPY;
+ case VM_EXEC | VM_READ | VM_WRITE:
+ return PAGE_COPY;
+ case VM_SHARED:
+ return PAGE_NONE;
+ case VM_SHARED | VM_READ:
+ return PAGE_READONLY;
+ case VM_SHARED | VM_WRITE:
+ return PAGE_WRITEONLY;
+ case VM_SHARED | VM_READ | VM_WRITE:
+ return PAGE_SHARED;
+ case VM_SHARED | VM_EXEC:
+ return PAGE_EXECREAD;
+ case VM_SHARED | VM_EXEC | VM_READ:
+ return PAGE_EXECREAD;
+ case VM_SHARED | VM_EXEC | VM_WRITE:
+ return PAGE_RWX;
+ case VM_SHARED | VM_EXEC | VM_READ | VM_WRITE:
+ return PAGE_RWX;
+ default:
+ BUILD_BUG();
+ }
+}
+EXPORT_SYMBOL(vm_get_page_prot);
+#endif
--
2.25.1

2022-01-24 19:15:38

by Anshuman Khandual

[permalink] [raw]
Subject: [RFC V1 06/31] sparc/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT

This defines and exports a platform-specific vm_get_page_prot() by
subscribing to ARCH_HAS_VM_GET_PAGE_PROT. All the __SXXX and __PXXX
macros, which are no longer needed, can then be dropped.

Cc: "David S. Miller" <[email protected]>
Cc: Khalid Aziz <[email protected]>
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Anshuman Khandual <[email protected]>
---
arch/sparc/Kconfig | 2 +
arch/sparc/include/asm/mman.h | 1 -
arch/sparc/include/asm/pgtable_32.h | 19 --------
arch/sparc/include/asm/pgtable_64.h | 19 --------
arch/sparc/mm/init_32.c | 41 +++++++++++++++++
arch/sparc/mm/init_64.c | 71 +++++++++++++++++++++--------
6 files changed, 95 insertions(+), 58 deletions(-)

diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig
index 1cab1b284f1a..ff29156f2380 100644
--- a/arch/sparc/Kconfig
+++ b/arch/sparc/Kconfig
@@ -59,6 +59,7 @@ config SPARC32
select HAVE_UID16
select OLD_SIGACTION
select ZONE_DMA
+ select ARCH_HAS_VM_GET_PAGE_PROT

config SPARC64
def_bool 64BIT
@@ -84,6 +85,7 @@ config SPARC64
select PERF_USE_VMALLOC
select ARCH_HAVE_NMI_SAFE_CMPXCHG
select HAVE_C_RECORDMCOUNT
+ select ARCH_HAS_VM_GET_PAGE_PROT
select HAVE_ARCH_AUDITSYSCALL
select ARCH_SUPPORTS_ATOMIC_RMW
select ARCH_SUPPORTS_DEBUG_PAGEALLOC
diff --git a/arch/sparc/include/asm/mman.h b/arch/sparc/include/asm/mman.h
index 274217e7ed70..874d21483202 100644
--- a/arch/sparc/include/asm/mman.h
+++ b/arch/sparc/include/asm/mman.h
@@ -46,7 +46,6 @@ static inline unsigned long sparc_calc_vm_prot_bits(unsigned long prot)
}
}

-#define arch_vm_get_page_prot(vm_flags) sparc_vm_get_page_prot(vm_flags)
static inline pgprot_t sparc_vm_get_page_prot(unsigned long vm_flags)
{
return (vm_flags & VM_SPARC_ADI) ? __pgprot(_PAGE_MCD_4V) : __pgprot(0);
diff --git a/arch/sparc/include/asm/pgtable_32.h b/arch/sparc/include/asm/pgtable_32.h
index ffccfe3b22ed..060a435f96d6 100644
--- a/arch/sparc/include/asm/pgtable_32.h
+++ b/arch/sparc/include/asm/pgtable_32.h
@@ -64,25 +64,6 @@ void paging_init(void);

extern unsigned long ptr_in_current_pgd;

-/* xwr */
-#define __P000 PAGE_NONE
-#define __P001 PAGE_READONLY
-#define __P010 PAGE_COPY
-#define __P011 PAGE_COPY
-#define __P100 PAGE_READONLY
-#define __P101 PAGE_READONLY
-#define __P110 PAGE_COPY
-#define __P111 PAGE_COPY
-
-#define __S000 PAGE_NONE
-#define __S001 PAGE_READONLY
-#define __S010 PAGE_SHARED
-#define __S011 PAGE_SHARED
-#define __S100 PAGE_READONLY
-#define __S101 PAGE_READONLY
-#define __S110 PAGE_SHARED
-#define __S111 PAGE_SHARED
-
/* First physical page can be anywhere, the following is needed so that
* va-->pa and vice versa conversions work properly without performance
* hit for all __pa()/__va() operations.
diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h
index 4679e45c8348..a779418ceba9 100644
--- a/arch/sparc/include/asm/pgtable_64.h
+++ b/arch/sparc/include/asm/pgtable_64.h
@@ -187,25 +187,6 @@ bool kern_addr_valid(unsigned long addr);
#define _PAGE_SZHUGE_4U _PAGE_SZ4MB_4U
#define _PAGE_SZHUGE_4V _PAGE_SZ4MB_4V

-/* These are actually filled in at boot time by sun4{u,v}_pgprot_init() */
-#define __P000 __pgprot(0)
-#define __P001 __pgprot(0)
-#define __P010 __pgprot(0)
-#define __P011 __pgprot(0)
-#define __P100 __pgprot(0)
-#define __P101 __pgprot(0)
-#define __P110 __pgprot(0)
-#define __P111 __pgprot(0)
-
-#define __S000 __pgprot(0)
-#define __S001 __pgprot(0)
-#define __S010 __pgprot(0)
-#define __S011 __pgprot(0)
-#define __S100 __pgprot(0)
-#define __S101 __pgprot(0)
-#define __S110 __pgprot(0)
-#define __S111 __pgprot(0)
-
#ifndef __ASSEMBLY__

pte_t mk_pte_io(unsigned long, pgprot_t, int, unsigned long);
diff --git a/arch/sparc/mm/init_32.c b/arch/sparc/mm/init_32.c
index 1e9f577f084d..efb3d6e6d7f6 100644
--- a/arch/sparc/mm/init_32.c
+++ b/arch/sparc/mm/init_32.c
@@ -302,3 +302,44 @@ void sparc_flush_page_to_ram(struct page *page)
__flush_page_to_ram(vaddr);
}
EXPORT_SYMBOL(sparc_flush_page_to_ram);
+
+pgprot_t vm_get_page_prot(unsigned long vm_flags)
+{
+ switch (vm_flags & (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)) {
+ case VM_NONE:
+ return PAGE_NONE;
+ case VM_READ:
+ return PAGE_READONLY;
+ case VM_WRITE:
+ return PAGE_COPY;
+ case VM_READ | VM_WRITE:
+ return PAGE_COPY;
+ case VM_EXEC:
+ return PAGE_READONLY;
+ case VM_EXEC | VM_READ:
+ return PAGE_READONLY;
+ case VM_EXEC | VM_WRITE:
+ return PAGE_COPY;
+ case VM_EXEC | VM_READ | VM_WRITE:
+ return PAGE_COPY;
+ case VM_SHARED:
+ return PAGE_NONE;
+ case VM_SHARED | VM_READ:
+ return PAGE_READONLY;
+ case VM_SHARED | VM_WRITE:
+ return PAGE_SHARED;
+ case VM_SHARED | VM_READ | VM_WRITE:
+ return PAGE_SHARED;
+ case VM_SHARED | VM_EXEC:
+ return PAGE_READONLY;
+ case VM_SHARED | VM_EXEC | VM_READ:
+ return PAGE_READONLY;
+ case VM_SHARED | VM_EXEC | VM_WRITE:
+ return PAGE_SHARED;
+ case VM_SHARED | VM_EXEC | VM_READ | VM_WRITE:
+ return PAGE_SHARED;
+ default:
+ BUILD_BUG();
+ }
+}
+EXPORT_SYMBOL(vm_get_page_prot);
diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c
index 1b23639e2fcd..46b5366f7f69 100644
--- a/arch/sparc/mm/init_64.c
+++ b/arch/sparc/mm/init_64.c
@@ -50,6 +50,7 @@
#include <asm/cpudata.h>
#include <asm/setup.h>
#include <asm/irq.h>
+#include <asm/mman.h>

#include "init_64.h"

@@ -2641,29 +2642,13 @@ static void prot_init_common(unsigned long page_none,
{
PAGE_COPY = __pgprot(page_copy);
PAGE_SHARED = __pgprot(page_shared);
-
- protection_map[0x0] = __pgprot(page_none);
- protection_map[0x1] = __pgprot(page_readonly & ~page_exec_bit);
- protection_map[0x2] = __pgprot(page_copy & ~page_exec_bit);
- protection_map[0x3] = __pgprot(page_copy & ~page_exec_bit);
- protection_map[0x4] = __pgprot(page_readonly);
- protection_map[0x5] = __pgprot(page_readonly);
- protection_map[0x6] = __pgprot(page_copy);
- protection_map[0x7] = __pgprot(page_copy);
- protection_map[0x8] = __pgprot(page_none);
- protection_map[0x9] = __pgprot(page_readonly & ~page_exec_bit);
- protection_map[0xa] = __pgprot(page_shared & ~page_exec_bit);
- protection_map[0xb] = __pgprot(page_shared & ~page_exec_bit);
- protection_map[0xc] = __pgprot(page_readonly);
- protection_map[0xd] = __pgprot(page_readonly);
- protection_map[0xe] = __pgprot(page_shared);
- protection_map[0xf] = __pgprot(page_shared);
}

+static unsigned long page_none, page_shared, page_copy, page_readonly;
+static unsigned long page_exec_bit;
+
static void __init sun4u_pgprot_init(void)
{
- unsigned long page_none, page_shared, page_copy, page_readonly;
- unsigned long page_exec_bit;
int i;

PAGE_KERNEL = __pgprot (_PAGE_PRESENT_4U | _PAGE_VALID |
@@ -3183,3 +3168,51 @@ void copy_highpage(struct page *to, struct page *from)
}
}
EXPORT_SYMBOL(copy_highpage);
+
+static inline pgprot_t __vm_get_page_prot(unsigned long vm_flags)
+{
+ switch (vm_flags & (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)) {
+ case VM_NONE:
+ return __pgprot(page_none);
+ case VM_READ:
+ return __pgprot(page_readonly & ~page_exec_bit);
+ case VM_WRITE:
+ return __pgprot(page_copy & ~page_exec_bit);
+ case VM_READ | VM_WRITE:
+ return __pgprot(page_copy & ~page_exec_bit);
+ case VM_EXEC:
+ return __pgprot(page_readonly);
+ case VM_EXEC | VM_READ:
+ return __pgprot(page_readonly);
+ case VM_EXEC | VM_WRITE:
+ return __pgprot(page_copy);
+ case VM_EXEC | VM_READ | VM_WRITE:
+ return __pgprot(page_copy);
+ case VM_SHARED:
+ return __pgprot(page_none);
+ case VM_SHARED | VM_READ:
+ return __pgprot(page_readonly & ~page_exec_bit);
+ case VM_SHARED | VM_WRITE:
+ return __pgprot(page_shared & ~page_exec_bit);
+ case VM_SHARED | VM_READ | VM_WRITE:
+ return __pgprot(page_shared & ~page_exec_bit);
+ case VM_SHARED | VM_EXEC:
+ return __pgprot(page_readonly);
+ case VM_SHARED | VM_EXEC | VM_READ:
+ return __pgprot(page_readonly);
+ case VM_SHARED | VM_EXEC | VM_WRITE:
+ return __pgprot(page_shared);
+ case VM_SHARED | VM_EXEC | VM_READ | VM_WRITE:
+ return __pgprot(page_shared);
+ default:
+ BUILD_BUG();
+ }
+}
+
+pgprot_t vm_get_page_prot(unsigned long vm_flags)
+{
+ return __pgprot(pgprot_val(__vm_get_page_prot(vm_flags)) |
+ pgprot_val(sparc_vm_get_page_prot(vm_flags)));
+
+}
+EXPORT_SYMBOL(vm_get_page_prot);
--
2.25.1

2022-01-24 19:15:38

by Anshuman Khandual

[permalink] [raw]
Subject: [RFC V1 20/31] xtensa/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT

This defines and exports a platform-specific vm_get_page_prot() by
subscribing to ARCH_HAS_VM_GET_PAGE_PROT. All the __SXXX and __PXXX
macros, which are no longer needed, can then be dropped.

Cc: Chris Zankel <[email protected]>
Cc: Guo Ren <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Anshuman Khandual <[email protected]>
---
arch/xtensa/Kconfig | 1 +
arch/xtensa/include/asm/pgtable.h | 18 --------------
arch/xtensa/mm/init.c | 41 +++++++++++++++++++++++++++++++
3 files changed, 42 insertions(+), 18 deletions(-)

diff --git a/arch/xtensa/Kconfig b/arch/xtensa/Kconfig
index 8ac599aa6d99..1608f7517546 100644
--- a/arch/xtensa/Kconfig
+++ b/arch/xtensa/Kconfig
@@ -9,6 +9,7 @@ config XTENSA
select ARCH_HAS_DMA_SET_UNCACHED if MMU
select ARCH_HAS_STRNCPY_FROM_USER if !KASAN
select ARCH_HAS_STRNLEN_USER
+ select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_USE_MEMTEST
select ARCH_USE_QUEUED_RWLOCKS
select ARCH_USE_QUEUED_SPINLOCKS
diff --git a/arch/xtensa/include/asm/pgtable.h b/arch/xtensa/include/asm/pgtable.h
index bd5aeb795567..ed6e93097142 100644
--- a/arch/xtensa/include/asm/pgtable.h
+++ b/arch/xtensa/include/asm/pgtable.h
@@ -200,24 +200,6 @@
* What follows is the closest we can get by reasonable means..
* See linux/mm/mmap.c for protection_map[] array that uses these definitions.
*/
-#define __P000 PAGE_NONE /* private --- */
-#define __P001 PAGE_READONLY /* private --r */
-#define __P010 PAGE_COPY /* private -w- */
-#define __P011 PAGE_COPY /* private -wr */
-#define __P100 PAGE_READONLY_EXEC /* private x-- */
-#define __P101 PAGE_READONLY_EXEC /* private x-r */
-#define __P110 PAGE_COPY_EXEC /* private xw- */
-#define __P111 PAGE_COPY_EXEC /* private xwr */
-
-#define __S000 PAGE_NONE /* shared --- */
-#define __S001 PAGE_READONLY /* shared --r */
-#define __S010 PAGE_SHARED /* shared -w- */
-#define __S011 PAGE_SHARED /* shared -wr */
-#define __S100 PAGE_READONLY_EXEC /* shared x-- */
-#define __S101 PAGE_READONLY_EXEC /* shared x-r */
-#define __S110 PAGE_SHARED_EXEC /* shared xw- */
-#define __S111 PAGE_SHARED_EXEC /* shared xwr */
-
#ifndef __ASSEMBLY__

#define pte_ERROR(e) \
diff --git a/arch/xtensa/mm/init.c b/arch/xtensa/mm/init.c
index 6a32b2cf2718..b2cc016dec92 100644
--- a/arch/xtensa/mm/init.c
+++ b/arch/xtensa/mm/init.c
@@ -216,3 +216,44 @@ static int __init parse_memmap_opt(char *str)
return 0;
}
early_param("memmap", parse_memmap_opt);
+
+pgprot_t vm_get_page_prot(unsigned long vm_flags)
+{
+ switch (vm_flags & (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)) {
+ case VM_NONE:
+ return PAGE_NONE;
+ case VM_READ:
+ return PAGE_READONLY;
+ case VM_WRITE:
+ return PAGE_COPY;
+ case VM_READ | VM_WRITE:
+ return PAGE_COPY;
+ case VM_EXEC:
+ return PAGE_READONLY_EXEC;
+ case VM_EXEC | VM_READ:
+ return PAGE_READONLY_EXEC;
+ case VM_EXEC | VM_WRITE:
+ return PAGE_COPY_EXEC;
+ case VM_EXEC | VM_READ | VM_WRITE:
+ return PAGE_COPY_EXEC;
+ case VM_SHARED:
+ return PAGE_NONE;
+ case VM_SHARED | VM_READ:
+ return PAGE_READONLY;
+ case VM_SHARED | VM_WRITE:
+ return PAGE_SHARED;
+ case VM_SHARED | VM_READ | VM_WRITE:
+ return PAGE_SHARED;
+ case VM_SHARED | VM_EXEC:
+ return PAGE_READONLY_EXEC;
+ case VM_SHARED | VM_EXEC | VM_READ:
+ return PAGE_READONLY_EXEC;
+ case VM_SHARED | VM_EXEC | VM_WRITE:
+ return PAGE_SHARED_EXEC;
+ case VM_SHARED | VM_EXEC | VM_READ | VM_WRITE:
+ return PAGE_SHARED_EXEC;
+ default:
+ BUILD_BUG();
+ }
+}
+EXPORT_SYMBOL(vm_get_page_prot);
--
2.25.1

2022-01-24 19:15:38

by Anshuman Khandual

[permalink] [raw]
Subject: [RFC V1 05/31] arm64/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT

This defines and exports a platform-specific vm_get_page_prot() by
subscribing to ARCH_HAS_VM_GET_PAGE_PROT. All the __SXXX and __PXXX
macros, which are no longer needed, can then be dropped. This also
localizes the arch_filter_pgprot() and arch_vm_get_page_prot() helpers,
unsubscribing from ARCH_HAS_FILTER_PGPROT as well.

Cc: Catalin Marinas <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Anshuman Khandual <[email protected]>
---
arch/arm64/Kconfig | 2 +-
arch/arm64/include/asm/mman.h | 3 +-
arch/arm64/include/asm/pgtable-prot.h | 18 ----------
arch/arm64/include/asm/pgtable.h | 2 +-
arch/arm64/mm/mmap.c | 50 +++++++++++++++++++++++++++
5 files changed, 53 insertions(+), 22 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index cad609528e58..fce2d0fc4ecc 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -23,7 +23,6 @@ config ARM64
select ARCH_HAS_DMA_PREP_COHERENT
select ARCH_HAS_ACPI_TABLE_UPGRADE if ACPI
select ARCH_HAS_FAST_MULTIPLIER
- select ARCH_HAS_FILTER_PGPROT
select ARCH_HAS_FORTIFY_SOURCE
select ARCH_HAS_GCOV_PROFILE_ALL
select ARCH_HAS_GIGANTIC_PAGE
@@ -44,6 +43,7 @@ config ARM64
select ARCH_HAS_SYSCALL_WRAPPER
select ARCH_HAS_TEARDOWN_DMA_OPS if IOMMU_SUPPORT
select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
+ select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_HAS_ZONE_DMA_SET if EXPERT
select ARCH_HAVE_ELF_PROT
select ARCH_HAVE_NMI_SAFE_CMPXCHG
diff --git a/arch/arm64/include/asm/mman.h b/arch/arm64/include/asm/mman.h
index e3e28f7daf62..85f41f72a8b3 100644
--- a/arch/arm64/include/asm/mman.h
+++ b/arch/arm64/include/asm/mman.h
@@ -35,7 +35,7 @@ static inline unsigned long arch_calc_vm_flag_bits(unsigned long flags)
}
#define arch_calc_vm_flag_bits(flags) arch_calc_vm_flag_bits(flags)

-static inline pgprot_t arch_vm_get_page_prot(unsigned long vm_flags)
+static inline pgprot_t arm64_arch_vm_get_page_prot(unsigned long vm_flags)
{
pteval_t prot = 0;

@@ -57,7 +57,6 @@ static inline pgprot_t arch_vm_get_page_prot(unsigned long vm_flags)

return __pgprot(prot);
}
-#define arch_vm_get_page_prot(vm_flags) arch_vm_get_page_prot(vm_flags)

static inline bool arch_validate_prot(unsigned long prot,
unsigned long addr __always_unused)
diff --git a/arch/arm64/include/asm/pgtable-prot.h b/arch/arm64/include/asm/pgtable-prot.h
index 7032f04c8ac6..d8ee0aa7886d 100644
--- a/arch/arm64/include/asm/pgtable-prot.h
+++ b/arch/arm64/include/asm/pgtable-prot.h
@@ -88,24 +88,6 @@ extern bool arm64_use_ng_mappings;
#define PAGE_READONLY_EXEC __pgprot(_PAGE_DEFAULT | PTE_USER | PTE_RDONLY | PTE_NG | PTE_PXN)
#define PAGE_EXECONLY __pgprot(_PAGE_DEFAULT | PTE_RDONLY | PTE_NG | PTE_PXN)

-#define __P000 PAGE_NONE
-#define __P001 PAGE_READONLY
-#define __P010 PAGE_READONLY
-#define __P011 PAGE_READONLY
-#define __P100 PAGE_EXECONLY
-#define __P101 PAGE_READONLY_EXEC
-#define __P110 PAGE_READONLY_EXEC
-#define __P111 PAGE_READONLY_EXEC
-
-#define __S000 PAGE_NONE
-#define __S001 PAGE_READONLY
-#define __S010 PAGE_SHARED
-#define __S011 PAGE_SHARED
-#define __S100 PAGE_EXECONLY
-#define __S101 PAGE_READONLY_EXEC
-#define __S110 PAGE_SHARED_EXEC
-#define __S111 PAGE_SHARED_EXEC
-
#endif /* __ASSEMBLY__ */

#endif /* __ASM_PGTABLE_PROT_H */
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index c4ba047a82d2..5a73501a45ed 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -1017,7 +1017,7 @@ static inline bool arch_wants_old_prefaulted_pte(void)
}
#define arch_wants_old_prefaulted_pte arch_wants_old_prefaulted_pte

-static inline pgprot_t arch_filter_pgprot(pgprot_t prot)
+static inline pgprot_t arm64_arch_filter_pgprot(pgprot_t prot)
{
if (cpus_have_const_cap(ARM64_HAS_EPAN))
return prot;
diff --git a/arch/arm64/mm/mmap.c b/arch/arm64/mm/mmap.c
index a38f54cd638c..ad605eb86d23 100644
--- a/arch/arm64/mm/mmap.c
+++ b/arch/arm64/mm/mmap.c
@@ -10,6 +10,7 @@
#include <linux/types.h>

#include <asm/page.h>
+#include <asm/mman.h>

/*
* You really shouldn't be using read() or write() on /dev/mem. This might go
@@ -38,3 +39,52 @@ int valid_mmap_phys_addr_range(unsigned long pfn, size_t size)
{
return !(((pfn << PAGE_SHIFT) + size) & ~PHYS_MASK);
}
+
+static inline pgprot_t __vm_get_page_prot(unsigned long vm_flags)
+{
+ switch (vm_flags & (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)) {
+ case VM_NONE:
+ return PAGE_NONE;
+ case VM_READ:
+ return PAGE_READONLY;
+ case VM_WRITE:
+ return PAGE_READONLY;
+ case VM_READ | VM_WRITE:
+ return PAGE_READONLY;
+ case VM_EXEC:
+ return PAGE_EXECONLY;
+ case VM_EXEC | VM_READ:
+ return PAGE_READONLY_EXEC;
+ case VM_EXEC | VM_WRITE:
+ return PAGE_READONLY_EXEC;
+ case VM_EXEC | VM_READ | VM_WRITE:
+ return PAGE_READONLY_EXEC;
+ case VM_SHARED:
+ return PAGE_NONE;
+ case VM_SHARED | VM_READ:
+ return PAGE_READONLY;
+ case VM_SHARED | VM_WRITE:
+ return PAGE_SHARED;
+ case VM_SHARED | VM_READ | VM_WRITE:
+ return PAGE_SHARED;
+ case VM_SHARED | VM_EXEC:
+ return PAGE_EXECONLY;
+ case VM_SHARED | VM_EXEC | VM_READ:
+ return PAGE_READONLY_EXEC;
+ case VM_SHARED | VM_EXEC | VM_WRITE:
+ return PAGE_SHARED_EXEC;
+ case VM_SHARED | VM_EXEC | VM_READ | VM_WRITE:
+ return PAGE_SHARED_EXEC;
+ default:
+ BUILD_BUG();
+ }
+}
+
+pgprot_t vm_get_page_prot(unsigned long vm_flags)
+{
+ pgprot_t ret = __pgprot(pgprot_val(__vm_get_page_prot(vm_flags)) |
+ pgprot_val(arm64_arch_vm_get_page_prot(vm_flags)));
+
+ return arm64_arch_filter_pgprot(ret);
+}
+EXPORT_SYMBOL(vm_get_page_prot);
--
2.25.1

2022-01-24 19:15:39

by Anshuman Khandual

[permalink] [raw]
Subject: [RFC V1 19/31] csky/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT

This defines and exports a platform-specific vm_get_page_prot() by
subscribing to ARCH_HAS_VM_GET_PAGE_PROT. All the __SXXX and __PXXX
macros, which are no longer needed, can then be dropped.

Cc: Geert Uytterhoeven <[email protected]>
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Anshuman Khandual <[email protected]>
---
arch/csky/Kconfig | 1 +
arch/csky/include/asm/pgtable.h | 18 ---------------
arch/csky/mm/init.c | 41 +++++++++++++++++++++++++++++++++
3 files changed, 42 insertions(+), 18 deletions(-)

diff --git a/arch/csky/Kconfig b/arch/csky/Kconfig
index 132f43f12dd8..209dac5686dd 100644
--- a/arch/csky/Kconfig
+++ b/arch/csky/Kconfig
@@ -6,6 +6,7 @@ config CSKY
select ARCH_HAS_GCOV_PROFILE_ALL
select ARCH_HAS_SYNC_DMA_FOR_CPU
select ARCH_HAS_SYNC_DMA_FOR_DEVICE
+ select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_USE_BUILTIN_BSWAP
select ARCH_USE_QUEUED_RWLOCKS
select ARCH_WANT_FRAME_POINTERS if !CPU_CK610 && $(cc-option,-mbacktrace)
diff --git a/arch/csky/include/asm/pgtable.h b/arch/csky/include/asm/pgtable.h
index 151607ed5158..2c6b1cfb1cce 100644
--- a/arch/csky/include/asm/pgtable.h
+++ b/arch/csky/include/asm/pgtable.h
@@ -76,24 +76,6 @@
#define MAX_SWAPFILES_CHECK() \
BUILD_BUG_ON(MAX_SWAPFILES_SHIFT != 5)

-#define __P000 PAGE_NONE
-#define __P001 PAGE_READ
-#define __P010 PAGE_READ
-#define __P011 PAGE_READ
-#define __P100 PAGE_READ
-#define __P101 PAGE_READ
-#define __P110 PAGE_READ
-#define __P111 PAGE_READ
-
-#define __S000 PAGE_NONE
-#define __S001 PAGE_READ
-#define __S010 PAGE_WRITE
-#define __S011 PAGE_WRITE
-#define __S100 PAGE_READ
-#define __S101 PAGE_READ
-#define __S110 PAGE_WRITE
-#define __S111 PAGE_WRITE
-
extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
#define ZERO_PAGE(vaddr) (virt_to_page(empty_zero_page))

diff --git a/arch/csky/mm/init.c b/arch/csky/mm/init.c
index bf2004aa811a..55952d8f8abc 100644
--- a/arch/csky/mm/init.c
+++ b/arch/csky/mm/init.c
@@ -197,3 +197,44 @@ void __init fixaddr_init(void)
vaddr = __fix_to_virt(__end_of_fixed_addresses - 1) & PMD_MASK;
fixrange_init(vaddr, vaddr + PMD_SIZE, swapper_pg_dir);
}
+
+pgprot_t vm_get_page_prot(unsigned long vm_flags)
+{
+ switch (vm_flags & (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)) {
+ case VM_NONE:
+ return PAGE_NONE;
+ case VM_READ:
+ return PAGE_READ;
+ case VM_WRITE:
+ return PAGE_READ;
+ case VM_READ | VM_WRITE:
+ return PAGE_READ;
+ case VM_EXEC:
+ return PAGE_READ;
+ case VM_EXEC | VM_READ:
+ return PAGE_READ;
+ case VM_EXEC | VM_WRITE:
+ return PAGE_READ;
+ case VM_EXEC | VM_READ | VM_WRITE:
+ return PAGE_READ;
+ case VM_SHARED:
+ return PAGE_NONE;
+ case VM_SHARED | VM_READ:
+ return PAGE_READ;
+ case VM_SHARED | VM_WRITE:
+ return PAGE_WRITE;
+ case VM_SHARED | VM_READ | VM_WRITE:
+ return PAGE_WRITE;
+ case VM_SHARED | VM_EXEC:
+ return PAGE_READ;
+ case VM_SHARED | VM_EXEC | VM_READ:
+ return PAGE_READ;
+ case VM_SHARED | VM_EXEC | VM_WRITE:
+ return PAGE_WRITE;
+ case VM_SHARED | VM_EXEC | VM_READ | VM_WRITE:
+ return PAGE_WRITE;
+ default:
+ BUILD_BUG();
+ }
+}
+EXPORT_SYMBOL(vm_get_page_prot);
--
2.25.1

2022-01-24 19:15:39

by Anshuman Khandual

[permalink] [raw]
Subject: [RFC V1 21/31] parisc/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT

This defines and exports a platform-specific vm_get_page_prot() by
subscribing to ARCH_HAS_VM_GET_PAGE_PROT. All the __SXXX and __PXXX
macros, which are no longer needed, can then be dropped.

Cc: "James E.J. Bottomley" <[email protected]>
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Anshuman Khandual <[email protected]>
---
arch/parisc/Kconfig | 1 +
arch/parisc/include/asm/pgtable.h | 20 ---------------
arch/parisc/mm/init.c | 41 +++++++++++++++++++++++++++++++
3 files changed, 42 insertions(+), 20 deletions(-)

diff --git a/arch/parisc/Kconfig b/arch/parisc/Kconfig
index 43c1c880def6..de512f120b50 100644
--- a/arch/parisc/Kconfig
+++ b/arch/parisc/Kconfig
@@ -10,6 +10,7 @@ config PARISC
select ARCH_HAS_ELF_RANDOMIZE
select ARCH_HAS_STRICT_KERNEL_RWX
select ARCH_HAS_UBSAN_SANITIZE_ALL
+ select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_NO_SG_CHAIN
select ARCH_SUPPORTS_HUGETLBFS if PA20
select ARCH_SUPPORTS_MEMORY_FAILURE
diff --git a/arch/parisc/include/asm/pgtable.h b/arch/parisc/include/asm/pgtable.h
index 3e7cf882639f..80d99b2b5913 100644
--- a/arch/parisc/include/asm/pgtable.h
+++ b/arch/parisc/include/asm/pgtable.h
@@ -269,26 +269,6 @@ extern void __update_cache(pte_t pte);
* pages.
*/

- /*xwr*/
-#define __P000 PAGE_NONE
-#define __P001 PAGE_READONLY
-#define __P010 __P000 /* copy on write */
-#define __P011 __P001 /* copy on write */
-#define __P100 PAGE_EXECREAD
-#define __P101 PAGE_EXECREAD
-#define __P110 __P100 /* copy on write */
-#define __P111 __P101 /* copy on write */
-
-#define __S000 PAGE_NONE
-#define __S001 PAGE_READONLY
-#define __S010 PAGE_WRITEONLY
-#define __S011 PAGE_SHARED
-#define __S100 PAGE_EXECREAD
-#define __S101 PAGE_EXECREAD
-#define __S110 PAGE_RWX
-#define __S111 PAGE_RWX
-
-
extern pgd_t swapper_pg_dir[]; /* declared in init_task.c */

/* initial page tables for 0-8MB for kernel */
diff --git a/arch/parisc/mm/init.c b/arch/parisc/mm/init.c
index 1ae31db9988f..c8316e97e1a2 100644
--- a/arch/parisc/mm/init.c
+++ b/arch/parisc/mm/init.c
@@ -866,3 +866,44 @@ void flush_tlb_all(void)
spin_unlock(&sid_lock);
}
#endif
+
+pgprot_t vm_get_page_prot(unsigned long vm_flags)
+{
+ switch (vm_flags & (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)) {
+ case VM_NONE:
+ return PAGE_NONE;
+ case VM_READ:
+ return PAGE_READONLY;
+ case VM_WRITE:
+ return PAGE_NONE;
+ case VM_READ | VM_WRITE:
+ return PAGE_READONLY;
+ case VM_EXEC:
+ return PAGE_EXECREAD;
+ case VM_EXEC | VM_READ:
+ return PAGE_EXECREAD;
+ case VM_EXEC | VM_WRITE:
+ return PAGE_EXECREAD;
+ case VM_EXEC | VM_READ | VM_WRITE:
+ return PAGE_EXECREAD;
+ case VM_SHARED:
+ return PAGE_NONE;
+ case VM_SHARED | VM_READ:
+ return PAGE_READONLY;
+ case VM_SHARED | VM_WRITE:
+ return PAGE_WRITEONLY;
+ case VM_SHARED | VM_READ | VM_WRITE:
+ return PAGE_SHARED;
+ case VM_SHARED | VM_EXEC:
+ return PAGE_EXECREAD;
+ case VM_SHARED | VM_EXEC | VM_READ:
+ return PAGE_EXECREAD;
+ case VM_SHARED | VM_EXEC | VM_WRITE:
+ return PAGE_RWX;
+ case VM_SHARED | VM_EXEC | VM_READ | VM_WRITE:
+ return PAGE_RWX;
+ default:
+ BUILD_BUG();
+ }
+}
+EXPORT_SYMBOL(vm_get_page_prot);
--
2.25.1

2022-01-24 19:15:56

by Anshuman Khandual

[permalink] [raw]
Subject: [RFC V1 23/31] um/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT

This defines and exports a platform-specific vm_get_page_prot() by
subscribing to ARCH_HAS_VM_GET_PAGE_PROT. All the __SXXX and __PXXX
macros, which are no longer needed, can then be dropped.

Cc: Jeff Dike <[email protected]>
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Anshuman Khandual <[email protected]>
---
arch/um/Kconfig | 1 +
arch/um/include/asm/pgtable.h | 17 ---------------
arch/um/kernel/mem.c | 41 +++++++++++++++++++++++++++++++++++
3 files changed, 42 insertions(+), 17 deletions(-)

diff --git a/arch/um/Kconfig b/arch/um/Kconfig
index 4d398b80aea8..5836296868a8 100644
--- a/arch/um/Kconfig
+++ b/arch/um/Kconfig
@@ -9,6 +9,7 @@ config UML
select ARCH_HAS_KCOV
select ARCH_HAS_STRNCPY_FROM_USER
select ARCH_HAS_STRNLEN_USER
+ select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_NO_PREEMPT
select HAVE_ARCH_AUDITSYSCALL
select HAVE_ARCH_SECCOMP_FILTER
diff --git a/arch/um/include/asm/pgtable.h b/arch/um/include/asm/pgtable.h
index b9e20bbe2f75..d982622c0708 100644
--- a/arch/um/include/asm/pgtable.h
+++ b/arch/um/include/asm/pgtable.h
@@ -68,23 +68,6 @@ extern unsigned long end_iomem;
* Also, write permissions imply read permissions. This is the closest we can
* get..
*/
-#define __P000 PAGE_NONE
-#define __P001 PAGE_READONLY
-#define __P010 PAGE_COPY
-#define __P011 PAGE_COPY
-#define __P100 PAGE_READONLY
-#define __P101 PAGE_READONLY
-#define __P110 PAGE_COPY
-#define __P111 PAGE_COPY
-
-#define __S000 PAGE_NONE
-#define __S001 PAGE_READONLY
-#define __S010 PAGE_SHARED
-#define __S011 PAGE_SHARED
-#define __S100 PAGE_READONLY
-#define __S101 PAGE_READONLY
-#define __S110 PAGE_SHARED
-#define __S111 PAGE_SHARED

/*
* ZERO_PAGE is a global shared page that is always zero: used
diff --git a/arch/um/kernel/mem.c b/arch/um/kernel/mem.c
index 15295c3237a0..1f53584ac361 100644
--- a/arch/um/kernel/mem.c
+++ b/arch/um/kernel/mem.c
@@ -197,3 +197,44 @@ void *uml_kmalloc(int size, int flags)
{
return kmalloc(size, flags);
}
+
+pgprot_t vm_get_page_prot(unsigned long vm_flags)
+{
+ switch (vm_flags & (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)) {
+ case VM_NONE:
+ return PAGE_NONE;
+ case VM_READ:
+ return PAGE_READONLY;
+ case VM_WRITE:
+ return PAGE_COPY;
+ case VM_READ | VM_WRITE:
+ return PAGE_COPY;
+ case VM_EXEC:
+ return PAGE_READONLY;
+ case VM_EXEC | VM_READ:
+ return PAGE_READONLY;
+ case VM_EXEC | VM_WRITE:
+ return PAGE_COPY;
+ case VM_EXEC | VM_READ | VM_WRITE:
+ return PAGE_COPY;
+ case VM_SHARED:
+ return PAGE_NONE;
+ case VM_SHARED | VM_READ:
+ return PAGE_READONLY;
+ case VM_SHARED | VM_WRITE:
+ return PAGE_SHARED;
+ case VM_SHARED | VM_READ | VM_WRITE:
+ return PAGE_SHARED;
+ case VM_SHARED | VM_EXEC:
+ return PAGE_READONLY;
+ case VM_SHARED | VM_EXEC | VM_READ:
+ return PAGE_READONLY;
+ case VM_SHARED | VM_EXEC | VM_WRITE:
+ return PAGE_SHARED;
+ case VM_SHARED | VM_EXEC | VM_READ | VM_WRITE:
+ return PAGE_SHARED;
+ default:
+ BUILD_BUG();
+ }
+}
+EXPORT_SYMBOL(vm_get_page_prot);
--
2.25.1

2022-01-24 19:15:56

by Anshuman Khandual

[permalink] [raw]
Subject: [RFC V1 29/31] mm/mmap: Drop generic vm_get_page_prot()

All platforms now export their own vm_get_page_prot() implementation via
ARCH_HAS_VM_GET_PAGE_PROT, so the generic implementation is no longer
needed and can be dropped.

Cc: Andrew Morton <[email protected]>
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Anshuman Khandual <[email protected]>
---
mm/mmap.c | 40 ----------------------------------------
1 file changed, 40 deletions(-)

diff --git a/mm/mmap.c b/mm/mmap.c
index 2fc597cf8b8d..368bc8aee45b 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -102,46 +102,6 @@ static void unmap_region(struct mm_struct *mm,
* w: (no) no
* x: (yes) yes
*/
-pgprot_t vm_get_page_prot(unsigned long vm_flags)
-{
- switch (vm_flags & (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)) {
- case VM_NONE:
- return __P000;
- case VM_READ:
- return __P001;
- case VM_WRITE:
- return __P010;
- case VM_READ | VM_WRITE:
- return __P011;
- case VM_EXEC:
- return __P100;
- case VM_EXEC | VM_READ:
- return __P101;
- case VM_EXEC | VM_WRITE:
- return __P110;
- case VM_EXEC | VM_READ | VM_WRITE:
- return __P111;
- case VM_SHARED:
- return __S000;
- case VM_SHARED | VM_READ:
- return __S001;
- case VM_SHARED | VM_WRITE:
- return __S010;
- case VM_SHARED | VM_READ | VM_WRITE:
- return __S011;
- case VM_SHARED | VM_EXEC:
- return __S100;
- case VM_SHARED | VM_EXEC | VM_READ:
- return __S101;
- case VM_SHARED | VM_EXEC | VM_WRITE:
- return __S110;
- case VM_SHARED | VM_EXEC | VM_READ | VM_WRITE:
- return __S111;
- default:
- BUILD_BUG();
- }
-}
-EXPORT_SYMBOL(vm_get_page_prot);
#endif /* CONFIG_ARCH_HAS_VM_GET_PAGE_PROT */

static pgprot_t vm_pgprot_modify(pgprot_t oldprot, unsigned long vm_flags)
--
2.25.1

2022-01-24 19:15:56

by Anshuman Khandual

[permalink] [raw]
Subject: [RFC V1 26/31] hexagon/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT

This defines and exports a platform-specific vm_get_page_prot() by
subscribing to ARCH_HAS_VM_GET_PAGE_PROT. All the __SXXX and __PXXX
macros, which are no longer needed, can then be dropped.

Cc: Brian Cain <[email protected]>
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Anshuman Khandual <[email protected]>
---
arch/hexagon/Kconfig | 1 +
arch/hexagon/include/asm/pgtable.h | 24 -----------------
arch/hexagon/mm/init.c | 42 ++++++++++++++++++++++++++++++
3 files changed, 43 insertions(+), 24 deletions(-)

diff --git a/arch/hexagon/Kconfig b/arch/hexagon/Kconfig
index 15dd8f38b698..cdc5df32a1e3 100644
--- a/arch/hexagon/Kconfig
+++ b/arch/hexagon/Kconfig
@@ -6,6 +6,7 @@ config HEXAGON
def_bool y
select ARCH_32BIT_OFF_T
select ARCH_HAS_SYNC_DMA_FOR_DEVICE
+ select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_NO_PREEMPT
select DMA_GLOBAL_POOL
# Other pending projects/to-do items.
diff --git a/arch/hexagon/include/asm/pgtable.h b/arch/hexagon/include/asm/pgtable.h
index 18cd6ea9ab23..5eceddfe013d 100644
--- a/arch/hexagon/include/asm/pgtable.h
+++ b/arch/hexagon/include/asm/pgtable.h
@@ -127,31 +127,7 @@ extern unsigned long _dflt_cache_att;
#define CACHEDEF (CACHE_DEFAULT << 6)

/* Private (copy-on-write) page protections. */
-#define __P000 __pgprot(_PAGE_PRESENT | _PAGE_USER | CACHEDEF)
-#define __P001 __pgprot(_PAGE_PRESENT | _PAGE_USER | _PAGE_READ | CACHEDEF)
-#define __P010 __P000 /* Write-only copy-on-write */
-#define __P011 __P001 /* Read/Write copy-on-write */
-#define __P100 __pgprot(_PAGE_PRESENT | _PAGE_USER | \
- _PAGE_EXECUTE | CACHEDEF)
-#define __P101 __pgprot(_PAGE_PRESENT | _PAGE_USER | _PAGE_EXECUTE | \
- _PAGE_READ | CACHEDEF)
-#define __P110 __P100 /* Write/execute copy-on-write */
-#define __P111 __P101 /* Read/Write/Execute, copy-on-write */
-
/* Shared page protections. */
-#define __S000 __P000
-#define __S001 __P001
-#define __S010 __pgprot(_PAGE_PRESENT | _PAGE_USER | \
- _PAGE_WRITE | CACHEDEF)
-#define __S011 __pgprot(_PAGE_PRESENT | _PAGE_USER | _PAGE_READ | \
- _PAGE_WRITE | CACHEDEF)
-#define __S100 __pgprot(_PAGE_PRESENT | _PAGE_USER | \
- _PAGE_EXECUTE | CACHEDEF)
-#define __S101 __P101
-#define __S110 __pgprot(_PAGE_PRESENT | _PAGE_USER | \
- _PAGE_EXECUTE | _PAGE_WRITE | CACHEDEF)
-#define __S111 __pgprot(_PAGE_PRESENT | _PAGE_USER | _PAGE_READ | \
- _PAGE_EXECUTE | _PAGE_WRITE | CACHEDEF)

extern pgd_t swapper_pg_dir[PTRS_PER_PGD]; /* located in head.S */

diff --git a/arch/hexagon/mm/init.c b/arch/hexagon/mm/init.c
index f01e91e10d95..9f411e3eba68 100644
--- a/arch/hexagon/mm/init.c
+++ b/arch/hexagon/mm/init.c
@@ -236,3 +236,45 @@ void __init setup_arch_memory(void)
* which is called by start_kernel() later on in the process
*/
}
+
+pgprot_t vm_get_page_prot(unsigned long vm_flags)
+{
+ switch (vm_flags & (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)) {
+ case VM_NONE:
+ return __pgprot(_PAGE_PRESENT|_PAGE_USER|CACHEDEF);
+ case VM_READ:
+ return __pgprot(_PAGE_PRESENT|_PAGE_USER|_PAGE_READ|CACHEDEF);
+ case VM_WRITE:
+ return __pgprot(_PAGE_PRESENT|_PAGE_USER|CACHEDEF);
+ case VM_READ | VM_WRITE:
+ return __pgprot(_PAGE_PRESENT|_PAGE_USER|_PAGE_READ|CACHEDEF);
+ case VM_EXEC:
+ return __pgprot(_PAGE_PRESENT|_PAGE_USER|_PAGE_EXECUTE|CACHEDEF);
+ case VM_EXEC | VM_READ:
+ return __pgprot(_PAGE_PRESENT|_PAGE_USER|_PAGE_EXECUTE|_PAGE_READ|CACHEDEF);
+ case VM_EXEC | VM_WRITE:
+ return __pgprot(_PAGE_PRESENT|_PAGE_USER|_PAGE_EXECUTE|CACHEDEF);
+ case VM_EXEC | VM_READ | VM_WRITE:
+ return __pgprot(_PAGE_PRESENT|_PAGE_USER|_PAGE_EXECUTE|_PAGE_READ|CACHEDEF);
+ case VM_SHARED:
+ return __pgprot(_PAGE_PRESENT|_PAGE_USER|CACHEDEF);
+ case VM_SHARED | VM_READ:
+ return __pgprot(_PAGE_PRESENT|_PAGE_USER|_PAGE_READ|CACHEDEF);
+ case VM_SHARED | VM_WRITE:
+ return __pgprot(_PAGE_PRESENT|_PAGE_USER|_PAGE_WRITE|CACHEDEF);
+ case VM_SHARED | VM_READ | VM_WRITE:
+ return __pgprot(_PAGE_PRESENT|_PAGE_USER|_PAGE_READ|_PAGE_WRITE|CACHEDEF);
+ case VM_SHARED | VM_EXEC:
+ return __pgprot(_PAGE_PRESENT|_PAGE_USER|_PAGE_EXECUTE|CACHEDEF);
+ case VM_SHARED | VM_EXEC | VM_READ:
+ return __pgprot(_PAGE_PRESENT|_PAGE_USER|_PAGE_EXECUTE|_PAGE_READ|CACHEDEF);
+ case VM_SHARED | VM_EXEC | VM_WRITE:
+ return __pgprot(_PAGE_PRESENT|_PAGE_USER|_PAGE_EXECUTE|_PAGE_WRITE|CACHEDEF);
+ case VM_SHARED | VM_EXEC | VM_READ | VM_WRITE:
+ return __pgprot(_PAGE_PRESENT|_PAGE_USER|_PAGE_READ|_PAGE_EXECUTE|_PAGE_WRITE|
+ CACHEDEF);
+ default:
+ BUILD_BUG();
+ }
+}
+EXPORT_SYMBOL(vm_get_page_prot);
--
2.25.1

2022-01-24 19:15:56

by David Miller

Subject: Re: [RFC V1 06/31] sparc/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT

From: Anshuman Khandual <[email protected]>
Date: Mon, 24 Jan 2022 18:26:43 +0530

> This defines and exports a platform specific custom vm_get_page_prot() via
> subscribing ARCH_HAS_VM_GET_PAGE_PROT. Subsequently all __SXXX and __PXXX
> macros can be dropped which are no longer needed.
>
> Cc: "David S. Miller" <[email protected]>
> Cc: Khalid Aziz <[email protected]>
> Cc: [email protected]
> Cc: [email protected]
> Signed-off-by: Anshuman Khandual <[email protected]>

Acked-by: David S. Miller <[email protected]>

2022-01-24 19:15:56

by Anshuman Khandual

Subject: [RFC V1 22/31] openrisc/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT

This defines and exports a platform-specific custom vm_get_page_prot() by
subscribing to ARCH_HAS_VM_GET_PAGE_PROT. All the __SXXX and __PXXX macros,
which are no longer needed, can subsequently be dropped.

Cc: Jonas Bonn <[email protected]>
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Anshuman Khandual <[email protected]>
---
arch/openrisc/Kconfig | 1 +
arch/openrisc/include/asm/pgtable.h | 18 -------------
arch/openrisc/mm/init.c | 41 +++++++++++++++++++++++++++++
3 files changed, 42 insertions(+), 18 deletions(-)

diff --git a/arch/openrisc/Kconfig b/arch/openrisc/Kconfig
index f724b3f1aeed..842a61426816 100644
--- a/arch/openrisc/Kconfig
+++ b/arch/openrisc/Kconfig
@@ -10,6 +10,7 @@ config OPENRISC
select ARCH_HAS_DMA_SET_UNCACHED
select ARCH_HAS_DMA_CLEAR_UNCACHED
select ARCH_HAS_SYNC_DMA_FOR_DEVICE
+ select ARCH_HAS_VM_GET_PAGE_PROT
select COMMON_CLK
select OF
select OF_EARLY_FLATTREE
diff --git a/arch/openrisc/include/asm/pgtable.h b/arch/openrisc/include/asm/pgtable.h
index cdd657f80bfa..fe686c4b7065 100644
--- a/arch/openrisc/include/asm/pgtable.h
+++ b/arch/openrisc/include/asm/pgtable.h
@@ -176,24 +176,6 @@ extern void paging_init(void);
__pgprot(_PAGE_ALL | _PAGE_SRE | _PAGE_SWE \
| _PAGE_SHARED | _PAGE_DIRTY | _PAGE_EXEC | _PAGE_CI)

-#define __P000 PAGE_NONE
-#define __P001 PAGE_READONLY_X
-#define __P010 PAGE_COPY
-#define __P011 PAGE_COPY_X
-#define __P100 PAGE_READONLY
-#define __P101 PAGE_READONLY_X
-#define __P110 PAGE_COPY
-#define __P111 PAGE_COPY_X
-
-#define __S000 PAGE_NONE
-#define __S001 PAGE_READONLY_X
-#define __S010 PAGE_SHARED
-#define __S011 PAGE_SHARED_X
-#define __S100 PAGE_READONLY
-#define __S101 PAGE_READONLY_X
-#define __S110 PAGE_SHARED
-#define __S111 PAGE_SHARED_X
-
/* zero page used for uninitialized stuff */
extern unsigned long empty_zero_page[2048];
#define ZERO_PAGE(vaddr) (virt_to_page(empty_zero_page))
diff --git a/arch/openrisc/mm/init.c b/arch/openrisc/mm/init.c
index 97305bde1b16..c9f5e7d6bb59 100644
--- a/arch/openrisc/mm/init.c
+++ b/arch/openrisc/mm/init.c
@@ -210,3 +210,44 @@ void __init mem_init(void)
mem_init_done = 1;
return;
}
+
+pgprot_t vm_get_page_prot(unsigned long vm_flags)
+{
+ switch (vm_flags & (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)) {
+ case VM_NONE:
+ return PAGE_NONE;
+ case VM_READ:
+ return PAGE_READONLY_X;
+ case VM_WRITE:
+ return PAGE_COPY;
+ case VM_READ | VM_WRITE:
+ return PAGE_COPY_X;
+ case VM_EXEC:
+ return PAGE_READONLY;
+ case VM_EXEC | VM_READ:
+ return PAGE_READONLY_X;
+ case VM_EXEC | VM_WRITE:
+ return PAGE_COPY;
+ case VM_EXEC | VM_READ | VM_WRITE:
+ return PAGE_COPY_X;
+ case VM_SHARED:
+ return PAGE_NONE;
+ case VM_SHARED | VM_READ:
+ return PAGE_READONLY_X;
+ case VM_SHARED | VM_WRITE:
+ return PAGE_SHARED;
+ case VM_SHARED | VM_READ | VM_WRITE:
+ return PAGE_SHARED_X;
+ case VM_SHARED | VM_EXEC:
+ return PAGE_READONLY;
+ case VM_SHARED | VM_EXEC | VM_READ:
+ return PAGE_READONLY_X;
+ case VM_SHARED | VM_EXEC | VM_WRITE:
+ return PAGE_SHARED;
+ case VM_SHARED | VM_EXEC | VM_READ | VM_WRITE:
+ return PAGE_SHARED_X;
+ default:
+ BUILD_BUG();
+ }
+}
+EXPORT_SYMBOL(vm_get_page_prot);
--
2.25.1

2022-01-24 19:15:56

by Anshuman Khandual

Subject: [RFC V1 14/31] s390/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT

This defines and exports a platform-specific custom vm_get_page_prot() by
subscribing to ARCH_HAS_VM_GET_PAGE_PROT. All the __SXXX and __PXXX macros,
which are no longer needed, can subsequently be dropped.

Cc: Heiko Carstens <[email protected]>
Cc: Vasily Gorbik <[email protected]>
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Anshuman Khandual <[email protected]>
---
arch/s390/Kconfig | 1 +
arch/s390/include/asm/pgtable.h | 17 --------------
arch/s390/mm/mmap.c | 41 +++++++++++++++++++++++++++++++++
3 files changed, 42 insertions(+), 17 deletions(-)

diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
index 9750f92380f5..83d1e0e3c762 100644
--- a/arch/s390/Kconfig
+++ b/arch/s390/Kconfig
@@ -78,6 +78,7 @@ config S390
select ARCH_HAS_SYSCALL_WRAPPER
select ARCH_HAS_UBSAN_SANITIZE_ALL
select ARCH_HAS_VDSO_DATA
+ select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_HAVE_NMI_SAFE_CMPXCHG
select ARCH_INLINE_READ_LOCK
select ARCH_INLINE_READ_LOCK_BH
diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
index 008a6c856fa4..3893ef64b439 100644
--- a/arch/s390/include/asm/pgtable.h
+++ b/arch/s390/include/asm/pgtable.h
@@ -422,23 +422,6 @@ static inline int is_module_addr(void *addr)
* implies read permission.
*/
/*xwr*/
-#define __P000 PAGE_NONE
-#define __P001 PAGE_RO
-#define __P010 PAGE_RO
-#define __P011 PAGE_RO
-#define __P100 PAGE_RX
-#define __P101 PAGE_RX
-#define __P110 PAGE_RX
-#define __P111 PAGE_RX
-
-#define __S000 PAGE_NONE
-#define __S001 PAGE_RO
-#define __S010 PAGE_RW
-#define __S011 PAGE_RW
-#define __S100 PAGE_RX
-#define __S101 PAGE_RX
-#define __S110 PAGE_RWX
-#define __S111 PAGE_RWX

/*
* Segment entry (large page) protection definitions.
diff --git a/arch/s390/mm/mmap.c b/arch/s390/mm/mmap.c
index e54f928503c5..5b15469eef7a 100644
--- a/arch/s390/mm/mmap.c
+++ b/arch/s390/mm/mmap.c
@@ -188,3 +188,44 @@ void arch_pick_mmap_layout(struct mm_struct *mm, struct rlimit *rlim_stack)
mm->get_unmapped_area = arch_get_unmapped_area_topdown;
}
}
+
+pgprot_t vm_get_page_prot(unsigned long vm_flags)
+{
+ switch (vm_flags & (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)) {
+ case VM_NONE:
+ return PAGE_NONE;
+ case VM_READ:
+ return PAGE_RO;
+ case VM_WRITE:
+ return PAGE_RO;
+ case VM_READ | VM_WRITE:
+ return PAGE_RO;
+ case VM_EXEC:
+ return PAGE_RX;
+ case VM_EXEC | VM_READ:
+ return PAGE_RX;
+ case VM_EXEC | VM_WRITE:
+ return PAGE_RX;
+ case VM_EXEC | VM_READ | VM_WRITE:
+ return PAGE_RX;
+ case VM_SHARED:
+ return PAGE_NONE;
+ case VM_SHARED | VM_READ:
+ return PAGE_RO;
+ case VM_SHARED | VM_WRITE:
+ return PAGE_RW;
+ case VM_SHARED | VM_READ | VM_WRITE:
+ return PAGE_RW;
+ case VM_SHARED | VM_EXEC:
+ return PAGE_RX;
+ case VM_SHARED | VM_EXEC | VM_READ:
+ return PAGE_RX;
+ case VM_SHARED | VM_EXEC | VM_WRITE:
+ return PAGE_RWX;
+ case VM_SHARED | VM_EXEC | VM_READ | VM_WRITE:
+ return PAGE_RWX;
+ default:
+ BUILD_BUG();
+ }
+}
+EXPORT_SYMBOL(vm_get_page_prot);
--
2.25.1

2022-01-24 19:15:56

by Anshuman Khandual

Subject: [RFC V1 24/31] microblaze/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT

This defines and exports a platform-specific custom vm_get_page_prot() by
subscribing to ARCH_HAS_VM_GET_PAGE_PROT. All the __SXXX and __PXXX macros,
which are no longer needed, can subsequently be dropped.

Cc: Michal Simek <[email protected]>
Cc: [email protected]
Signed-off-by: Anshuman Khandual <[email protected]>
---
arch/microblaze/Kconfig | 1 +
arch/microblaze/include/asm/pgtable.h | 17 -----------
arch/microblaze/mm/init.c | 41 +++++++++++++++++++++++++++
3 files changed, 42 insertions(+), 17 deletions(-)

diff --git a/arch/microblaze/Kconfig b/arch/microblaze/Kconfig
index 59798e43cdb0..f2c25ba8621e 100644
--- a/arch/microblaze/Kconfig
+++ b/arch/microblaze/Kconfig
@@ -7,6 +7,7 @@ config MICROBLAZE
select ARCH_HAS_GCOV_PROFILE_ALL
select ARCH_HAS_SYNC_DMA_FOR_CPU
select ARCH_HAS_SYNC_DMA_FOR_DEVICE
+ select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_MIGHT_HAVE_PC_PARPORT
select ARCH_WANT_IPC_PARSE_VERSION
select BUILDTIME_TABLE_SORT
diff --git a/arch/microblaze/include/asm/pgtable.h b/arch/microblaze/include/asm/pgtable.h
index c136a01e467e..6df373077ff2 100644
--- a/arch/microblaze/include/asm/pgtable.h
+++ b/arch/microblaze/include/asm/pgtable.h
@@ -204,23 +204,6 @@ extern pte_t *va_to_pte(unsigned long address);
* We consider execute permission the same as read.
* Also, write permissions imply read permissions.
*/
-#define __P000 PAGE_NONE
-#define __P001 PAGE_READONLY_X
-#define __P010 PAGE_COPY
-#define __P011 PAGE_COPY_X
-#define __P100 PAGE_READONLY
-#define __P101 PAGE_READONLY_X
-#define __P110 PAGE_COPY
-#define __P111 PAGE_COPY_X
-
-#define __S000 PAGE_NONE
-#define __S001 PAGE_READONLY_X
-#define __S010 PAGE_SHARED
-#define __S011 PAGE_SHARED_X
-#define __S100 PAGE_READONLY
-#define __S101 PAGE_READONLY_X
-#define __S110 PAGE_SHARED
-#define __S111 PAGE_SHARED_X

#ifndef __ASSEMBLY__
/*
diff --git a/arch/microblaze/mm/init.c b/arch/microblaze/mm/init.c
index 952f35b335b2..40bc28f86144 100644
--- a/arch/microblaze/mm/init.c
+++ b/arch/microblaze/mm/init.c
@@ -280,3 +280,44 @@ void * __ref zalloc_maybe_bootmem(size_t size, gfp_t mask)

return p;
}
+
+pgprot_t vm_get_page_prot(unsigned long vm_flags)
+{
+ switch (vm_flags & (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)) {
+ case VM_NONE:
+ return PAGE_NONE;
+ case VM_READ:
+ return PAGE_READONLY_X;
+ case VM_WRITE:
+ return PAGE_COPY;
+ case VM_READ | VM_WRITE:
+ return PAGE_COPY_X;
+ case VM_EXEC:
+ return PAGE_READONLY;
+ case VM_EXEC | VM_READ:
+ return PAGE_READONLY_X;
+ case VM_EXEC | VM_WRITE:
+ return PAGE_COPY;
+ case VM_EXEC | VM_READ | VM_WRITE:
+ return PAGE_COPY_X;
+ case VM_SHARED:
+ return PAGE_NONE;
+ case VM_SHARED | VM_READ:
+ return PAGE_READONLY_X;
+ case VM_SHARED | VM_WRITE:
+ return PAGE_SHARED;
+ case VM_SHARED | VM_READ | VM_WRITE:
+ return PAGE_SHARED_X;
+ case VM_SHARED | VM_EXEC:
+ return PAGE_READONLY;
+ case VM_SHARED | VM_EXEC | VM_READ:
+ return PAGE_READONLY_X;
+ case VM_SHARED | VM_EXEC | VM_WRITE:
+ return PAGE_SHARED;
+ case VM_SHARED | VM_EXEC | VM_READ | VM_WRITE:
+ return PAGE_SHARED_X;
+ default:
+ BUILD_BUG();
+ }
+}
+EXPORT_SYMBOL(vm_get_page_prot);
--
2.25.1

2022-01-24 19:15:56

by Anshuman Khandual

Subject: [RFC V1 28/31] ia64/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT

This defines and exports a platform-specific custom vm_get_page_prot() by
subscribing to ARCH_HAS_VM_GET_PAGE_PROT. All the __SXXX and __PXXX macros,
which are no longer needed, can subsequently be dropped.

Cc: [email protected]
Cc: [email protected]
Signed-off-by: Anshuman Khandual <[email protected]>
---
arch/ia64/Kconfig | 1 +
arch/ia64/include/asm/pgtable.h | 17 -------------
arch/ia64/mm/init.c | 43 ++++++++++++++++++++++++++++++++-
3 files changed, 43 insertions(+), 18 deletions(-)

diff --git a/arch/ia64/Kconfig b/arch/ia64/Kconfig
index 703952819e10..516c426e7606 100644
--- a/arch/ia64/Kconfig
+++ b/arch/ia64/Kconfig
@@ -11,6 +11,7 @@ config IA64
select ARCH_HAS_DMA_MARK_CLEAN
select ARCH_HAS_STRNCPY_FROM_USER
select ARCH_HAS_STRNLEN_USER
+ select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_MIGHT_HAVE_PC_PARPORT
select ARCH_MIGHT_HAVE_PC_SERIO
select ACPI
diff --git a/arch/ia64/include/asm/pgtable.h b/arch/ia64/include/asm/pgtable.h
index 9584b2c5f394..8154c78bba56 100644
--- a/arch/ia64/include/asm/pgtable.h
+++ b/arch/ia64/include/asm/pgtable.h
@@ -161,23 +161,6 @@
* attempts to write to the page.
*/
/* xwr */
-#define __P000 PAGE_NONE
-#define __P001 PAGE_READONLY
-#define __P010 PAGE_READONLY /* write to priv pg -> copy & make writable */
-#define __P011 PAGE_READONLY /* ditto */
-#define __P100 __pgprot(__ACCESS_BITS | _PAGE_PL_3 | _PAGE_AR_X_RX)
-#define __P101 __pgprot(__ACCESS_BITS | _PAGE_PL_3 | _PAGE_AR_RX)
-#define __P110 PAGE_COPY_EXEC
-#define __P111 PAGE_COPY_EXEC
-
-#define __S000 PAGE_NONE
-#define __S001 PAGE_READONLY
-#define __S010 PAGE_SHARED /* we don't have (and don't need) write-only */
-#define __S011 PAGE_SHARED
-#define __S100 __pgprot(__ACCESS_BITS | _PAGE_PL_3 | _PAGE_AR_X_RX)
-#define __S101 __pgprot(__ACCESS_BITS | _PAGE_PL_3 | _PAGE_AR_RX)
-#define __S110 __pgprot(__ACCESS_BITS | _PAGE_PL_3 | _PAGE_AR_RWX)
-#define __S111 __pgprot(__ACCESS_BITS | _PAGE_PL_3 | _PAGE_AR_RWX)

#define pgd_ERROR(e) printk("%s:%d: bad pgd %016lx.\n", __FILE__, __LINE__, pgd_val(e))
#if CONFIG_PGTABLE_LEVELS == 4
diff --git a/arch/ia64/mm/init.c b/arch/ia64/mm/init.c
index 5d165607bf35..124bdf6fbf7b 100644
--- a/arch/ia64/mm/init.c
+++ b/arch/ia64/mm/init.c
@@ -273,7 +273,7 @@ static int __init gate_vma_init(void)
gate_vma.vm_start = FIXADDR_USER_START;
gate_vma.vm_end = FIXADDR_USER_END;
gate_vma.vm_flags = VM_READ | VM_MAYREAD | VM_EXEC | VM_MAYEXEC;
- gate_vma.vm_page_prot = __P101;
+ gate_vma.vm_page_prot = __pgprot(__ACCESS_BITS | _PAGE_PL_3 | _PAGE_AR_RX);

return 0;
}
@@ -492,3 +492,44 @@ void arch_remove_memory(u64 start, u64 size, struct vmem_altmap *altmap)
__remove_pages(start_pfn, nr_pages, altmap);
}
#endif
+
+pgprot_t vm_get_page_prot(unsigned long vm_flags)
+{
+ switch (vm_flags & (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)) {
+ case VM_NONE:
+ return PAGE_NONE;
+ case VM_READ:
+ return PAGE_READONLY;
+ case VM_WRITE:
+ return PAGE_READONLY;
+ case VM_READ | VM_WRITE:
+ return PAGE_READONLY;
+ case VM_EXEC:
+ return __pgprot(__ACCESS_BITS | _PAGE_PL_3 | _PAGE_AR_X_RX);
+ case VM_EXEC | VM_READ:
+ return __pgprot(__ACCESS_BITS | _PAGE_PL_3 | _PAGE_AR_RX);
+ case VM_EXEC | VM_WRITE:
+ return PAGE_COPY_EXEC;
+ case VM_EXEC | VM_READ | VM_WRITE:
+ return PAGE_COPY_EXEC;
+ case VM_SHARED:
+ return PAGE_NONE;
+ case VM_SHARED | VM_READ:
+ return PAGE_READONLY;
+ case VM_SHARED | VM_WRITE:
+ return PAGE_SHARED;
+ case VM_SHARED | VM_READ | VM_WRITE:
+ return PAGE_SHARED;
+ case VM_SHARED | VM_EXEC:
+ return __pgprot(__ACCESS_BITS | _PAGE_PL_3 | _PAGE_AR_X_RX);
+ case VM_SHARED | VM_EXEC | VM_READ:
+ return __pgprot(__ACCESS_BITS | _PAGE_PL_3 | _PAGE_AR_RX);
+ case VM_SHARED | VM_EXEC | VM_WRITE:
+ return __pgprot(__ACCESS_BITS | _PAGE_PL_3 | _PAGE_AR_RWX);
+ case VM_SHARED | VM_EXEC | VM_READ | VM_WRITE:
+ return __pgprot(__ACCESS_BITS | _PAGE_PL_3 | _PAGE_AR_RWX);
+ default:
+ BUILD_BUG();
+ }
+}
+EXPORT_SYMBOL(vm_get_page_prot);
--
2.25.1

2022-01-24 19:15:56

by Anshuman Khandual

[permalink] [raw]
Subject: [RFC V1 31/31] mm/mmap: Define macros for vm_flags access permission combinations

These macros will be useful in cleaning up all those switch statements
in vm_get_page_prot() across all platforms.

Cc: Andrew Morton <[email protected]>
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Anshuman Khandual <[email protected]>
---
include/linux/mm.h | 39 +++++++++++++++++++++++++++++++++++++++
1 file changed, 39 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 6c0844b99b3e..b3691eeec500 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2828,6 +2828,45 @@ static inline bool range_in_vma(struct vm_area_struct *vma,
return (vma && vma->vm_start <= start && end <= vma->vm_end);
}

+/*
+ * Access permission related vm_flags combination is used to map into
+ * platform defined page protection flags. This enumeration helps in
+ * abstracting out possible indices after vm_flags is probed for all
+ * access permission i.e (VM_SHARED | VM_EXEC | VM_READ | VM_WRITE).
+ *
+ * VM_EXEC ---------------------|
+ * |
+ * VM_WRITE ---------------| |
+ * | |
+ * VM_READ -----------| | |
+ * | | |
+ * VM_SHARED ----| | | |
+ * | | | |
+ * v v v v
+ * VMFLAGS_IDX_(S|X)(R|X)(W|X)(E|X)
+ *
+ * X - Indicates that the access flag is absent
+ */
+enum vmflags_idx {
+ VMFLAGS_IDX_XXXX, /* (VM_NONE) */
+ VMFLAGS_IDX_XRXX, /* (VM_READ) */
+ VMFLAGS_IDX_XXWX, /* (VM_WRITE) */
+ VMFLAGS_IDX_XRWX, /* (VM_READ | VM_WRITE) */
+ VMFLAGS_IDX_XXXE, /* (VM_EXEC) */
+ VMFLAGS_IDX_XRXE, /* (VM_EXEC | VM_READ) */
+ VMFLAGS_IDX_XXWE, /* (VM_EXEC | VM_WRITE) */
+ VMFLAGS_IDX_XRWE, /* (VM_EXEC | VM_READ | VM_WRITE) */
+ VMFLAGS_IDX_SXXX, /* (VM_SHARED | VM_NONE) */
+ VMFLAGS_IDX_SRXX, /* (VM_SHARED | VM_READ) */
+ VMFLAGS_IDX_SXWX, /* (VM_SHARED | VM_WRITE) */
+ VMFLAGS_IDX_SRWX, /* (VM_SHARED | VM_READ | VM_WRITE) */
+ VMFLAGS_IDX_SXXE, /* (VM_SHARED | VM_EXEC) */
+ VMFLAGS_IDX_SRXE, /* (VM_SHARED | VM_EXEC | VM_READ) */
+ VMFLAGS_IDX_SXWE, /* (VM_SHARED | VM_EXEC | VM_WRITE) */
+ VMFLAGS_IDX_SRWE, /* (VM_SHARED | VM_EXEC | VM_READ | VM_WRITE) */
+ VMFLAGS_IDX_MAX
+};
+
#ifdef CONFIG_MMU
pgprot_t vm_get_page_prot(unsigned long vm_flags);
void vma_set_page_prot(struct vm_area_struct *vma);
--
2.25.1

2022-01-24 19:15:56

by Anshuman Khandual

Subject: [RFC V1 30/31] mm/mmap: Drop ARCH_HAS_VM_GET_PAGE_PROT

All platforms now define their own vm_get_page_prot(), and there is no
generic version left to fall back on. Hence drop ARCH_HAS_VM_GET_PAGE_PROT.

Cc: Andrew Morton <[email protected]>
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Anshuman Khandual <[email protected]>
---
arch/alpha/Kconfig | 1 -
arch/arc/Kconfig | 1 -
arch/arm/Kconfig | 1 -
arch/arm64/Kconfig | 1 -
arch/csky/Kconfig | 1 -
arch/hexagon/Kconfig | 1 -
arch/ia64/Kconfig | 1 -
arch/m68k/Kconfig | 1 -
arch/microblaze/Kconfig | 1 -
arch/mips/Kconfig | 1 -
arch/nds32/Kconfig | 1 -
arch/nios2/Kconfig | 1 -
arch/openrisc/Kconfig | 1 -
arch/parisc/Kconfig | 1 -
arch/powerpc/Kconfig | 1 -
arch/riscv/Kconfig | 1 -
arch/s390/Kconfig | 1 -
arch/sh/Kconfig | 1 -
arch/sparc/Kconfig | 2 --
arch/um/Kconfig | 1 -
arch/x86/Kconfig | 1 -
arch/xtensa/Kconfig | 1 -
mm/Kconfig | 3 ---
mm/mmap.c | 3 ---
24 files changed, 29 deletions(-)

diff --git a/arch/alpha/Kconfig b/arch/alpha/Kconfig
index 73e82fe5c770..4e87783c90ad 100644
--- a/arch/alpha/Kconfig
+++ b/arch/alpha/Kconfig
@@ -2,7 +2,6 @@
config ALPHA
bool
default y
- select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_32BIT_USTAT_F_TINODE
select ARCH_MIGHT_HAVE_PC_PARPORT
select ARCH_MIGHT_HAVE_PC_SERIO
diff --git a/arch/arc/Kconfig b/arch/arc/Kconfig
index 78ff0644b343..3c2a4753d09b 100644
--- a/arch/arc/Kconfig
+++ b/arch/arc/Kconfig
@@ -13,7 +13,6 @@ config ARC
select ARCH_HAS_SETUP_DMA_OPS
select ARCH_HAS_SYNC_DMA_FOR_CPU
select ARCH_HAS_SYNC_DMA_FOR_DEVICE
- select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_SUPPORTS_ATOMIC_RMW if ARC_HAS_LLSC
select ARCH_32BIT_OFF_T
select BUILDTIME_TABLE_SORT
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index c12362d20c44..fabe39169b12 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -23,7 +23,6 @@ config ARM
select ARCH_HAS_SYNC_DMA_FOR_CPU if SWIOTLB || !MMU
select ARCH_HAS_TEARDOWN_DMA_OPS if MMU
select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
- select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_HAVE_CUSTOM_GPIO_H
select ARCH_HAVE_NMI_SAFE_CMPXCHG if CPU_V7 || CPU_V7M || CPU_V6K
select ARCH_HAS_GCOV_PROFILE_ALL
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index fce2d0fc4ecc..d1a21e5d6f52 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -43,7 +43,6 @@ config ARM64
select ARCH_HAS_SYSCALL_WRAPPER
select ARCH_HAS_TEARDOWN_DMA_OPS if IOMMU_SUPPORT
select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
- select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_HAS_ZONE_DMA_SET if EXPERT
select ARCH_HAVE_ELF_PROT
select ARCH_HAVE_NMI_SAFE_CMPXCHG
diff --git a/arch/csky/Kconfig b/arch/csky/Kconfig
index 209dac5686dd..132f43f12dd8 100644
--- a/arch/csky/Kconfig
+++ b/arch/csky/Kconfig
@@ -6,7 +6,6 @@ config CSKY
select ARCH_HAS_GCOV_PROFILE_ALL
select ARCH_HAS_SYNC_DMA_FOR_CPU
select ARCH_HAS_SYNC_DMA_FOR_DEVICE
- select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_USE_BUILTIN_BSWAP
select ARCH_USE_QUEUED_RWLOCKS
select ARCH_WANT_FRAME_POINTERS if !CPU_CK610 && $(cc-option,-mbacktrace)
diff --git a/arch/hexagon/Kconfig b/arch/hexagon/Kconfig
index cdc5df32a1e3..15dd8f38b698 100644
--- a/arch/hexagon/Kconfig
+++ b/arch/hexagon/Kconfig
@@ -6,7 +6,6 @@ config HEXAGON
def_bool y
select ARCH_32BIT_OFF_T
select ARCH_HAS_SYNC_DMA_FOR_DEVICE
- select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_NO_PREEMPT
select DMA_GLOBAL_POOL
# Other pending projects/to-do items.
diff --git a/arch/ia64/Kconfig b/arch/ia64/Kconfig
index 516c426e7606..703952819e10 100644
--- a/arch/ia64/Kconfig
+++ b/arch/ia64/Kconfig
@@ -11,7 +11,6 @@ config IA64
select ARCH_HAS_DMA_MARK_CLEAN
select ARCH_HAS_STRNCPY_FROM_USER
select ARCH_HAS_STRNLEN_USER
- select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_MIGHT_HAVE_PC_PARPORT
select ARCH_MIGHT_HAVE_PC_SERIO
select ACPI
diff --git a/arch/m68k/Kconfig b/arch/m68k/Kconfig
index 114e65164692..936e1803c7c7 100644
--- a/arch/m68k/Kconfig
+++ b/arch/m68k/Kconfig
@@ -11,7 +11,6 @@ config M68K
select ARCH_NO_PREEMPT if !COLDFIRE
select ARCH_USE_MEMTEST if MMU_MOTOROLA
select ARCH_WANT_IPC_PARSE_VERSION
- select ARCH_HAS_VM_GET_PAGE_PROT
select BINFMT_FLAT_ARGVP_ENVP_ON_STACK
select DMA_DIRECT_REMAP if HAS_DMA && MMU && !COLDFIRE
select GENERIC_ATOMIC64
diff --git a/arch/microblaze/Kconfig b/arch/microblaze/Kconfig
index f2c25ba8621e..59798e43cdb0 100644
--- a/arch/microblaze/Kconfig
+++ b/arch/microblaze/Kconfig
@@ -7,7 +7,6 @@ config MICROBLAZE
select ARCH_HAS_GCOV_PROFILE_ALL
select ARCH_HAS_SYNC_DMA_FOR_CPU
select ARCH_HAS_SYNC_DMA_FOR_DEVICE
- select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_MIGHT_HAVE_PC_PARPORT
select ARCH_WANT_IPC_PARSE_VERSION
select BUILDTIME_TABLE_SORT
diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
index fcbfc52a1567..058446f01487 100644
--- a/arch/mips/Kconfig
+++ b/arch/mips/Kconfig
@@ -13,7 +13,6 @@ config MIPS
select ARCH_HAS_STRNLEN_USER
select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
select ARCH_HAS_UBSAN_SANITIZE_ALL
- select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_HAS_GCOV_PROFILE_ALL
select ARCH_KEEP_MEMBLOCK
select ARCH_SUPPORTS_UPROBES
diff --git a/arch/nds32/Kconfig b/arch/nds32/Kconfig
index 576e05479925..4d1421b18734 100644
--- a/arch/nds32/Kconfig
+++ b/arch/nds32/Kconfig
@@ -10,7 +10,6 @@ config NDS32
select ARCH_HAS_DMA_PREP_COHERENT
select ARCH_HAS_SYNC_DMA_FOR_CPU
select ARCH_HAS_SYNC_DMA_FOR_DEVICE
- select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_WANT_FRAME_POINTERS if FTRACE
select CLKSRC_MMIO
select CLONE_BACKWARDS
diff --git a/arch/nios2/Kconfig b/arch/nios2/Kconfig
index 85a58a357a3b..33fd06f5fa41 100644
--- a/arch/nios2/Kconfig
+++ b/arch/nios2/Kconfig
@@ -6,7 +6,6 @@ config NIOS2
select ARCH_HAS_SYNC_DMA_FOR_CPU
select ARCH_HAS_SYNC_DMA_FOR_DEVICE
select ARCH_HAS_DMA_SET_UNCACHED
- select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_NO_SWAP
select COMMON_CLK
select TIMER_OF
diff --git a/arch/openrisc/Kconfig b/arch/openrisc/Kconfig
index 842a61426816..f724b3f1aeed 100644
--- a/arch/openrisc/Kconfig
+++ b/arch/openrisc/Kconfig
@@ -10,7 +10,6 @@ config OPENRISC
select ARCH_HAS_DMA_SET_UNCACHED
select ARCH_HAS_DMA_CLEAR_UNCACHED
select ARCH_HAS_SYNC_DMA_FOR_DEVICE
- select ARCH_HAS_VM_GET_PAGE_PROT
select COMMON_CLK
select OF
select OF_EARLY_FLATTREE
diff --git a/arch/parisc/Kconfig b/arch/parisc/Kconfig
index de512f120b50..43c1c880def6 100644
--- a/arch/parisc/Kconfig
+++ b/arch/parisc/Kconfig
@@ -10,7 +10,6 @@ config PARISC
select ARCH_HAS_ELF_RANDOMIZE
select ARCH_HAS_STRICT_KERNEL_RWX
select ARCH_HAS_UBSAN_SANITIZE_ALL
- select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_NO_SG_CHAIN
select ARCH_SUPPORTS_HUGETLBFS if PA20
select ARCH_SUPPORTS_MEMORY_FAILURE
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index ddb4a3687c05..b779603978e1 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -135,7 +135,6 @@ config PPC
select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
select ARCH_HAS_UACCESS_FLUSHCACHE
select ARCH_HAS_UBSAN_SANITIZE_ALL
- select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_HAVE_NMI_SAFE_CMPXCHG
select ARCH_KEEP_MEMBLOCK
select ARCH_MIGHT_HAVE_PC_PARPORT
diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 9391742f9286..5adcbd9b5e88 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -31,7 +31,6 @@ config RISCV
select ARCH_HAS_STRICT_MODULE_RWX if MMU && !XIP_KERNEL
select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
select ARCH_HAS_UBSAN_SANITIZE_ALL
- select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_OPTIONAL_KERNEL_RWX if ARCH_HAS_STRICT_KERNEL_RWX
select ARCH_OPTIONAL_KERNEL_RWX_DEFAULT
select ARCH_STACKWALK
diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
index 83d1e0e3c762..9750f92380f5 100644
--- a/arch/s390/Kconfig
+++ b/arch/s390/Kconfig
@@ -78,7 +78,6 @@ config S390
select ARCH_HAS_SYSCALL_WRAPPER
select ARCH_HAS_UBSAN_SANITIZE_ALL
select ARCH_HAS_VDSO_DATA
- select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_HAVE_NMI_SAFE_CMPXCHG
select ARCH_INLINE_READ_LOCK
select ARCH_INLINE_READ_LOCK_BH
diff --git a/arch/sh/Kconfig b/arch/sh/Kconfig
index f3fcd1c5e002..2474a04ceac4 100644
--- a/arch/sh/Kconfig
+++ b/arch/sh/Kconfig
@@ -11,7 +11,6 @@ config SUPERH
select ARCH_HAS_GCOV_PROFILE_ALL
select ARCH_HAS_PTE_SPECIAL
select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
- select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_HIBERNATION_POSSIBLE if MMU
select ARCH_MIGHT_HAVE_PC_PARPORT
select ARCH_WANT_IPC_PARSE_VERSION
diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig
index ff29156f2380..1cab1b284f1a 100644
--- a/arch/sparc/Kconfig
+++ b/arch/sparc/Kconfig
@@ -59,7 +59,6 @@ config SPARC32
select HAVE_UID16
select OLD_SIGACTION
select ZONE_DMA
- select ARCH_HAS_VM_GET_PAGE_PROT

config SPARC64
def_bool 64BIT
@@ -85,7 +84,6 @@ config SPARC64
select PERF_USE_VMALLOC
select ARCH_HAVE_NMI_SAFE_CMPXCHG
select HAVE_C_RECORDMCOUNT
- select ARCH_HAS_VM_GET_PAGE_PROT
select HAVE_ARCH_AUDITSYSCALL
select ARCH_SUPPORTS_ATOMIC_RMW
select ARCH_SUPPORTS_DEBUG_PAGEALLOC
diff --git a/arch/um/Kconfig b/arch/um/Kconfig
index 5836296868a8..4d398b80aea8 100644
--- a/arch/um/Kconfig
+++ b/arch/um/Kconfig
@@ -9,7 +9,6 @@ config UML
select ARCH_HAS_KCOV
select ARCH_HAS_STRNCPY_FROM_USER
select ARCH_HAS_STRNLEN_USER
- select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_NO_PREEMPT
select HAVE_ARCH_AUDITSYSCALL
select HAVE_ARCH_SECCOMP_FILTER
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 00ab039044ba..bd342ae980ef 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -93,7 +93,6 @@ config X86
select ARCH_HAS_SYNC_CORE_BEFORE_USERMODE
select ARCH_HAS_SYSCALL_WRAPPER
select ARCH_HAS_UBSAN_SANITIZE_ALL
- select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_HAS_DEBUG_WX
select ARCH_HAS_ZONE_DMA_SET if EXPERT
select ARCH_HAVE_NMI_SAFE_CMPXCHG
diff --git a/arch/xtensa/Kconfig b/arch/xtensa/Kconfig
index 1608f7517546..8ac599aa6d99 100644
--- a/arch/xtensa/Kconfig
+++ b/arch/xtensa/Kconfig
@@ -9,7 +9,6 @@ config XTENSA
select ARCH_HAS_DMA_SET_UNCACHED if MMU
select ARCH_HAS_STRNCPY_FROM_USER if !KASAN
select ARCH_HAS_STRNLEN_USER
- select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_USE_MEMTEST
select ARCH_USE_QUEUED_RWLOCKS
select ARCH_USE_QUEUED_SPINLOCKS
diff --git a/mm/Kconfig b/mm/Kconfig
index 212fb6e1ddaa..3326ee3903f3 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -744,9 +744,6 @@ config IDLE_PAGE_TRACKING
config ARCH_HAS_CACHE_LINE_SIZE
bool

-config ARCH_HAS_VM_GET_PAGE_PROT
- bool
-
config ARCH_HAS_PTE_DEVMAP
bool

diff --git a/mm/mmap.c b/mm/mmap.c
index 368bc8aee45b..8c1396c3f0d6 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -81,7 +81,6 @@ static void unmap_region(struct mm_struct *mm,
struct vm_area_struct *vma, struct vm_area_struct *prev,
unsigned long start, unsigned long end);

-#ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
/* description of effects of mapping type and prot in current implementation.
* this is due to the limited x86 page protection hardware. The expected
* behavior is in parens:
@@ -102,8 +101,6 @@ static void unmap_region(struct mm_struct *mm,
* w: (no) no
* x: (yes) yes
*/
-#endif /* CONFIG_ARCH_HAS_VM_GET_PAGE_PROT */
-
static pgprot_t vm_pgprot_modify(pgprot_t oldprot, unsigned long vm_flags)
{
return pgprot_modify(oldprot, vm_get_page_prot(vm_flags));
--
2.25.1

2022-01-24 19:16:13

by Anshuman Khandual

Subject: [RFC V1 25/31] nios2/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT

This defines and exports a platform-specific custom vm_get_page_prot() via
subscribing to ARCH_HAS_VM_GET_PAGE_PROT. Subsequently, all the __SXXX and
__PXXX macros, which are no longer needed, can be dropped.

Cc: Dinh Nguyen <[email protected]>
Cc: [email protected]
Signed-off-by: Anshuman Khandual <[email protected]>
---
arch/nios2/Kconfig | 1 +
arch/nios2/include/asm/pgtable.h | 16 -------------
arch/nios2/mm/init.c | 41 ++++++++++++++++++++++++++++++++
3 files changed, 42 insertions(+), 16 deletions(-)

diff --git a/arch/nios2/Kconfig b/arch/nios2/Kconfig
index 33fd06f5fa41..85a58a357a3b 100644
--- a/arch/nios2/Kconfig
+++ b/arch/nios2/Kconfig
@@ -6,6 +6,7 @@ config NIOS2
select ARCH_HAS_SYNC_DMA_FOR_CPU
select ARCH_HAS_SYNC_DMA_FOR_DEVICE
select ARCH_HAS_DMA_SET_UNCACHED
+ select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_NO_SWAP
select COMMON_CLK
select TIMER_OF
diff --git a/arch/nios2/include/asm/pgtable.h b/arch/nios2/include/asm/pgtable.h
index 4a995fa628ee..2678dad58a63 100644
--- a/arch/nios2/include/asm/pgtable.h
+++ b/arch/nios2/include/asm/pgtable.h
@@ -40,24 +40,8 @@ struct mm_struct;
*/

/* Remove W bit on private pages for COW support */
-#define __P000 MKP(0, 0, 0)
-#define __P001 MKP(0, 0, 1)
-#define __P010 MKP(0, 0, 0) /* COW */
-#define __P011 MKP(0, 0, 1) /* COW */
-#define __P100 MKP(1, 0, 0)
-#define __P101 MKP(1, 0, 1)
-#define __P110 MKP(1, 0, 0) /* COW */
-#define __P111 MKP(1, 0, 1) /* COW */

/* Shared pages can have exact HW mapping */
-#define __S000 MKP(0, 0, 0)
-#define __S001 MKP(0, 0, 1)
-#define __S010 MKP(0, 1, 0)
-#define __S011 MKP(0, 1, 1)
-#define __S100 MKP(1, 0, 0)
-#define __S101 MKP(1, 0, 1)
-#define __S110 MKP(1, 1, 0)
-#define __S111 MKP(1, 1, 1)

/* Used all over the kernel */
#define PAGE_KERNEL __pgprot(_PAGE_PRESENT | _PAGE_CACHED | _PAGE_READ | \
diff --git a/arch/nios2/mm/init.c b/arch/nios2/mm/init.c
index 613fcaa5988a..4f8251e62f31 100644
--- a/arch/nios2/mm/init.c
+++ b/arch/nios2/mm/init.c
@@ -124,3 +124,44 @@ const char *arch_vma_name(struct vm_area_struct *vma)
{
return (vma->vm_start == KUSER_BASE) ? "[kuser]" : NULL;
}
+
+pgprot_t vm_get_page_prot(unsigned long vm_flags)
+{
+ switch (vm_flags & (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)) {
+ case VM_NONE:
+ return MKP(0, 0, 0);
+ case VM_READ:
+ return MKP(0, 0, 1);
+ case VM_WRITE:
+ return MKP(0, 0, 0);
+ case VM_READ | VM_WRITE:
+ return MKP(0, 0, 1);
+ case VM_EXEC:
+ return MKP(1, 0, 0);
+ case VM_EXEC | VM_READ:
+ return MKP(1, 0, 1);
+ case VM_EXEC | VM_WRITE:
+ return MKP(1, 0, 0);
+ case VM_EXEC | VM_READ | VM_WRITE:
+ return MKP(1, 0, 1);
+ case VM_SHARED:
+ return MKP(0, 0, 0);
+ case VM_SHARED | VM_READ:
+ return MKP(0, 0, 1);
+ case VM_SHARED | VM_WRITE:
+ return MKP(0, 1, 0);
+ case VM_SHARED | VM_READ | VM_WRITE:
+ return MKP(0, 1, 1);
+ case VM_SHARED | VM_EXEC:
+ return MKP(1, 0, 0);
+ case VM_SHARED | VM_EXEC | VM_READ:
+ return MKP(1, 0, 1);
+ case VM_SHARED | VM_EXEC | VM_WRITE:
+ return MKP(1, 1, 0);
+ case VM_SHARED | VM_EXEC | VM_READ | VM_WRITE:
+ return MKP(1, 1, 1);
+ default:
+ BUILD_BUG();
+ }
+}
+EXPORT_SYMBOL(vm_get_page_prot);
--
2.25.1

2022-01-24 19:17:56

by Anshuman Khandual

Subject: [RFC V1 27/31] nds32/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT

This defines and exports a platform-specific custom vm_get_page_prot() via
subscribing to ARCH_HAS_VM_GET_PAGE_PROT. Subsequently, all the __SXXX and
__PXXX macros, which are no longer needed, can be dropped.

Cc: Nick Hu <[email protected]>
Cc: [email protected]
Signed-off-by: Anshuman Khandual <[email protected]>
---
arch/nds32/Kconfig | 1 +
arch/nds32/include/asm/pgtable.h | 17 -------------
arch/nds32/mm/mmap.c | 41 ++++++++++++++++++++++++++++++++
3 files changed, 42 insertions(+), 17 deletions(-)

diff --git a/arch/nds32/Kconfig b/arch/nds32/Kconfig
index 4d1421b18734..576e05479925 100644
--- a/arch/nds32/Kconfig
+++ b/arch/nds32/Kconfig
@@ -10,6 +10,7 @@ config NDS32
select ARCH_HAS_DMA_PREP_COHERENT
select ARCH_HAS_SYNC_DMA_FOR_CPU
select ARCH_HAS_SYNC_DMA_FOR_DEVICE
+ select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_WANT_FRAME_POINTERS if FTRACE
select CLKSRC_MMIO
select CLONE_BACKWARDS
diff --git a/arch/nds32/include/asm/pgtable.h b/arch/nds32/include/asm/pgtable.h
index 419f984eef70..79f64ed734cb 100644
--- a/arch/nds32/include/asm/pgtable.h
+++ b/arch/nds32/include/asm/pgtable.h
@@ -152,23 +152,6 @@ extern void __pgd_error(const char *file, int line, unsigned long val);
#endif /* __ASSEMBLY__ */

/* xwr */
-#define __P000 (PAGE_NONE | _PAGE_CACHE_SHRD)
-#define __P001 (PAGE_READ | _PAGE_CACHE_SHRD)
-#define __P010 (PAGE_COPY | _PAGE_CACHE_SHRD)
-#define __P011 (PAGE_COPY | _PAGE_CACHE_SHRD)
-#define __P100 (PAGE_EXEC | _PAGE_CACHE_SHRD)
-#define __P101 (PAGE_READ | _PAGE_E | _PAGE_CACHE_SHRD)
-#define __P110 (PAGE_COPY | _PAGE_E | _PAGE_CACHE_SHRD)
-#define __P111 (PAGE_COPY | _PAGE_E | _PAGE_CACHE_SHRD)
-
-#define __S000 (PAGE_NONE | _PAGE_CACHE_SHRD)
-#define __S001 (PAGE_READ | _PAGE_CACHE_SHRD)
-#define __S010 (PAGE_RDWR | _PAGE_CACHE_SHRD)
-#define __S011 (PAGE_RDWR | _PAGE_CACHE_SHRD)
-#define __S100 (PAGE_EXEC | _PAGE_CACHE_SHRD)
-#define __S101 (PAGE_READ | _PAGE_E | _PAGE_CACHE_SHRD)
-#define __S110 (PAGE_RDWR | _PAGE_E | _PAGE_CACHE_SHRD)
-#define __S111 (PAGE_RDWR | _PAGE_E | _PAGE_CACHE_SHRD)

#ifndef __ASSEMBLY__
/*
diff --git a/arch/nds32/mm/mmap.c b/arch/nds32/mm/mmap.c
index 1bdf5e7d1b43..bfb3929d634a 100644
--- a/arch/nds32/mm/mmap.c
+++ b/arch/nds32/mm/mmap.c
@@ -71,3 +71,44 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
info.align_offset = pgoff << PAGE_SHIFT;
return vm_unmapped_area(&info);
}
+
+pgprot_t vm_get_page_prot(unsigned long vm_flags)
+{
+ switch (vm_flags & (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)) {
+ case VM_NONE:
+ return (PAGE_NONE | _PAGE_CACHE_SHRD);
+ case VM_READ:
+ return (PAGE_READ | _PAGE_CACHE_SHRD);
+ case VM_WRITE:
+ return (PAGE_COPY | _PAGE_CACHE_SHRD);
+ case VM_READ | VM_WRITE:
+ return (PAGE_COPY | _PAGE_CACHE_SHRD);
+ case VM_EXEC:
+ return (PAGE_EXEC | _PAGE_CACHE_SHRD);
+ case VM_EXEC | VM_READ:
+ return (PAGE_READ | _PAGE_E | _PAGE_CACHE_SHRD);
+ case VM_EXEC | VM_WRITE:
+ return (PAGE_COPY | _PAGE_E | _PAGE_CACHE_SHRD);
+ case VM_EXEC | VM_READ | VM_WRITE:
+ return (PAGE_COPY | _PAGE_E | _PAGE_CACHE_SHRD);
+ case VM_SHARED:
+ return (PAGE_NONE | _PAGE_CACHE_SHRD);
+ case VM_SHARED | VM_READ:
+ return (PAGE_READ | _PAGE_CACHE_SHRD);
+ case VM_SHARED | VM_WRITE:
+ return (PAGE_RDWR | _PAGE_CACHE_SHRD);
+ case VM_SHARED | VM_READ | VM_WRITE:
+ return (PAGE_RDWR | _PAGE_CACHE_SHRD);
+ case VM_SHARED | VM_EXEC:
+ return (PAGE_EXEC | _PAGE_CACHE_SHRD);
+ case VM_SHARED | VM_EXEC | VM_READ:
+ return (PAGE_READ | _PAGE_E | _PAGE_CACHE_SHRD);
+ case VM_SHARED | VM_EXEC | VM_WRITE:
+ return (PAGE_RDWR | _PAGE_E | _PAGE_CACHE_SHRD);
+ case VM_SHARED | VM_EXEC | VM_READ | VM_WRITE:
+ return (PAGE_RDWR | _PAGE_E | _PAGE_CACHE_SHRD);
+ default:
+ BUILD_BUG();
+ }
+}
+EXPORT_SYMBOL(vm_get_page_prot);
--
2.25.1

2022-01-24 19:24:20

by Andreas Schwab

Subject: Re: [RFC V1 08/31] m68k/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT

On Jan 24 2022, Anshuman Khandual wrote:

> +pgprot_t vm_get_page_prot(unsigned long vm_flags)
> +{
> + switch (vm_flags & (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)) {
> + case VM_NONE:
> + return __pgprot(pgprot_val(PAGE_NONE_C)|_PAGE_CACHE040);

_PAGE_CACHE040 should only be present when running on a 040 or 060.

--
Andreas Schwab, [email protected]
GPG Key fingerprint = 7578 EB47 D4E5 4D69 2510 2552 DF73 E780 A9DA AEC1
"And now for something completely different."

2022-01-24 19:38:32

by Russell King (Oracle)

Subject: Re: [RFC V1 09/31] arm/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT

On Mon, Jan 24, 2022 at 06:26:46PM +0530, Anshuman Khandual wrote:
> This defines and exports a platform specific custom vm_get_page_prot() via
> subscribing ARCH_HAS_VM_GET_PAGE_PROT. Subsequently all __SXXX and __PXXX
> macros can be dropped which are no longer needed.

What is the fundamental advantage of this approach?

>
> Cc: Russell King <[email protected]>
> Cc: Arnd Bergmann <[email protected]>
> Cc: [email protected]
> Cc: [email protected]
> Signed-off-by: Anshuman Khandual <[email protected]>
> ---
> arch/arm/Kconfig | 1 +
> arch/arm/include/asm/pgtable.h | 18 ------------
> arch/arm/mm/mmu.c | 50 ++++++++++++++++++++++++++++++----
> 3 files changed, 45 insertions(+), 24 deletions(-)
>
> diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
> index fabe39169b12..c12362d20c44 100644
> --- a/arch/arm/Kconfig
> +++ b/arch/arm/Kconfig
> @@ -23,6 +23,7 @@ config ARM
> select ARCH_HAS_SYNC_DMA_FOR_CPU if SWIOTLB || !MMU
> select ARCH_HAS_TEARDOWN_DMA_OPS if MMU
> select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
> + select ARCH_HAS_VM_GET_PAGE_PROT
> select ARCH_HAVE_CUSTOM_GPIO_H
> select ARCH_HAVE_NMI_SAFE_CMPXCHG if CPU_V7 || CPU_V7M || CPU_V6K
> select ARCH_HAS_GCOV_PROFILE_ALL
> diff --git a/arch/arm/include/asm/pgtable.h b/arch/arm/include/asm/pgtable.h
> index cd1f84bb40ae..ec062dd6082a 100644
> --- a/arch/arm/include/asm/pgtable.h
> +++ b/arch/arm/include/asm/pgtable.h
> @@ -137,24 +137,6 @@ extern pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
> * 2) If we could do execute protection, then read is implied
> * 3) write implies read permissions
> */
> -#define __P000 __PAGE_NONE
> -#define __P001 __PAGE_READONLY
> -#define __P010 __PAGE_COPY
> -#define __P011 __PAGE_COPY
> -#define __P100 __PAGE_READONLY_EXEC
> -#define __P101 __PAGE_READONLY_EXEC
> -#define __P110 __PAGE_COPY_EXEC
> -#define __P111 __PAGE_COPY_EXEC
> -
> -#define __S000 __PAGE_NONE
> -#define __S001 __PAGE_READONLY
> -#define __S010 __PAGE_SHARED
> -#define __S011 __PAGE_SHARED
> -#define __S100 __PAGE_READONLY_EXEC
> -#define __S101 __PAGE_READONLY_EXEC
> -#define __S110 __PAGE_SHARED_EXEC
> -#define __S111 __PAGE_SHARED_EXEC
> -
> #ifndef __ASSEMBLY__
> /*
> * ZERO_PAGE is a global shared page that is always zero: used
> diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
> index 274e4f73fd33..3007d07bc0e7 100644
> --- a/arch/arm/mm/mmu.c
> +++ b/arch/arm/mm/mmu.c
> @@ -403,6 +403,8 @@ void __set_fixmap(enum fixed_addresses idx, phys_addr_t phys, pgprot_t prot)
> local_flush_tlb_kernel_range(vaddr, vaddr + PAGE_SIZE);
> }
>
> +static pteval_t user_pgprot;
> +
> /*
> * Adjust the PMD section entries according to the CPU in use.
> */
> @@ -410,7 +412,7 @@ static void __init build_mem_type_table(void)
> {
> struct cachepolicy *cp;
> unsigned int cr = get_cr();
> - pteval_t user_pgprot, kern_pgprot, vecs_pgprot;
> + pteval_t kern_pgprot, vecs_pgprot;
> int cpu_arch = cpu_architecture();
> int i;
>
> @@ -627,11 +629,6 @@ static void __init build_mem_type_table(void)
> user_pgprot |= PTE_EXT_PXN;
> #endif
>
> - for (i = 0; i < 16; i++) {
> - pteval_t v = pgprot_val(protection_map[i]);
> - protection_map[i] = __pgprot(v | user_pgprot);
> - }
> -
> mem_types[MT_LOW_VECTORS].prot_pte |= vecs_pgprot;
> mem_types[MT_HIGH_VECTORS].prot_pte |= vecs_pgprot;
>
> @@ -670,6 +667,47 @@ static void __init build_mem_type_table(void)
> }
> }
>
> +pgprot_t vm_get_page_prot(unsigned long vm_flags)
> +{
> + switch (vm_flags & (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)) {
> + case VM_NONE:
> + return __pgprot(pgprot_val(__PAGE_NONE) | user_pgprot);
> + case VM_READ:
> + return __pgprot(pgprot_val(__PAGE_READONLY) | user_pgprot);
> + case VM_WRITE:
> + return __pgprot(pgprot_val(__PAGE_COPY) | user_pgprot);
> + case VM_READ | VM_WRITE:
> + return __pgprot(pgprot_val(__PAGE_COPY) | user_pgprot);
> + case VM_EXEC:
> + return __pgprot(pgprot_val(__PAGE_READONLY_EXEC) | user_pgprot);
> + case VM_EXEC | VM_READ:
> + return __pgprot(pgprot_val(__PAGE_READONLY_EXEC) | user_pgprot);
> + case VM_EXEC | VM_WRITE:
> + return __pgprot(pgprot_val(__PAGE_COPY_EXEC) | user_pgprot);
> + case VM_EXEC | VM_READ | VM_WRITE:
> + return __pgprot(pgprot_val(__PAGE_COPY_EXEC) | user_pgprot);
> + case VM_SHARED:
> + return __pgprot(pgprot_val(__PAGE_NONE) | user_pgprot);
> + case VM_SHARED | VM_READ:
> + return __pgprot(pgprot_val(__PAGE_READONLY) | user_pgprot);
> + case VM_SHARED | VM_WRITE:
> + return __pgprot(pgprot_val(__PAGE_SHARED) | user_pgprot);
> + case VM_SHARED | VM_READ | VM_WRITE:
> + return __pgprot(pgprot_val(__PAGE_SHARED) | user_pgprot);
> + case VM_SHARED | VM_EXEC:
> + return __pgprot(pgprot_val(__PAGE_READONLY_EXEC) | user_pgprot);
> + case VM_SHARED | VM_EXEC | VM_READ:
> + return __pgprot(pgprot_val(__PAGE_READONLY_EXEC) | user_pgprot);
> + case VM_SHARED | VM_EXEC | VM_WRITE:
> + return __pgprot(pgprot_val(__PAGE_SHARED_EXEC) | user_pgprot);
> + case VM_SHARED | VM_EXEC | VM_READ | VM_WRITE:
> + return __pgprot(pgprot_val(__PAGE_SHARED_EXEC) | user_pgprot);
> + default:
> + BUILD_BUG();
> + }
> +}
> +EXPORT_SYMBOL(vm_get_page_prot);
> +
> #ifdef CONFIG_ARM_DMA_MEM_BUFFERABLE
> pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
> unsigned long size, pgprot_t vma_prot)
> --
> 2.25.1
>
>

--
RMK's Patch system: https://www.armlinux.org.uk/developer/patches/
FTTP is here! 40Mbps down 10Mbps up. Decent connectivity at last!

2022-01-24 19:52:19

by Khalid Aziz

Subject: Re: [RFC V1 06/31] sparc/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT

On 1/24/22 05:56, Anshuman Khandual wrote:
> This defines and exports a platform specific custom vm_get_page_prot() via
> subscribing ARCH_HAS_VM_GET_PAGE_PROT. Subsequently all __SXXX and __PXXX
> macros can be dropped which are no longer needed.
>
> Cc: "David S. Miller" <[email protected]>
> Cc: Khalid Aziz <[email protected]>
> Cc: [email protected]
> Cc: [email protected]
> Signed-off-by: Anshuman Khandual <[email protected]>
> ---


These look like reasonable changes to me.

Reviewed-by: Khalid Aziz <[email protected]>

> arch/sparc/Kconfig | 2 +
> arch/sparc/include/asm/mman.h | 1 -
> arch/sparc/include/asm/pgtable_32.h | 19 --------
> arch/sparc/include/asm/pgtable_64.h | 19 --------
> arch/sparc/mm/init_32.c | 41 +++++++++++++++++
> arch/sparc/mm/init_64.c | 71 +++++++++++++++++++++--------
> 6 files changed, 95 insertions(+), 58 deletions(-)
>
> diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig
> index 1cab1b284f1a..ff29156f2380 100644
> --- a/arch/sparc/Kconfig
> +++ b/arch/sparc/Kconfig
> @@ -59,6 +59,7 @@ config SPARC32
> select HAVE_UID16
> select OLD_SIGACTION
> select ZONE_DMA
> + select ARCH_HAS_VM_GET_PAGE_PROT
>
> config SPARC64
> def_bool 64BIT
> @@ -84,6 +85,7 @@ config SPARC64
> select PERF_USE_VMALLOC
> select ARCH_HAVE_NMI_SAFE_CMPXCHG
> select HAVE_C_RECORDMCOUNT
> + select ARCH_HAS_VM_GET_PAGE_PROT
> select HAVE_ARCH_AUDITSYSCALL
> select ARCH_SUPPORTS_ATOMIC_RMW
> select ARCH_SUPPORTS_DEBUG_PAGEALLOC
> diff --git a/arch/sparc/include/asm/mman.h b/arch/sparc/include/asm/mman.h
> index 274217e7ed70..874d21483202 100644
> --- a/arch/sparc/include/asm/mman.h
> +++ b/arch/sparc/include/asm/mman.h
> @@ -46,7 +46,6 @@ static inline unsigned long sparc_calc_vm_prot_bits(unsigned long prot)
> }
> }
>
> -#define arch_vm_get_page_prot(vm_flags) sparc_vm_get_page_prot(vm_flags)
> static inline pgprot_t sparc_vm_get_page_prot(unsigned long vm_flags)
> {
> return (vm_flags & VM_SPARC_ADI) ? __pgprot(_PAGE_MCD_4V) : __pgprot(0);
> diff --git a/arch/sparc/include/asm/pgtable_32.h b/arch/sparc/include/asm/pgtable_32.h
> index ffccfe3b22ed..060a435f96d6 100644
> --- a/arch/sparc/include/asm/pgtable_32.h
> +++ b/arch/sparc/include/asm/pgtable_32.h
> @@ -64,25 +64,6 @@ void paging_init(void);
>
> extern unsigned long ptr_in_current_pgd;
>
> -/* xwr */
> -#define __P000 PAGE_NONE
> -#define __P001 PAGE_READONLY
> -#define __P010 PAGE_COPY
> -#define __P011 PAGE_COPY
> -#define __P100 PAGE_READONLY
> -#define __P101 PAGE_READONLY
> -#define __P110 PAGE_COPY
> -#define __P111 PAGE_COPY
> -
> -#define __S000 PAGE_NONE
> -#define __S001 PAGE_READONLY
> -#define __S010 PAGE_SHARED
> -#define __S011 PAGE_SHARED
> -#define __S100 PAGE_READONLY
> -#define __S101 PAGE_READONLY
> -#define __S110 PAGE_SHARED
> -#define __S111 PAGE_SHARED
> -
> /* First physical page can be anywhere, the following is needed so that
> * va-->pa and vice versa conversions work properly without performance
> * hit for all __pa()/__va() operations.
> diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h
> index 4679e45c8348..a779418ceba9 100644
> --- a/arch/sparc/include/asm/pgtable_64.h
> +++ b/arch/sparc/include/asm/pgtable_64.h
> @@ -187,25 +187,6 @@ bool kern_addr_valid(unsigned long addr);
> #define _PAGE_SZHUGE_4U _PAGE_SZ4MB_4U
> #define _PAGE_SZHUGE_4V _PAGE_SZ4MB_4V
>
> -/* These are actually filled in at boot time by sun4{u,v}_pgprot_init() */
> -#define __P000 __pgprot(0)
> -#define __P001 __pgprot(0)
> -#define __P010 __pgprot(0)
> -#define __P011 __pgprot(0)
> -#define __P100 __pgprot(0)
> -#define __P101 __pgprot(0)
> -#define __P110 __pgprot(0)
> -#define __P111 __pgprot(0)
> -
> -#define __S000 __pgprot(0)
> -#define __S001 __pgprot(0)
> -#define __S010 __pgprot(0)
> -#define __S011 __pgprot(0)
> -#define __S100 __pgprot(0)
> -#define __S101 __pgprot(0)
> -#define __S110 __pgprot(0)
> -#define __S111 __pgprot(0)
> -
> #ifndef __ASSEMBLY__
>
> pte_t mk_pte_io(unsigned long, pgprot_t, int, unsigned long);
> diff --git a/arch/sparc/mm/init_32.c b/arch/sparc/mm/init_32.c
> index 1e9f577f084d..efb3d6e6d7f6 100644
> --- a/arch/sparc/mm/init_32.c
> +++ b/arch/sparc/mm/init_32.c
> @@ -302,3 +302,44 @@ void sparc_flush_page_to_ram(struct page *page)
> __flush_page_to_ram(vaddr);
> }
> EXPORT_SYMBOL(sparc_flush_page_to_ram);
> +
> +pgprot_t vm_get_page_prot(unsigned long vm_flags)
> +{
> + switch (vm_flags & (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)) {
> + case VM_NONE:
> + return PAGE_NONE;
> + case VM_READ:
> + return PAGE_READONLY;
> + case VM_WRITE:
> + return PAGE_COPY;
> + case VM_READ | VM_WRITE:
> + return PAGE_COPY;
> + case VM_EXEC:
> + return PAGE_READONLY;
> + case VM_EXEC | VM_READ:
> + return PAGE_READONLY;
> + case VM_EXEC | VM_WRITE:
> + return PAGE_COPY;
> + case VM_EXEC | VM_READ | VM_WRITE:
> + return PAGE_COPY;
> + case VM_SHARED:
> + return PAGE_NONE;
> + case VM_SHARED | VM_READ:
> + return PAGE_READONLY;
> + case VM_SHARED | VM_WRITE:
> + return PAGE_SHARED;
> + case VM_SHARED | VM_READ | VM_WRITE:
> + return PAGE_SHARED;
> + case VM_SHARED | VM_EXEC:
> + return PAGE_READONLY;
> + case VM_SHARED | VM_EXEC | VM_READ:
> + return PAGE_READONLY;
> + case VM_SHARED | VM_EXEC | VM_WRITE:
> + return PAGE_SHARED;
> + case VM_SHARED | VM_EXEC | VM_READ | VM_WRITE:
> + return PAGE_SHARED;
> + default:
> + BUILD_BUG();
> + }
> +}
> +EXPORT_SYMBOL(vm_get_page_prot);
> diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c
> index 1b23639e2fcd..46b5366f7f69 100644
> --- a/arch/sparc/mm/init_64.c
> +++ b/arch/sparc/mm/init_64.c
> @@ -50,6 +50,7 @@
> #include <asm/cpudata.h>
> #include <asm/setup.h>
> #include <asm/irq.h>
> +#include <asm/mman.h>
>
> #include "init_64.h"
>
> @@ -2641,29 +2642,13 @@ static void prot_init_common(unsigned long page_none,
> {
> PAGE_COPY = __pgprot(page_copy);
> PAGE_SHARED = __pgprot(page_shared);
> -
> - protection_map[0x0] = __pgprot(page_none);
> - protection_map[0x1] = __pgprot(page_readonly & ~page_exec_bit);
> - protection_map[0x2] = __pgprot(page_copy & ~page_exec_bit);
> - protection_map[0x3] = __pgprot(page_copy & ~page_exec_bit);
> - protection_map[0x4] = __pgprot(page_readonly);
> - protection_map[0x5] = __pgprot(page_readonly);
> - protection_map[0x6] = __pgprot(page_copy);
> - protection_map[0x7] = __pgprot(page_copy);
> - protection_map[0x8] = __pgprot(page_none);
> - protection_map[0x9] = __pgprot(page_readonly & ~page_exec_bit);
> - protection_map[0xa] = __pgprot(page_shared & ~page_exec_bit);
> - protection_map[0xb] = __pgprot(page_shared & ~page_exec_bit);
> - protection_map[0xc] = __pgprot(page_readonly);
> - protection_map[0xd] = __pgprot(page_readonly);
> - protection_map[0xe] = __pgprot(page_shared);
> - protection_map[0xf] = __pgprot(page_shared);
> }
>
> +static unsigned long page_none, page_shared, page_copy, page_readonly;
> +static unsigned long page_exec_bit;
> +
> static void __init sun4u_pgprot_init(void)
> {
> - unsigned long page_none, page_shared, page_copy, page_readonly;
> - unsigned long page_exec_bit;
> int i;
>
> PAGE_KERNEL = __pgprot (_PAGE_PRESENT_4U | _PAGE_VALID |
> @@ -3183,3 +3168,51 @@ void copy_highpage(struct page *to, struct page *from)
> }
> }
> EXPORT_SYMBOL(copy_highpage);
> +
> +static inline pgprot_t __vm_get_page_prot(unsigned long vm_flags)
> +{
> + switch (vm_flags & (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)) {
> + case VM_NONE:
> + return __pgprot(page_none);
> + case VM_READ:
> + return __pgprot(page_readonly & ~page_exec_bit);
> + case VM_WRITE:
> + return __pgprot(page_copy & ~page_exec_bit);
> + case VM_READ | VM_WRITE:
> + return __pgprot(page_copy & ~page_exec_bit);
> + case VM_EXEC:
> + return __pgprot(page_readonly);
> + case VM_EXEC | VM_READ:
> + return __pgprot(page_readonly);
> + case VM_EXEC | VM_WRITE:
> + return __pgprot(page_copy);
> + case VM_EXEC | VM_READ | VM_WRITE:
> + return __pgprot(page_copy);
> + case VM_SHARED:
> + return __pgprot(page_none);
> + case VM_SHARED | VM_READ:
> + return __pgprot(page_readonly & ~page_exec_bit);
> + case VM_SHARED | VM_WRITE:
> + return __pgprot(page_shared & ~page_exec_bit);
> + case VM_SHARED | VM_READ | VM_WRITE:
> + return __pgprot(page_shared & ~page_exec_bit);
> + case VM_SHARED | VM_EXEC:
> + return __pgprot(page_readonly);
> + case VM_SHARED | VM_EXEC | VM_READ:
> + return __pgprot(page_readonly);
> + case VM_SHARED | VM_EXEC | VM_WRITE:
> + return __pgprot(page_shared);
> + case VM_SHARED | VM_EXEC | VM_READ | VM_WRITE:
> + return __pgprot(page_shared);
> + default:
> + BUILD_BUG();
> + }
> +}
> +
> +pgprot_t vm_get_page_prot(unsigned long vm_flags)
> +{
> + return __pgprot(pgprot_val(__vm_get_page_prot(vm_flags)) |
> + pgprot_val(sparc_vm_get_page_prot(vm_flags)));
> +
> +}
> +EXPORT_SYMBOL(vm_get_page_prot);
>

2022-01-25 08:52:35

by Anshuman Khandual

Subject: Re: [RFC V1 09/31] arm/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT



On 1/24/22 10:36 PM, Russell King (Oracle) wrote:
> On Mon, Jan 24, 2022 at 06:26:46PM +0530, Anshuman Khandual wrote:
>> This defines and exports a platform specific custom vm_get_page_prot() via
>> subscribing ARCH_HAS_VM_GET_PAGE_PROT. Subsequently all __SXXX and __PXXX
>> macros can be dropped which are no longer needed.
>
> What is the fundamental advantage of this approach?
>

Remove the multiple 'core MM <--> platform' abstraction layers that map
a vm_flags access permission combination into page protection. From
the cover letter:

----------
Currently there are multiple layers of abstraction i.e. __SXXX/__PXXX macros,
protection_map[], arch_vm_get_page_prot() and arch_filter_pgprot() built
between the platform and generic MM, finally defining vm_get_page_prot().

Hence this series proposes to drop all these abstraction levels and instead
just move the responsibility of defining vm_get_page_prot() to the platform
itself making it clean and simple.
----------

2022-01-25 09:16:19

by Anshuman Khandual

Subject: Re: [RFC V1 08/31] m68k/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT



On 1/24/22 7:43 PM, Andreas Schwab wrote:
> On Jan 24 2022, Anshuman Khandual wrote:
>
>> +pgprot_t vm_get_page_prot(unsigned long vm_flags)
>> +{
>> + switch (vm_flags & (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)) {
>> + case VM_NONE:
>> + return __pgprot(pgprot_val(PAGE_NONE_C)|_PAGE_CACHE040);
>
> _PAGE_CACHE040 should only be present when running on a 040 or 060.
>

Right, it seems I missed the conditionality on CPU_IS_040_OR_060
while moving the code; will fix it.

2022-01-26 01:22:19

by Rolf Eike Beer

Subject: Re: [RFC V1 21/31] parisc/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT

Anshuman Khandual wrote:
> This defines and exports a platform specific custom vm_get_page_prot() via
> subscribing ARCH_HAS_VM_GET_PAGE_PROT. Subsequently all __SXXX and __PXXX
> macros can be dropped which are no longer needed.
>
> Cc: "James E.J. Bottomley" <[email protected]>
> Cc: [email protected]
> Cc: [email protected]
> Signed-off-by: Anshuman Khandual <[email protected]>
> ---
> arch/parisc/Kconfig | 1 +
> arch/parisc/include/asm/pgtable.h | 20 ---------------
> arch/parisc/mm/init.c | 41 +++++++++++++++++++++++++++++++
> 3 files changed, 42 insertions(+), 20 deletions(-)
>
> diff --git a/arch/parisc/Kconfig b/arch/parisc/Kconfig
> index 43c1c880def6..de512f120b50 100644
> --- a/arch/parisc/Kconfig
> +++ b/arch/parisc/Kconfig
> @@ -10,6 +10,7 @@ config PARISC
> select ARCH_HAS_ELF_RANDOMIZE
> select ARCH_HAS_STRICT_KERNEL_RWX
> select ARCH_HAS_UBSAN_SANITIZE_ALL
> + select ARCH_HAS_VM_GET_PAGE_PROT
> select ARCH_NO_SG_CHAIN
> select ARCH_SUPPORTS_HUGETLBFS if PA20
> select ARCH_SUPPORTS_MEMORY_FAILURE
> diff --git a/arch/parisc/include/asm/pgtable.h b/arch/parisc/include/asm/pgtable.h
> index 3e7cf882639f..80d99b2b5913 100644
> --- a/arch/parisc/include/asm/pgtable.h
> +++ b/arch/parisc/include/asm/pgtable.h
> @@ -269,26 +269,6 @@ extern void __update_cache(pte_t pte);
> * pages.
> */
>
> - /*xwr*/
> -#define __P000 PAGE_NONE
> -#define __P001 PAGE_READONLY
> -#define __P010 __P000 /* copy on write */
> -#define __P011 __P001 /* copy on write */
> -#define __P100 PAGE_EXECREAD
> -#define __P101 PAGE_EXECREAD
> -#define __P110 __P100 /* copy on write */
> -#define __P111 __P101 /* copy on write */
> -
> -#define __S000 PAGE_NONE
> -#define __S001 PAGE_READONLY
> -#define __S010 PAGE_WRITEONLY
> -#define __S011 PAGE_SHARED
> -#define __S100 PAGE_EXECREAD
> -#define __S101 PAGE_EXECREAD
> -#define __S110 PAGE_RWX
> -#define __S111 PAGE_RWX
> -
> -
> extern pgd_t swapper_pg_dir[]; /* declared in init_task.c */
>
> /* initial page tables for 0-8MB for kernel */
> diff --git a/arch/parisc/mm/init.c b/arch/parisc/mm/init.c
> index 1ae31db9988f..c8316e97e1a2 100644
> --- a/arch/parisc/mm/init.c
> +++ b/arch/parisc/mm/init.c
> @@ -866,3 +866,44 @@ void flush_tlb_all(void)
> spin_unlock(&sid_lock);
> }
> #endif
> +
> +pgprot_t vm_get_page_prot(unsigned long vm_flags)
> +{
> + switch (vm_flags & (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)) {
> + case VM_NONE:
> + return PAGE_NONE;
> + case VM_READ:
> + return PAGE_READONLY;
> + case VM_WRITE:
> + return PAGE_NONE;
> + case VM_READ | VM_WRITE:
> + return PAGE_READONLY;

This looks extremely strange. It is probably correct when it comes to CoW; how
about including the comment that was in the original definitions for the cases
where CoW is expected?

> + case VM_EXEC:
> + return PAGE_EXECREAD;
> + case VM_EXEC | VM_READ:
> + return PAGE_EXECREAD;
> + case VM_EXEC | VM_WRITE:
> + return PAGE_EXECREAD;
> + case VM_EXEC | VM_READ | VM_WRITE:
> + return PAGE_EXECREAD;
> + case VM_SHARED:
> + return PAGE_NONE;
> + case VM_SHARED | VM_READ:
> + return PAGE_READONLY;
> + case VM_SHARED | VM_WRITE:
> + return PAGE_WRITEONLY;
> + case VM_SHARED | VM_READ | VM_WRITE:
> + return PAGE_SHARED;
> + case VM_SHARED | VM_EXEC:
> + return PAGE_EXECREAD;
> + case VM_SHARED | VM_EXEC | VM_READ:
> + return PAGE_EXECREAD;
> + case VM_SHARED | VM_EXEC | VM_WRITE:
> + return PAGE_RWX;
> + case VM_SHARED | VM_EXEC | VM_READ | VM_WRITE:
> + return PAGE_RWX;
> + default:
> + BUILD_BUG();
> + }
> +}
> +EXPORT_SYMBOL(vm_get_page_prot);



2022-01-26 20:16:25

by Christoph Hellwig

Subject: Re: [RFC V1 01/31] mm/debug_vm_pgtable: Directly use vm_get_page_prot()

> + *
> + * Protection based vm_flags combinatins are always linear
> + * and increasing i.e VM_NONE ..[VM_SHARED|READ|WRITE|EXEC].
> */
> - for (idx = 0; idx < ARRAY_SIZE(protection_map); idx++) {
> + for (idx = VM_NONE; idx <= (VM_SHARED | VM_READ | VM_WRITE | VM_EXEC); idx++) {
> pte_basic_tests(&args, idx);
> pmd_basic_tests(&args, idx);
> pud_basic_tests(&args, idx);

This looks rather convoluted. I'd prefer to add a helper for the body
of this loop, and then explicitly call it for all the valid
combinations. Right now all are valid, so this doesn't change a thing
except for generating larger code due to the explicit loop unrolling,
but I think it is much easier to follow and maintain.

2022-01-26 20:18:00

by Christoph Hellwig

Subject: Re: [RFC V1 02/31] mm/mmap: Clarify protection_map[] indices

> + [VM_NONE] = __P000,
> + [VM_READ] = __P001,
> + [VM_WRITE] = __P010,
> + [VM_READ|VM_WRITE] = __P011,
> + [VM_EXEC] = __P100,
> + [VM_EXEC|VM_READ] = __P101,
> + [VM_EXEC|VM_WRITE] = __P110,
> + [VM_EXEC|VM_READ|VM_WRITE] = __P111,
> + [VM_SHARED] = __S000,
> + [VM_SHARED|VM_READ] = __S001,
> + [VM_SHARED|VM_WRITE] = __S010,
> + [VM_SHARED|VM_READ|VM_WRITE] = __S011,
> + [VM_SHARED|VM_EXEC] = __S100,
> + [VM_SHARED|VM_READ|VM_EXEC] = __S101,
> + [VM_SHARED|VM_WRITE|VM_EXEC] = __S110,
> + [VM_SHARED|VM_READ|VM_WRITE|VM_EXEC] = __S111

Please add whitespaces around the | operators.

Otherwise looks good:

Reviewed-by: Christoph Hellwig <[email protected]>

2022-01-26 22:18:40

by Dinh Nguyen

Subject: Re: [RFC V1 25/31] nios2/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT



On 1/24/22 06:57, Anshuman Khandual wrote:
> This defines and exports a platform specific custom vm_get_page_prot() via
> subscribing ARCH_HAS_VM_GET_PAGE_PROT. Subsequently all __SXXX and __PXXX
> macros can be dropped which are no longer needed.
>
> Cc: Dinh Nguyen <[email protected]>
> Cc: [email protected]
> Signed-off-by: Anshuman Khandual <[email protected]>
> ---
> arch/nios2/Kconfig | 1 +
> arch/nios2/include/asm/pgtable.h | 16 -------------
> arch/nios2/mm/init.c | 41 ++++++++++++++++++++++++++++++++
> 3 files changed, 42 insertions(+), 16 deletions(-)
>
> diff --git a/arch/nios2/Kconfig b/arch/nios2/Kconfig
> index 33fd06f5fa41..85a58a357a3b 100644
> --- a/arch/nios2/Kconfig
> +++ b/arch/nios2/Kconfig
> @@ -6,6 +6,7 @@ config NIOS2
> select ARCH_HAS_SYNC_DMA_FOR_CPU
> select ARCH_HAS_SYNC_DMA_FOR_DEVICE
> select ARCH_HAS_DMA_SET_UNCACHED
> + select ARCH_HAS_VM_GET_PAGE_PROT
> select ARCH_NO_SWAP
> select COMMON_CLK
> select TIMER_OF
> diff --git a/arch/nios2/include/asm/pgtable.h b/arch/nios2/include/asm/pgtable.h
> index 4a995fa628ee..2678dad58a63 100644
> --- a/arch/nios2/include/asm/pgtable.h
> +++ b/arch/nios2/include/asm/pgtable.h
> @@ -40,24 +40,8 @@ struct mm_struct;
> */
>
> /* Remove W bit on private pages for COW support */
> -#define __P000 MKP(0, 0, 0)
> -#define __P001 MKP(0, 0, 1)
> -#define __P010 MKP(0, 0, 0) /* COW */
> -#define __P011 MKP(0, 0, 1) /* COW */
> -#define __P100 MKP(1, 0, 0)
> -#define __P101 MKP(1, 0, 1)
> -#define __P110 MKP(1, 0, 0) /* COW */
> -#define __P111 MKP(1, 0, 1) /* COW */
>
> /* Shared pages can have exact HW mapping */
> -#define __S000 MKP(0, 0, 0)
> -#define __S001 MKP(0, 0, 1)
> -#define __S010 MKP(0, 1, 0)
> -#define __S011 MKP(0, 1, 1)
> -#define __S100 MKP(1, 0, 0)
> -#define __S101 MKP(1, 0, 1)
> -#define __S110 MKP(1, 1, 0)
> -#define __S111 MKP(1, 1, 1)
>
> /* Used all over the kernel */
> #define PAGE_KERNEL __pgprot(_PAGE_PRESENT | _PAGE_CACHED | _PAGE_READ | \
> diff --git a/arch/nios2/mm/init.c b/arch/nios2/mm/init.c
> index 613fcaa5988a..4f8251e62f31 100644
> --- a/arch/nios2/mm/init.c
> +++ b/arch/nios2/mm/init.c
> @@ -124,3 +124,44 @@ const char *arch_vma_name(struct vm_area_struct *vma)
> {
> return (vma->vm_start == KUSER_BASE) ? "[kuser]" : NULL;
> }
> +
> +pgprot_t vm_get_page_prot(unsigned long vm_flags)
> +{
> + switch (vm_flags & (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)) {
> + case VM_NONE:
> + return MKP(0, 0, 0);
> + case VM_READ:
> + return MKP(0, 0, 1);
> + case VM_WRITE:
> + return MKP(0, 0, 0);
> + case VM_READ | VM_WRITE:
> + return MKP(0, 0, 1);
> + case VM_EXEC:
> + return MKP(1, 0, 0);
> + case VM_EXEC | VM_READ:
> + return MKP(1, 0, 1);
> + case VM_EXEC | VM_WRITE:
> + return MKP(1, 0, 0);
> + case VM_EXEC | VM_READ | VM_WRITE:
> + return MKP(1, 0, 1);
> + case VM_SHARED:
> + return MKP(0, 0, 0);
> + case VM_SHARED | VM_READ:
> + return MKP(0, 0, 1);
> + case VM_SHARED | VM_WRITE:
> + return MKP(0, 1, 0);
> + case VM_SHARED | VM_READ | VM_WRITE:
> + return MKP(0, 1, 1);
> + case VM_SHARED | VM_EXEC:
> + return MKP(1, 0, 0);
> + case VM_SHARED | VM_EXEC | VM_READ:
> + return MKP(1, 0, 1);
> + case VM_SHARED | VM_EXEC | VM_WRITE:
> + return MKP(1, 1, 0);
> + case VM_SHARED | VM_EXEC | VM_READ | VM_WRITE:
> + return MKP(1, 1, 1);
> + default:
> + BUILD_BUG();
> + }
> +}
> +EXPORT_SYMBOL(vm_get_page_prot);


Acked-by: Dinh Nguyen <[email protected]>

2022-01-27 11:34:59

by Anshuman Khandual

[permalink] [raw]
Subject: Re: [RFC V1 21/31] parisc/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT



On 1/25/22 10:23 PM, Rolf Eike Beer wrote:
> Anshuman Khandual wrote:
>> This defines and exports a platform specific custom vm_get_page_prot() via
>> subscribing ARCH_HAS_VM_GET_PAGE_PROT. Subsequently all the __SXXX and __PXXX
>> macros, which are no longer needed, can be dropped.
>>
>> Cc: "James E.J. Bottomley" <[email protected]>
>> Cc: [email protected]
>> Cc: [email protected]
>> Signed-off-by: Anshuman Khandual <[email protected]>
>> ---
>> arch/parisc/Kconfig | 1 +
>> arch/parisc/include/asm/pgtable.h | 20 ---------------
>> arch/parisc/mm/init.c | 41 +++++++++++++++++++++++++++++++
>> 3 files changed, 42 insertions(+), 20 deletions(-)
>>
>> diff --git a/arch/parisc/Kconfig b/arch/parisc/Kconfig
>> index 43c1c880def6..de512f120b50 100644
>> --- a/arch/parisc/Kconfig
>> +++ b/arch/parisc/Kconfig
>> @@ -10,6 +10,7 @@ config PARISC
>> select ARCH_HAS_ELF_RANDOMIZE
>> select ARCH_HAS_STRICT_KERNEL_RWX
>> select ARCH_HAS_UBSAN_SANITIZE_ALL
>> + select ARCH_HAS_VM_GET_PAGE_PROT
>> select ARCH_NO_SG_CHAIN
>> select ARCH_SUPPORTS_HUGETLBFS if PA20
>> select ARCH_SUPPORTS_MEMORY_FAILURE
>> diff --git a/arch/parisc/include/asm/pgtable.h
>> b/arch/parisc/include/asm/pgtable.h index 3e7cf882639f..80d99b2b5913 100644
>> --- a/arch/parisc/include/asm/pgtable.h
>> +++ b/arch/parisc/include/asm/pgtable.h
>> @@ -269,26 +269,6 @@ extern void __update_cache(pte_t pte);
>> * pages.
>> */
>>
>> - /*xwr*/
>> -#define __P000 PAGE_NONE
>> -#define __P001 PAGE_READONLY
>> -#define __P010 __P000 /* copy on write */
>> -#define __P011 __P001 /* copy on write */
>> -#define __P100 PAGE_EXECREAD
>> -#define __P101 PAGE_EXECREAD
>> -#define __P110 __P100 /* copy on write */
>> -#define __P111 __P101 /* copy on write */
>> -
>> -#define __S000 PAGE_NONE
>> -#define __S001 PAGE_READONLY
>> -#define __S010 PAGE_WRITEONLY
>> -#define __S011 PAGE_SHARED
>> -#define __S100 PAGE_EXECREAD
>> -#define __S101 PAGE_EXECREAD
>> -#define __S110 PAGE_RWX
>> -#define __S111 PAGE_RWX
>> -
>> -
>> extern pgd_t swapper_pg_dir[]; /* declared in init_task.c */
>>
>> /* initial page tables for 0-8MB for kernel */
>> diff --git a/arch/parisc/mm/init.c b/arch/parisc/mm/init.c
>> index 1ae31db9988f..c8316e97e1a2 100644
>> --- a/arch/parisc/mm/init.c
>> +++ b/arch/parisc/mm/init.c
>> @@ -866,3 +866,44 @@ void flush_tlb_all(void)
>> spin_unlock(&sid_lock);
>> }
>> #endif
>> +
>> +pgprot_t vm_get_page_prot(unsigned long vm_flags)
>> +{
>> + switch (vm_flags & (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)) {
>> + case VM_NONE:
>> + return PAGE_NONE;
>> + case VM_READ:
>> + return PAGE_READONLY;
>> + case VM_WRITE:
>> + return PAGE_NONE;
>> + case VM_READ | VM_WRITE:
>> + return PAGE_READONLY;
> This looks extremely strange. It probably is correct when it comes to CoW;
> how about including the comment that was in the original definitions for the
> cases where CoW is expected?
>

Assuming that you are suggesting the following four comments here, sure, I will add them.

-#define __P000 PAGE_NONE
-#define __P001 PAGE_READONLY
-#define __P010 __P000 /* copy on write */
-#define __P011 __P001 /* copy on write */
-#define __P100 PAGE_EXECREAD
-#define __P101 PAGE_EXECREAD
-#define __P110 __P100 /* copy on write */
-#define __P111 __P101 /* copy on write */

2022-01-27 11:36:26

by Anshuman Khandual

[permalink] [raw]
Subject: Re: [RFC V1 02/31] mm/mmap: Clarify protection_map[] indices



On 1/26/22 12:46 PM, Christoph Hellwig wrote:
>> + [VM_NONE] = __P000,
>> + [VM_READ] = __P001,
>> + [VM_WRITE] = __P010,
>> + [VM_READ|VM_WRITE] = __P011,
>> + [VM_EXEC] = __P100,
>> + [VM_EXEC|VM_READ] = __P101,
>> + [VM_EXEC|VM_WRITE] = __P110,
>> + [VM_EXEC|VM_READ|VM_WRITE] = __P111,
>> + [VM_SHARED] = __S000,
>> + [VM_SHARED|VM_READ] = __S001,
>> + [VM_SHARED|VM_WRITE] = __S010,
>> + [VM_SHARED|VM_READ|VM_WRITE] = __S011,
>> + [VM_SHARED|VM_EXEC] = __S100,
>> + [VM_SHARED|VM_READ|VM_EXEC] = __S101,
>> + [VM_SHARED|VM_WRITE|VM_EXEC] = __S110,
>> + [VM_SHARED|VM_READ|VM_WRITE|VM_EXEC] = __S111
>
> Please add whitespaces around the | operators.

Sure, will add.

>
> Otherwise looks good:
>
> Reviewed-by: Christoph Hellwig <[email protected]>
>

2022-01-27 11:36:30

by Anshuman Khandual

[permalink] [raw]
Subject: Re: [RFC V1 01/31] mm/debug_vm_pgtable: Directly use vm_get_page_prot()



On 1/26/22 12:45 PM, Christoph Hellwig wrote:
>> + *
>> + * Protection based vm_flags combinations are always linear
>> + * and increasing i.e. VM_NONE .. [VM_SHARED|READ|WRITE|EXEC].
>> */
>> - for (idx = 0; idx < ARRAY_SIZE(protection_map); idx++) {
>> + for (idx = VM_NONE; idx <= (VM_SHARED | VM_READ | VM_WRITE | VM_EXEC); idx++) {
>> pte_basic_tests(&args, idx);
>> pmd_basic_tests(&args, idx);
>> pud_basic_tests(&args, idx);
>
> This looks rather convoluted. I'd prefer to add a helper for the body
> of this loop, and then explicitly call it for all the valid
> combinations. Right now all are valid, so this doesn't change a thing
> except for generating larger code due to the explicit loop unrolling,
> but I think it is much easier to follow and maintain.

IIUC, then I will just keep this unchanged.

2022-01-27 23:09:49

by Mike Rapoport

[permalink] [raw]
Subject: Re: [RFC V1 02/31] mm/mmap: Clarify protection_map[] indices

On Mon, Jan 24, 2022 at 06:26:39PM +0530, Anshuman Khandual wrote:
> protection_map[] maps vm_flags access combinations into page protection
> values as defined by the platform via the __PXXX and __SXXX macros. The array
> indices in protection_map[] represent vm_flags access combinations, but they
> are not very intuitive to derive. This makes them clear and explicit.

The protection_map is going to be removed in one of the next patches, why
bother with this patch at all?

> Cc: Andrew Morton <[email protected]>
> Cc: [email protected]
> Cc: [email protected]
> Signed-off-by: Anshuman Khandual <[email protected]>
> ---
> mm/mmap.c | 18 ++++++++++++++++--
> 1 file changed, 16 insertions(+), 2 deletions(-)
>
> diff --git a/mm/mmap.c b/mm/mmap.c
> index 1e8fdb0b51ed..254d716220df 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -102,8 +102,22 @@ static void unmap_region(struct mm_struct *mm,
> * x: (yes) yes
> */
> pgprot_t protection_map[16] __ro_after_init = {
> - __P000, __P001, __P010, __P011, __P100, __P101, __P110, __P111,
> - __S000, __S001, __S010, __S011, __S100, __S101, __S110, __S111
> + [VM_NONE] = __P000,
> + [VM_READ] = __P001,
> + [VM_WRITE] = __P010,
> + [VM_READ|VM_WRITE] = __P011,
> + [VM_EXEC] = __P100,
> + [VM_EXEC|VM_READ] = __P101,
> + [VM_EXEC|VM_WRITE] = __P110,
> + [VM_EXEC|VM_READ|VM_WRITE] = __P111,
> + [VM_SHARED] = __S000,
> + [VM_SHARED|VM_READ] = __S001,
> + [VM_SHARED|VM_WRITE] = __S010,
> + [VM_SHARED|VM_READ|VM_WRITE] = __S011,
> + [VM_SHARED|VM_EXEC] = __S100,
> + [VM_SHARED|VM_READ|VM_EXEC] = __S101,
> + [VM_SHARED|VM_WRITE|VM_EXEC] = __S110,
> + [VM_SHARED|VM_READ|VM_WRITE|VM_EXEC] = __S111
> };
>
> #ifndef CONFIG_ARCH_HAS_FILTER_PGPROT
> --
> 2.25.1
>
>

--
Sincerely yours,
Mike.

2022-01-27 23:09:49

by Mike Rapoport

[permalink] [raw]
Subject: Re: [RFC V1 00/31] mm/mmap: Drop protection_map[] and platform's __SXXX/__PXXX requirements

Hi Anshuman,

On Mon, Jan 24, 2022 at 06:26:37PM +0530, Anshuman Khandual wrote:
> protection_map[] is an array based construct that translates given vm_flags
> combination. This array contains page protection map, which is populated by
> the platform via [__S000 .. __S111] and [__P000 .. __P111] exported macros.
> Primary usage for protection_map[] is for vm_get_page_prot(), which is used
> to determine page protection value for a given vm_flags. vm_get_page_prot()
> implementation could again call the platform overrides arch_vm_get_page_prot()
> and arch_filter_pgprot(). Some platforms override protection_map[] that was
> originally built with __SXXX/__PXXX with different runtime values.
>
> Currently there are multiple layers of abstraction, i.e. the __SXXX/__PXXX
> macros, protection_map[], arch_vm_get_page_prot() and arch_filter_pgprot(),
> built between the platform and generic MM, finally defining vm_get_page_prot().
>
> Hence this series proposes to drop all these abstraction levels and instead
> just move the responsibility of defining vm_get_page_prot() to the platform
> itself making it clean and simple.
>
> This first introduces ARCH_HAS_VM_GET_PAGE_PROT which enables the platforms
> to define custom vm_get_page_prot(). This starts converting platforms that
> either change protection_map[] or define the overrides arch_filter_pgprot()
> or arch_vm_get_page_prot() which enables for those constructs to be dropped
> off completely. This series then converts remaining platforms which enables
> for __SXXX/__PXXX constructs to be dropped off completely. Finally it drops
> the generic vm_get_page_prot() and then ARCH_HAS_VM_GET_PAGE_PROT as every
> platform now defines their own vm_get_page_prot().

I generally like the idea, I just think the conversion can be more straight
forward. Rather than adding ARCH_HAS_VM_GET_PAGE_PROT and then dropping it,
why won't we make the generic vm_get_page_prot() __weak, then add per-arch
implementation and in the end drop the generic one?

> The last patch demonstrates how vm_flags combination indices can be defined
> as macros and be replaced across all platforms (if required, not done yet).
>
> The series has been inspired by an earlier discussion with Christoph Hellwig
>
> https://lore.kernel.org/all/[email protected]/
>
> This series applies on 5.17-rc1 after the following patch.
>
> https://lore.kernel.org/all/[email protected]/
>
> This has been cross-built for multiple platforms. I would like to get some
> early feedback on this proposal. All reviews and suggestions welcome.
>
> Hello Christoph,
>
> I have taken the liberty to preserve your authorship on the x86 patch which
> is borrowed almost as is from our earlier discussion. I have also added you
> as 'Suggested-by:' on the patch that adds config ARCH_HAS_VM_GET_PAGE_PROT.
> Nonetheless please feel free to correct me for any other missing authorship
> attributes I should have added. Thank you.
>
> - Anshuman
>
> Cc: Christoph Hellwig <[email protected]>
> Cc: Andrew Morton <[email protected]>
> Cc: [email protected]
> Cc: [email protected]
>
> Anshuman Khandual (30):
> mm/debug_vm_pgtable: Directly use vm_get_page_prot()
> mm/mmap: Clarify protection_map[] indices
> mm/mmap: Add new config ARCH_HAS_VM_GET_PAGE_PROT
> powerpc/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
> arm64/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
> sparc/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
> mips/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
> m68k/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
> arm/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
> mm/mmap: Drop protection_map[]
> mm/mmap: Drop arch_filter_pgprot()
> mm/mmap: Drop arch_vm_get_page_prot()
> s390/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
> riscv/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
> alpha/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
> sh/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
> arc/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
> csky/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
> xtensa/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
> parisc/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
> openrisc/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
> um/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
> microblaze/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
> nios2/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
> hexagon/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
> nds32/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
> ia64/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
> mm/mmap: Drop generic vm_get_page_prot()
> mm/mmap: Drop ARCH_HAS_VM_GET_PAGE_PROT
> mm/mmap: Define macros for vm_flags access permission combinations
>
> Christoph Hellwig (1):
> x86/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
>
> arch/alpha/include/asm/pgtable.h | 17 -----
> arch/alpha/mm/init.c | 41 +++++++++++
> arch/arc/include/asm/pgtable-bits-arcv2.h | 17 -----
> arch/arc/mm/mmap.c | 41 +++++++++++
> arch/arm/include/asm/pgtable.h | 18 -----
> arch/arm/mm/mmu.c | 50 +++++++++++--
> arch/arm64/Kconfig | 1 -
> arch/arm64/include/asm/mman.h | 3 +-
> arch/arm64/include/asm/pgtable-prot.h | 18 -----
> arch/arm64/include/asm/pgtable.h | 2 +-
> arch/arm64/mm/mmap.c | 50 +++++++++++++
> arch/csky/include/asm/pgtable.h | 18 -----
> arch/csky/mm/init.c | 41 +++++++++++
> arch/hexagon/include/asm/pgtable.h | 24 -------
> arch/hexagon/mm/init.c | 42 +++++++++++
> arch/ia64/include/asm/pgtable.h | 17 -----
> arch/ia64/mm/init.c | 43 ++++++++++-
> arch/m68k/include/asm/mcf_pgtable.h | 59 ---------------
> arch/m68k/include/asm/motorola_pgtable.h | 22 ------
> arch/m68k/include/asm/sun3_pgtable.h | 22 ------
> arch/m68k/mm/init.c | 87 +++++++++++++++++++++++
> arch/m68k/mm/motorola.c | 44 +++++++++++-
> arch/microblaze/include/asm/pgtable.h | 17 -----
> arch/microblaze/mm/init.c | 41 +++++++++++
> arch/mips/include/asm/pgtable.h | 22 ------
> arch/mips/mm/cache.c | 65 ++++++++++-------
> arch/nds32/include/asm/pgtable.h | 17 -----
> arch/nds32/mm/mmap.c | 41 +++++++++++
> arch/nios2/include/asm/pgtable.h | 16 -----
> arch/nios2/mm/init.c | 41 +++++++++++
> arch/openrisc/include/asm/pgtable.h | 18 -----
> arch/openrisc/mm/init.c | 41 +++++++++++
> arch/parisc/include/asm/pgtable.h | 20 ------
> arch/parisc/mm/init.c | 41 +++++++++++
> arch/powerpc/include/asm/mman.h | 3 +-
> arch/powerpc/include/asm/pgtable.h | 19 -----
> arch/powerpc/mm/mmap.c | 47 ++++++++++++
> arch/riscv/include/asm/pgtable.h | 16 -----
> arch/riscv/mm/init.c | 41 +++++++++++
> arch/s390/include/asm/pgtable.h | 17 -----
> arch/s390/mm/mmap.c | 41 +++++++++++
> arch/sh/include/asm/pgtable.h | 17 -----
> arch/sh/mm/mmap.c | 43 +++++++++++
> arch/sparc/include/asm/mman.h | 1 -
> arch/sparc/include/asm/pgtable_32.h | 19 -----
> arch/sparc/include/asm/pgtable_64.h | 19 -----
> arch/sparc/mm/init_32.c | 41 +++++++++++
> arch/sparc/mm/init_64.c | 71 +++++++++++++-----
> arch/um/include/asm/pgtable.h | 17 -----
> arch/um/kernel/mem.c | 41 +++++++++++
> arch/x86/Kconfig | 1 -
> arch/x86/include/asm/pgtable.h | 5 --
> arch/x86/include/asm/pgtable_types.h | 19 -----
> arch/x86/include/uapi/asm/mman.h | 14 ----
> arch/x86/mm/Makefile | 2 +-
> arch/x86/mm/mem_encrypt_amd.c | 4 --
> arch/x86/mm/pgprot.c | 71 ++++++++++++++++++
> arch/xtensa/include/asm/pgtable.h | 18 -----
> arch/xtensa/mm/init.c | 41 +++++++++++
> include/linux/mm.h | 45 ++++++++++--
> include/linux/mman.h | 4 --
> mm/Kconfig | 3 -
> mm/debug_vm_pgtable.c | 27 +++----
> mm/mmap.c | 22 ------
> 64 files changed, 1150 insertions(+), 636 deletions(-)
> create mode 100644 arch/x86/mm/pgprot.c
>
> --
> 2.25.1
>
>

--
Sincerely yours,
Mike.

2022-02-01 15:19:12

by Anshuman Khandual

[permalink] [raw]
Subject: Re: [RFC V1 00/31] mm/mmap: Drop protection_map[] and platform's __SXXX/__PXXX requirements



On 1/27/22 6:08 PM, Mike Rapoport wrote:
> Hi Anshuman,
>
> On Mon, Jan 24, 2022 at 06:26:37PM +0530, Anshuman Khandual wrote:
>> protection_map[] is an array based construct that translates given vm_flags
>> combination. This array contains page protection map, which is populated by
>> the platform via [__S000 .. __S111] and [__P000 .. __P111] exported macros.
>> Primary usage for protection_map[] is for vm_get_page_prot(), which is used
>> to determine page protection value for a given vm_flags. vm_get_page_prot()
>> implementation could again call the platform overrides arch_vm_get_page_prot()
>> and arch_filter_pgprot(). Some platforms override protection_map[] that was
>> originally built with __SXXX/__PXXX with different runtime values.
>>
>> Currently there are multiple layers of abstraction, i.e. the __SXXX/__PXXX
>> macros, protection_map[], arch_vm_get_page_prot() and arch_filter_pgprot(),
>> built between the platform and generic MM, finally defining vm_get_page_prot().
>>
>> Hence this series proposes to drop all these abstraction levels and instead
>> just move the responsibility of defining vm_get_page_prot() to the platform
>> itself making it clean and simple.
>>
>> This first introduces ARCH_HAS_VM_GET_PAGE_PROT which enables the platforms
>> to define custom vm_get_page_prot(). This starts converting platforms that
>> either change protection_map[] or define the overrides arch_filter_pgprot()
>> or arch_vm_get_page_prot() which enables for those constructs to be dropped
>> off completely. This series then converts remaining platforms which enables
>> for __SXXX/__PXXX constructs to be dropped off completely. Finally it drops
>> the generic vm_get_page_prot() and then ARCH_HAS_VM_GET_PAGE_PROT as every
>> platform now defines their own vm_get_page_prot().
>
> I generally like the idea, I just think the conversion can be more straight
> forward. Rather than adding ARCH_HAS_VM_GET_PAGE_PROT and then dropping it,
> why won't we make the generic vm_get_page_prot() __weak, then add per-arch
> implementation and in the end drop the generic one?
>

Is not the ARCH_HAS_ config based switch a relatively better method? IIUC,
some existing platform overrides could have been implemented via the __weak
identifier, but the ARCH_HAS_ method was preferred instead. I might be
missing something here, though.

2022-02-01 15:20:57

by Anshuman Khandual

[permalink] [raw]
Subject: Re: [RFC V1 02/31] mm/mmap: Clarify protection_map[] indices



On 1/27/22 6:09 PM, Mike Rapoport wrote:
> On Mon, Jan 24, 2022 at 06:26:39PM +0530, Anshuman Khandual wrote:
>> protection_map[] maps vm_flags access combinations into page protection
>> values as defined by the platform via the __PXXX and __SXXX macros. The array
>> indices in protection_map[] represent vm_flags access combinations, but they
>> are not very intuitive to derive. This makes them clear and explicit.
>
> The protection_map is going to be removed in one of the next patches, why
> bother with this patch at all?

This makes the transition from protection_map[] into __vm_get_page_prot()
more intuitive, as protection_map[] eventually gets dropped. It helps the
platforms (the first ones subscribing to ARCH_HAS_VM_GET_PAGE_PROT before
this drop) formulate the required switch cases in their vm_get_page_prot().

The existing protection_map[] is not clear in demonstrating how exactly a
vm_flags combination is mapped into a page protection value. This clarifies
the underlying switch before we move on to defining it on each platform.

>
>> Cc: Andrew Morton <[email protected]>
>> Cc: [email protected]
>> Cc: [email protected]
>> Signed-off-by: Anshuman Khandual <[email protected]>
>> ---
>> mm/mmap.c | 18 ++++++++++++++++--
>> 1 file changed, 16 insertions(+), 2 deletions(-)
>>
>> diff --git a/mm/mmap.c b/mm/mmap.c
>> index 1e8fdb0b51ed..254d716220df 100644
>> --- a/mm/mmap.c
>> +++ b/mm/mmap.c
>> @@ -102,8 +102,22 @@ static void unmap_region(struct mm_struct *mm,
>> * x: (yes) yes
>> */
>> pgprot_t protection_map[16] __ro_after_init = {
>> - __P000, __P001, __P010, __P011, __P100, __P101, __P110, __P111,
>> - __S000, __S001, __S010, __S011, __S100, __S101, __S110, __S111
>> + [VM_NONE] = __P000,
>> + [VM_READ] = __P001,
>> + [VM_WRITE] = __P010,
>> + [VM_READ|VM_WRITE] = __P011,
>> + [VM_EXEC] = __P100,
>> + [VM_EXEC|VM_READ] = __P101,
>> + [VM_EXEC|VM_WRITE] = __P110,
>> + [VM_EXEC|VM_READ|VM_WRITE] = __P111,
>> + [VM_SHARED] = __S000,
>> + [VM_SHARED|VM_READ] = __S001,
>> + [VM_SHARED|VM_WRITE] = __S010,
>> + [VM_SHARED|VM_READ|VM_WRITE] = __S011,
>> + [VM_SHARED|VM_EXEC] = __S100,
>> + [VM_SHARED|VM_READ|VM_EXEC] = __S101,
>> + [VM_SHARED|VM_WRITE|VM_EXEC] = __S110,
>> + [VM_SHARED|VM_READ|VM_WRITE|VM_EXEC] = __S111
>> };
>>
>> #ifndef CONFIG_ARCH_HAS_FILTER_PGPROT
>> --
>> 2.25.1
>>
>>
>

2022-02-03 20:30:56

by Mike Rapoport

[permalink] [raw]
Subject: Re: [RFC V1 04/31] powerpc/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT

On Mon, Jan 24, 2022 at 06:26:41PM +0530, Anshuman Khandual wrote:
> This defines and exports a platform specific custom vm_get_page_prot() via
> subscribing ARCH_HAS_VM_GET_PAGE_PROT. Subsequently all the __SXXX and __PXXX
> macros, which are no longer needed, can be dropped. While here, this also
> localizes arch_vm_get_page_prot() as powerpc_vm_get_page_prot().
>
> Cc: Michael Ellerman <[email protected]>
> Cc: Paul Mackerras <[email protected]>
> Cc: [email protected]
> Cc: [email protected]
> Signed-off-by: Anshuman Khandual <[email protected]>
> ---
> arch/powerpc/Kconfig | 1 +
> arch/powerpc/include/asm/mman.h | 3 +-
> arch/powerpc/include/asm/pgtable.h | 19 ------------
> arch/powerpc/mm/mmap.c | 47 ++++++++++++++++++++++++++++++
> 4 files changed, 49 insertions(+), 21 deletions(-)
>
> diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
> index b779603978e1..ddb4a3687c05 100644
> --- a/arch/powerpc/Kconfig
> +++ b/arch/powerpc/Kconfig
> @@ -135,6 +135,7 @@ config PPC
> select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
> select ARCH_HAS_UACCESS_FLUSHCACHE
> select ARCH_HAS_UBSAN_SANITIZE_ALL
> + select ARCH_HAS_VM_GET_PAGE_PROT
> select ARCH_HAVE_NMI_SAFE_CMPXCHG
> select ARCH_KEEP_MEMBLOCK
> select ARCH_MIGHT_HAVE_PC_PARPORT
> diff --git a/arch/powerpc/include/asm/mman.h b/arch/powerpc/include/asm/mman.h
> index 7cb6d18f5cd6..7b10c2031e82 100644
> --- a/arch/powerpc/include/asm/mman.h
> +++ b/arch/powerpc/include/asm/mman.h
> @@ -24,7 +24,7 @@ static inline unsigned long arch_calc_vm_prot_bits(unsigned long prot,
> }
> #define arch_calc_vm_prot_bits(prot, pkey) arch_calc_vm_prot_bits(prot, pkey)
>
> -static inline pgprot_t arch_vm_get_page_prot(unsigned long vm_flags)
> +static inline pgprot_t powerpc_vm_get_page_prot(unsigned long vm_flags)
> {
> #ifdef CONFIG_PPC_MEM_KEYS
> return (vm_flags & VM_SAO) ?
> @@ -34,7 +34,6 @@ static inline pgprot_t arch_vm_get_page_prot(unsigned long vm_flags)
> return (vm_flags & VM_SAO) ? __pgprot(_PAGE_SAO) : __pgprot(0);
> #endif
> }
> -#define arch_vm_get_page_prot(vm_flags) arch_vm_get_page_prot(vm_flags)
>
> static inline bool arch_validate_prot(unsigned long prot, unsigned long addr)
> {
> diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
> index d564d0ecd4cd..3cbb6de20f9d 100644
> --- a/arch/powerpc/include/asm/pgtable.h
> +++ b/arch/powerpc/include/asm/pgtable.h
> @@ -20,25 +20,6 @@ struct mm_struct;
> #include <asm/nohash/pgtable.h>
> #endif /* !CONFIG_PPC_BOOK3S */
>
> -/* Note due to the way vm flags are laid out, the bits are XWR */
> -#define __P000 PAGE_NONE
> -#define __P001 PAGE_READONLY
> -#define __P010 PAGE_COPY
> -#define __P011 PAGE_COPY
> -#define __P100 PAGE_READONLY_X
> -#define __P101 PAGE_READONLY_X
> -#define __P110 PAGE_COPY_X
> -#define __P111 PAGE_COPY_X
> -
> -#define __S000 PAGE_NONE
> -#define __S001 PAGE_READONLY
> -#define __S010 PAGE_SHARED
> -#define __S011 PAGE_SHARED
> -#define __S100 PAGE_READONLY_X
> -#define __S101 PAGE_READONLY_X
> -#define __S110 PAGE_SHARED_X
> -#define __S111 PAGE_SHARED_X
> -
> #ifndef __ASSEMBLY__
>
> #ifndef MAX_PTRS_PER_PGD
> diff --git a/arch/powerpc/mm/mmap.c b/arch/powerpc/mm/mmap.c
> index c475cf810aa8..7f05e7903bd2 100644
> --- a/arch/powerpc/mm/mmap.c
> +++ b/arch/powerpc/mm/mmap.c
> @@ -254,3 +254,50 @@ void arch_pick_mmap_layout(struct mm_struct *mm, struct rlimit *rlim_stack)
> mm->get_unmapped_area = arch_get_unmapped_area_topdown;
> }
> }
> +
> +static inline pgprot_t __vm_get_page_prot(unsigned long vm_flags)
> +{
> + switch (vm_flags & (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)) {
> + case VM_NONE:
> + return PAGE_NONE;
> + case VM_READ:
> + return PAGE_READONLY;
> + case VM_WRITE:
> + return PAGE_COPY;
> + case VM_READ | VM_WRITE:
> + return PAGE_COPY;
> + case VM_EXEC:
> + return PAGE_READONLY_X;
> + case VM_EXEC | VM_READ:
> + return PAGE_READONLY_X;
> + case VM_EXEC | VM_WRITE:
> + return PAGE_COPY_X;
> + case VM_EXEC | VM_READ | VM_WRITE:
> + return PAGE_COPY_X;
> + case VM_SHARED:
> + return PAGE_NONE;
> + case VM_SHARED | VM_READ:
> + return PAGE_READONLY;
> + case VM_SHARED | VM_WRITE:
> + return PAGE_SHARED;
> + case VM_SHARED | VM_READ | VM_WRITE:
> + return PAGE_SHARED;
> + case VM_SHARED | VM_EXEC:
> + return PAGE_READONLY_X;
> + case VM_SHARED | VM_EXEC | VM_READ:
> + return PAGE_READONLY_X;
> + case VM_SHARED | VM_EXEC | VM_WRITE:
> + return PAGE_SHARED_X;
> + case VM_SHARED | VM_EXEC | VM_READ | VM_WRITE:
> + return PAGE_SHARED_X;
> + default:
> + BUILD_BUG();
> + }
> +}
> +
> +pgprot_t vm_get_page_prot(unsigned long vm_flags)
> +{
> + return __pgprot(pgprot_val(__vm_get_page_prot(vm_flags)) |
> + pgprot_val(powerpc_vm_get_page_prot(vm_flags)));

Any reason to keep powerpc_vm_get_page_prot() rather than open code it
here?

This applies to other architectures that implement arch_vm_get_page_prot()
and/or arch_filter_pgprot() as well.

> +}
> +EXPORT_SYMBOL(vm_get_page_prot);
> --
> 2.25.1
>
>

--
Sincerely yours,
Mike.

2022-02-04 07:15:29

by Mike Rapoport

[permalink] [raw]
Subject: Re: [RFC V1 04/31] powerpc/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT

On Fri, Feb 04, 2022 at 08:27:37AM +0530, Anshuman Khandual wrote:
>
> On 2/3/22 11:45 PM, Mike Rapoport wrote:
> > On Mon, Jan 24, 2022 at 06:26:41PM +0530, Anshuman Khandual wrote:
> >> This defines and exports a platform specific custom vm_get_page_prot() via
> >> subscribing ARCH_HAS_VM_GET_PAGE_PROT. Subsequently all the __SXXX and __PXXX
> >> macros, which are no longer needed, can be dropped. While here, this also
> >> localizes arch_vm_get_page_prot() as powerpc_vm_get_page_prot().
> >>
> >> Cc: Michael Ellerman <[email protected]>
> >> Cc: Paul Mackerras <[email protected]>
> >> Cc: [email protected]
> >> Cc: [email protected]
> >> Signed-off-by: Anshuman Khandual <[email protected]>
> >> ---
> >> arch/powerpc/Kconfig | 1 +
> >> arch/powerpc/include/asm/mman.h | 3 +-
> >> arch/powerpc/include/asm/pgtable.h | 19 ------------
> >> arch/powerpc/mm/mmap.c | 47 ++++++++++++++++++++++++++++++
> >> 4 files changed, 49 insertions(+), 21 deletions(-)
> >>
> >> diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
> >> index b779603978e1..ddb4a3687c05 100644
> >> --- a/arch/powerpc/Kconfig
> >> +++ b/arch/powerpc/Kconfig
> >> @@ -135,6 +135,7 @@ config PPC
> >> select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
> >> select ARCH_HAS_UACCESS_FLUSHCACHE
> >> select ARCH_HAS_UBSAN_SANITIZE_ALL
> >> + select ARCH_HAS_VM_GET_PAGE_PROT
> >> select ARCH_HAVE_NMI_SAFE_CMPXCHG
> >> select ARCH_KEEP_MEMBLOCK
> >> select ARCH_MIGHT_HAVE_PC_PARPORT
> >> diff --git a/arch/powerpc/include/asm/mman.h b/arch/powerpc/include/asm/mman.h
> >> index 7cb6d18f5cd6..7b10c2031e82 100644
> >> --- a/arch/powerpc/include/asm/mman.h
> >> +++ b/arch/powerpc/include/asm/mman.h
> >> @@ -24,7 +24,7 @@ static inline unsigned long arch_calc_vm_prot_bits(unsigned long prot,
> >> }
> >> #define arch_calc_vm_prot_bits(prot, pkey) arch_calc_vm_prot_bits(prot, pkey)
> >>
> >> -static inline pgprot_t arch_vm_get_page_prot(unsigned long vm_flags)
> >> +static inline pgprot_t powerpc_vm_get_page_prot(unsigned long vm_flags)
> >> {
> >> #ifdef CONFIG_PPC_MEM_KEYS
> >> return (vm_flags & VM_SAO) ?
> >> @@ -34,7 +34,6 @@ static inline pgprot_t arch_vm_get_page_prot(unsigned long vm_flags)
> >> return (vm_flags & VM_SAO) ? __pgprot(_PAGE_SAO) : __pgprot(0);
> >> #endif
> >> }
> >> -#define arch_vm_get_page_prot(vm_flags) arch_vm_get_page_prot(vm_flags)
> >>
> >> static inline bool arch_validate_prot(unsigned long prot, unsigned long addr)
> >> {
> >> diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
> >> index d564d0ecd4cd..3cbb6de20f9d 100644
> >> --- a/arch/powerpc/include/asm/pgtable.h
> >> +++ b/arch/powerpc/include/asm/pgtable.h
> >> @@ -20,25 +20,6 @@ struct mm_struct;
> >> #include <asm/nohash/pgtable.h>
> >> #endif /* !CONFIG_PPC_BOOK3S */
> >>
> >> -/* Note due to the way vm flags are laid out, the bits are XWR */
> >> -#define __P000 PAGE_NONE
> >> -#define __P001 PAGE_READONLY
> >> -#define __P010 PAGE_COPY
> >> -#define __P011 PAGE_COPY
> >> -#define __P100 PAGE_READONLY_X
> >> -#define __P101 PAGE_READONLY_X
> >> -#define __P110 PAGE_COPY_X
> >> -#define __P111 PAGE_COPY_X
> >> -
> >> -#define __S000 PAGE_NONE
> >> -#define __S001 PAGE_READONLY
> >> -#define __S010 PAGE_SHARED
> >> -#define __S011 PAGE_SHARED
> >> -#define __S100 PAGE_READONLY_X
> >> -#define __S101 PAGE_READONLY_X
> >> -#define __S110 PAGE_SHARED_X
> >> -#define __S111 PAGE_SHARED_X
> >> -
> >> #ifndef __ASSEMBLY__
> >>
> >> #ifndef MAX_PTRS_PER_PGD
> >> diff --git a/arch/powerpc/mm/mmap.c b/arch/powerpc/mm/mmap.c
> >> index c475cf810aa8..7f05e7903bd2 100644
> >> --- a/arch/powerpc/mm/mmap.c
> >> +++ b/arch/powerpc/mm/mmap.c
> >> @@ -254,3 +254,50 @@ void arch_pick_mmap_layout(struct mm_struct *mm, struct rlimit *rlim_stack)
> >> mm->get_unmapped_area = arch_get_unmapped_area_topdown;
> >> }
> >> }
> >> +
> >> +static inline pgprot_t __vm_get_page_prot(unsigned long vm_flags)
> >> +{
> >> + switch (vm_flags & (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)) {
> >> + case VM_NONE:
> >> + return PAGE_NONE;
> >> + case VM_READ:
> >> + return PAGE_READONLY;
> >> + case VM_WRITE:
> >> + return PAGE_COPY;
> >> + case VM_READ | VM_WRITE:
> >> + return PAGE_COPY;
> >> + case VM_EXEC:
> >> + return PAGE_READONLY_X;
> >> + case VM_EXEC | VM_READ:
> >> + return PAGE_READONLY_X;
> >> + case VM_EXEC | VM_WRITE:
> >> + return PAGE_COPY_X;
> >> + case VM_EXEC | VM_READ | VM_WRITE:
> >> + return PAGE_COPY_X;
> >> + case VM_SHARED:
> >> + return PAGE_NONE;
> >> + case VM_SHARED | VM_READ:
> >> + return PAGE_READONLY;
> >> + case VM_SHARED | VM_WRITE:
> >> + return PAGE_SHARED;
> >> + case VM_SHARED | VM_READ | VM_WRITE:
> >> + return PAGE_SHARED;
> >> + case VM_SHARED | VM_EXEC:
> >> + return PAGE_READONLY_X;
> >> + case VM_SHARED | VM_EXEC | VM_READ:
> >> + return PAGE_READONLY_X;
> >> + case VM_SHARED | VM_EXEC | VM_WRITE:
> >> + return PAGE_SHARED_X;
> >> + case VM_SHARED | VM_EXEC | VM_READ | VM_WRITE:
> >> + return PAGE_SHARED_X;
> >> + default:
> >> + BUILD_BUG();
> >> + }
> >> +}
> >> +
> >> +pgprot_t vm_get_page_prot(unsigned long vm_flags)
> >> +{
> >> + return __pgprot(pgprot_val(__vm_get_page_prot(vm_flags)) |
> >> + pgprot_val(powerpc_vm_get_page_prot(vm_flags)));
> > Any reason to keep powerpc_vm_get_page_prot() rather than open code it
> > here?
> >
> > This applies to other architectures that implement arch_vm_get_page_prot()
> > and/or arch_filter_pgprot() as well.
>
> Just to minimize the code churn! But I will be happy to open-code them
> here (and in other platforms) if that is preferred.

I think this will be clearer because all the processing will be in one place.
Besides, this way include/asm/pgtable.h becomes shorter and less crowded.

--
Sincerely yours,
Mike.

2022-02-04 14:13:06

by Anshuman Khandual

[permalink] [raw]
Subject: Re: [RFC V1 31/31] mm/mmap: Define macros for vm_flags access permission combinations



On 1/24/22 6:27 PM, Anshuman Khandual wrote:
> These macros will be useful in cleaning up the all those switch statements
> in vm_get_page_prot() across all platforms.
>
> Cc: Andrew Morton <[email protected]>
> Cc: [email protected]
> Cc: [email protected]
> Signed-off-by: Anshuman Khandual <[email protected]>
> ---
> include/linux/mm.h | 39 +++++++++++++++++++++++++++++++++++++++
> 1 file changed, 39 insertions(+)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 6c0844b99b3e..b3691eeec500 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -2828,6 +2828,45 @@ static inline bool range_in_vma(struct vm_area_struct *vma,
> return (vma && vma->vm_start <= start && end <= vma->vm_end);
> }
>
> +/*
> + * Access permission related vm_flags combination is used to map into
> + * platform defined page protection flags. This enumeration helps in
> + * abstracting out possible indices after vm_flags is probed for all
> + * access permissions i.e. (VM_SHARED | VM_EXEC | VM_READ | VM_WRITE).
> + *
> + * VM_EXEC ---------------------|
> + * |
> + * VM_WRITE ---------------| |
> + * | |
> + * VM_READ -----------| | |
> + * | | |
> + * VM_SHARED ----| | | |
> + * | | | |
> + * v v v v
> + * VMFLAGS_IDX_(S|X)(R|X)(W|X)(E|X)
> + *
> + * X - Indicates that the access flag is absent
> + */
> +enum vmflags_idx {
> + VMFLAGS_IDX_XXXX, /* (VM_NONE) */
> + VMFLAGS_IDX_XRXX, /* (VM_READ) */
> + VMFLAGS_IDX_XXWX, /* (VM_WRITE) */
> + VMFLAGS_IDX_XRWX, /* (VM_READ | VM_WRITE) */
> + VMFLAGS_IDX_XXXE, /* (VM_EXEC) */
> + VMFLAGS_IDX_XRXE, /* (VM_EXEC | VM_READ) */
> + VMFLAGS_IDX_XXWE, /* (VM_EXEC | VM_WRITE) */
> + VMFLAGS_IDX_XRWE, /* (VM_EXEC | VM_READ | VM_WRITE) */
> + VMFLAGS_IDX_SXXX, /* (VM_SHARED | VM_NONE) */
> + VMFLAGS_IDX_SRXX, /* (VM_SHARED | VM_READ) */
> + VMFLAGS_IDX_SXWX, /* (VM_SHARED | VM_WRITE) */
> + VMFLAGS_IDX_SRWX, /* (VM_SHARED | VM_READ | VM_WRITE) */
> + VMFLAGS_IDX_SXXE, /* (VM_SHARED | VM_EXEC) */
> + VMFLAGS_IDX_SRXE, /* (VM_SHARED | VM_EXEC | VM_READ) */
> + VMFLAGS_IDX_SXWE, /* (VM_SHARED | VM_EXEC | VM_WRITE) */
> + VMFLAGS_IDX_SRWE, /* (VM_SHARED | VM_EXEC | VM_READ | VM_WRITE) */
> + VMFLAGS_IDX_MAX
> +};

Defining a platform-specific vm_get_page_prot() involves a switch statement
with various vm_flags access combinations as cases. Hence I am wondering
whether it would help to use the above macros as case labels, instead of
the existing combinations spelled out with '|' operators. I could move this
patch earlier in the series and convert all platform switch cases. Please
do suggest. Thank you.

- Anshuman

2022-02-04 21:17:50

by Anshuman Khandual

Subject: Re: [RFC V1 04/31] powerpc/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT



On 2/3/22 11:45 PM, Mike Rapoport wrote:
> On Mon, Jan 24, 2022 at 06:26:41PM +0530, Anshuman Khandual wrote:
>> This defines and exports a platform specific custom vm_get_page_prot() via
>> subscribing ARCH_HAS_VM_GET_PAGE_PROT. Subsequently all __SXXX and __PXXX
>> macros, which are no longer needed, can be dropped. While here, this also
>> localizes arch_vm_get_page_prot() as powerpc_vm_get_page_prot().
>>
>> Cc: Michael Ellerman <[email protected]>
>> Cc: Paul Mackerras <[email protected]>
>> Cc: [email protected]
>> Cc: [email protected]
>> Signed-off-by: Anshuman Khandual <[email protected]>
>> ---
>> arch/powerpc/Kconfig | 1 +
>> arch/powerpc/include/asm/mman.h | 3 +-
>> arch/powerpc/include/asm/pgtable.h | 19 ------------
>> arch/powerpc/mm/mmap.c | 47 ++++++++++++++++++++++++++++++
>> 4 files changed, 49 insertions(+), 21 deletions(-)
>>
>> diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
>> index b779603978e1..ddb4a3687c05 100644
>> --- a/arch/powerpc/Kconfig
>> +++ b/arch/powerpc/Kconfig
>> @@ -135,6 +135,7 @@ config PPC
>> select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
>> select ARCH_HAS_UACCESS_FLUSHCACHE
>> select ARCH_HAS_UBSAN_SANITIZE_ALL
>> + select ARCH_HAS_VM_GET_PAGE_PROT
>> select ARCH_HAVE_NMI_SAFE_CMPXCHG
>> select ARCH_KEEP_MEMBLOCK
>> select ARCH_MIGHT_HAVE_PC_PARPORT
>> diff --git a/arch/powerpc/include/asm/mman.h b/arch/powerpc/include/asm/mman.h
>> index 7cb6d18f5cd6..7b10c2031e82 100644
>> --- a/arch/powerpc/include/asm/mman.h
>> +++ b/arch/powerpc/include/asm/mman.h
>> @@ -24,7 +24,7 @@ static inline unsigned long arch_calc_vm_prot_bits(unsigned long prot,
>> }
>> #define arch_calc_vm_prot_bits(prot, pkey) arch_calc_vm_prot_bits(prot, pkey)
>>
>> -static inline pgprot_t arch_vm_get_page_prot(unsigned long vm_flags)
>> +static inline pgprot_t powerpc_vm_get_page_prot(unsigned long vm_flags)
>> {
>> #ifdef CONFIG_PPC_MEM_KEYS
>> return (vm_flags & VM_SAO) ?
>> @@ -34,7 +34,6 @@ static inline pgprot_t arch_vm_get_page_prot(unsigned long vm_flags)
>> return (vm_flags & VM_SAO) ? __pgprot(_PAGE_SAO) : __pgprot(0);
>> #endif
>> }
>> -#define arch_vm_get_page_prot(vm_flags) arch_vm_get_page_prot(vm_flags)
>>
>> static inline bool arch_validate_prot(unsigned long prot, unsigned long addr)
>> {
>> diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
>> index d564d0ecd4cd..3cbb6de20f9d 100644
>> --- a/arch/powerpc/include/asm/pgtable.h
>> +++ b/arch/powerpc/include/asm/pgtable.h
>> @@ -20,25 +20,6 @@ struct mm_struct;
>> #include <asm/nohash/pgtable.h>
>> #endif /* !CONFIG_PPC_BOOK3S */
>>
>> -/* Note due to the way vm flags are laid out, the bits are XWR */
>> -#define __P000 PAGE_NONE
>> -#define __P001 PAGE_READONLY
>> -#define __P010 PAGE_COPY
>> -#define __P011 PAGE_COPY
>> -#define __P100 PAGE_READONLY_X
>> -#define __P101 PAGE_READONLY_X
>> -#define __P110 PAGE_COPY_X
>> -#define __P111 PAGE_COPY_X
>> -
>> -#define __S000 PAGE_NONE
>> -#define __S001 PAGE_READONLY
>> -#define __S010 PAGE_SHARED
>> -#define __S011 PAGE_SHARED
>> -#define __S100 PAGE_READONLY_X
>> -#define __S101 PAGE_READONLY_X
>> -#define __S110 PAGE_SHARED_X
>> -#define __S111 PAGE_SHARED_X
>> -
>> #ifndef __ASSEMBLY__
>>
>> #ifndef MAX_PTRS_PER_PGD
>> diff --git a/arch/powerpc/mm/mmap.c b/arch/powerpc/mm/mmap.c
>> index c475cf810aa8..7f05e7903bd2 100644
>> --- a/arch/powerpc/mm/mmap.c
>> +++ b/arch/powerpc/mm/mmap.c
>> @@ -254,3 +254,50 @@ void arch_pick_mmap_layout(struct mm_struct *mm, struct rlimit *rlim_stack)
>> mm->get_unmapped_area = arch_get_unmapped_area_topdown;
>> }
>> }
>> +
>> +static inline pgprot_t __vm_get_page_prot(unsigned long vm_flags)
>> +{
>> + switch (vm_flags & (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)) {
>> + case VM_NONE:
>> + return PAGE_NONE;
>> + case VM_READ:
>> + return PAGE_READONLY;
>> + case VM_WRITE:
>> + return PAGE_COPY;
>> + case VM_READ | VM_WRITE:
>> + return PAGE_COPY;
>> + case VM_EXEC:
>> + return PAGE_READONLY_X;
>> + case VM_EXEC | VM_READ:
>> + return PAGE_READONLY_X;
>> + case VM_EXEC | VM_WRITE:
>> + return PAGE_COPY_X;
>> + case VM_EXEC | VM_READ | VM_WRITE:
>> + return PAGE_COPY_X;
>> + case VM_SHARED:
>> + return PAGE_NONE;
>> + case VM_SHARED | VM_READ:
>> + return PAGE_READONLY;
>> + case VM_SHARED | VM_WRITE:
>> + return PAGE_SHARED;
>> + case VM_SHARED | VM_READ | VM_WRITE:
>> + return PAGE_SHARED;
>> + case VM_SHARED | VM_EXEC:
>> + return PAGE_READONLY_X;
>> + case VM_SHARED | VM_EXEC | VM_READ:
>> + return PAGE_READONLY_X;
>> + case VM_SHARED | VM_EXEC | VM_WRITE:
>> + return PAGE_SHARED_X;
>> + case VM_SHARED | VM_EXEC | VM_READ | VM_WRITE:
>> + return PAGE_SHARED_X;
>> + default:
>> + BUILD_BUG();
>> + }
>> +}
>> +
>> +pgprot_t vm_get_page_prot(unsigned long vm_flags)
>> +{
>> + return __pgprot(pgprot_val(__vm_get_page_prot(vm_flags)) |
>> + pgprot_val(powerpc_vm_get_page_prot(vm_flags)));
> Any reason to keep powerpc_vm_get_page_prot() rather than open code it
> here?
>
> This applies to other architectures that implement arch_vm_get_page_prot()
> and/or arch_filter_pgprot() as well.

Just to minimize the code churn! But I will be happy to open code them
here (and on other platforms) if that is preferred.

2022-02-07 11:23:38

by Stafford Horne

Subject: Re: [OpenRISC] [RFC V1 22/31] openrisc/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT

On Mon, Jan 24, 2022 at 06:26:59PM +0530, Anshuman Khandual wrote:
> This defines and exports a platform specific custom vm_get_page_prot() via
> subscribing ARCH_HAS_VM_GET_PAGE_PROT. Subsequently all __SXXX and __PXXX
> macros, which are no longer needed, can be dropped.
>
> Cc: Jonas Bonn <[email protected]>
> Cc: [email protected]
> Cc: [email protected]
> Signed-off-by: Anshuman Khandual <[email protected]>

For one thing this is easier to read than the __P000 codes.

Acked-by: Stafford Horne <[email protected]>

> ---
> arch/openrisc/Kconfig | 1 +
> arch/openrisc/include/asm/pgtable.h | 18 -------------
> arch/openrisc/mm/init.c | 41 +++++++++++++++++++++++++++++
> 3 files changed, 42 insertions(+), 18 deletions(-)
>
> diff --git a/arch/openrisc/Kconfig b/arch/openrisc/Kconfig
> index f724b3f1aeed..842a61426816 100644
> --- a/arch/openrisc/Kconfig
> +++ b/arch/openrisc/Kconfig
> @@ -10,6 +10,7 @@ config OPENRISC
> select ARCH_HAS_DMA_SET_UNCACHED
> select ARCH_HAS_DMA_CLEAR_UNCACHED
> select ARCH_HAS_SYNC_DMA_FOR_DEVICE
> + select ARCH_HAS_VM_GET_PAGE_PROT
> select COMMON_CLK
> select OF
> select OF_EARLY_FLATTREE
> diff --git a/arch/openrisc/include/asm/pgtable.h b/arch/openrisc/include/asm/pgtable.h
> index cdd657f80bfa..fe686c4b7065 100644
> --- a/arch/openrisc/include/asm/pgtable.h
> +++ b/arch/openrisc/include/asm/pgtable.h
> @@ -176,24 +176,6 @@ extern void paging_init(void);
> __pgprot(_PAGE_ALL | _PAGE_SRE | _PAGE_SWE \
> | _PAGE_SHARED | _PAGE_DIRTY | _PAGE_EXEC | _PAGE_CI)
>
> -#define __P000 PAGE_NONE
> -#define __P001 PAGE_READONLY_X
> -#define __P010 PAGE_COPY
> -#define __P011 PAGE_COPY_X
> -#define __P100 PAGE_READONLY
> -#define __P101 PAGE_READONLY_X
> -#define __P110 PAGE_COPY
> -#define __P111 PAGE_COPY_X
> -
> -#define __S000 PAGE_NONE
> -#define __S001 PAGE_READONLY_X
> -#define __S010 PAGE_SHARED
> -#define __S011 PAGE_SHARED_X
> -#define __S100 PAGE_READONLY
> -#define __S101 PAGE_READONLY_X
> -#define __S110 PAGE_SHARED
> -#define __S111 PAGE_SHARED_X
> -
> /* zero page used for uninitialized stuff */
> extern unsigned long empty_zero_page[2048];
> #define ZERO_PAGE(vaddr) (virt_to_page(empty_zero_page))
> diff --git a/arch/openrisc/mm/init.c b/arch/openrisc/mm/init.c
> index 97305bde1b16..c9f5e7d6bb59 100644
> --- a/arch/openrisc/mm/init.c
> +++ b/arch/openrisc/mm/init.c
> @@ -210,3 +210,44 @@ void __init mem_init(void)
> mem_init_done = 1;
> return;
> }
> +
> +pgprot_t vm_get_page_prot(unsigned long vm_flags)
> +{
> + switch (vm_flags & (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)) {
> + case VM_NONE:
> + return PAGE_NONE;
> + case VM_READ:
> + return PAGE_READONLY_X;
> + case VM_WRITE:
> + return PAGE_COPY;
> + case VM_READ | VM_WRITE:
> + return PAGE_COPY_X;
> + case VM_EXEC:
> + return PAGE_READONLY;
> + case VM_EXEC | VM_READ:
> + return PAGE_READONLY_X;
> + case VM_EXEC | VM_WRITE:
> + return PAGE_COPY;
> + case VM_EXEC | VM_READ | VM_WRITE:
> + return PAGE_COPY_X;
> + case VM_SHARED:
> + return PAGE_NONE;
> + case VM_SHARED | VM_READ:
> + return PAGE_READONLY_X;
> + case VM_SHARED | VM_WRITE:
> + return PAGE_SHARED;
> + case VM_SHARED | VM_READ | VM_WRITE:
> + return PAGE_SHARED_X;
> + case VM_SHARED | VM_EXEC:
> + return PAGE_READONLY;
> + case VM_SHARED | VM_EXEC | VM_READ:
> + return PAGE_READONLY_X;
> + case VM_SHARED | VM_EXEC | VM_WRITE:
> + return PAGE_SHARED;
> + case VM_SHARED | VM_EXEC | VM_READ | VM_WRITE:
> + return PAGE_SHARED_X;
> + default:
> + BUILD_BUG();
> + }
> +}
> +EXPORT_SYMBOL(vm_get_page_prot);
> --
> 2.25.1
>
> _______________________________________________
> OpenRISC mailing list
> [email protected]
> https://lists.librecores.org/listinfo/openrisc

2022-02-07 15:33:36

by Firo Yang

Subject: Re: [RFC V1 02/31] mm/mmap: Clarify protection_map[] indices

The 01/24/2022 18:26, Anshuman Khandual wrote:
> protection_map[] maps vm_flags access combinations into page protection
> value as defined by the platform via __PXXX and __SXXX macros. The array
> indices in protection_map[] represent vm_flags access combinations, but
> they are not very intuitive to derive. This makes them clear and explicit.
>
> Cc: Andrew Morton <[email protected]>
> Cc: [email protected]
> Cc: [email protected]
> Signed-off-by: Anshuman Khandual <[email protected]>
> ---
> mm/mmap.c | 18 ++++++++++++++++--
> 1 file changed, 16 insertions(+), 2 deletions(-)
>
> diff --git a/mm/mmap.c b/mm/mmap.c
> index 1e8fdb0b51ed..254d716220df 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -102,8 +102,22 @@ static void unmap_region(struct mm_struct *mm,
> * x: (yes) yes
> */
> pgprot_t protection_map[16] __ro_after_init = {
> - __P000, __P001, __P010, __P011, __P100, __P101, __P110, __P111,
> - __S000, __S001, __S010, __S011, __S100, __S101, __S110, __S111
> + [VM_NONE] = __P000,
> + [VM_READ] = __P001,
> + [VM_WRITE] = __P010,
> + [VM_READ|VM_WRITE] = __P011,
> + [VM_EXEC] = __P100,
> + [VM_EXEC|VM_READ] = __P101,
> + [VM_EXEC|VM_WRITE] = __P110,
> + [VM_EXEC|VM_READ|VM_WRITE] = __P111,
> + [VM_SHARED] = __S000,
> + [VM_SHARED|VM_READ] = __S001,
> + [VM_SHARED|VM_WRITE] = __S010,
> + [VM_SHARED|VM_READ|VM_WRITE] = __S011,
> + [VM_SHARED|VM_EXEC] = __S100,
> + [VM_SHARED|VM_READ|VM_EXEC] = __S101,
> + [VM_SHARED|VM_WRITE|VM_EXEC] = __S110,
> + [VM_SHARED|VM_READ|VM_WRITE|VM_EXEC] = __S111

Just a little bit picky :)
Would you mind rearranging the vm_flags access combinations in the order
in which the access bits appear in __SXXX or __PXXX? For example, change the following:

[VM_SHARED|VM_READ|VM_WRITE|VM_EXEC] = __S111
to
[VM_SHARED | VM_EXEC | VM_WRITE | VM_READ] = __S111

I think that would be clearer to read.

Best,
// Firo

2022-02-09 06:11:14

by Anshuman Khandual

Subject: Re: [RFC V1 02/31] mm/mmap: Clarify protection_map[] indices



On 2/5/22 2:40 PM, Firo Yang wrote:
> The 01/24/2022 18:26, Anshuman Khandual wrote:
>> protection_map[] maps vm_flags access combinations into page protection
>> value as defined by the platform via __PXXX and __SXXX macros. The array
>> indices in protection_map[] represent vm_flags access combinations, but
>> they are not very intuitive to derive. This makes them clear and explicit.
>>
>> Cc: Andrew Morton <[email protected]>
>> Cc: [email protected]
>> Cc: [email protected]
>> Signed-off-by: Anshuman Khandual <[email protected]>
>> ---
>> mm/mmap.c | 18 ++++++++++++++++--
>> 1 file changed, 16 insertions(+), 2 deletions(-)
>>
>> diff --git a/mm/mmap.c b/mm/mmap.c
>> index 1e8fdb0b51ed..254d716220df 100644
>> --- a/mm/mmap.c
>> +++ b/mm/mmap.c
>> @@ -102,8 +102,22 @@ static void unmap_region(struct mm_struct *mm,
>> * x: (yes) yes
>> */
>> pgprot_t protection_map[16] __ro_after_init = {
>> - __P000, __P001, __P010, __P011, __P100, __P101, __P110, __P111,
>> - __S000, __S001, __S010, __S011, __S100, __S101, __S110, __S111
>> + [VM_NONE] = __P000,
>> + [VM_READ] = __P001,
>> + [VM_WRITE] = __P010,
>> + [VM_READ|VM_WRITE] = __P011,
>> + [VM_EXEC] = __P100,
>> + [VM_EXEC|VM_READ] = __P101,
>> + [VM_EXEC|VM_WRITE] = __P110,
>> + [VM_EXEC|VM_READ|VM_WRITE] = __P111,
>> + [VM_SHARED] = __S000,
>> + [VM_SHARED|VM_READ] = __S001,
>> + [VM_SHARED|VM_WRITE] = __S010,
>> + [VM_SHARED|VM_READ|VM_WRITE] = __S011,
>> + [VM_SHARED|VM_EXEC] = __S100,
>> + [VM_SHARED|VM_READ|VM_EXEC] = __S101,
>> + [VM_SHARED|VM_WRITE|VM_EXEC] = __S110,
>> + [VM_SHARED|VM_READ|VM_WRITE|VM_EXEC] = __S111
>
> Just a little bit picky :)
> Would you mind rearranging the vm_flags access combinations in the order
> in which the access bits appear in __SXXX or __PXXX? For example, change the following:
>
> [VM_SHARED|VM_READ|VM_WRITE|VM_EXEC] = __S111
> to
> [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ] = __S111
>
> I think that would be clearer to read.

So the vm_flags combinations here (and likewise on the platforms)
should be ordered like the following ..

[VM_NONE]
[VM_READ]
[VM_WRITE]
[VM_WRITE | VM_READ]
[VM_EXEC]
[VM_EXEC | VM_READ]
[VM_EXEC | VM_WRITE]
[VM_EXEC | VM_WRITE | VM_READ]
[VM_SHARED]
[VM_SHARED | VM_READ]
[VM_SHARED | VM_WRITE]
[VM_SHARED | VM_WRITE | VM_READ]
[VM_SHARED | VM_EXEC]
[VM_SHARED | VM_EXEC | VM_READ]
[VM_SHARED | VM_EXEC | VM_WRITE]
[VM_SHARED | VM_EXEC | VM_WRITE | VM_READ]

Implying the relative position of these flags with respect to each other.

[VM_SHARED] [VM_EXEC] [VM_WRITE] [VM_READ]

This makes sense, will change the series accordingly.

- Anshuman