2020-04-30 20:41:02

by Ira Weiny

Subject: [PATCH V1 00/10] Remove duplicated kmap code

From: Ira Weiny <[email protected]>

The kmap infrastructure has been copied almost verbatim to every architecture.
This series consolidates obvious duplicated code by defining core functions
which call into the architectures only when needed.

Some of the k[un]map_atomic() implementations are similar, but not similar
enough to warrant further changes.

In addition, we remove a duplicate implementation of kmap_atomic() in DRM.

Testing was done by 0day to cover all the architectures I can't readily
build/test.
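
For orientation, the consolidation follows this general shape (a simplified
sketch; the per-patch diffs are authoritative). The generic header takes over
everything the architectures had in common and calls a small arch hook only
where behaviour genuinely differs:

/* include/linux/highmem.h -- common, arch-independent part (sketch) */
static inline void *kmap_atomic(struct page *page)
{
        preempt_disable();
        pagefault_disable();
        if (!PageHighMem(page))
                return page_address(page);      /* lowmem: no arch help needed */
        return kmap_atomic_high(page);          /* highmem: arch-specific mapping */
}

/* arch/<arch>/mm/highmem.c -- only this hook remains per architecture */
void *kmap_atomic_high(struct page *page);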

---
Changes from V0:
rebase to 5.7-rc4
Define kmap_flush_tlb() and make kmap() truly arch-independent.
Redefine the k[un]map_atomic_* code to call into the architectures for
highmem pages
Ensure all architectures define kmap_prot, use it appropriately, and
define kmap_atomic_prot()
Remove drm implementation of kmap_atomic()


Ira Weiny (10):
arch/kmap: Remove BUG_ON()
arch/xtensa: Move kmap build bug out of the way
arch/kmap: Remove redundant arch specific kmaps
arch/kunmap: Remove duplicate kunmap implementations
arch/kmap_atomic: Consolidate duplicate code
arch/kunmap_atomic: Consolidate duplicate code
arch/kmap: Ensure kmap_prot visibility
arch/kmap: Don't hard code kmap_prot values
arch/kmap: Define kmap_atomic_prot() for all arch's
drm: Remove drm specific kmap_atomic code

arch/arc/include/asm/highmem.h | 16 +------
arch/arc/mm/highmem.c | 28 +++----------
arch/arm/include/asm/highmem.h | 9 +---
arch/arm/mm/highmem.c | 35 +++-------------
arch/csky/include/asm/highmem.h | 11 ++---
arch/csky/mm/highmem.c | 43 +++++--------------
arch/microblaze/include/asm/highmem.h | 29 ++-----------
arch/microblaze/mm/highmem.c | 16 ++-----
arch/microblaze/mm/init.c | 3 --
arch/mips/include/asm/highmem.h | 11 ++---
arch/mips/mm/cache.c | 6 +--
arch/mips/mm/highmem.c | 49 ++++------------------
arch/nds32/include/asm/highmem.h | 9 +---
arch/nds32/mm/highmem.c | 39 +++--------------
arch/parisc/include/asm/cacheflush.h | 4 +-
arch/powerpc/include/asm/highmem.h | 30 ++------------
arch/powerpc/mm/highmem.c | 21 ++--------
arch/powerpc/mm/mem.c | 3 --
arch/sparc/include/asm/highmem.h | 23 +---------
arch/sparc/mm/highmem.c | 18 +++-----
arch/x86/include/asm/highmem.h | 11 +----
arch/x86/mm/highmem_32.c | 50 ++--------------------
arch/xtensa/include/asm/highmem.h | 28 +------------
arch/xtensa/mm/highmem.c | 23 +++++-----
drivers/gpu/drm/ttm/ttm_bo_util.c | 56 ++-----------------------
drivers/gpu/drm/vmwgfx/vmwgfx_blit.c | 16 +++----
include/drm/ttm/ttm_bo_api.h | 4 --
include/linux/highmem.h | 60 +++++++++++++++++++++++++--
28 files changed, 159 insertions(+), 492 deletions(-)

--
2.25.1


2020-04-30 20:41:11

by Ira Weiny

Subject: [PATCH V1 07/10] arch/kmap: Ensure kmap_prot visibility

From: Ira Weiny <[email protected]>

We want to support kmap_atomic_prot() on all architectures, and it makes
sense to define kmap_atomic() to use the default kmap_prot.

So ensure that all architectures have a globally available kmap_prot,
either as a #define or an exported symbol.
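
Concretely (a sketch of the end state, not part of the diff below): most
architectures can make kmap_prot a compile-time constant, while sparc computes
it at boot and therefore keeps the variable but now exports it:

/* Most architectures (e.g. microblaze, powerpc): a compile-time constant */
#define kmap_prot PAGE_KERNEL

/* sparc computes the value during kmap_init():
 *      kmap_prot = __pgprot(SRMMU_ET_PTE | SRMMU_PRIV | SRMMU_CACHE);
 * so the variable stays and is now exported for generic users */
pgprot_t kmap_prot;
EXPORT_SYMBOL(kmap_prot);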

Signed-off-by: Ira Weiny <[email protected]>
---
arch/microblaze/include/asm/highmem.h | 2 +-
arch/microblaze/mm/init.c | 3 ---
arch/powerpc/include/asm/highmem.h | 2 +-
arch/powerpc/mm/mem.c | 3 ---
arch/sparc/mm/highmem.c | 1 +
5 files changed, 3 insertions(+), 8 deletions(-)

diff --git a/arch/microblaze/include/asm/highmem.h b/arch/microblaze/include/asm/highmem.h
index 5fc56b0107be..66521fdc3a47 100644
--- a/arch/microblaze/include/asm/highmem.h
+++ b/arch/microblaze/include/asm/highmem.h
@@ -25,8 +25,8 @@
#include <linux/uaccess.h>
#include <asm/fixmap.h>

+#define kmap_prot PAGE_KERNEL
extern pte_t *kmap_pte;
-extern pgprot_t kmap_prot;
extern pte_t *pkmap_page_table;

/*
diff --git a/arch/microblaze/mm/init.c b/arch/microblaze/mm/init.c
index 1ffbfa96b9b8..a467686c13af 100644
--- a/arch/microblaze/mm/init.c
+++ b/arch/microblaze/mm/init.c
@@ -49,8 +49,6 @@ unsigned long lowmem_size;
#ifdef CONFIG_HIGHMEM
pte_t *kmap_pte;
EXPORT_SYMBOL(kmap_pte);
-pgprot_t kmap_prot;
-EXPORT_SYMBOL(kmap_prot);

static inline pte_t *virt_to_kpte(unsigned long vaddr)
{
@@ -68,7 +66,6 @@ static void __init highmem_init(void)
pkmap_page_table = virt_to_kpte(PKMAP_BASE);

kmap_pte = virt_to_kpte(__fix_to_virt(FIX_KMAP_BEGIN));
- kmap_prot = PAGE_KERNEL;
}

static void highmem_setup(void)
diff --git a/arch/powerpc/include/asm/highmem.h b/arch/powerpc/include/asm/highmem.h
index 1845fbd7ce61..d264aebcaa9b 100644
--- a/arch/powerpc/include/asm/highmem.h
+++ b/arch/powerpc/include/asm/highmem.h
@@ -29,8 +29,8 @@
#include <asm/page.h>
#include <asm/fixmap.h>

+#define kmap_prot PAGE_KERNEL
extern pte_t *kmap_pte;
-extern pgprot_t kmap_prot;
extern pte_t *pkmap_page_table;

/*
diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
index 041ed7cfd341..3f642b058731 100644
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/mem.c
@@ -64,8 +64,6 @@ bool init_mem_is_free;
#ifdef CONFIG_HIGHMEM
pte_t *kmap_pte;
EXPORT_SYMBOL(kmap_pte);
-pgprot_t kmap_prot;
-EXPORT_SYMBOL(kmap_prot);
#endif

pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
@@ -245,7 +243,6 @@ void __init paging_init(void)
pkmap_page_table = virt_to_kpte(PKMAP_BASE);

kmap_pte = virt_to_kpte(__fix_to_virt(FIX_KMAP_BEGIN));
- kmap_prot = PAGE_KERNEL;
#endif /* CONFIG_HIGHMEM */

printk(KERN_DEBUG "Top of RAM: 0x%llx, Total RAM: 0x%llx\n",
diff --git a/arch/sparc/mm/highmem.c b/arch/sparc/mm/highmem.c
index 469786bc430f..9f06d75e88e1 100644
--- a/arch/sparc/mm/highmem.c
+++ b/arch/sparc/mm/highmem.c
@@ -33,6 +33,7 @@
#include <asm/vaddrs.h>

pgprot_t kmap_prot;
+EXPORT_SYMBOL(kmap_prot);

static pte_t *kmap_pte;

--
2.25.1

2020-04-30 20:41:11

by Ira Weiny

Subject: [PATCH V1 09/10] arch/kmap: Define kmap_atomic_prot() for all arch's

From: Ira Weiny <[email protected]>

To support kmap_atomic_prot(), all architectures need to support
protections passed to their kmap_atomic_high() function. Pass
protections into kmap_atomic_high() and change the name to
kmap_atomic_high_prot() to match.

Then define kmap_atomic_prot() as a core function which calls
kmap_atomic_high_prot() when needed.

Finally, redefine kmap_atomic() as a wrapper of kmap_atomic_prot() with
the default kmap_prot exported by the architectures.
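
For a hypothetical caller the layering then looks as follows (a sketch; the
helper name and the memcpy() are illustrative, not from this patch):

/* Illustrative helper: copy into a (possibly highmem) page with a
 * caller-chosen protection. Assumes <linux/highmem.h> and <linux/string.h>. */
static void copy_to_page_prot(struct page *page, const void *src, pgprot_t prot)
{
        void *vaddr = kmap_atomic_prot(page, prot); /* disables pagefaults/preemption */

        memcpy(vaddr, src, PAGE_SIZE);  /* prot only matters for highmem pages */
        kunmap_atomic(vaddr);           /* re-enables pagefaults/preemption */
}

kmap_atomic(page) itself is now simply kmap_atomic_prot(page, kmap_prot).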

Signed-off-by: Ira Weiny <[email protected]>
---
arch/arc/include/asm/highmem.h | 2 +-
arch/arc/mm/highmem.c | 6 +++---
arch/arm/include/asm/highmem.h | 2 +-
arch/arm/mm/highmem.c | 6 +++---
arch/csky/include/asm/highmem.h | 2 +-
arch/csky/mm/highmem.c | 6 +++---
arch/microblaze/include/asm/highmem.h | 7 +------
arch/microblaze/mm/highmem.c | 4 ++--
arch/mips/include/asm/highmem.h | 2 +-
arch/mips/mm/highmem.c | 6 +++---
arch/nds32/include/asm/highmem.h | 2 +-
arch/nds32/mm/highmem.c | 6 +++---
arch/powerpc/include/asm/highmem.h | 8 +-------
arch/powerpc/mm/highmem.c | 4 ++--
arch/sparc/include/asm/highmem.h | 2 +-
arch/sparc/mm/highmem.c | 6 +++---
arch/x86/include/asm/highmem.h | 6 +-----
arch/x86/mm/highmem_32.c | 4 ++--
arch/xtensa/include/asm/highmem.h | 2 +-
arch/xtensa/mm/highmem.c | 6 +++---
include/linux/highmem.h | 5 +++--
21 files changed, 40 insertions(+), 54 deletions(-)

diff --git a/arch/arc/include/asm/highmem.h b/arch/arc/include/asm/highmem.h
index e16531495620..09f86bde6809 100644
--- a/arch/arc/include/asm/highmem.h
+++ b/arch/arc/include/asm/highmem.h
@@ -30,7 +30,7 @@

#include <asm/cacheflush.h>

-extern void *kmap_atomic_high(struct page *page);
+extern void *kmap_atomic_high_prot(struct page *page, pgprot_t prot);
extern void kunmap_atomic_high(void *kvaddr);

extern void kmap_init(void);
diff --git a/arch/arc/mm/highmem.c b/arch/arc/mm/highmem.c
index 5d3eab4ac0b0..479b0d72d3cf 100644
--- a/arch/arc/mm/highmem.c
+++ b/arch/arc/mm/highmem.c
@@ -49,7 +49,7 @@
extern pte_t * pkmap_page_table;
static pte_t * fixmap_page_table;

-void *kmap_atomic_high(struct page *page)
+void *kmap_atomic_high_prot(struct page *page, pgprot_t prot)
{
int idx, cpu_idx;
unsigned long vaddr;
@@ -59,11 +59,11 @@ void *kmap_atomic_high(struct page *page)
vaddr = FIXMAP_ADDR(idx);

set_pte_at(&init_mm, vaddr, fixmap_page_table + idx,
- mk_pte(page, kmap_prot));
+ mk_pte(page, prot));

return (void *)vaddr;
}
-EXPORT_SYMBOL(kmap_atomic_high);
+EXPORT_SYMBOL(kmap_atomic_high_prot);

void kunmap_atomic_high(void *kv)
{
diff --git a/arch/arm/include/asm/highmem.h b/arch/arm/include/asm/highmem.h
index a9d5e9bce1cc..e35f2f73f6aa 100644
--- a/arch/arm/include/asm/highmem.h
+++ b/arch/arm/include/asm/highmem.h
@@ -60,7 +60,7 @@ static inline void *kmap_high_get(struct page *page)
* when CONFIG_HIGHMEM is not set.
*/
#ifdef CONFIG_HIGHMEM
-extern void *kmap_atomic_high(struct page *page);
+extern void *kmap_atomic_high_prot(struct page *page, pgprot_t prot);
extern void kunmap_atomic_high(void *kvaddr);
extern void *kmap_atomic_pfn(unsigned long pfn);
#endif
diff --git a/arch/arm/mm/highmem.c b/arch/arm/mm/highmem.c
index ac8394655a6e..e013f6b81328 100644
--- a/arch/arm/mm/highmem.c
+++ b/arch/arm/mm/highmem.c
@@ -31,7 +31,7 @@ static inline pte_t get_fixmap_pte(unsigned long vaddr)
return *ptep;
}

-void *kmap_atomic_high(struct page *page)
+void *kmap_atomic_high_prot(struct page *page, pgprot_t prot)
{
unsigned int idx;
unsigned long vaddr;
@@ -67,11 +67,11 @@ void *kmap_atomic_high(struct page *page)
* in place, so the contained TLB flush ensures the TLB is updated
* with the new mapping.
*/
- set_fixmap_pte(idx, mk_pte(page, kmap_prot));
+ set_fixmap_pte(idx, mk_pte(page, prot));

return (void *)vaddr;
}
-EXPORT_SYMBOL(kmap_atomic_high);
+EXPORT_SYMBOL(kmap_atomic_high_prot);

void kunmap_atomic_high(void *kvaddr)
{
diff --git a/arch/csky/include/asm/highmem.h b/arch/csky/include/asm/highmem.h
index 5bbbe59e60a9..59854c7ccf78 100644
--- a/arch/csky/include/asm/highmem.h
+++ b/arch/csky/include/asm/highmem.h
@@ -32,7 +32,7 @@ extern pte_t *pkmap_page_table;

#define ARCH_HAS_KMAP_FLUSH_TLB
extern void kmap_flush_tlb(unsigned long addr);
-extern void *kmap_atomic_high(struct page *page);
+extern void *kmap_atomic_high_prot(struct page *page, pgprot_t prot);
extern void kunmap_atomic_high(void *kvaddr);
extern void *kmap_atomic_pfn(unsigned long pfn);
extern struct page *kmap_atomic_to_page(void *ptr);
diff --git a/arch/csky/mm/highmem.c b/arch/csky/mm/highmem.c
index f4311669b5bb..3ae5c8cd7619 100644
--- a/arch/csky/mm/highmem.c
+++ b/arch/csky/mm/highmem.c
@@ -21,7 +21,7 @@ EXPORT_SYMBOL(kmap_flush_tlb);

EXPORT_SYMBOL(kmap);

-void *kmap_atomic_high(struct page *page)
+void *kmap_atomic_high_prot(struct page *page, pgprot_t prot)
{
unsigned long vaddr;
int idx, type;
@@ -32,12 +32,12 @@ void *kmap_atomic_high(struct page *page)
#ifdef CONFIG_DEBUG_HIGHMEM
BUG_ON(!pte_none(*(kmap_pte - idx)));
#endif
- set_pte(kmap_pte-idx, mk_pte(page, kmap_prot));
+ set_pte(kmap_pte-idx, mk_pte(page, prot));
flush_tlb_one((unsigned long)vaddr);

return (void *)vaddr;
}
-EXPORT_SYMBOL(kmap_atomic_high);
+EXPORT_SYMBOL(kmap_atomic_high_prot);

void kunmap_atomic_high(void *kvaddr)
{
diff --git a/arch/microblaze/include/asm/highmem.h b/arch/microblaze/include/asm/highmem.h
index 66521fdc3a47..eb0a2cb883bd 100644
--- a/arch/microblaze/include/asm/highmem.h
+++ b/arch/microblaze/include/asm/highmem.h
@@ -51,14 +51,9 @@ extern pte_t *pkmap_page_table;
#define PKMAP_NR(virt) ((virt - PKMAP_BASE) >> PAGE_SHIFT)
#define PKMAP_ADDR(nr) (PKMAP_BASE + ((nr) << PAGE_SHIFT))

-extern void *kmap_atomic_prot(struct page *page, pgprot_t prot);
+extern void *kmap_atomic_high_prot(struct page *page, pgprot_t prot);
extern void kunmap_atomic_high(void *kvaddr);

-static inline void *kmap_atomic_high(struct page *page)
-{
- return kmap_atomic_prot(page, kmap_prot);
-}
-
#define flush_cache_kmaps() { flush_icache(); flush_dcache(); }

#endif /* __KERNEL__ */
diff --git a/arch/microblaze/mm/highmem.c b/arch/microblaze/mm/highmem.c
index 1026aeffe11a..ee8a422b2b76 100644
--- a/arch/microblaze/mm/highmem.c
+++ b/arch/microblaze/mm/highmem.c
@@ -32,7 +32,7 @@
*/
#include <asm/tlbflush.h>

-void *kmap_atomic_prot(struct page *page, pgprot_t prot)
+void *kmap_atomic_high_prot(struct page *page, pgprot_t prot)
{

unsigned long vaddr;
@@ -49,7 +49,7 @@ void *kmap_atomic_prot(struct page *page, pgprot_t prot)

return (void *) vaddr;
}
-EXPORT_SYMBOL(kmap_atomic_prot);
+EXPORT_SYMBOL(kmap_atomic_high_prot);

void kunmap_atomic_high(void *kvaddr)
{
diff --git a/arch/mips/include/asm/highmem.h b/arch/mips/include/asm/highmem.h
index d9f774bd4938..c9f46b450a68 100644
--- a/arch/mips/include/asm/highmem.h
+++ b/arch/mips/include/asm/highmem.h
@@ -48,7 +48,7 @@ extern pte_t *pkmap_page_table;

#define ARCH_HAS_KMAP_FLUSH_TLB
extern void kmap_flush_tlb(unsigned long addr);
-extern void *kmap_atomic_high(struct page *page);
+extern void *kmap_atomic_high_prot(struct page *page, pgprot_t prot);
extern void kunmap_atomic_high(void *kvaddr);
extern void *kmap_atomic_pfn(unsigned long pfn);

diff --git a/arch/mips/mm/highmem.c b/arch/mips/mm/highmem.c
index 87023bd1a33c..37e244cdb14e 100644
--- a/arch/mips/mm/highmem.c
+++ b/arch/mips/mm/highmem.c
@@ -18,7 +18,7 @@ void kmap_flush_tlb(unsigned long addr)
}
EXPORT_SYMBOL(kmap_flush_tlb);

-void *kmap_atomic_high(struct page *page)
+void *kmap_atomic_high_prot(struct page *page, pgprot_t prot)
{
unsigned long vaddr;
int idx, type;
@@ -29,12 +29,12 @@ void *kmap_atomic_high(struct page *page)
#ifdef CONFIG_DEBUG_HIGHMEM
BUG_ON(!pte_none(*(kmap_pte - idx)));
#endif
- set_pte(kmap_pte-idx, mk_pte(page, kmap_prot));
+ set_pte(kmap_pte-idx, mk_pte(page, prot));
local_flush_tlb_one((unsigned long)vaddr);

return (void*) vaddr;
}
-EXPORT_SYMBOL(kmap_atomic_high);
+EXPORT_SYMBOL(kmap_atomic_high_prot);

void kunmap_atomic_high(void *kvaddr)
{
diff --git a/arch/nds32/include/asm/highmem.h b/arch/nds32/include/asm/highmem.h
index 97648b678108..1f9fc74d112d 100644
--- a/arch/nds32/include/asm/highmem.h
+++ b/arch/nds32/include/asm/highmem.h
@@ -51,7 +51,7 @@ extern void kmap_init(void);
* when CONFIG_HIGHMEM is not set.
*/
#ifdef CONFIG_HIGHMEM
-extern void *kmap_atomic_high(struct page *page);
+extern void *kmap_atomic_high_prot(struct page *page, pgprot_t prot);
extern void kunmap_atomic_high(void *kvaddr);
extern void *kmap_atomic_pfn(unsigned long pfn);
extern struct page *kmap_atomic_to_page(void *ptr);
diff --git a/arch/nds32/mm/highmem.c b/arch/nds32/mm/highmem.c
index 809f8c830f06..63ded527c1e8 100644
--- a/arch/nds32/mm/highmem.c
+++ b/arch/nds32/mm/highmem.c
@@ -10,7 +10,7 @@
#include <asm/fixmap.h>
#include <asm/tlbflush.h>

-void *kmap_atomic_high(struct page *page)
+void *kmap_atomic_high_prot(struct page *page, pgprot_t prot)
{
unsigned int idx;
unsigned long vaddr, pte;
@@ -21,7 +21,7 @@ void *kmap_atomic_high(struct page *page)

idx = type + KM_TYPE_NR * smp_processor_id();
vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
- pte = (page_to_pfn(page) << PAGE_SHIFT) | (kmap_prot);
+ pte = (page_to_pfn(page) << PAGE_SHIFT) | prot;
ptep = pte_offset_kernel(pmd_off_k(vaddr), vaddr);
set_pte(ptep, pte);

@@ -32,7 +32,7 @@ void *kmap_atomic_high(struct page *page)
return (void *)vaddr;
}

-EXPORT_SYMBOL(kmap_atomic_high);
+EXPORT_SYMBOL(kmap_atomic_high_prot);

void kunmap_atomic_high(void *kvaddr)
{
diff --git a/arch/powerpc/include/asm/highmem.h b/arch/powerpc/include/asm/highmem.h
index d264aebcaa9b..edd01bbe5a44 100644
--- a/arch/powerpc/include/asm/highmem.h
+++ b/arch/powerpc/include/asm/highmem.h
@@ -59,15 +59,9 @@ extern pte_t *pkmap_page_table;
#define PKMAP_NR(virt) ((virt-PKMAP_BASE) >> PAGE_SHIFT)
#define PKMAP_ADDR(nr) (PKMAP_BASE + ((nr) << PAGE_SHIFT))

-extern void *kmap_atomic_prot(struct page *page, pgprot_t prot);
+extern void *kmap_atomic_high_prot(struct page *page, pgprot_t prot);
extern void kunmap_atomic_high(void *kvaddr);

-static inline void *kmap_atomic_high(struct page *page)
-{
- return kmap_atomic_prot(page, kmap_prot);
-}
-
-
#define flush_cache_kmaps() flush_cache_all()

#endif /* __KERNEL__ */
diff --git a/arch/powerpc/mm/highmem.c b/arch/powerpc/mm/highmem.c
index 162958321e28..35071c2913f1 100644
--- a/arch/powerpc/mm/highmem.c
+++ b/arch/powerpc/mm/highmem.c
@@ -24,7 +24,7 @@
#include <linux/highmem.h>
#include <linux/module.h>

-void *kmap_atomic_prot(struct page *page, pgprot_t prot)
+void *kmap_atomic_high_prot(struct page *page, pgprot_t prot)
{
unsigned long vaddr;
int idx, type;
@@ -38,7 +38,7 @@ void *kmap_atomic_prot(struct page *page, pgprot_t prot)

return (void*) vaddr;
}
-EXPORT_SYMBOL(kmap_atomic_prot);
+EXPORT_SYMBOL(kmap_atomic_high_prot);

void kunmap_atomic_high(void *kvaddr)
{
diff --git a/arch/sparc/include/asm/highmem.h b/arch/sparc/include/asm/highmem.h
index 94dd6e4c5fa4..d5c5700672de 100644
--- a/arch/sparc/include/asm/highmem.h
+++ b/arch/sparc/include/asm/highmem.h
@@ -50,7 +50,7 @@ void kmap_init(void) __init;

#define PKMAP_END (PKMAP_ADDR(LAST_PKMAP))

-void *kmap_atomic_high(struct page *page);
+void *kmap_atomic_high_prot(struct page *page, pgprot_t prot);
void kunmap_atomic_high(void *kvaddr);

#define flush_cache_kmaps() flush_cache_all()
diff --git a/arch/sparc/mm/highmem.c b/arch/sparc/mm/highmem.c
index 9f06d75e88e1..414f578d1e57 100644
--- a/arch/sparc/mm/highmem.c
+++ b/arch/sparc/mm/highmem.c
@@ -54,7 +54,7 @@ void __init kmap_init(void)
kmap_prot = __pgprot(SRMMU_ET_PTE | SRMMU_PRIV | SRMMU_CACHE);
}

-void *kmap_atomic_high(struct page *page)
+void *kmap_atomic_high_prot(struct page *page, pgprot_t prot)
{
unsigned long vaddr;
long idx, type;
@@ -73,7 +73,7 @@ void *kmap_atomic_high(struct page *page)
#ifdef CONFIG_DEBUG_HIGHMEM
BUG_ON(!pte_none(*(kmap_pte-idx)));
#endif
- set_pte(kmap_pte-idx, mk_pte(page, kmap_prot));
+ set_pte(kmap_pte-idx, mk_pte(page, prot));
/* XXX Fix - Anton */
#if 0
__flush_tlb_one(vaddr);
@@ -83,7 +83,7 @@ void *kmap_atomic_high(struct page *page)

return (void*) vaddr;
}
-EXPORT_SYMBOL(kmap_atomic_high);
+EXPORT_SYMBOL(kmap_atomic_high_prot);

void kunmap_atomic_high(void *kvaddr)
{
diff --git a/arch/x86/include/asm/highmem.h b/arch/x86/include/asm/highmem.h
index ff87aba96eee..009b8e22e906 100644
--- a/arch/x86/include/asm/highmem.h
+++ b/arch/x86/include/asm/highmem.h
@@ -58,11 +58,7 @@ extern unsigned long highstart_pfn, highend_pfn;
#define PKMAP_NR(virt) ((virt-PKMAP_BASE) >> PAGE_SHIFT)
#define PKMAP_ADDR(nr) (PKMAP_BASE + ((nr) << PAGE_SHIFT))

-extern void *kmap_atomic_prot(struct page *page, pgprot_t prot);
-void *kmap_atomic_high(struct page *page)
-{
- return kmap_atomic_prot(page, kmap_prot);
-}
+void *kmap_atomic_high_prot(struct page *page, pgprot_t prot);
void kunmap_atomic_high(void *kvaddr);
void *kmap_atomic_pfn(unsigned long pfn);
void *kmap_atomic_prot_pfn(unsigned long pfn, pgprot_t prot);
diff --git a/arch/x86/mm/highmem_32.c b/arch/x86/mm/highmem_32.c
index af07a6842743..075fe51317b0 100644
--- a/arch/x86/mm/highmem_32.c
+++ b/arch/x86/mm/highmem_32.c
@@ -4,7 +4,7 @@
#include <linux/swap.h> /* for totalram_pages */
#include <linux/memblock.h>

-void *kmap_atomic_prot(struct page *page, pgprot_t prot)
+void *kmap_atomic_high_prot(struct page *page, pgprot_t prot)
{
unsigned long vaddr;
int idx, type;
@@ -18,7 +18,7 @@ void *kmap_atomic_prot(struct page *page, pgprot_t prot)

return (void *)vaddr;
}
-EXPORT_SYMBOL(kmap_atomic_prot);
+EXPORT_SYMBOL(kmap_atomic_high_prot);

/*
* This is the same as kmap_atomic() but can map memory that doesn't
diff --git a/arch/xtensa/include/asm/highmem.h b/arch/xtensa/include/asm/highmem.h
index a60a02dc68f6..7152aeb1e3a4 100644
--- a/arch/xtensa/include/asm/highmem.h
+++ b/arch/xtensa/include/asm/highmem.h
@@ -68,7 +68,7 @@ static inline void flush_cache_kmaps(void)
flush_cache_all();
}

-void *kmap_atomic_high(struct page *page);
+void *kmap_atomic_high_prot(struct page *page, pgprot_t prot);
void kunmap_atomic_high(void *kvaddr);

void kmap_init(void);
diff --git a/arch/xtensa/mm/highmem.c b/arch/xtensa/mm/highmem.c
index 8c58c4c37033..fe56644d7b23 100644
--- a/arch/xtensa/mm/highmem.c
+++ b/arch/xtensa/mm/highmem.c
@@ -37,7 +37,7 @@ static inline enum fixed_addresses kmap_idx(int type, unsigned long color)
color;
}

-void *kmap_atomic_high(struct page *page)
+void *kmap_atomic_high_prot(struct page *page, pgprot_t prot)
{
enum fixed_addresses idx;
unsigned long vaddr;
@@ -48,11 +48,11 @@ void *kmap_atomic_high(struct page *page)
#ifdef CONFIG_DEBUG_HIGHMEM
BUG_ON(!pte_none(*(kmap_pte + idx)));
#endif
- set_pte(kmap_pte + idx, mk_pte(page, kmap_prot));
+ set_pte(kmap_pte + idx, mk_pte(page, prot));

return (void *)vaddr;
}
-EXPORT_SYMBOL(kmap_atomic_high);
+EXPORT_SYMBOL(kmap_atomic_high_prot);

void kunmap_atomic_high(void *kvaddr)
{
diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 601df07607a4..b10e8a39ae60 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -74,14 +74,15 @@ static inline void kunmap(struct page *page)
* be used in IRQ contexts, so in some (very limited) cases we need
* it.
*/
-static inline void *kmap_atomic(struct page *page)
+static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot)
{
preempt_disable();
pagefault_disable();
if (!PageHighMem(page))
return page_address(page);
- return kmap_atomic_high(page);
+ return kmap_atomic_high_prot(page, prot);
}
+#define kmap_atomic(page) kmap_atomic_prot(page, kmap_prot)

/* declarations for linux/mm/highmem.c */
unsigned int nr_free_highpages(void);
--
2.25.1

2020-04-30 20:41:12

by Ira Weiny

Subject: [PATCH V1 06/10] arch/kunmap_atomic: Consolidate duplicate code

From: Ira Weiny <[email protected]>

Every single architecture (including !CONFIG_HIGHMEM) calls...

pagefault_enable();
preempt_enable();

... before returning from __kunmap_atomic(). Lift this code into the
kunmap_atomic() macro.

While we are at it, rename __kunmap_atomic() to kunmap_atomic_high() to
be consistent.
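
The caller's view is unchanged; only the placement of the enable calls moves
(a sketch, condensed from the include/linux/highmem.h hunk below):

        void *vaddr = kmap_atomic(page);
        /* ... access the page ... */
        kunmap_atomic(vaddr);

which on every architecture now expands to:

        kunmap_atomic_high(vaddr);      /* arch teardown; empty stub for !CONFIG_HIGHMEM */
        pagefault_enable();             /* common code, done exactly once, here */
        preempt_enable();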

Signed-off-by: Ira Weiny <[email protected]>

---
Changes from V0:
rename __kunmap_atomic() to kunmap_atomic_high()
Fix mips issue
---
arch/arc/include/asm/highmem.h | 2 +-
arch/arc/mm/highmem.c | 7 ++-----
arch/arm/include/asm/highmem.h | 2 +-
arch/arm/mm/highmem.c | 6 ++----
arch/csky/include/asm/highmem.h | 2 +-
arch/csky/mm/highmem.c | 9 +++------
arch/microblaze/include/asm/highmem.h | 2 +-
arch/microblaze/mm/highmem.c | 6 ++----
arch/mips/include/asm/highmem.h | 2 +-
arch/mips/mm/cache.c | 4 ++--
arch/mips/mm/highmem.c | 6 ++----
arch/nds32/include/asm/highmem.h | 2 +-
arch/nds32/mm/highmem.c | 6 ++----
arch/parisc/include/asm/cacheflush.h | 4 +---
arch/powerpc/include/asm/highmem.h | 2 +-
arch/powerpc/mm/highmem.c | 6 ++----
arch/sparc/include/asm/highmem.h | 2 +-
arch/sparc/mm/highmem.c | 6 ++----
arch/x86/include/asm/highmem.h | 2 +-
arch/x86/mm/highmem_32.c | 7 ++-----
arch/xtensa/include/asm/highmem.h | 2 +-
arch/xtensa/mm/highmem.c | 7 ++-----
include/linux/highmem.h | 10 ++++++----
23 files changed, 40 insertions(+), 64 deletions(-)

diff --git a/arch/arc/include/asm/highmem.h b/arch/arc/include/asm/highmem.h
index 75bd0fa77fe2..e16531495620 100644
--- a/arch/arc/include/asm/highmem.h
+++ b/arch/arc/include/asm/highmem.h
@@ -31,7 +31,7 @@
#include <asm/cacheflush.h>

extern void *kmap_atomic_high(struct page *page);
-extern void __kunmap_atomic(void *kvaddr);
+extern void kunmap_atomic_high(void *kvaddr);

extern void kmap_init(void);

diff --git a/arch/arc/mm/highmem.c b/arch/arc/mm/highmem.c
index 0964b011c29f..5d3eab4ac0b0 100644
--- a/arch/arc/mm/highmem.c
+++ b/arch/arc/mm/highmem.c
@@ -65,7 +65,7 @@ void *kmap_atomic_high(struct page *page)
}
EXPORT_SYMBOL(kmap_atomic_high);

-void __kunmap_atomic(void *kv)
+void kunmap_atomic_high(void *kv)
{
unsigned long kvaddr = (unsigned long)kv;

@@ -87,11 +87,8 @@ void __kunmap_atomic(void *kv)

kmap_atomic_idx_pop();
}
-
- pagefault_enable();
- preempt_enable();
}
-EXPORT_SYMBOL(__kunmap_atomic);
+EXPORT_SYMBOL(kunmap_atomic_high);

static noinline pte_t * __init alloc_kmap_pgtable(unsigned long kvaddr)
{
diff --git a/arch/arm/include/asm/highmem.h b/arch/arm/include/asm/highmem.h
index 4edb6db3a5c8..a9d5e9bce1cc 100644
--- a/arch/arm/include/asm/highmem.h
+++ b/arch/arm/include/asm/highmem.h
@@ -61,7 +61,7 @@ static inline void *kmap_high_get(struct page *page)
*/
#ifdef CONFIG_HIGHMEM
extern void *kmap_atomic_high(struct page *page);
-extern void __kunmap_atomic(void *kvaddr);
+extern void kunmap_atomic_high(void *kvaddr);
extern void *kmap_atomic_pfn(unsigned long pfn);
#endif

diff --git a/arch/arm/mm/highmem.c b/arch/arm/mm/highmem.c
index 075fdc235091..ac8394655a6e 100644
--- a/arch/arm/mm/highmem.c
+++ b/arch/arm/mm/highmem.c
@@ -73,7 +73,7 @@ void *kmap_atomic_high(struct page *page)
}
EXPORT_SYMBOL(kmap_atomic_high);

-void __kunmap_atomic(void *kvaddr)
+void kunmap_atomic_high(void *kvaddr)
{
unsigned long vaddr = (unsigned long) kvaddr & PAGE_MASK;
int idx, type;
@@ -95,10 +95,8 @@ void __kunmap_atomic(void *kvaddr)
/* this address was obtained through kmap_high_get() */
kunmap_high(pte_page(pkmap_page_table[PKMAP_NR(vaddr)]));
}
- pagefault_enable();
- preempt_enable();
}
-EXPORT_SYMBOL(__kunmap_atomic);
+EXPORT_SYMBOL(kunmap_atomic_high);

void *kmap_atomic_pfn(unsigned long pfn)
{
diff --git a/arch/csky/include/asm/highmem.h b/arch/csky/include/asm/highmem.h
index 6807df1232f3..5bbbe59e60a9 100644
--- a/arch/csky/include/asm/highmem.h
+++ b/arch/csky/include/asm/highmem.h
@@ -33,7 +33,7 @@ extern pte_t *pkmap_page_table;
#define ARCH_HAS_KMAP_FLUSH_TLB
extern void kmap_flush_tlb(unsigned long addr);
extern void *kmap_atomic_high(struct page *page);
-extern void __kunmap_atomic(void *kvaddr);
+extern void kunmap_atomic_high(void *kvaddr);
extern void *kmap_atomic_pfn(unsigned long pfn);
extern struct page *kmap_atomic_to_page(void *ptr);

diff --git a/arch/csky/mm/highmem.c b/arch/csky/mm/highmem.c
index 63d74b47eee6..0aafbbbe651c 100644
--- a/arch/csky/mm/highmem.c
+++ b/arch/csky/mm/highmem.c
@@ -39,13 +39,13 @@ void *kmap_atomic_high(struct page *page)
}
EXPORT_SYMBOL(kmap_atomic_high);

-void __kunmap_atomic(void *kvaddr)
+void kunmap_atomic_high(void *kvaddr)
{
unsigned long vaddr = (unsigned long) kvaddr & PAGE_MASK;
int idx;

if (vaddr < FIXADDR_START)
- goto out;
+ return;

#ifdef CONFIG_DEBUG_HIGHMEM
idx = KM_TYPE_NR*smp_processor_id() + kmap_atomic_idx();
@@ -58,11 +58,8 @@ void __kunmap_atomic(void *kvaddr)
(void) idx; /* to kill a warning */
#endif
kmap_atomic_idx_pop();
-out:
- pagefault_enable();
- preempt_enable();
}
-EXPORT_SYMBOL(__kunmap_atomic);
+EXPORT_SYMBOL(kunmap_atomic_high);

/*
* This is the same as kmap_atomic() but can map memory that doesn't
diff --git a/arch/microblaze/include/asm/highmem.h b/arch/microblaze/include/asm/highmem.h
index fe4ad8bac9ae..5fc56b0107be 100644
--- a/arch/microblaze/include/asm/highmem.h
+++ b/arch/microblaze/include/asm/highmem.h
@@ -52,7 +52,7 @@ extern pte_t *pkmap_page_table;
#define PKMAP_ADDR(nr) (PKMAP_BASE + ((nr) << PAGE_SHIFT))

extern void *kmap_atomic_prot(struct page *page, pgprot_t prot);
-extern void __kunmap_atomic(void *kvaddr);
+extern void kunmap_atomic_high(void *kvaddr);

static inline void *kmap_atomic_high(struct page *page)
{
diff --git a/arch/microblaze/mm/highmem.c b/arch/microblaze/mm/highmem.c
index a14f356b055b..1026aeffe11a 100644
--- a/arch/microblaze/mm/highmem.c
+++ b/arch/microblaze/mm/highmem.c
@@ -51,7 +51,7 @@ void *kmap_atomic_prot(struct page *page, pgprot_t prot)
}
EXPORT_SYMBOL(kmap_atomic_prot);

-void __kunmap_atomic(void *kvaddr)
+void kunmap_atomic_high(void *kvaddr)
{
unsigned long vaddr = (unsigned long) kvaddr & PAGE_MASK;
int type;
@@ -77,7 +77,5 @@ void __kunmap_atomic(void *kvaddr)
local_flush_tlb_page(NULL, vaddr);

kmap_atomic_idx_pop();
- pagefault_enable();
- preempt_enable();
}
-EXPORT_SYMBOL(__kunmap_atomic);
+EXPORT_SYMBOL(kunmap_atomic_high);
diff --git a/arch/mips/include/asm/highmem.h b/arch/mips/include/asm/highmem.h
index a515bcf15d4b..d9f774bd4938 100644
--- a/arch/mips/include/asm/highmem.h
+++ b/arch/mips/include/asm/highmem.h
@@ -49,7 +49,7 @@ extern pte_t *pkmap_page_table;
#define ARCH_HAS_KMAP_FLUSH_TLB
extern void kmap_flush_tlb(unsigned long addr);
extern void *kmap_atomic_high(struct page *page);
-extern void __kunmap_atomic(void *kvaddr);
+extern void kunmap_atomic_high(void *kvaddr);
extern void *kmap_atomic_pfn(unsigned long pfn);

#define flush_cache_kmaps() BUG_ON(cpu_has_dc_aliases)
diff --git a/arch/mips/mm/cache.c b/arch/mips/mm/cache.c
index f015bb51fab0..1873c2a01fdb 100644
--- a/arch/mips/mm/cache.c
+++ b/arch/mips/mm/cache.c
@@ -104,7 +104,7 @@ void __flush_dcache_page(struct page *page)
flush_data_cache_page(addr);

if (PageHighMem(page))
- __kunmap_atomic((void *)addr);
+ kunmap_atomic((void *)addr);
}

EXPORT_SYMBOL(__flush_dcache_page);
@@ -147,7 +147,7 @@ void __update_cache(unsigned long address, pte_t pte)
flush_data_cache_page(addr);

if (PageHighMem(page))
- __kunmap_atomic((void *)addr);
+ kunmap_atomic((void *)addr);

ClearPageDcacheDirty(page);
}
diff --git a/arch/mips/mm/highmem.c b/arch/mips/mm/highmem.c
index 2bda56372995..155fbb107b35 100644
--- a/arch/mips/mm/highmem.c
+++ b/arch/mips/mm/highmem.c
@@ -36,7 +36,7 @@ void *kmap_atomic_high(struct page *page)
}
EXPORT_SYMBOL(kmap_atomic_high);

-void __kunmap_atomic(void *kvaddr)
+void kunmap_atomic_high(void *kvaddr)
{
unsigned long vaddr = (unsigned long) kvaddr & PAGE_MASK;
int type __maybe_unused;
@@ -63,10 +63,8 @@ void __kunmap_atomic(void *kvaddr)
}
#endif
kmap_atomic_idx_pop();
- pagefault_enable();
- preempt_enable();
}
-EXPORT_SYMBOL(__kunmap_atomic);
+EXPORT_SYMBOL(kunmap_atomic_high);

/*
* This is the same as kmap_atomic() but can map memory that doesn't
diff --git a/arch/nds32/include/asm/highmem.h b/arch/nds32/include/asm/highmem.h
index 28f5e7072c70..97648b678108 100644
--- a/arch/nds32/include/asm/highmem.h
+++ b/arch/nds32/include/asm/highmem.h
@@ -52,7 +52,7 @@ extern void kmap_init(void);
*/
#ifdef CONFIG_HIGHMEM
extern void *kmap_atomic_high(struct page *page);
-extern void __kunmap_atomic(void *kvaddr);
+extern void kunmap_atomic_high(void *kvaddr);
extern void *kmap_atomic_pfn(unsigned long pfn);
extern struct page *kmap_atomic_to_page(void *ptr);
#endif
diff --git a/arch/nds32/mm/highmem.c b/arch/nds32/mm/highmem.c
index f5f3a21460c4..f6e6915c0d31 100644
--- a/arch/nds32/mm/highmem.c
+++ b/arch/nds32/mm/highmem.c
@@ -34,7 +34,7 @@ void *kmap_atomic_high(struct page *page)

EXPORT_SYMBOL(kmap_atomic_high);

-void __kunmap_atomic(void *kvaddr)
+void kunmap_atomic_high(void *kvaddr)
{
if (kvaddr >= (void *)FIXADDR_START) {
unsigned long vaddr = (unsigned long)kvaddr;
@@ -45,8 +45,6 @@ void __kunmap_atomic(void *kvaddr)
ptep = pte_offset_kernel(pmd_off_k(vaddr), vaddr);
set_pte(ptep, 0);
}
- pagefault_enable();
- preempt_enable();
}

-EXPORT_SYMBOL(__kunmap_atomic);
+EXPORT_SYMBOL(kunmap_atomic_high);
diff --git a/arch/parisc/include/asm/cacheflush.h b/arch/parisc/include/asm/cacheflush.h
index 0c83644bfa5c..119c9a7681bc 100644
--- a/arch/parisc/include/asm/cacheflush.h
+++ b/arch/parisc/include/asm/cacheflush.h
@@ -122,11 +122,9 @@ static inline void *kmap_atomic(struct page *page)
return page_address(page);
}

-static inline void __kunmap_atomic(void *addr)
+static inline void kunmap_atomic_high(void *addr)
{
flush_kernel_dcache_page_addr(addr);
- pagefault_enable();
- preempt_enable();
}

#define kmap_atomic_prot(page, prot) kmap_atomic(page)
diff --git a/arch/powerpc/include/asm/highmem.h b/arch/powerpc/include/asm/highmem.h
index ac0efc2cf08a..1845fbd7ce61 100644
--- a/arch/powerpc/include/asm/highmem.h
+++ b/arch/powerpc/include/asm/highmem.h
@@ -60,7 +60,7 @@ extern pte_t *pkmap_page_table;
#define PKMAP_ADDR(nr) (PKMAP_BASE + ((nr) << PAGE_SHIFT))

extern void *kmap_atomic_prot(struct page *page, pgprot_t prot);
-extern void __kunmap_atomic(void *kvaddr);
+extern void kunmap_atomic_high(void *kvaddr);

static inline void *kmap_atomic_high(struct page *page)
{
diff --git a/arch/powerpc/mm/highmem.c b/arch/powerpc/mm/highmem.c
index f9558ef4b8fa..162958321e28 100644
--- a/arch/powerpc/mm/highmem.c
+++ b/arch/powerpc/mm/highmem.c
@@ -40,7 +40,7 @@ void *kmap_atomic_prot(struct page *page, pgprot_t prot)
}
EXPORT_SYMBOL(kmap_atomic_prot);

-void __kunmap_atomic(void *kvaddr)
+void kunmap_atomic_high(void *kvaddr)
{
unsigned long vaddr = (unsigned long) kvaddr & PAGE_MASK;

@@ -66,7 +66,5 @@ void __kunmap_atomic(void *kvaddr)
}

kmap_atomic_idx_pop();
- pagefault_enable();
- preempt_enable();
}
-EXPORT_SYMBOL(__kunmap_atomic);
+EXPORT_SYMBOL(kunmap_atomic_high);
diff --git a/arch/sparc/include/asm/highmem.h b/arch/sparc/include/asm/highmem.h
index c96a0603d821..94dd6e4c5fa4 100644
--- a/arch/sparc/include/asm/highmem.h
+++ b/arch/sparc/include/asm/highmem.h
@@ -51,7 +51,7 @@ void kmap_init(void) __init;
#define PKMAP_END (PKMAP_ADDR(LAST_PKMAP))

void *kmap_atomic_high(struct page *page);
-void __kunmap_atomic(void *kvaddr);
+void kunmap_atomic_high(void *kvaddr);

#define flush_cache_kmaps() flush_cache_all()

diff --git a/arch/sparc/mm/highmem.c b/arch/sparc/mm/highmem.c
index b53070ab6a31..469786bc430f 100644
--- a/arch/sparc/mm/highmem.c
+++ b/arch/sparc/mm/highmem.c
@@ -84,7 +84,7 @@ void *kmap_atomic_high(struct page *page)
}
EXPORT_SYMBOL(kmap_atomic_high);

-void __kunmap_atomic(void *kvaddr)
+void kunmap_atomic_high(void *kvaddr)
{
unsigned long vaddr = (unsigned long) kvaddr & PAGE_MASK;
int type;
@@ -126,7 +126,5 @@ void __kunmap_atomic(void *kvaddr)
#endif

kmap_atomic_idx_pop();
- pagefault_enable();
- preempt_enable();
}
-EXPORT_SYMBOL(__kunmap_atomic);
+EXPORT_SYMBOL(kunmap_atomic_high);
diff --git a/arch/x86/include/asm/highmem.h b/arch/x86/include/asm/highmem.h
index 72e154e17416..ff87aba96eee 100644
--- a/arch/x86/include/asm/highmem.h
+++ b/arch/x86/include/asm/highmem.h
@@ -63,7 +63,7 @@ void *kmap_atomic_high(struct page *page)
{
return kmap_atomic_prot(page, kmap_prot);
}
-void __kunmap_atomic(void *kvaddr);
+void kunmap_atomic_high(void *kvaddr);
void *kmap_atomic_pfn(unsigned long pfn);
void *kmap_atomic_prot_pfn(unsigned long pfn, pgprot_t prot);

diff --git a/arch/x86/mm/highmem_32.c b/arch/x86/mm/highmem_32.c
index 937d2cc40389..af07a6842743 100644
--- a/arch/x86/mm/highmem_32.c
+++ b/arch/x86/mm/highmem_32.c
@@ -30,7 +30,7 @@ void *kmap_atomic_pfn(unsigned long pfn)
}
EXPORT_SYMBOL_GPL(kmap_atomic_pfn);

-void __kunmap_atomic(void *kvaddr)
+void kunmap_atomic_high(void *kvaddr)
{
unsigned long vaddr = (unsigned long) kvaddr & PAGE_MASK;

@@ -60,11 +60,8 @@ void __kunmap_atomic(void *kvaddr)
BUG_ON(vaddr >= (unsigned long)high_memory);
}
#endif
-
- pagefault_enable();
- preempt_enable();
}
-EXPORT_SYMBOL(__kunmap_atomic);
+EXPORT_SYMBOL(kunmap_atomic_high);

void __init set_highmem_pages_init(void)
{
diff --git a/arch/xtensa/include/asm/highmem.h b/arch/xtensa/include/asm/highmem.h
index d4fb9f78ba32..a60a02dc68f6 100644
--- a/arch/xtensa/include/asm/highmem.h
+++ b/arch/xtensa/include/asm/highmem.h
@@ -69,7 +69,7 @@ static inline void flush_cache_kmaps(void)
}

void *kmap_atomic_high(struct page *page);
-void __kunmap_atomic(void *kvaddr);
+void kunmap_atomic_high(void *kvaddr);

void kmap_init(void);

diff --git a/arch/xtensa/mm/highmem.c b/arch/xtensa/mm/highmem.c
index 217f2ebaa298..f57a7770eb08 100644
--- a/arch/xtensa/mm/highmem.c
+++ b/arch/xtensa/mm/highmem.c
@@ -54,7 +54,7 @@ void *kmap_atomic_high(struct page *page)
}
EXPORT_SYMBOL(kmap_atomic_high);

-void __kunmap_atomic(void *kvaddr)
+void kunmap_atomic_high(void *kvaddr)
{
if (kvaddr >= (void *)FIXADDR_START &&
kvaddr < (void *)FIXADDR_TOP) {
@@ -73,11 +73,8 @@ void __kunmap_atomic(void *kvaddr)

kmap_atomic_idx_pop();
}
-
- pagefault_enable();
- preempt_enable();
}
-EXPORT_SYMBOL(__kunmap_atomic);
+EXPORT_SYMBOL(kunmap_atomic_high);

void __init kmap_init(void)
{
diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index e0106b4f7dbb..601df07607a4 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -145,10 +145,10 @@ static inline void *kmap_atomic(struct page *page)
}
#define kmap_atomic_prot(page, prot) kmap_atomic(page)

-static inline void __kunmap_atomic(void *addr)
+static inline void kunmap_atomic_high(void *addr)
{
- pagefault_enable();
- preempt_enable();
+ /* Nothing to do in the CONFIG_HIGHMEM=n case as kunmap_atomic()
+ * handles re-enabling faults + preemption */
}

#define kmap_atomic_pfn(pfn) kmap_atomic(pfn_to_page(pfn))
@@ -198,7 +198,9 @@ static inline void kmap_atomic_idx_pop(void)
#define kunmap_atomic(addr) \
do { \
BUILD_BUG_ON(__same_type((addr), struct page *)); \
- __kunmap_atomic(addr); \
+ kunmap_atomic_high(addr); \
+ pagefault_enable(); \
+ preempt_enable(); \
} while (0)


--
2.25.1

2020-04-30 20:41:37

by Ira Weiny

Subject: [PATCH V1 08/10] arch/kmap: Don't hard code kmap_prot values

From: Ira Weiny <[email protected]>

To support kmap_atomic_prot() on all architectures, each architecture must
honor the protection value passed in to it.

Change csky, mips, nds32, and xtensa to use their global kmap_prot value
rather than a hard-coded value that happened to be equal.

Signed-off-by: Ira Weiny <[email protected]>
---
arch/csky/mm/highmem.c | 2 +-
arch/mips/mm/highmem.c | 2 +-
arch/nds32/mm/highmem.c | 2 +-
arch/xtensa/mm/highmem.c | 2 +-
4 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/csky/mm/highmem.c b/arch/csky/mm/highmem.c
index 0aafbbbe651c..f4311669b5bb 100644
--- a/arch/csky/mm/highmem.c
+++ b/arch/csky/mm/highmem.c
@@ -32,7 +32,7 @@ void *kmap_atomic_high(struct page *page)
#ifdef CONFIG_DEBUG_HIGHMEM
BUG_ON(!pte_none(*(kmap_pte - idx)));
#endif
- set_pte(kmap_pte-idx, mk_pte(page, PAGE_KERNEL));
+ set_pte(kmap_pte-idx, mk_pte(page, kmap_prot));
flush_tlb_one((unsigned long)vaddr);

return (void *)vaddr;
diff --git a/arch/mips/mm/highmem.c b/arch/mips/mm/highmem.c
index 155fbb107b35..87023bd1a33c 100644
--- a/arch/mips/mm/highmem.c
+++ b/arch/mips/mm/highmem.c
@@ -29,7 +29,7 @@ void *kmap_atomic_high(struct page *page)
#ifdef CONFIG_DEBUG_HIGHMEM
BUG_ON(!pte_none(*(kmap_pte - idx)));
#endif
- set_pte(kmap_pte-idx, mk_pte(page, PAGE_KERNEL));
+ set_pte(kmap_pte-idx, mk_pte(page, kmap_prot));
local_flush_tlb_one((unsigned long)vaddr);

return (void*) vaddr;
diff --git a/arch/nds32/mm/highmem.c b/arch/nds32/mm/highmem.c
index f6e6915c0d31..809f8c830f06 100644
--- a/arch/nds32/mm/highmem.c
+++ b/arch/nds32/mm/highmem.c
@@ -21,7 +21,7 @@ void *kmap_atomic_high(struct page *page)

idx = type + KM_TYPE_NR * smp_processor_id();
vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
- pte = (page_to_pfn(page) << PAGE_SHIFT) | (PAGE_KERNEL);
+ pte = (page_to_pfn(page) << PAGE_SHIFT) | (kmap_prot);
ptep = pte_offset_kernel(pmd_off_k(vaddr), vaddr);
set_pte(ptep, pte);

diff --git a/arch/xtensa/mm/highmem.c b/arch/xtensa/mm/highmem.c
index f57a7770eb08..8c58c4c37033 100644
--- a/arch/xtensa/mm/highmem.c
+++ b/arch/xtensa/mm/highmem.c
@@ -48,7 +48,7 @@ void *kmap_atomic_high(struct page *page)
#ifdef CONFIG_DEBUG_HIGHMEM
BUG_ON(!pte_none(*(kmap_pte + idx)));
#endif
- set_pte(kmap_pte + idx, mk_pte(page, PAGE_KERNEL_EXEC));
+ set_pte(kmap_pte + idx, mk_pte(page, kmap_prot));

return (void *)vaddr;
}
--
2.25.1

2020-04-30 20:41:37

by Ira Weiny

Subject: [PATCH V1 10/10] drm: Remove drm specific kmap_atomic code

From: Ira Weiny <[email protected]>

kmap_atomic_prot() is now exported by all architectures. Use this
function rather than open-coding a driver-specific kmap_atomic().
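
Condensed from the ttm_copy_io_ttm_page() hunk below, the driver-side result
is simply:

        dst = kmap_atomic_prot(d, prot);        /* was ttm_kmap_atomic_prot(d, prot) */
        if (!dst)
                return -ENOMEM;

        memcpy_fromio(dst, src, PAGE_SIZE);

        kunmap_atomic(dst);                     /* was ttm_kunmap_atomic_prot(dst, prot) */

with the CONFIG_X86 vmap() fallback and the prot argument on the unmap side
both gone.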

Signed-off-by: Ira Weiny <[email protected]>
---
drivers/gpu/drm/ttm/ttm_bo_util.c | 56 ++--------------------------
drivers/gpu/drm/vmwgfx/vmwgfx_blit.c | 16 ++++----
include/drm/ttm/ttm_bo_api.h | 4 --
3 files changed, 12 insertions(+), 64 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c
index 52d2b71f1588..f09b096ba4fd 100644
--- a/drivers/gpu/drm/ttm/ttm_bo_util.c
+++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
@@ -257,54 +257,6 @@ static int ttm_copy_io_page(void *dst, void *src, unsigned long page)
return 0;
}

-#ifdef CONFIG_X86
-#define __ttm_kmap_atomic_prot(__page, __prot) kmap_atomic_prot(__page, __prot)
-#define __ttm_kunmap_atomic(__addr) kunmap_atomic(__addr)
-#else
-#define __ttm_kmap_atomic_prot(__page, __prot) vmap(&__page, 1, 0, __prot)
-#define __ttm_kunmap_atomic(__addr) vunmap(__addr)
-#endif
-
-
-/**
- * ttm_kmap_atomic_prot - Efficient kernel map of a single page with
- * specified page protection.
- *
- * @page: The page to map.
- * @prot: The page protection.
- *
- * This function maps a TTM page using the kmap_atomic api if available,
- * otherwise falls back to vmap. The user must make sure that the
- * specified page does not have an aliased mapping with a different caching
- * policy unless the architecture explicitly allows it. Also mapping and
- * unmapping using this api must be correctly nested. Unmapping should
- * occur in the reverse order of mapping.
- */
-void *ttm_kmap_atomic_prot(struct page *page, pgprot_t prot)
-{
- if (pgprot_val(prot) == pgprot_val(PAGE_KERNEL))
- return kmap_atomic(page);
- else
- return __ttm_kmap_atomic_prot(page, prot);
-}
-EXPORT_SYMBOL(ttm_kmap_atomic_prot);
-
-/**
- * ttm_kunmap_atomic_prot - Unmap a page that was mapped using
- * ttm_kmap_atomic_prot.
- *
- * @addr: The virtual address from the map.
- * @prot: The page protection.
- */
-void ttm_kunmap_atomic_prot(void *addr, pgprot_t prot)
-{
- if (pgprot_val(prot) == pgprot_val(PAGE_KERNEL))
- kunmap_atomic(addr);
- else
- __ttm_kunmap_atomic(addr);
-}
-EXPORT_SYMBOL(ttm_kunmap_atomic_prot);
-
static int ttm_copy_io_ttm_page(struct ttm_tt *ttm, void *src,
unsigned long page,
pgprot_t prot)
@@ -316,13 +268,13 @@ static int ttm_copy_io_ttm_page(struct ttm_tt *ttm, void *src,
return -ENOMEM;

src = (void *)((unsigned long)src + (page << PAGE_SHIFT));
- dst = ttm_kmap_atomic_prot(d, prot);
+ dst = kmap_atomic_prot(d, prot);
if (!dst)
return -ENOMEM;

memcpy_fromio(dst, src, PAGE_SIZE);

- ttm_kunmap_atomic_prot(dst, prot);
+ kunmap_atomic(dst);

return 0;
}
@@ -338,13 +290,13 @@ static int ttm_copy_ttm_io_page(struct ttm_tt *ttm, void *dst,
return -ENOMEM;

dst = (void *)((unsigned long)dst + (page << PAGE_SHIFT));
- src = ttm_kmap_atomic_prot(s, prot);
+ src = kmap_atomic_prot(s, prot);
if (!src)
return -ENOMEM;

memcpy_toio(dst, src, PAGE_SIZE);

- ttm_kunmap_atomic_prot(src, prot);
+ kunmap_atomic(src);

return 0;
}
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_blit.c b/drivers/gpu/drm/vmwgfx/vmwgfx_blit.c
index bb46ca0c458f..94d456a1d1a9 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_blit.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_blit.c
@@ -374,12 +374,12 @@ static int vmw_bo_cpu_blit_line(struct vmw_bo_blit_line_data *d,
copy_size = min_t(u32, copy_size, PAGE_SIZE - src_page_offset);

if (unmap_src) {
- ttm_kunmap_atomic_prot(d->src_addr, d->src_prot);
+ kunmap_atomic(d->src_addr);
d->src_addr = NULL;
}

if (unmap_dst) {
- ttm_kunmap_atomic_prot(d->dst_addr, d->dst_prot);
+ kunmap_atomic(d->dst_addr);
d->dst_addr = NULL;
}

@@ -388,8 +388,8 @@ static int vmw_bo_cpu_blit_line(struct vmw_bo_blit_line_data *d,
return -EINVAL;

d->dst_addr =
- ttm_kmap_atomic_prot(d->dst_pages[dst_page],
- d->dst_prot);
+ kmap_atomic_prot(d->dst_pages[dst_page],
+ d->dst_prot);
if (!d->dst_addr)
return -ENOMEM;

@@ -401,8 +401,8 @@ static int vmw_bo_cpu_blit_line(struct vmw_bo_blit_line_data *d,
return -EINVAL;

d->src_addr =
- ttm_kmap_atomic_prot(d->src_pages[src_page],
- d->src_prot);
+ kmap_atomic_prot(d->src_pages[src_page],
+ d->src_prot);
if (!d->src_addr)
return -ENOMEM;

@@ -499,9 +499,9 @@ int vmw_bo_cpu_blit(struct ttm_buffer_object *dst,
}
out:
if (d.src_addr)
- ttm_kunmap_atomic_prot(d.src_addr, d.src_prot);
+ kunmap_atomic(d.src_addr);
if (d.dst_addr)
- ttm_kunmap_atomic_prot(d.dst_addr, d.dst_prot);
+ kunmap_atomic(d.dst_addr);

return ret;
}
diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
index 0a9d042e075a..de1ccdcd5703 100644
--- a/include/drm/ttm/ttm_bo_api.h
+++ b/include/drm/ttm/ttm_bo_api.h
@@ -668,10 +668,6 @@ int ttm_bo_mmap_obj(struct vm_area_struct *vma, struct ttm_buffer_object *bo);
int ttm_bo_mmap(struct file *filp, struct vm_area_struct *vma,
struct ttm_bo_device *bdev);

-void *ttm_kmap_atomic_prot(struct page *page, pgprot_t prot);
-
-void ttm_kunmap_atomic_prot(void *addr, pgprot_t prot);
-
/**
* ttm_bo_io
*
--
2.25.1

2020-04-30 20:41:52

by Ira Weiny

Subject: [PATCH V1 03/10] arch/kmap: Remove redundant arch specific kmaps

From: Ira Weiny <[email protected]>

The kmap code for all the architectures is almost 100% identical.

Lift the common code to the core. Use ARCH_HAS_KMAP_FLUSH_TLB to
indicate if an arch defines kmap_flush_tlb() and call it if needed.

This also has the benefit of changing kmap() on a number of
architectures to be an inline call rather than an actual function.

Signed-off-by: Ira Weiny <[email protected]>

---
Changes from V0:
Define kmap_flush_tlb() and define it in csky/mips
---
arch/arc/include/asm/highmem.h | 2 --
arch/arc/mm/highmem.c | 10 ----------
arch/arm/include/asm/highmem.h | 2 --
arch/arm/mm/highmem.c | 9 ---------
arch/csky/include/asm/highmem.h | 4 ++--
arch/csky/mm/highmem.c | 14 ++++----------
arch/microblaze/include/asm/highmem.h | 9 ---------
arch/mips/include/asm/highmem.h | 4 ++--
arch/mips/mm/highmem.c | 14 +++-----------
arch/nds32/include/asm/highmem.h | 2 --
arch/nds32/mm/highmem.c | 12 ------------
arch/powerpc/include/asm/highmem.h | 9 ---------
arch/sparc/include/asm/highmem.h | 9 ---------
arch/x86/include/asm/highmem.h | 2 --
arch/x86/mm/highmem_32.c | 9 ---------
arch/xtensa/include/asm/highmem.h | 9 ---------
include/linux/highmem.h | 18 ++++++++++++++++++
17 files changed, 29 insertions(+), 109 deletions(-)

diff --git a/arch/arc/include/asm/highmem.h b/arch/arc/include/asm/highmem.h
index 042e92921c4c..96eb67c86961 100644
--- a/arch/arc/include/asm/highmem.h
+++ b/arch/arc/include/asm/highmem.h
@@ -30,8 +30,6 @@

#include <asm/cacheflush.h>

-extern void *kmap(struct page *page);
-extern void *kmap_high(struct page *page);
extern void *kmap_atomic(struct page *page);
extern void __kunmap_atomic(void *kvaddr);
extern void kunmap_high(struct page *page);
diff --git a/arch/arc/mm/highmem.c b/arch/arc/mm/highmem.c
index 39ef7b9a3aa9..4db13a6b9f3b 100644
--- a/arch/arc/mm/highmem.c
+++ b/arch/arc/mm/highmem.c
@@ -49,16 +49,6 @@
extern pte_t * pkmap_page_table;
static pte_t * fixmap_page_table;

-void *kmap(struct page *page)
-{
- might_sleep();
- if (!PageHighMem(page))
- return page_address(page);
-
- return kmap_high(page);
-}
-EXPORT_SYMBOL(kmap);
-
void *kmap_atomic(struct page *page)
{
int idx, cpu_idx;
diff --git a/arch/arm/include/asm/highmem.h b/arch/arm/include/asm/highmem.h
index eb4e4207cd3c..c917522541de 100644
--- a/arch/arm/include/asm/highmem.h
+++ b/arch/arm/include/asm/highmem.h
@@ -20,7 +20,6 @@

extern pte_t *pkmap_page_table;

-extern void *kmap_high(struct page *page);
extern void kunmap_high(struct page *page);

/*
@@ -63,7 +62,6 @@ static inline void *kmap_high_get(struct page *page)
* when CONFIG_HIGHMEM is not set.
*/
#ifdef CONFIG_HIGHMEM
-extern void *kmap(struct page *page);
extern void kunmap(struct page *page);
extern void *kmap_atomic(struct page *page);
extern void __kunmap_atomic(void *kvaddr);
diff --git a/arch/arm/mm/highmem.c b/arch/arm/mm/highmem.c
index cc6eb79ef20c..e8ba37c36590 100644
--- a/arch/arm/mm/highmem.c
+++ b/arch/arm/mm/highmem.c
@@ -31,15 +31,6 @@ static inline pte_t get_fixmap_pte(unsigned long vaddr)
return *ptep;
}

-void *kmap(struct page *page)
-{
- might_sleep();
- if (!PageHighMem(page))
- return page_address(page);
- return kmap_high(page);
-}
-EXPORT_SYMBOL(kmap);
-
void kunmap(struct page *page)
{
might_sleep();
diff --git a/arch/csky/include/asm/highmem.h b/arch/csky/include/asm/highmem.h
index a345a2f2c22e..9d0516e38110 100644
--- a/arch/csky/include/asm/highmem.h
+++ b/arch/csky/include/asm/highmem.h
@@ -30,10 +30,10 @@ extern pte_t *pkmap_page_table;
#define PKMAP_NR(virt) ((virt-PKMAP_BASE) >> PAGE_SHIFT)
#define PKMAP_ADDR(nr) (PKMAP_BASE + ((nr) << PAGE_SHIFT))

-extern void *kmap_high(struct page *page);
extern void kunmap_high(struct page *page);

-extern void *kmap(struct page *page);
+#define ARCH_HAS_KMAP_FLUSH_TLB
+extern void kmap_flush_tlb(unsigned long addr);
extern void kunmap(struct page *page);
extern void *kmap_atomic(struct page *page);
extern void __kunmap_atomic(void *kvaddr);
diff --git a/arch/csky/mm/highmem.c b/arch/csky/mm/highmem.c
index 690d678649d1..4a3c273bc8b9 100644
--- a/arch/csky/mm/highmem.c
+++ b/arch/csky/mm/highmem.c
@@ -13,18 +13,12 @@ static pte_t *kmap_pte;

unsigned long highstart_pfn, highend_pfn;

-void *kmap(struct page *page)
+void kmap_flush_tlb(unsigned long addr)
{
- void *addr;
-
- might_sleep();
- if (!PageHighMem(page))
- return page_address(page);
- addr = kmap_high(page);
- flush_tlb_one((unsigned long)addr);
-
- return addr;
+ flush_tlb_one(addr);
}
+EXPORT_SYMBOL(kmap_flush_tlb);
+
EXPORT_SYMBOL(kmap);

void kunmap(struct page *page)
diff --git a/arch/microblaze/include/asm/highmem.h b/arch/microblaze/include/asm/highmem.h
index 99ced7278b5c..8c5bfd228bd8 100644
--- a/arch/microblaze/include/asm/highmem.h
+++ b/arch/microblaze/include/asm/highmem.h
@@ -51,19 +51,10 @@ extern pte_t *pkmap_page_table;
#define PKMAP_NR(virt) ((virt - PKMAP_BASE) >> PAGE_SHIFT)
#define PKMAP_ADDR(nr) (PKMAP_BASE + ((nr) << PAGE_SHIFT))

-extern void *kmap_high(struct page *page);
extern void kunmap_high(struct page *page);
extern void *kmap_atomic_prot(struct page *page, pgprot_t prot);
extern void __kunmap_atomic(void *kvaddr);

-static inline void *kmap(struct page *page)
-{
- might_sleep();
- if (!PageHighMem(page))
- return page_address(page);
- return kmap_high(page);
-}
-
static inline void kunmap(struct page *page)
{
might_sleep();
diff --git a/arch/mips/include/asm/highmem.h b/arch/mips/include/asm/highmem.h
index 9d84aafc33d0..1f741e3ecabf 100644
--- a/arch/mips/include/asm/highmem.h
+++ b/arch/mips/include/asm/highmem.h
@@ -46,10 +46,10 @@ extern pte_t *pkmap_page_table;
#define PKMAP_NR(virt) ((virt-PKMAP_BASE) >> PAGE_SHIFT)
#define PKMAP_ADDR(nr) (PKMAP_BASE + ((nr) << PAGE_SHIFT))

-extern void * kmap_high(struct page *page);
extern void kunmap_high(struct page *page);

-extern void *kmap(struct page *page);
+#define ARCH_HAS_KMAP_FLUSH_TLB
+extern void kmap_flush_tlb(unsigned long addr);
extern void kunmap(struct page *page);
extern void *kmap_atomic(struct page *page);
extern void __kunmap_atomic(void *kvaddr);
diff --git a/arch/mips/mm/highmem.c b/arch/mips/mm/highmem.c
index edd889f6cede..c72058bfead6 100644
--- a/arch/mips/mm/highmem.c
+++ b/arch/mips/mm/highmem.c
@@ -12,19 +12,11 @@ static pte_t *kmap_pte;

unsigned long highstart_pfn, highend_pfn;

-void *kmap(struct page *page)
+void kmap_flush_tlb(unsigned long addr)
{
- void *addr;
-
- might_sleep();
- if (!PageHighMem(page))
- return page_address(page);
- addr = kmap_high(page);
- flush_tlb_one((unsigned long)addr);
-
- return addr;
+ flush_tlb_one(addr);
}
-EXPORT_SYMBOL(kmap);
+EXPORT_SYMBOL(kmap_flush_tlb);

void kunmap(struct page *page)
{
diff --git a/arch/nds32/include/asm/highmem.h b/arch/nds32/include/asm/highmem.h
index b3a82c97ded3..b13654a79069 100644
--- a/arch/nds32/include/asm/highmem.h
+++ b/arch/nds32/include/asm/highmem.h
@@ -44,7 +44,6 @@ extern unsigned long highstart_pfn, highend_pfn;

extern pte_t *pkmap_page_table;

-extern void *kmap_high(struct page *page);
extern void kunmap_high(struct page *page);

extern void kmap_init(void);
@@ -54,7 +53,6 @@ extern void kmap_init(void);
* when CONFIG_HIGHMEM is not set.
*/
#ifdef CONFIG_HIGHMEM
-extern void *kmap(struct page *page);
extern void kunmap(struct page *page);
extern void *kmap_atomic(struct page *page);
extern void __kunmap_atomic(void *kvaddr);
diff --git a/arch/nds32/mm/highmem.c b/arch/nds32/mm/highmem.c
index 4c7c28e994ea..d0cde53b84ae 100644
--- a/arch/nds32/mm/highmem.c
+++ b/arch/nds32/mm/highmem.c
@@ -10,18 +10,6 @@
#include <asm/fixmap.h>
#include <asm/tlbflush.h>

-void *kmap(struct page *page)
-{
- unsigned long vaddr;
- might_sleep();
- if (!PageHighMem(page))
- return page_address(page);
- vaddr = (unsigned long)kmap_high(page);
- return (void *)vaddr;
-}
-
-EXPORT_SYMBOL(kmap);
-
void kunmap(struct page *page)
{
might_sleep();
diff --git a/arch/powerpc/include/asm/highmem.h b/arch/powerpc/include/asm/highmem.h
index 529512f6d65a..f14e4feef6d5 100644
--- a/arch/powerpc/include/asm/highmem.h
+++ b/arch/powerpc/include/asm/highmem.h
@@ -59,19 +59,10 @@ extern pte_t *pkmap_page_table;
#define PKMAP_NR(virt) ((virt-PKMAP_BASE) >> PAGE_SHIFT)
#define PKMAP_ADDR(nr) (PKMAP_BASE + ((nr) << PAGE_SHIFT))

-extern void *kmap_high(struct page *page);
extern void kunmap_high(struct page *page);
extern void *kmap_atomic_prot(struct page *page, pgprot_t prot);
extern void __kunmap_atomic(void *kvaddr);

-static inline void *kmap(struct page *page)
-{
- might_sleep();
- if (!PageHighMem(page))
- return page_address(page);
- return kmap_high(page);
-}
-
static inline void kunmap(struct page *page)
{
might_sleep();
diff --git a/arch/sparc/include/asm/highmem.h b/arch/sparc/include/asm/highmem.h
index 7dd2d4b3f980..2ff1192047f7 100644
--- a/arch/sparc/include/asm/highmem.h
+++ b/arch/sparc/include/asm/highmem.h
@@ -50,17 +50,8 @@ void kmap_init(void) __init;

#define PKMAP_END (PKMAP_ADDR(LAST_PKMAP))

-void *kmap_high(struct page *page);
void kunmap_high(struct page *page);

-static inline void *kmap(struct page *page)
-{
- might_sleep();
- if (!PageHighMem(page))
- return page_address(page);
- return kmap_high(page);
-}
-
static inline void kunmap(struct page *page)
{
might_sleep();
diff --git a/arch/x86/include/asm/highmem.h b/arch/x86/include/asm/highmem.h
index a8059930056d..c916a28a9738 100644
--- a/arch/x86/include/asm/highmem.h
+++ b/arch/x86/include/asm/highmem.h
@@ -58,10 +58,8 @@ extern unsigned long highstart_pfn, highend_pfn;
#define PKMAP_NR(virt) ((virt-PKMAP_BASE) >> PAGE_SHIFT)
#define PKMAP_ADDR(nr) (PKMAP_BASE + ((nr) << PAGE_SHIFT))

-extern void *kmap_high(struct page *page);
extern void kunmap_high(struct page *page);

-void *kmap(struct page *page);
void kunmap(struct page *page);

void *kmap_atomic_prot(struct page *page, pgprot_t prot);
diff --git a/arch/x86/mm/highmem_32.c b/arch/x86/mm/highmem_32.c
index 8af66382672b..12591a81b85c 100644
--- a/arch/x86/mm/highmem_32.c
+++ b/arch/x86/mm/highmem_32.c
@@ -4,15 +4,6 @@
#include <linux/swap.h> /* for totalram_pages */
#include <linux/memblock.h>

-void *kmap(struct page *page)
-{
- might_sleep();
- if (!PageHighMem(page))
- return page_address(page);
- return kmap_high(page);
-}
-EXPORT_SYMBOL(kmap);
-
void kunmap(struct page *page)
{
might_sleep();
diff --git a/arch/xtensa/include/asm/highmem.h b/arch/xtensa/include/asm/highmem.h
index a9587c85be85..2546b88ddecf 100644
--- a/arch/xtensa/include/asm/highmem.h
+++ b/arch/xtensa/include/asm/highmem.h
@@ -63,17 +63,8 @@ static inline wait_queue_head_t *get_pkmap_wait_queue_head(unsigned int color)

extern pte_t *pkmap_page_table;

-void *kmap_high(struct page *page);
void kunmap_high(struct page *page);

-static inline void *kmap(struct page *page)
-{
- might_sleep();
- if (!PageHighMem(page))
- return page_address(page);
- return kmap_high(page);
-}
-
static inline void kunmap(struct page *page)
{
might_sleep();
diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index ea5cdbd8c2c3..fc3adc51254a 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -34,6 +34,24 @@ static inline void invalidate_kernel_vmap_range(void *vaddr, int size)
#ifdef CONFIG_HIGHMEM
#include <asm/highmem.h>

+#ifndef ARCH_HAS_KMAP_FLUSH_TLB
+static inline void kmap_flush_tlb(unsigned long addr) { }
+#endif
+
+void *kmap_high(struct page *page);
+static inline void *kmap(struct page *page)
+{
+ void *addr;
+
+ might_sleep();
+ if (!PageHighMem(page))
+ addr = page_address(page);
+ else
+ addr = kmap_high(page);
+ kmap_flush_tlb((unsigned long)addr);
+ return addr;
+}
+
/* declarations for linux/mm/highmem.c */
unsigned int nr_free_highpages(void);
extern atomic_long_t _totalhigh_pages;
--
2.25.1

2020-04-30 20:42:50

by Ira Weiny

Subject: [PATCH V1 04/10] arch/kunmap: Remove duplicate kunmap implementations

From: Ira Weiny <[email protected]>

All architectures do exactly the same thing for kunmap(); remove all the
duplicate definitions and lift the call to the core.

This also has the benefit of changing kunmap() on a number of
architectures to be an inline call rather than an actual function.
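
The include/linux/highmem.h hunk is not quoted below, but judging from the
removed per-arch copies the lifted helper presumably reads:

static inline void kunmap(struct page *page)
{
        might_sleep();
        if (!PageHighMem(page))
                return;
        kunmap_high(page);
}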

Signed-off-by: Ira Weiny <[email protected]>

---
Changes from V0:
lift kunmap_high() declaration as well.
---
arch/arc/include/asm/highmem.h | 10 ----------
arch/arm/include/asm/highmem.h | 3 ---
arch/arm/mm/highmem.c | 9 ---------
arch/csky/include/asm/highmem.h | 3 ---
arch/csky/mm/highmem.c | 9 ---------
arch/microblaze/include/asm/highmem.h | 9 ---------
arch/mips/include/asm/highmem.h | 3 ---
arch/mips/mm/highmem.c | 9 ---------
arch/nds32/include/asm/highmem.h | 3 ---
arch/nds32/mm/highmem.c | 10 ----------
arch/powerpc/include/asm/highmem.h | 9 ---------
arch/sparc/include/asm/highmem.h | 10 ----------
arch/x86/include/asm/highmem.h | 4 ----
arch/x86/mm/highmem_32.c | 9 ---------
arch/xtensa/include/asm/highmem.h | 10 ----------
include/linux/highmem.h | 9 +++++++++
16 files changed, 9 insertions(+), 110 deletions(-)

diff --git a/arch/arc/include/asm/highmem.h b/arch/arc/include/asm/highmem.h
index 96eb67c86961..8387a5596a91 100644
--- a/arch/arc/include/asm/highmem.h
+++ b/arch/arc/include/asm/highmem.h
@@ -32,7 +32,6 @@

extern void *kmap_atomic(struct page *page);
extern void __kunmap_atomic(void *kvaddr);
-extern void kunmap_high(struct page *page);

extern void kmap_init(void);

@@ -41,15 +40,6 @@ static inline void flush_cache_kmaps(void)
flush_cache_all();
}

-static inline void kunmap(struct page *page)
-{
- might_sleep();
- if (!PageHighMem(page))
- return;
- kunmap_high(page);
-}
-
-
#endif

#endif
diff --git a/arch/arm/include/asm/highmem.h b/arch/arm/include/asm/highmem.h
index c917522541de..736f65283e7b 100644
--- a/arch/arm/include/asm/highmem.h
+++ b/arch/arm/include/asm/highmem.h
@@ -20,8 +20,6 @@

extern pte_t *pkmap_page_table;

-extern void kunmap_high(struct page *page);
-
/*
* The reason for kmap_high_get() is to ensure that the currently kmap'd
* page usage count does not decrease to zero while we're using its
@@ -62,7 +60,6 @@ static inline void *kmap_high_get(struct page *page)
* when CONFIG_HIGHMEM is not set.
*/
#ifdef CONFIG_HIGHMEM
-extern void kunmap(struct page *page);
extern void *kmap_atomic(struct page *page);
extern void __kunmap_atomic(void *kvaddr);
extern void *kmap_atomic_pfn(unsigned long pfn);
diff --git a/arch/arm/mm/highmem.c b/arch/arm/mm/highmem.c
index e8ba37c36590..c700b32350ee 100644
--- a/arch/arm/mm/highmem.c
+++ b/arch/arm/mm/highmem.c
@@ -31,15 +31,6 @@ static inline pte_t get_fixmap_pte(unsigned long vaddr)
return *ptep;
}

-void kunmap(struct page *page)
-{
- might_sleep();
- if (!PageHighMem(page))
- return;
- kunmap_high(page);
-}
-EXPORT_SYMBOL(kunmap);
-
void *kmap_atomic(struct page *page)
{
unsigned int idx;
diff --git a/arch/csky/include/asm/highmem.h b/arch/csky/include/asm/highmem.h
index 9d0516e38110..be11c5b67122 100644
--- a/arch/csky/include/asm/highmem.h
+++ b/arch/csky/include/asm/highmem.h
@@ -30,11 +30,8 @@ extern pte_t *pkmap_page_table;
#define PKMAP_NR(virt) ((virt-PKMAP_BASE) >> PAGE_SHIFT)
#define PKMAP_ADDR(nr) (PKMAP_BASE + ((nr) << PAGE_SHIFT))

-extern void kunmap_high(struct page *page);
-
#define ARCH_HAS_KMAP_FLUSH_TLB
extern void kmap_flush_tlb(unsigned long addr);
-extern void kunmap(struct page *page);
extern void *kmap_atomic(struct page *page);
extern void __kunmap_atomic(void *kvaddr);
extern void *kmap_atomic_pfn(unsigned long pfn);
diff --git a/arch/csky/mm/highmem.c b/arch/csky/mm/highmem.c
index 4a3c273bc8b9..e9952211264b 100644
--- a/arch/csky/mm/highmem.c
+++ b/arch/csky/mm/highmem.c
@@ -21,15 +21,6 @@ EXPORT_SYMBOL(kmap_flush_tlb);

EXPORT_SYMBOL(kmap);

-void kunmap(struct page *page)
-{
- might_sleep();
- if (!PageHighMem(page))
- return;
- kunmap_high(page);
-}
-EXPORT_SYMBOL(kunmap);
-
void *kmap_atomic(struct page *page)
{
unsigned long vaddr;
diff --git a/arch/microblaze/include/asm/highmem.h b/arch/microblaze/include/asm/highmem.h
index 8c5bfd228bd8..0c94046f2d58 100644
--- a/arch/microblaze/include/asm/highmem.h
+++ b/arch/microblaze/include/asm/highmem.h
@@ -51,18 +51,9 @@ extern pte_t *pkmap_page_table;
#define PKMAP_NR(virt) ((virt - PKMAP_BASE) >> PAGE_SHIFT)
#define PKMAP_ADDR(nr) (PKMAP_BASE + ((nr) << PAGE_SHIFT))

-extern void kunmap_high(struct page *page);
extern void *kmap_atomic_prot(struct page *page, pgprot_t prot);
extern void __kunmap_atomic(void *kvaddr);

-static inline void kunmap(struct page *page)
-{
- might_sleep();
- if (!PageHighMem(page))
- return;
- kunmap_high(page);
-}
-
static inline void *kmap_atomic(struct page *page)
{
return kmap_atomic_prot(page, kmap_prot);
diff --git a/arch/mips/include/asm/highmem.h b/arch/mips/include/asm/highmem.h
index 1f741e3ecabf..24e7e7e5cc7b 100644
--- a/arch/mips/include/asm/highmem.h
+++ b/arch/mips/include/asm/highmem.h
@@ -46,11 +46,8 @@ extern pte_t *pkmap_page_table;
#define PKMAP_NR(virt) ((virt-PKMAP_BASE) >> PAGE_SHIFT)
#define PKMAP_ADDR(nr) (PKMAP_BASE + ((nr) << PAGE_SHIFT))

-extern void kunmap_high(struct page *page);
-
#define ARCH_HAS_KMAP_FLUSH_TLB
extern void kmap_flush_tlb(unsigned long addr);
-extern void kunmap(struct page *page);
extern void *kmap_atomic(struct page *page);
extern void __kunmap_atomic(void *kvaddr);
extern void *kmap_atomic_pfn(unsigned long pfn);
diff --git a/arch/mips/mm/highmem.c b/arch/mips/mm/highmem.c
index c72058bfead6..eb8ec8493f2f 100644
--- a/arch/mips/mm/highmem.c
+++ b/arch/mips/mm/highmem.c
@@ -18,15 +18,6 @@ void kmap_flush_tlb(unsigned long addr)
}
EXPORT_SYMBOL(kmap_flush_tlb);

-void kunmap(struct page *page)
-{
- might_sleep();
- if (!PageHighMem(page))
- return;
- kunmap_high(page);
-}
-EXPORT_SYMBOL(kunmap);
-
/*
* kmap_atomic/kunmap_atomic is significantly faster than kmap/kunmap because
* no global lock is needed and because the kmap code must perform a global TLB
diff --git a/arch/nds32/include/asm/highmem.h b/arch/nds32/include/asm/highmem.h
index b13654a79069..c93c7368bb3f 100644
--- a/arch/nds32/include/asm/highmem.h
+++ b/arch/nds32/include/asm/highmem.h
@@ -44,8 +44,6 @@ extern unsigned long highstart_pfn, highend_pfn;

extern pte_t *pkmap_page_table;

-extern void kunmap_high(struct page *page);
-
extern void kmap_init(void);

/*
@@ -53,7 +51,6 @@ extern void kmap_init(void);
* when CONFIG_HIGHMEM is not set.
*/
#ifdef CONFIG_HIGHMEM
-extern void kunmap(struct page *page);
extern void *kmap_atomic(struct page *page);
extern void __kunmap_atomic(void *kvaddr);
extern void *kmap_atomic_pfn(unsigned long pfn);
diff --git a/arch/nds32/mm/highmem.c b/arch/nds32/mm/highmem.c
index d0cde53b84ae..f9348bec0ecb 100644
--- a/arch/nds32/mm/highmem.c
+++ b/arch/nds32/mm/highmem.c
@@ -10,16 +10,6 @@
#include <asm/fixmap.h>
#include <asm/tlbflush.h>

-void kunmap(struct page *page)
-{
- might_sleep();
- if (!PageHighMem(page))
- return;
- kunmap_high(page);
-}
-
-EXPORT_SYMBOL(kunmap);
-
void *kmap_atomic(struct page *page)
{
unsigned int idx;
diff --git a/arch/powerpc/include/asm/highmem.h b/arch/powerpc/include/asm/highmem.h
index f14e4feef6d5..ba3371977d49 100644
--- a/arch/powerpc/include/asm/highmem.h
+++ b/arch/powerpc/include/asm/highmem.h
@@ -59,18 +59,9 @@ extern pte_t *pkmap_page_table;
#define PKMAP_NR(virt) ((virt-PKMAP_BASE) >> PAGE_SHIFT)
#define PKMAP_ADDR(nr) (PKMAP_BASE + ((nr) << PAGE_SHIFT))

-extern void kunmap_high(struct page *page);
extern void *kmap_atomic_prot(struct page *page, pgprot_t prot);
extern void __kunmap_atomic(void *kvaddr);

-static inline void kunmap(struct page *page)
-{
- might_sleep();
- if (!PageHighMem(page))
- return;
- kunmap_high(page);
-}
-
static inline void *kmap_atomic(struct page *page)
{
return kmap_atomic_prot(page, kmap_prot);
diff --git a/arch/sparc/include/asm/highmem.h b/arch/sparc/include/asm/highmem.h
index 2ff1192047f7..4bdb79fed02c 100644
--- a/arch/sparc/include/asm/highmem.h
+++ b/arch/sparc/include/asm/highmem.h
@@ -50,16 +50,6 @@ void kmap_init(void) __init;

#define PKMAP_END (PKMAP_ADDR(LAST_PKMAP))

-void kunmap_high(struct page *page);
-
-static inline void kunmap(struct page *page)
-{
- might_sleep();
- if (!PageHighMem(page))
- return;
- kunmap_high(page);
-}
-
void *kmap_atomic(struct page *page);
void __kunmap_atomic(void *kvaddr);

diff --git a/arch/x86/include/asm/highmem.h b/arch/x86/include/asm/highmem.h
index c916a28a9738..90b96594d6c5 100644
--- a/arch/x86/include/asm/highmem.h
+++ b/arch/x86/include/asm/highmem.h
@@ -58,10 +58,6 @@ extern unsigned long highstart_pfn, highend_pfn;
#define PKMAP_NR(virt) ((virt-PKMAP_BASE) >> PAGE_SHIFT)
#define PKMAP_ADDR(nr) (PKMAP_BASE + ((nr) << PAGE_SHIFT))

-extern void kunmap_high(struct page *page);
-
-void kunmap(struct page *page);
-
void *kmap_atomic_prot(struct page *page, pgprot_t prot);
void *kmap_atomic(struct page *page);
void __kunmap_atomic(void *kvaddr);
diff --git a/arch/x86/mm/highmem_32.c b/arch/x86/mm/highmem_32.c
index 12591a81b85c..c4ebfd0ae401 100644
--- a/arch/x86/mm/highmem_32.c
+++ b/arch/x86/mm/highmem_32.c
@@ -4,15 +4,6 @@
#include <linux/swap.h> /* for totalram_pages */
#include <linux/memblock.h>

-void kunmap(struct page *page)
-{
- might_sleep();
- if (!PageHighMem(page))
- return;
- kunmap_high(page);
-}
-EXPORT_SYMBOL(kunmap);
-
/*
* kmap_atomic/kunmap_atomic is significantly faster than kmap/kunmap because
* no global lock is needed and because the kmap code must perform a global TLB
diff --git a/arch/xtensa/include/asm/highmem.h b/arch/xtensa/include/asm/highmem.h
index 2546b88ddecf..5a481f7def0b 100644
--- a/arch/xtensa/include/asm/highmem.h
+++ b/arch/xtensa/include/asm/highmem.h
@@ -63,16 +63,6 @@ static inline wait_queue_head_t *get_pkmap_wait_queue_head(unsigned int color)

extern pte_t *pkmap_page_table;

-void kunmap_high(struct page *page);
-
-static inline void kunmap(struct page *page)
-{
- might_sleep();
- if (!PageHighMem(page))
- return;
- kunmap_high(page);
-}
-
static inline void flush_cache_kmaps(void)
{
flush_cache_all();
diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index fc3adc51254a..ae6e8cb81043 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -52,6 +52,15 @@ static inline void *kmap(struct page *page)
return addr;
}

+void kunmap_high(struct page *page);
+static inline void kunmap(struct page *page)
+{
+ might_sleep();
+ if (!PageHighMem(page))
+ return;
+ kunmap_high(page);
+}
+
/* declarations for linux/mm/highmem.c */
unsigned int nr_free_highpages(void);
extern atomic_long_t _totalhigh_pages;
--
2.25.1

2020-04-30 20:42:54

by Ira Weiny

[permalink] [raw]
Subject: [PATCH V1 01/10] arch/kmap: Remove BUG_ON()

From: Ira Weiny <[email protected]>

Replace the use of BUG_ON(in_interrupt()) in the kmap() and kunmap()
in favor of might_sleep().

Besides the benefits of might_sleep(), this normalizes the
implementations such that they can be made generic in subsequent
patches.

Reviewed-by: Dan Williams <[email protected]>
Signed-off-by: Ira Weiny <[email protected]>
---
arch/arc/include/asm/highmem.h | 2 +-
arch/arc/mm/highmem.c | 2 +-
arch/arm/mm/highmem.c | 2 +-
arch/csky/mm/highmem.c | 2 +-
arch/microblaze/include/asm/highmem.h | 2 +-
arch/mips/mm/highmem.c | 2 +-
arch/nds32/mm/highmem.c | 2 +-
arch/powerpc/include/asm/highmem.h | 2 +-
arch/sparc/include/asm/highmem.h | 4 ++--
arch/x86/mm/highmem_32.c | 3 +--
arch/xtensa/include/asm/highmem.h | 4 ++--
11 files changed, 13 insertions(+), 14 deletions(-)

diff --git a/arch/arc/include/asm/highmem.h b/arch/arc/include/asm/highmem.h
index 1af00accb37f..042e92921c4c 100644
--- a/arch/arc/include/asm/highmem.h
+++ b/arch/arc/include/asm/highmem.h
@@ -45,7 +45,7 @@ static inline void flush_cache_kmaps(void)

static inline void kunmap(struct page *page)
{
- BUG_ON(in_interrupt());
+ might_sleep();
if (!PageHighMem(page))
return;
kunmap_high(page);
diff --git a/arch/arc/mm/highmem.c b/arch/arc/mm/highmem.c
index fc8849e4f72e..39ef7b9a3aa9 100644
--- a/arch/arc/mm/highmem.c
+++ b/arch/arc/mm/highmem.c
@@ -51,7 +51,7 @@ static pte_t * fixmap_page_table;

void *kmap(struct page *page)
{
- BUG_ON(in_interrupt());
+ might_sleep();
if (!PageHighMem(page))
return page_address(page);

diff --git a/arch/arm/mm/highmem.c b/arch/arm/mm/highmem.c
index a76f8ace9ce6..cc6eb79ef20c 100644
--- a/arch/arm/mm/highmem.c
+++ b/arch/arm/mm/highmem.c
@@ -42,7 +42,7 @@ EXPORT_SYMBOL(kmap);

void kunmap(struct page *page)
{
- BUG_ON(in_interrupt());
+ might_sleep();
if (!PageHighMem(page))
return;
kunmap_high(page);
diff --git a/arch/csky/mm/highmem.c b/arch/csky/mm/highmem.c
index 813129145f3d..690d678649d1 100644
--- a/arch/csky/mm/highmem.c
+++ b/arch/csky/mm/highmem.c
@@ -29,7 +29,7 @@ EXPORT_SYMBOL(kmap);

void kunmap(struct page *page)
{
- BUG_ON(in_interrupt());
+ might_sleep();
if (!PageHighMem(page))
return;
kunmap_high(page);
diff --git a/arch/microblaze/include/asm/highmem.h b/arch/microblaze/include/asm/highmem.h
index 332c78e15198..99ced7278b5c 100644
--- a/arch/microblaze/include/asm/highmem.h
+++ b/arch/microblaze/include/asm/highmem.h
@@ -66,7 +66,7 @@ static inline void *kmap(struct page *page)

static inline void kunmap(struct page *page)
{
- BUG_ON(in_interrupt());
+ might_sleep();
if (!PageHighMem(page))
return;
kunmap_high(page);
diff --git a/arch/mips/mm/highmem.c b/arch/mips/mm/highmem.c
index d08e6d7d533b..edd889f6cede 100644
--- a/arch/mips/mm/highmem.c
+++ b/arch/mips/mm/highmem.c
@@ -28,7 +28,7 @@ EXPORT_SYMBOL(kmap);

void kunmap(struct page *page)
{
- BUG_ON(in_interrupt());
+ might_sleep();
if (!PageHighMem(page))
return;
kunmap_high(page);
diff --git a/arch/nds32/mm/highmem.c b/arch/nds32/mm/highmem.c
index 022779af6148..4c7c28e994ea 100644
--- a/arch/nds32/mm/highmem.c
+++ b/arch/nds32/mm/highmem.c
@@ -24,7 +24,7 @@ EXPORT_SYMBOL(kmap);

void kunmap(struct page *page)
{
- BUG_ON(in_interrupt());
+ might_sleep();
if (!PageHighMem(page))
return;
kunmap_high(page);
diff --git a/arch/powerpc/include/asm/highmem.h b/arch/powerpc/include/asm/highmem.h
index a4b65b186ec6..529512f6d65a 100644
--- a/arch/powerpc/include/asm/highmem.h
+++ b/arch/powerpc/include/asm/highmem.h
@@ -74,7 +74,7 @@ static inline void *kmap(struct page *page)

static inline void kunmap(struct page *page)
{
- BUG_ON(in_interrupt());
+ might_sleep();
if (!PageHighMem(page))
return;
kunmap_high(page);
diff --git a/arch/sparc/include/asm/highmem.h b/arch/sparc/include/asm/highmem.h
index 18d776925c45..7dd2d4b3f980 100644
--- a/arch/sparc/include/asm/highmem.h
+++ b/arch/sparc/include/asm/highmem.h
@@ -55,7 +55,7 @@ void kunmap_high(struct page *page);

static inline void *kmap(struct page *page)
{
- BUG_ON(in_interrupt());
+ might_sleep();
if (!PageHighMem(page))
return page_address(page);
return kmap_high(page);
@@ -63,7 +63,7 @@ static inline void *kmap(struct page *page)

static inline void kunmap(struct page *page)
{
- BUG_ON(in_interrupt());
+ might_sleep();
if (!PageHighMem(page))
return;
kunmap_high(page);
diff --git a/arch/x86/mm/highmem_32.c b/arch/x86/mm/highmem_32.c
index 0a1898b8552e..8af66382672b 100644
--- a/arch/x86/mm/highmem_32.c
+++ b/arch/x86/mm/highmem_32.c
@@ -15,8 +15,7 @@ EXPORT_SYMBOL(kmap);

void kunmap(struct page *page)
{
- if (in_interrupt())
- BUG();
+ might_sleep();
if (!PageHighMem(page))
return;
kunmap_high(page);
diff --git a/arch/xtensa/include/asm/highmem.h b/arch/xtensa/include/asm/highmem.h
index 04e9340eac4b..413848cc1e56 100644
--- a/arch/xtensa/include/asm/highmem.h
+++ b/arch/xtensa/include/asm/highmem.h
@@ -73,7 +73,7 @@ static inline void *kmap(struct page *page)
*/
BUILD_BUG_ON(PKMAP_BASE <
TLBTEMP_BASE_1 + TLBTEMP_SIZE);
- BUG_ON(in_interrupt());
+ might_sleep();
if (!PageHighMem(page))
return page_address(page);
return kmap_high(page);
@@ -81,7 +81,7 @@ static inline void *kmap(struct page *page)

static inline void kunmap(struct page *page)
{
- BUG_ON(in_interrupt());
+ might_sleep();
if (!PageHighMem(page))
return;
kunmap_high(page);
--
2.25.1

2020-04-30 20:44:05

by Ira Weiny

[permalink] [raw]
Subject: [PATCH V1 05/10] arch/kmap_atomic: Consolidate duplicate code

From: Ira Weiny <[email protected]>

Every arch has the same code to ensure atomic operations and a check for
!HIGHMEM page.

Remove the duplicate code by defining a core kmap_atomic() which only
calls the arch specific kmap_atomic_high() when the page is high memory.

Signed-off-by: Ira Weiny <[email protected]>

---
Changes from V0:
consolidate comments
Use a similar architecture to kmap() and define
kmap_atomic_high() for architecture specific
functionality
Fix 0-day build issue in arch/mips/mm/cache.c
---
arch/arc/include/asm/highmem.h | 2 +-
arch/arc/mm/highmem.c | 9 ++-------
arch/arm/include/asm/highmem.h | 2 +-
arch/arm/mm/highmem.c | 9 ++-------
arch/csky/include/asm/highmem.h | 2 +-
arch/csky/mm/highmem.c | 9 ++-------
arch/microblaze/include/asm/highmem.h | 2 +-
arch/microblaze/mm/highmem.c | 6 ------
arch/mips/include/asm/highmem.h | 2 +-
arch/mips/mm/cache.c | 2 +-
arch/mips/mm/highmem.c | 18 ++----------------
arch/nds32/include/asm/highmem.h | 2 +-
arch/nds32/mm/highmem.c | 9 ++-------
arch/powerpc/include/asm/highmem.h | 2 +-
arch/powerpc/mm/highmem.c | 11 -----------
arch/sparc/include/asm/highmem.h | 2 +-
arch/sparc/mm/highmem.c | 9 ++-------
arch/x86/include/asm/highmem.h | 7 +++++--
arch/x86/mm/highmem_32.c | 20 --------------------
arch/xtensa/include/asm/highmem.h | 2 +-
arch/xtensa/mm/highmem.c | 9 ++-------
include/linux/highmem.h | 22 ++++++++++++++++++++++
22 files changed, 51 insertions(+), 107 deletions(-)

diff --git a/arch/arc/include/asm/highmem.h b/arch/arc/include/asm/highmem.h
index 8387a5596a91..75bd0fa77fe2 100644
--- a/arch/arc/include/asm/highmem.h
+++ b/arch/arc/include/asm/highmem.h
@@ -30,7 +30,7 @@

#include <asm/cacheflush.h>

-extern void *kmap_atomic(struct page *page);
+extern void *kmap_atomic_high(struct page *page);
extern void __kunmap_atomic(void *kvaddr);

extern void kmap_init(void);
diff --git a/arch/arc/mm/highmem.c b/arch/arc/mm/highmem.c
index 4db13a6b9f3b..0964b011c29f 100644
--- a/arch/arc/mm/highmem.c
+++ b/arch/arc/mm/highmem.c
@@ -49,16 +49,11 @@
extern pte_t * pkmap_page_table;
static pte_t * fixmap_page_table;

-void *kmap_atomic(struct page *page)
+void *kmap_atomic_high(struct page *page)
{
int idx, cpu_idx;
unsigned long vaddr;

- preempt_disable();
- pagefault_disable();
- if (!PageHighMem(page))
- return page_address(page);
-
cpu_idx = kmap_atomic_idx_push();
idx = cpu_idx + KM_TYPE_NR * smp_processor_id();
vaddr = FIXMAP_ADDR(idx);
@@ -68,7 +63,7 @@ void *kmap_atomic(struct page *page)

return (void *)vaddr;
}
-EXPORT_SYMBOL(kmap_atomic);
+EXPORT_SYMBOL(kmap_atomic_high);

void __kunmap_atomic(void *kv)
{
diff --git a/arch/arm/include/asm/highmem.h b/arch/arm/include/asm/highmem.h
index 736f65283e7b..4edb6db3a5c8 100644
--- a/arch/arm/include/asm/highmem.h
+++ b/arch/arm/include/asm/highmem.h
@@ -60,7 +60,7 @@ static inline void *kmap_high_get(struct page *page)
* when CONFIG_HIGHMEM is not set.
*/
#ifdef CONFIG_HIGHMEM
-extern void *kmap_atomic(struct page *page);
+extern void *kmap_atomic_high(struct page *page);
extern void __kunmap_atomic(void *kvaddr);
extern void *kmap_atomic_pfn(unsigned long pfn);
#endif
diff --git a/arch/arm/mm/highmem.c b/arch/arm/mm/highmem.c
index c700b32350ee..075fdc235091 100644
--- a/arch/arm/mm/highmem.c
+++ b/arch/arm/mm/highmem.c
@@ -31,18 +31,13 @@ static inline pte_t get_fixmap_pte(unsigned long vaddr)
return *ptep;
}

-void *kmap_atomic(struct page *page)
+void *kmap_atomic_high(struct page *page)
{
unsigned int idx;
unsigned long vaddr;
void *kmap;
int type;

- preempt_disable();
- pagefault_disable();
- if (!PageHighMem(page))
- return page_address(page);
-
#ifdef CONFIG_DEBUG_HIGHMEM
/*
* There is no cache coherency issue when non VIVT, so force the
@@ -76,7 +71,7 @@ void *kmap_atomic(struct page *page)

return (void *)vaddr;
}
-EXPORT_SYMBOL(kmap_atomic);
+EXPORT_SYMBOL(kmap_atomic_high);

void __kunmap_atomic(void *kvaddr)
{
diff --git a/arch/csky/include/asm/highmem.h b/arch/csky/include/asm/highmem.h
index be11c5b67122..6807df1232f3 100644
--- a/arch/csky/include/asm/highmem.h
+++ b/arch/csky/include/asm/highmem.h
@@ -32,7 +32,7 @@ extern pte_t *pkmap_page_table;

#define ARCH_HAS_KMAP_FLUSH_TLB
extern void kmap_flush_tlb(unsigned long addr);
-extern void *kmap_atomic(struct page *page);
+extern void *kmap_atomic_high(struct page *page);
extern void __kunmap_atomic(void *kvaddr);
extern void *kmap_atomic_pfn(unsigned long pfn);
extern struct page *kmap_atomic_to_page(void *ptr);
diff --git a/arch/csky/mm/highmem.c b/arch/csky/mm/highmem.c
index e9952211264b..63d74b47eee6 100644
--- a/arch/csky/mm/highmem.c
+++ b/arch/csky/mm/highmem.c
@@ -21,16 +21,11 @@ EXPORT_SYMBOL(kmap_flush_tlb);

EXPORT_SYMBOL(kmap);

-void *kmap_atomic(struct page *page)
+void *kmap_atomic_high(struct page *page)
{
unsigned long vaddr;
int idx, type;

- preempt_disable();
- pagefault_disable();
- if (!PageHighMem(page))
- return page_address(page);
-
type = kmap_atomic_idx_push();
idx = type + KM_TYPE_NR*smp_processor_id();
vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
@@ -42,7 +37,7 @@ void *kmap_atomic(struct page *page)

return (void *)vaddr;
}
-EXPORT_SYMBOL(kmap_atomic);
+EXPORT_SYMBOL(kmap_atomic_high);

void __kunmap_atomic(void *kvaddr)
{
diff --git a/arch/microblaze/include/asm/highmem.h b/arch/microblaze/include/asm/highmem.h
index 0c94046f2d58..fe4ad8bac9ae 100644
--- a/arch/microblaze/include/asm/highmem.h
+++ b/arch/microblaze/include/asm/highmem.h
@@ -54,7 +54,7 @@ extern pte_t *pkmap_page_table;
extern void *kmap_atomic_prot(struct page *page, pgprot_t prot);
extern void __kunmap_atomic(void *kvaddr);

-static inline void *kmap_atomic(struct page *page)
+static inline void *kmap_atomic_high(struct page *page)
{
return kmap_atomic_prot(page, kmap_prot);
}
diff --git a/arch/microblaze/mm/highmem.c b/arch/microblaze/mm/highmem.c
index d7569f77fa15..a14f356b055b 100644
--- a/arch/microblaze/mm/highmem.c
+++ b/arch/microblaze/mm/highmem.c
@@ -38,12 +38,6 @@ void *kmap_atomic_prot(struct page *page, pgprot_t prot)
unsigned long vaddr;
int idx, type;

- preempt_disable();
- pagefault_disable();
- if (!PageHighMem(page))
- return page_address(page);
-
-
type = kmap_atomic_idx_push();
idx = type + KM_TYPE_NR*smp_processor_id();
vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
diff --git a/arch/mips/include/asm/highmem.h b/arch/mips/include/asm/highmem.h
index 24e7e7e5cc7b..a515bcf15d4b 100644
--- a/arch/mips/include/asm/highmem.h
+++ b/arch/mips/include/asm/highmem.h
@@ -48,7 +48,7 @@ extern pte_t *pkmap_page_table;

#define ARCH_HAS_KMAP_FLUSH_TLB
extern void kmap_flush_tlb(unsigned long addr);
-extern void *kmap_atomic(struct page *page);
+extern void *kmap_atomic_high(struct page *page);
extern void __kunmap_atomic(void *kvaddr);
extern void *kmap_atomic_pfn(unsigned long pfn);

diff --git a/arch/mips/mm/cache.c b/arch/mips/mm/cache.c
index 33b409391ddb..f015bb51fab0 100644
--- a/arch/mips/mm/cache.c
+++ b/arch/mips/mm/cache.c
@@ -14,9 +14,9 @@
#include <linux/sched.h>
#include <linux/syscalls.h>
#include <linux/mm.h>
+#include <linux/highmem.h>

#include <asm/cacheflush.h>
-#include <asm/highmem.h>
#include <asm/processor.h>
#include <asm/cpu.h>
#include <asm/cpu-features.h>
diff --git a/arch/mips/mm/highmem.c b/arch/mips/mm/highmem.c
index eb8ec8493f2f..2bda56372995 100644
--- a/arch/mips/mm/highmem.c
+++ b/arch/mips/mm/highmem.c
@@ -18,25 +18,11 @@ void kmap_flush_tlb(unsigned long addr)
}
EXPORT_SYMBOL(kmap_flush_tlb);

-/*
- * kmap_atomic/kunmap_atomic is significantly faster than kmap/kunmap because
- * no global lock is needed and because the kmap code must perform a global TLB
- * invalidation when the kmap pool wraps.
- *
- * However when holding an atomic kmap is is not legal to sleep, so atomic
- * kmaps are appropriate for short, tight code paths only.
- */
-
-void *kmap_atomic(struct page *page)
+void *kmap_atomic_high(struct page *page)
{
unsigned long vaddr;
int idx, type;

- preempt_disable();
- pagefault_disable();
- if (!PageHighMem(page))
- return page_address(page);
-
type = kmap_atomic_idx_push();
idx = type + KM_TYPE_NR*smp_processor_id();
vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
@@ -48,7 +34,7 @@ void *kmap_atomic(struct page *page)

return (void*) vaddr;
}
-EXPORT_SYMBOL(kmap_atomic);
+EXPORT_SYMBOL(kmap_atomic_high);

void __kunmap_atomic(void *kvaddr)
{
diff --git a/arch/nds32/include/asm/highmem.h b/arch/nds32/include/asm/highmem.h
index c93c7368bb3f..28f5e7072c70 100644
--- a/arch/nds32/include/asm/highmem.h
+++ b/arch/nds32/include/asm/highmem.h
@@ -51,7 +51,7 @@ extern void kmap_init(void);
* when CONFIG_HIGHMEM is not set.
*/
#ifdef CONFIG_HIGHMEM
-extern void *kmap_atomic(struct page *page);
+extern void *kmap_atomic_high(struct page *page);
extern void __kunmap_atomic(void *kvaddr);
extern void *kmap_atomic_pfn(unsigned long pfn);
extern struct page *kmap_atomic_to_page(void *ptr);
diff --git a/arch/nds32/mm/highmem.c b/arch/nds32/mm/highmem.c
index f9348bec0ecb..f5f3a21460c4 100644
--- a/arch/nds32/mm/highmem.c
+++ b/arch/nds32/mm/highmem.c
@@ -10,18 +10,13 @@
#include <asm/fixmap.h>
#include <asm/tlbflush.h>

-void *kmap_atomic(struct page *page)
+void *kmap_atomic_high(struct page *page)
{
unsigned int idx;
unsigned long vaddr, pte;
int type;
pte_t *ptep;

- preempt_disable();
- pagefault_disable();
- if (!PageHighMem(page))
- return page_address(page);
-
type = kmap_atomic_idx_push();

idx = type + KM_TYPE_NR * smp_processor_id();
@@ -37,7 +32,7 @@ void *kmap_atomic(struct page *page)
return (void *)vaddr;
}

-EXPORT_SYMBOL(kmap_atomic);
+EXPORT_SYMBOL(kmap_atomic_high);

void __kunmap_atomic(void *kvaddr)
{
diff --git a/arch/powerpc/include/asm/highmem.h b/arch/powerpc/include/asm/highmem.h
index ba3371977d49..ac0efc2cf08a 100644
--- a/arch/powerpc/include/asm/highmem.h
+++ b/arch/powerpc/include/asm/highmem.h
@@ -62,7 +62,7 @@ extern pte_t *pkmap_page_table;
extern void *kmap_atomic_prot(struct page *page, pgprot_t prot);
extern void __kunmap_atomic(void *kvaddr);

-static inline void *kmap_atomic(struct page *page)
+static inline void *kmap_atomic_high(struct page *page)
{
return kmap_atomic_prot(page, kmap_prot);
}
diff --git a/arch/powerpc/mm/highmem.c b/arch/powerpc/mm/highmem.c
index 320c1672b2ae..f9558ef4b8fa 100644
--- a/arch/powerpc/mm/highmem.c
+++ b/arch/powerpc/mm/highmem.c
@@ -24,22 +24,11 @@
#include <linux/highmem.h>
#include <linux/module.h>

-/*
- * The use of kmap_atomic/kunmap_atomic is discouraged - kmap/kunmap
- * gives a more generic (and caching) interface. But kmap_atomic can
- * be used in IRQ contexts, so in some (very limited) cases we need
- * it.
- */
void *kmap_atomic_prot(struct page *page, pgprot_t prot)
{
unsigned long vaddr;
int idx, type;

- preempt_disable();
- pagefault_disable();
- if (!PageHighMem(page))
- return page_address(page);
-
type = kmap_atomic_idx_push();
idx = type + KM_TYPE_NR*smp_processor_id();
vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
diff --git a/arch/sparc/include/asm/highmem.h b/arch/sparc/include/asm/highmem.h
index 4bdb79fed02c..c96a0603d821 100644
--- a/arch/sparc/include/asm/highmem.h
+++ b/arch/sparc/include/asm/highmem.h
@@ -50,7 +50,7 @@ void kmap_init(void) __init;

#define PKMAP_END (PKMAP_ADDR(LAST_PKMAP))

-void *kmap_atomic(struct page *page);
+void *kmap_atomic_high(struct page *page);
void __kunmap_atomic(void *kvaddr);

#define flush_cache_kmaps() flush_cache_all()
diff --git a/arch/sparc/mm/highmem.c b/arch/sparc/mm/highmem.c
index d4a80adea7e5..b53070ab6a31 100644
--- a/arch/sparc/mm/highmem.c
+++ b/arch/sparc/mm/highmem.c
@@ -53,16 +53,11 @@ void __init kmap_init(void)
kmap_prot = __pgprot(SRMMU_ET_PTE | SRMMU_PRIV | SRMMU_CACHE);
}

-void *kmap_atomic(struct page *page)
+void *kmap_atomic_high(struct page *page)
{
unsigned long vaddr;
long idx, type;

- preempt_disable();
- pagefault_disable();
- if (!PageHighMem(page))
- return page_address(page);
-
type = kmap_atomic_idx_push();
idx = type + KM_TYPE_NR*smp_processor_id();
vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
@@ -87,7 +82,7 @@ void *kmap_atomic(struct page *page)

return (void*) vaddr;
}
-EXPORT_SYMBOL(kmap_atomic);
+EXPORT_SYMBOL(kmap_atomic_high);

void __kunmap_atomic(void *kvaddr)
{
diff --git a/arch/x86/include/asm/highmem.h b/arch/x86/include/asm/highmem.h
index 90b96594d6c5..72e154e17416 100644
--- a/arch/x86/include/asm/highmem.h
+++ b/arch/x86/include/asm/highmem.h
@@ -58,8 +58,11 @@ extern unsigned long highstart_pfn, highend_pfn;
#define PKMAP_NR(virt) ((virt-PKMAP_BASE) >> PAGE_SHIFT)
#define PKMAP_ADDR(nr) (PKMAP_BASE + ((nr) << PAGE_SHIFT))

-void *kmap_atomic_prot(struct page *page, pgprot_t prot);
-void *kmap_atomic(struct page *page);
+extern void *kmap_atomic_prot(struct page *page, pgprot_t prot);
+void *kmap_atomic_high(struct page *page)
+{
+ return kmap_atomic_prot(page, kmap_prot);
+}
void __kunmap_atomic(void *kvaddr);
void *kmap_atomic_pfn(unsigned long pfn);
void *kmap_atomic_prot_pfn(unsigned long pfn, pgprot_t prot);
diff --git a/arch/x86/mm/highmem_32.c b/arch/x86/mm/highmem_32.c
index c4ebfd0ae401..937d2cc40389 100644
--- a/arch/x86/mm/highmem_32.c
+++ b/arch/x86/mm/highmem_32.c
@@ -4,25 +4,11 @@
#include <linux/swap.h> /* for totalram_pages */
#include <linux/memblock.h>

-/*
- * kmap_atomic/kunmap_atomic is significantly faster than kmap/kunmap because
- * no global lock is needed and because the kmap code must perform a global TLB
- * invalidation when the kmap pool wraps.
- *
- * However when holding an atomic kmap it is not legal to sleep, so atomic
- * kmaps are appropriate for short, tight code paths only.
- */
void *kmap_atomic_prot(struct page *page, pgprot_t prot)
{
unsigned long vaddr;
int idx, type;

- preempt_disable();
- pagefault_disable();
-
- if (!PageHighMem(page))
- return page_address(page);
-
type = kmap_atomic_idx_push();
idx = type + KM_TYPE_NR*smp_processor_id();
vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
@@ -34,12 +20,6 @@ void *kmap_atomic_prot(struct page *page, pgprot_t prot)
}
EXPORT_SYMBOL(kmap_atomic_prot);

-void *kmap_atomic(struct page *page)
-{
- return kmap_atomic_prot(page, kmap_prot);
-}
-EXPORT_SYMBOL(kmap_atomic);
-
/*
* This is the same as kmap_atomic() but can map memory that doesn't
* have a struct page associated with it.
diff --git a/arch/xtensa/include/asm/highmem.h b/arch/xtensa/include/asm/highmem.h
index 5a481f7def0b..d4fb9f78ba32 100644
--- a/arch/xtensa/include/asm/highmem.h
+++ b/arch/xtensa/include/asm/highmem.h
@@ -68,7 +68,7 @@ static inline void flush_cache_kmaps(void)
flush_cache_all();
}

-void *kmap_atomic(struct page *page);
+void *kmap_atomic_high(struct page *page);
void __kunmap_atomic(void *kvaddr);

void kmap_init(void);
diff --git a/arch/xtensa/mm/highmem.c b/arch/xtensa/mm/highmem.c
index 711641c4d214..217f2ebaa298 100644
--- a/arch/xtensa/mm/highmem.c
+++ b/arch/xtensa/mm/highmem.c
@@ -37,16 +37,11 @@ static inline enum fixed_addresses kmap_idx(int type, unsigned long color)
color;
}

-void *kmap_atomic(struct page *page)
+void *kmap_atomic_high(struct page *page)
{
enum fixed_addresses idx;
unsigned long vaddr;

- preempt_disable();
- pagefault_disable();
- if (!PageHighMem(page))
- return page_address(page);
-
idx = kmap_idx(kmap_atomic_idx_push(),
DCACHE_ALIAS(page_to_phys(page)));
vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
@@ -57,7 +52,7 @@ void *kmap_atomic(struct page *page)

return (void *)vaddr;
}
-EXPORT_SYMBOL(kmap_atomic);
+EXPORT_SYMBOL(kmap_atomic_high);

void __kunmap_atomic(void *kvaddr)
{
diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index ae6e8cb81043..e0106b4f7dbb 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -61,6 +61,28 @@ static inline void kunmap(struct page *page)
kunmap_high(page);
}

+/*
+ * kmap_atomic/kunmap_atomic is significantly faster than kmap/kunmap because
+ * no global lock is needed and because the kmap code must perform a global TLB
+ * invalidation when the kmap pool wraps.
+ *
+ * However when holding an atomic kmap it is not legal to sleep, so atomic
+ * kmaps are appropriate for short, tight code paths only.
+ *
+ * The use of kmap_atomic/kunmap_atomic is discouraged - kmap/kunmap
+ * gives a more generic (and caching) interface. But kmap_atomic can
+ * be used in IRQ contexts, so in some (very limited) cases we need
+ * it.
+ */
+static inline void *kmap_atomic(struct page *page)
+{
+ preempt_disable();
+ pagefault_disable();
+ if (!PageHighMem(page))
+ return page_address(page);
+ return kmap_atomic_high(page);
+}
+
/* declarations for linux/mm/highmem.c */
unsigned int nr_free_highpages(void);
extern atomic_long_t _totalhigh_pages;
--
2.25.1

2020-05-01 02:11:12

by Al Viro

[permalink] [raw]
Subject: Re: [PATCH V1 05/10] arch/kmap_atomic: Consolidate duplicate code

On Thu, Apr 30, 2020 at 01:38:40PM -0700, [email protected] wrote:
> From: Ira Weiny <[email protected]>
>
> Every arch has the same code to ensure atomic operations and a check for
> !HIGHMEM page.
>
> Remove the duplicate code by defining a core kmap_atomic() which only
> calls the arch specific kmap_atomic_high() when the page is high memory.

Err.... AFAICS, you've just silently changed the semantics for
kmap_atomic_prot() here. And while most of the callers are converted,
the drivers/gpu/drm/ttm/ttm_bo_util.c one is not, so at the very least it's
a bisect hazard...

And I would argue that having kmap_atomic() differ from kmap_atomic_prot()
wrt disabling preempt is asking for trouble.
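
FWIW, to spell the hazard out (just my reading of #5 and #10 as posted, not a
tested claim): after this patch the generic wrapper in linux/highmem.h is

	static inline void *kmap_atomic(struct page *page)
	{
		preempt_disable();
		pagefault_disable();
		if (!PageHighMem(page))
			return page_address(page);
		return kmap_atomic_high(page);
	}

while x86's kmap_atomic_prot() has lost its preempt_disable()/pagefault_disable()
and PageHighMem() prologue, yet ttm_bo_util.c keeps calling kmap_atomic_prot()
directly (for non-PAGE_KERNEL protections) until #10 removes
ttm_kmap_atomic_prot().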

2020-05-01 02:19:12

by Al Viro

[permalink] [raw]
Subject: Re: [PATCH V1 08/10] arch/kmap: Don't hard code kmap_prot values

On Thu, Apr 30, 2020 at 01:38:43PM -0700, [email protected] wrote:
> From: Ira Weiny <[email protected]>
>
> To support kmap_atomic_prot() on all architectures each arch must
> support protections passed in to them.
>
> Change csky, mips, nds32 and xtensa to use their global kmap_prot value
> rather than a hard coded value which was equal.

Minor nitpick: it's probably worth pointing out that kmap_prot on those
is a constant...

2020-05-01 02:30:52

by Michael Ellerman

[permalink] [raw]
Subject: Re: [PATCH V1 00/10] Remove duplicated kmap code

[email protected] writes:
> From: Ira Weiny <[email protected]>
>
> The kmap infrastructure has been copied almost verbatim to every architecture.
> This series consolidates obvious duplicated code by defining core functions
> which call into the architectures only when needed.
>
> Some of the k[un]map_atomic() implementations have some similarities but the
> similarities were not sufficient to warrant further changes.
>
> In addition we remove a duplicate implementation of kmap() in DRM.
>
> Testing was done by 0day to cover all the architectures I can't readily
> build/test.

I threw some powerpc builds at it and they all passed, so LGTM.

cheers

2020-05-01 02:39:32

by Al Viro

[permalink] [raw]
Subject: Re: [PATCH V1 09/10] arch/kmap: Define kmap_atomic_prot() for all arch's

On Thu, Apr 30, 2020 at 01:38:44PM -0700, [email protected] wrote:

> -static inline void *kmap_atomic(struct page *page)
> +static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot)
> {
> preempt_disable();
> pagefault_disable();
> if (!PageHighMem(page))
> return page_address(page);
> - return kmap_atomic_high(page);
> + return kmap_atomic_high_prot(page, prot);
> }
> +#define kmap_atomic(page) kmap_atomic_prot(page, kmap_prot)

OK, so it *was* just a bisect hazard - you return to original semantics
wrt preempt_disable()...

2020-05-01 03:23:25

by Al Viro

[permalink] [raw]
Subject: Re: [PATCH V1 09/10] arch/kmap: Define kmap_atomic_prot() for all arch's

On Fri, May 01, 2020 at 03:37:34AM +0100, Al Viro wrote:
> On Thu, Apr 30, 2020 at 01:38:44PM -0700, [email protected] wrote:
>
> > -static inline void *kmap_atomic(struct page *page)
> > +static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot)
> > {
> > preempt_disable();
> > pagefault_disable();
> > if (!PageHighMem(page))
> > return page_address(page);
> > - return kmap_atomic_high(page);
> > + return kmap_atomic_high_prot(page, prot);
> > }
> > +#define kmap_atomic(page) kmap_atomic_prot(page, kmap_prot)
>
> OK, so it *was* just a bisect hazard - you return to original semantics
> wrt preempt_disable()...

FWIW, how about doing the following: just before #5/10 have a patch
that would touch only microblaze, ppc and x86 splitting their
kmap_atomic_prot() into an inline helper + kmap_atomic_high_prot().
Then your #5 would leave their kmap_atomic_prot() as-is (it would
use kmap_atomic_high_prot() instead). The rest of the series plays
out pretty much the same way it does now, and wrappers on those
3 architectures would go away when an identical generic one is
introduced in this commit (#9/10).

AFAICS, that would avoid the bisect hazard and might even end
up with less noise in the patches...
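
Concretely, for x86 that intermediate patch would be something like the sketch
below (untested, names as above; microblaze and ppc would be analogous):

	/* arch/x86/include/asm/highmem.h */
	void *kmap_atomic_high_prot(struct page *page, pgprot_t prot);

	static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot)
	{
		preempt_disable();
		pagefault_disable();
		if (!PageHighMem(page))
			return page_address(page);
		return kmap_atomic_high_prot(page, prot);
	}

i.e. the existing kmap_atomic_prot() body in highmem_32.c renamed to
kmap_atomic_high_prot() minus the preempt/pagefault/PageHighMem prologue, so
the semantics seen by the ttm caller never change anywhere in the series.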

2020-05-01 07:25:49

by Christian König

[permalink] [raw]
Subject: Re: [PATCH V1 10/10] drm: Remove drm specific kmap_atomic code

On 30.04.20 at 22:38, [email protected] wrote:
> From: Ira Weiny <[email protected]>
>
> kmap_atomic_prot() is now exported by all architectures. Use this
> function rather than open coding a driver specific kmap_atomic.
>
> Signed-off-by: Ira Weiny <[email protected]>

Ah, yes. Looking into this once more, this was on my TODO list for quite a
while as well.

Patch is Reviewed-by: Christian König <[email protected]>. Feel
free to push it upstream through whatever channel you like, or ping me if
I should pick it up into drm-misc-next.

Regards,
Christian.

> ---
> drivers/gpu/drm/ttm/ttm_bo_util.c | 56 ++--------------------------
> drivers/gpu/drm/vmwgfx/vmwgfx_blit.c | 16 ++++----
> include/drm/ttm/ttm_bo_api.h | 4 --
> 3 files changed, 12 insertions(+), 64 deletions(-)
>
> diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c
> index 52d2b71f1588..f09b096ba4fd 100644
> --- a/drivers/gpu/drm/ttm/ttm_bo_util.c
> +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
> @@ -257,54 +257,6 @@ static int ttm_copy_io_page(void *dst, void *src, unsigned long page)
> return 0;
> }
>
> -#ifdef CONFIG_X86
> -#define __ttm_kmap_atomic_prot(__page, __prot) kmap_atomic_prot(__page, __prot)
> -#define __ttm_kunmap_atomic(__addr) kunmap_atomic(__addr)
> -#else
> -#define __ttm_kmap_atomic_prot(__page, __prot) vmap(&__page, 1, 0, __prot)
> -#define __ttm_kunmap_atomic(__addr) vunmap(__addr)
> -#endif
> -
> -
> -/**
> - * ttm_kmap_atomic_prot - Efficient kernel map of a single page with
> - * specified page protection.
> - *
> - * @page: The page to map.
> - * @prot: The page protection.
> - *
> - * This function maps a TTM page using the kmap_atomic api if available,
> - * otherwise falls back to vmap. The user must make sure that the
> - * specified page does not have an aliased mapping with a different caching
> - * policy unless the architecture explicitly allows it. Also mapping and
> - * unmapping using this api must be correctly nested. Unmapping should
> - * occur in the reverse order of mapping.
> - */
> -void *ttm_kmap_atomic_prot(struct page *page, pgprot_t prot)
> -{
> - if (pgprot_val(prot) == pgprot_val(PAGE_KERNEL))
> - return kmap_atomic(page);
> - else
> - return __ttm_kmap_atomic_prot(page, prot);
> -}
> -EXPORT_SYMBOL(ttm_kmap_atomic_prot);
> -
> -/**
> - * ttm_kunmap_atomic_prot - Unmap a page that was mapped using
> - * ttm_kmap_atomic_prot.
> - *
> - * @addr: The virtual address from the map.
> - * @prot: The page protection.
> - */
> -void ttm_kunmap_atomic_prot(void *addr, pgprot_t prot)
> -{
> - if (pgprot_val(prot) == pgprot_val(PAGE_KERNEL))
> - kunmap_atomic(addr);
> - else
> - __ttm_kunmap_atomic(addr);
> -}
> -EXPORT_SYMBOL(ttm_kunmap_atomic_prot);
> -
> static int ttm_copy_io_ttm_page(struct ttm_tt *ttm, void *src,
> unsigned long page,
> pgprot_t prot)
> @@ -316,13 +268,13 @@ static int ttm_copy_io_ttm_page(struct ttm_tt *ttm, void *src,
> return -ENOMEM;
>
> src = (void *)((unsigned long)src + (page << PAGE_SHIFT));
> - dst = ttm_kmap_atomic_prot(d, prot);
> + dst = kmap_atomic_prot(d, prot);
> if (!dst)
> return -ENOMEM;
>
> memcpy_fromio(dst, src, PAGE_SIZE);
>
> - ttm_kunmap_atomic_prot(dst, prot);
> + kunmap_atomic(dst);
>
> return 0;
> }
> @@ -338,13 +290,13 @@ static int ttm_copy_ttm_io_page(struct ttm_tt *ttm, void *dst,
> return -ENOMEM;
>
> dst = (void *)((unsigned long)dst + (page << PAGE_SHIFT));
> - src = ttm_kmap_atomic_prot(s, prot);
> + src = kmap_atomic_prot(s, prot);
> if (!src)
> return -ENOMEM;
>
> memcpy_toio(dst, src, PAGE_SIZE);
>
> - ttm_kunmap_atomic_prot(src, prot);
> + kunmap_atomic(src);
>
> return 0;
> }
> diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_blit.c b/drivers/gpu/drm/vmwgfx/vmwgfx_blit.c
> index bb46ca0c458f..94d456a1d1a9 100644
> --- a/drivers/gpu/drm/vmwgfx/vmwgfx_blit.c
> +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_blit.c
> @@ -374,12 +374,12 @@ static int vmw_bo_cpu_blit_line(struct vmw_bo_blit_line_data *d,
> copy_size = min_t(u32, copy_size, PAGE_SIZE - src_page_offset);
>
> if (unmap_src) {
> - ttm_kunmap_atomic_prot(d->src_addr, d->src_prot);
> + kunmap_atomic(d->src_addr);
> d->src_addr = NULL;
> }
>
> if (unmap_dst) {
> - ttm_kunmap_atomic_prot(d->dst_addr, d->dst_prot);
> + kunmap_atomic(d->dst_addr);
> d->dst_addr = NULL;
> }
>
> @@ -388,8 +388,8 @@ static int vmw_bo_cpu_blit_line(struct vmw_bo_blit_line_data *d,
> return -EINVAL;
>
> d->dst_addr =
> - ttm_kmap_atomic_prot(d->dst_pages[dst_page],
> - d->dst_prot);
> + kmap_atomic_prot(d->dst_pages[dst_page],
> + d->dst_prot);
> if (!d->dst_addr)
> return -ENOMEM;
>
> @@ -401,8 +401,8 @@ static int vmw_bo_cpu_blit_line(struct vmw_bo_blit_line_data *d,
> return -EINVAL;
>
> d->src_addr =
> - ttm_kmap_atomic_prot(d->src_pages[src_page],
> - d->src_prot);
> + kmap_atomic_prot(d->src_pages[src_page],
> + d->src_prot);
> if (!d->src_addr)
> return -ENOMEM;
>
> @@ -499,9 +499,9 @@ int vmw_bo_cpu_blit(struct ttm_buffer_object *dst,
> }
> out:
> if (d.src_addr)
> - ttm_kunmap_atomic_prot(d.src_addr, d.src_prot);
> + kunmap_atomic(d.src_addr);
> if (d.dst_addr)
> - ttm_kunmap_atomic_prot(d.dst_addr, d.dst_prot);
> + kunmap_atomic(d.dst_addr);
>
> return ret;
> }
> diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
> index 0a9d042e075a..de1ccdcd5703 100644
> --- a/include/drm/ttm/ttm_bo_api.h
> +++ b/include/drm/ttm/ttm_bo_api.h
> @@ -668,10 +668,6 @@ int ttm_bo_mmap_obj(struct vm_area_struct *vma, struct ttm_buffer_object *bo);
> int ttm_bo_mmap(struct file *filp, struct vm_area_struct *vma,
> struct ttm_bo_device *bdev);
>
> -void *ttm_kmap_atomic_prot(struct page *page, pgprot_t prot);
> -
> -void ttm_kunmap_atomic_prot(void *addr, pgprot_t prot);
> -
> /**
> * ttm_bo_io
> *

2020-05-01 08:37:54

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH V1 01/10] arch/kmap: Remove BUG_ON()

On Thu, Apr 30, 2020 at 01:38:36PM -0700, [email protected] wrote:
> From: Ira Weiny <[email protected]>
>
> Replace the use of BUG_ON(in_interrupt()) in the kmap() and kunmap()
> in favor of might_sleep().
>
> Besides the benefits of might_sleep(), this normalizes the
> implementations such that they can be made generic in subsequent
> patches.
>
> Reviewed-by: Dan Williams <[email protected]>
> Signed-off-by: Ira Weiny <[email protected]>

Looks good,

Reviewed-by: Christoph Hellwig <[email protected]>

2020-05-01 08:40:31

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH V1 04/10] arch/kunmap: Remove duplicate kunmap implementations

On Thu, Apr 30, 2020 at 01:38:39PM -0700, [email protected] wrote:
> From: Ira Weiny <[email protected]>
>
> All architectures do exactly the same thing for kunmap(); remove all the
> duplicate definitions and lift the call to the core.
>
> This also has the benefit of changing kunmap() on a number of
> architectures to be an inline call rather than an actual function.
>
> Signed-off-by: Ira Weiny <[email protected]>

Looks good,

Reviewed-by: Christoph Hellwig <[email protected]>

2020-05-01 08:41:34

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH V1 06/10] arch/kunmap_atomic: Consolidate duplicate code

On Thu, Apr 30, 2020 at 01:38:41PM -0700, [email protected] wrote:
> From: Ira Weiny <[email protected]>
>
> Every single architecture (including !CONFIG_HIGHMEM) calls...
>
> pagefault_enable();
> preempt_enable();
>
> ... before returning from __kunmap_atomic(). Lift this code into the
> kunmap_atomic() macro.
>
> While we are at it rename __kunmap_atomic() to kunmap_atomic_high() to
> be consistent.
>
> Signed-off-by: Ira Weiny <[email protected]>

Looks good,

Reviewed-by: Christoph Hellwig <[email protected]>

2020-05-01 08:43:12

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH V1 05/10] arch/kmap_atomic: Consolidate duplicate code

On Thu, Apr 30, 2020 at 01:38:40PM -0700, [email protected] wrote:
> From: Ira Weiny <[email protected]>
>
> Every arch has the same code to ensure atomic operations and a check for
> !HIGHMEM page.
>
> Remove the duplicate code by defining a core kmap_atomic() which only
> calls the arch specific kmap_atomic_high() when the page is high memory.
>
> Signed-off-by: Ira Weiny <[email protected]>

Looks good:

Reviewed-by: Christoph Hellwig <[email protected]>

2020-05-01 08:46:35

by Christoph Hellwig

[permalink] [raw]
Subject: sparc-related comment, to Re: [PATCH V1 07/10] arch/kmap: Ensure kmap_prot visibility

> --- a/arch/sparc/mm/highmem.c
> +++ b/arch/sparc/mm/highmem.c
> @@ -33,6 +33,7 @@
> #include <asm/vaddrs.h>
>
> pgprot_t kmap_prot;
> +EXPORT_SYMBOL(kmap_prot);

Btw, I don't see why sparc needs this as a variable, as there is just
a single assignment to it.

If sparc is sorted out we can always make it a define, and use a define
for kmap_prot that defaults to PAGE_KERNEL, avoiding a little
more duplication.
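
Something along these lines in linux/highmem.h would do (sketch only, assuming
any arch that needs a different value keeps its own #define in asm/highmem.h):

	#ifndef kmap_prot
	#define kmap_prot PAGE_KERNEL
	#endif

and the per-arch copies that are literally PAGE_KERNEL could then be dropped.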

2020-05-01 08:49:15

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH V1 08/10] arch/kmap: Don't hard code kmap_prot values

On Thu, Apr 30, 2020 at 01:38:43PM -0700, [email protected] wrote:
> From: Ira Weiny <[email protected]>
>
> To support kmap_atomic_prot() on all architectures each arch must
> support protections passed in to them.
>
> Change csky, mips, nds32 and xtensa to use their global kmap_prot value
> rather than a hard coded value which was equal.

Looks good,

Reviewed-by: Christoph Hellwig <[email protected]>

2020-05-01 08:50:53

by Christoph Hellwig

[permalink] [raw]
Subject: xtensa question, was Re: [PATCH V1 00/10] Remove duplicated kmap code

Hi Max,

any idea why xtensa uses PAGE_KERNEL_EXEC instead of PAGE_KERNEL
for kmap_prot? Mapping all mapped highmem as executable seems rather
dangerous.

2020-05-01 08:51:22

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH V1 10/10] drm: Remove drm specific kmap_atomic code

Looks good,

Reviewed-by: Christoph Hellwig <[email protected]>

2020-05-01 08:52:57

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH V1 09/10] arch/kmap: Define kmap_atomic_prot() for all arch's

On Thu, Apr 30, 2020 at 01:38:44PM -0700, [email protected] wrote:
> From: Ira Weiny <[email protected]>
>
> To support kmap_atomic_prot(), all architectures need to support
> protections passed to their kmap_atomic_high() function. Pass
> protections into kmap_atomic_high() and change the name to
> kmap_atomic_high_prot() to match.
>
> Then define kmap_atomic_prot() as a core function which calls
> kmap_atomic_high_prot() when needed.
>
> Finally, redefine kmap_atomic() as a wrapper of kmap_atomic_prot() with
> the default kmap_prot exported by the architectures.

Looks good,

Reviewed-by: Christoph Hellwig <[email protected]>

But can you also consolidate the kmap_atomic_high_prot and
kunmap_atomic_high in linux/highmem.h instead of keeping the duplicates
in all arch headers?
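
I.e. declare them once, something like this sketch:

	/* include/linux/highmem.h, under #ifdef CONFIG_HIGHMEM */
	void *kmap_atomic_high_prot(struct page *page, pgprot_t prot);
	void kunmap_atomic_high(void *kvaddr);

so the copies of those two prototypes in the asm/highmem.h headers can all go.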

>
> Signed-off-by: Ira Weiny <[email protected]>
> ---
> arch/arc/include/asm/highmem.h | 2 +-
> arch/arc/mm/highmem.c | 6 +++---
> arch/arm/include/asm/highmem.h | 2 +-
> arch/arm/mm/highmem.c | 6 +++---
> arch/csky/include/asm/highmem.h | 2 +-
> arch/csky/mm/highmem.c | 6 +++---
> arch/microblaze/include/asm/highmem.h | 7 +------
> arch/microblaze/mm/highmem.c | 4 ++--
> arch/mips/include/asm/highmem.h | 2 +-
> arch/mips/mm/highmem.c | 6 +++---
> arch/nds32/include/asm/highmem.h | 2 +-
> arch/nds32/mm/highmem.c | 6 +++---
> arch/powerpc/include/asm/highmem.h | 8 +-------
> arch/powerpc/mm/highmem.c | 4 ++--
> arch/sparc/include/asm/highmem.h | 2 +-
> arch/sparc/mm/highmem.c | 6 +++---
> arch/x86/include/asm/highmem.h | 6 +-----
> arch/x86/mm/highmem_32.c | 4 ++--
> arch/xtensa/include/asm/highmem.h | 2 +-
> arch/xtensa/mm/highmem.c | 6 +++---
> include/linux/highmem.h | 5 +++--
> 21 files changed, 40 insertions(+), 54 deletions(-)
>
> diff --git a/arch/arc/include/asm/highmem.h b/arch/arc/include/asm/highmem.h
> index e16531495620..09f86bde6809 100644
> --- a/arch/arc/include/asm/highmem.h
> +++ b/arch/arc/include/asm/highmem.h
> @@ -30,7 +30,7 @@
>
> #include <asm/cacheflush.h>
>
> -extern void *kmap_atomic_high(struct page *page);
> +extern void *kmap_atomic_high_prot(struct page *page, pgprot_t prot);
> extern void kunmap_atomic_high(void *kvaddr);
>
> extern void kmap_init(void);
> diff --git a/arch/arc/mm/highmem.c b/arch/arc/mm/highmem.c
> index 5d3eab4ac0b0..479b0d72d3cf 100644
> --- a/arch/arc/mm/highmem.c
> +++ b/arch/arc/mm/highmem.c
> @@ -49,7 +49,7 @@
> extern pte_t * pkmap_page_table;
> static pte_t * fixmap_page_table;
>
> -void *kmap_atomic_high(struct page *page)
> +void *kmap_atomic_high_prot(struct page *page, pgprot_t prot)
> {
> int idx, cpu_idx;
> unsigned long vaddr;
> @@ -59,11 +59,11 @@ void *kmap_atomic_high(struct page *page)
> vaddr = FIXMAP_ADDR(idx);
>
> set_pte_at(&init_mm, vaddr, fixmap_page_table + idx,
> - mk_pte(page, kmap_prot));
> + mk_pte(page, prot));
>
> return (void *)vaddr;
> }
> -EXPORT_SYMBOL(kmap_atomic_high);
> +EXPORT_SYMBOL(kmap_atomic_high_prot);
>
> void kunmap_atomic_high(void *kv)
> {
> diff --git a/arch/arm/include/asm/highmem.h b/arch/arm/include/asm/highmem.h
> index a9d5e9bce1cc..e35f2f73f6aa 100644
> --- a/arch/arm/include/asm/highmem.h
> +++ b/arch/arm/include/asm/highmem.h
> @@ -60,7 +60,7 @@ static inline void *kmap_high_get(struct page *page)
> * when CONFIG_HIGHMEM is not set.
> */
> #ifdef CONFIG_HIGHMEM
> -extern void *kmap_atomic_high(struct page *page);
> +extern void *kmap_atomic_high_prot(struct page *page, pgprot_t prot);
> extern void kunmap_atomic_high(void *kvaddr);
> extern void *kmap_atomic_pfn(unsigned long pfn);
> #endif
> diff --git a/arch/arm/mm/highmem.c b/arch/arm/mm/highmem.c
> index ac8394655a6e..e013f6b81328 100644
> --- a/arch/arm/mm/highmem.c
> +++ b/arch/arm/mm/highmem.c
> @@ -31,7 +31,7 @@ static inline pte_t get_fixmap_pte(unsigned long vaddr)
> return *ptep;
> }
>
> -void *kmap_atomic_high(struct page *page)
> +void *kmap_atomic_high_prot(struct page *page, pgprot_t prot)
> {
> unsigned int idx;
> unsigned long vaddr;
> @@ -67,11 +67,11 @@ void *kmap_atomic_high(struct page *page)
> * in place, so the contained TLB flush ensures the TLB is updated
> * with the new mapping.
> */
> - set_fixmap_pte(idx, mk_pte(page, kmap_prot));
> + set_fixmap_pte(idx, mk_pte(page, prot));
>
> return (void *)vaddr;
> }
> -EXPORT_SYMBOL(kmap_atomic_high);
> +EXPORT_SYMBOL(kmap_atomic_high_prot);
>
> void kunmap_atomic_high(void *kvaddr)
> {
> diff --git a/arch/csky/include/asm/highmem.h b/arch/csky/include/asm/highmem.h
> index 5bbbe59e60a9..59854c7ccf78 100644
> --- a/arch/csky/include/asm/highmem.h
> +++ b/arch/csky/include/asm/highmem.h
> @@ -32,7 +32,7 @@ extern pte_t *pkmap_page_table;
>
> #define ARCH_HAS_KMAP_FLUSH_TLB
> extern void kmap_flush_tlb(unsigned long addr);
> -extern void *kmap_atomic_high(struct page *page);
> +extern void *kmap_atomic_high_prot(struct page *page, pgprot_t prot);
> extern void kunmap_atomic_high(void *kvaddr);
> extern void *kmap_atomic_pfn(unsigned long pfn);
> extern struct page *kmap_atomic_to_page(void *ptr);
> diff --git a/arch/csky/mm/highmem.c b/arch/csky/mm/highmem.c
> index f4311669b5bb..3ae5c8cd7619 100644
> --- a/arch/csky/mm/highmem.c
> +++ b/arch/csky/mm/highmem.c
> @@ -21,7 +21,7 @@ EXPORT_SYMBOL(kmap_flush_tlb);
>
> EXPORT_SYMBOL(kmap);
>
> -void *kmap_atomic_high(struct page *page)
> +void *kmap_atomic_high_prot(struct page *page, pgprot_t prot)
> {
> unsigned long vaddr;
> int idx, type;
> @@ -32,12 +32,12 @@ void *kmap_atomic_high(struct page *page)
> #ifdef CONFIG_DEBUG_HIGHMEM
> BUG_ON(!pte_none(*(kmap_pte - idx)));
> #endif
> - set_pte(kmap_pte-idx, mk_pte(page, kmap_prot));
> + set_pte(kmap_pte-idx, mk_pte(page, prot));
> flush_tlb_one((unsigned long)vaddr);
>
> return (void *)vaddr;
> }
> -EXPORT_SYMBOL(kmap_atomic_high);
> +EXPORT_SYMBOL(kmap_atomic_high_prot);
>
> void kunmap_atomic_high(void *kvaddr)
> {
> diff --git a/arch/microblaze/include/asm/highmem.h b/arch/microblaze/include/asm/highmem.h
> index 66521fdc3a47..eb0a2cb883bd 100644
> --- a/arch/microblaze/include/asm/highmem.h
> +++ b/arch/microblaze/include/asm/highmem.h
> @@ -51,14 +51,9 @@ extern pte_t *pkmap_page_table;
> #define PKMAP_NR(virt) ((virt - PKMAP_BASE) >> PAGE_SHIFT)
> #define PKMAP_ADDR(nr) (PKMAP_BASE + ((nr) << PAGE_SHIFT))
>
> -extern void *kmap_atomic_prot(struct page *page, pgprot_t prot);
> +extern void *kmap_atomic_high_prot(struct page *page, pgprot_t prot);
> extern void kunmap_atomic_high(void *kvaddr);
>
> -static inline void *kmap_atomic_high(struct page *page)
> -{
> - return kmap_atomic_prot(page, kmap_prot);
> -}
> -
> #define flush_cache_kmaps() { flush_icache(); flush_dcache(); }
>
> #endif /* __KERNEL__ */
> diff --git a/arch/microblaze/mm/highmem.c b/arch/microblaze/mm/highmem.c
> index 1026aeffe11a..ee8a422b2b76 100644
> --- a/arch/microblaze/mm/highmem.c
> +++ b/arch/microblaze/mm/highmem.c
> @@ -32,7 +32,7 @@
> */
> #include <asm/tlbflush.h>
>
> -void *kmap_atomic_prot(struct page *page, pgprot_t prot)
> +void *kmap_atomic_high_prot(struct page *page, pgprot_t prot)
> {
>
> unsigned long vaddr;
> @@ -49,7 +49,7 @@ void *kmap_atomic_prot(struct page *page, pgprot_t prot)
>
> return (void *) vaddr;
> }
> -EXPORT_SYMBOL(kmap_atomic_prot);
> +EXPORT_SYMBOL(kmap_atomic_high_prot);
>
> void kunmap_atomic_high(void *kvaddr)
> {
> diff --git a/arch/mips/include/asm/highmem.h b/arch/mips/include/asm/highmem.h
> index d9f774bd4938..c9f46b450a68 100644
> --- a/arch/mips/include/asm/highmem.h
> +++ b/arch/mips/include/asm/highmem.h
> @@ -48,7 +48,7 @@ extern pte_t *pkmap_page_table;
>
> #define ARCH_HAS_KMAP_FLUSH_TLB
> extern void kmap_flush_tlb(unsigned long addr);
> -extern void *kmap_atomic_high(struct page *page);
> +extern void *kmap_atomic_high_prot(struct page *page, pgprot_t prot);
> extern void kunmap_atomic_high(void *kvaddr);
> extern void *kmap_atomic_pfn(unsigned long pfn);
>
> diff --git a/arch/mips/mm/highmem.c b/arch/mips/mm/highmem.c
> index 87023bd1a33c..37e244cdb14e 100644
> --- a/arch/mips/mm/highmem.c
> +++ b/arch/mips/mm/highmem.c
> @@ -18,7 +18,7 @@ void kmap_flush_tlb(unsigned long addr)
> }
> EXPORT_SYMBOL(kmap_flush_tlb);
>
> -void *kmap_atomic_high(struct page *page)
> +void *kmap_atomic_high_prot(struct page *page, pgprot_t prot)
> {
> unsigned long vaddr;
> int idx, type;
> @@ -29,12 +29,12 @@ void *kmap_atomic_high(struct page *page)
> #ifdef CONFIG_DEBUG_HIGHMEM
> BUG_ON(!pte_none(*(kmap_pte - idx)));
> #endif
> - set_pte(kmap_pte-idx, mk_pte(page, kmap_prot));
> + set_pte(kmap_pte-idx, mk_pte(page, prot));
> local_flush_tlb_one((unsigned long)vaddr);
>
> return (void*) vaddr;
> }
> -EXPORT_SYMBOL(kmap_atomic_high);
> +EXPORT_SYMBOL(kmap_atomic_high_prot);
>
> void kunmap_atomic_high(void *kvaddr)
> {
> diff --git a/arch/nds32/include/asm/highmem.h b/arch/nds32/include/asm/highmem.h
> index 97648b678108..1f9fc74d112d 100644
> --- a/arch/nds32/include/asm/highmem.h
> +++ b/arch/nds32/include/asm/highmem.h
> @@ -51,7 +51,7 @@ extern void kmap_init(void);
> * when CONFIG_HIGHMEM is not set.
> */
> #ifdef CONFIG_HIGHMEM
> -extern void *kmap_atomic_high(struct page *page);
> +extern void *kmap_atomic_high_prot(struct page *page, pgprot_t prot);
> extern void kunmap_atomic_high(void *kvaddr);
> extern void *kmap_atomic_pfn(unsigned long pfn);
> extern struct page *kmap_atomic_to_page(void *ptr);
> diff --git a/arch/nds32/mm/highmem.c b/arch/nds32/mm/highmem.c
> index 809f8c830f06..63ded527c1e8 100644
> --- a/arch/nds32/mm/highmem.c
> +++ b/arch/nds32/mm/highmem.c
> @@ -10,7 +10,7 @@
> #include <asm/fixmap.h>
> #include <asm/tlbflush.h>
>
> -void *kmap_atomic_high(struct page *page)
> +void *kmap_atomic_high_prot(struct page *page, pgprot_t prot)
> {
> unsigned int idx;
> unsigned long vaddr, pte;
> @@ -21,7 +21,7 @@ void *kmap_atomic_high(struct page *page)
>
> idx = type + KM_TYPE_NR * smp_processor_id();
> vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
> - pte = (page_to_pfn(page) << PAGE_SHIFT) | (kmap_prot);
> + pte = (page_to_pfn(page) << PAGE_SHIFT) | prot;
> ptep = pte_offset_kernel(pmd_off_k(vaddr), vaddr);
> set_pte(ptep, pte);
>
> @@ -32,7 +32,7 @@ void *kmap_atomic_high(struct page *page)
> return (void *)vaddr;
> }
>
> -EXPORT_SYMBOL(kmap_atomic_high);
> +EXPORT_SYMBOL(kmap_atomic_high_prot);
>
> void kunmap_atomic_high(void *kvaddr)
> {
> diff --git a/arch/powerpc/include/asm/highmem.h b/arch/powerpc/include/asm/highmem.h
> index d264aebcaa9b..edd01bbe5a44 100644
> --- a/arch/powerpc/include/asm/highmem.h
> +++ b/arch/powerpc/include/asm/highmem.h
> @@ -59,15 +59,9 @@ extern pte_t *pkmap_page_table;
> #define PKMAP_NR(virt) ((virt-PKMAP_BASE) >> PAGE_SHIFT)
> #define PKMAP_ADDR(nr) (PKMAP_BASE + ((nr) << PAGE_SHIFT))
>
> -extern void *kmap_atomic_prot(struct page *page, pgprot_t prot);
> +extern void *kmap_atomic_high_prot(struct page *page, pgprot_t prot);
> extern void kunmap_atomic_high(void *kvaddr);
>
> -static inline void *kmap_atomic_high(struct page *page)
> -{
> - return kmap_atomic_prot(page, kmap_prot);
> -}
> -
> -
> #define flush_cache_kmaps() flush_cache_all()
>
> #endif /* __KERNEL__ */
> diff --git a/arch/powerpc/mm/highmem.c b/arch/powerpc/mm/highmem.c
> index 162958321e28..35071c2913f1 100644
> --- a/arch/powerpc/mm/highmem.c
> +++ b/arch/powerpc/mm/highmem.c
> @@ -24,7 +24,7 @@
> #include <linux/highmem.h>
> #include <linux/module.h>
>
> -void *kmap_atomic_prot(struct page *page, pgprot_t prot)
> +void *kmap_atomic_high_prot(struct page *page, pgprot_t prot)
> {
> unsigned long vaddr;
> int idx, type;
> @@ -38,7 +38,7 @@ void *kmap_atomic_prot(struct page *page, pgprot_t prot)
>
> return (void*) vaddr;
> }
> -EXPORT_SYMBOL(kmap_atomic_prot);
> +EXPORT_SYMBOL(kmap_atomic_high_prot);
>
> void kunmap_atomic_high(void *kvaddr)
> {
> diff --git a/arch/sparc/include/asm/highmem.h b/arch/sparc/include/asm/highmem.h
> index 94dd6e4c5fa4..d5c5700672de 100644
> --- a/arch/sparc/include/asm/highmem.h
> +++ b/arch/sparc/include/asm/highmem.h
> @@ -50,7 +50,7 @@ void kmap_init(void) __init;
>
> #define PKMAP_END (PKMAP_ADDR(LAST_PKMAP))
>
> -void *kmap_atomic_high(struct page *page);
> +void *kmap_atomic_high_prot(struct page *page, pgprot_t prot);
> void kunmap_atomic_high(void *kvaddr);
>
> #define flush_cache_kmaps() flush_cache_all()
> diff --git a/arch/sparc/mm/highmem.c b/arch/sparc/mm/highmem.c
> index 9f06d75e88e1..414f578d1e57 100644
> --- a/arch/sparc/mm/highmem.c
> +++ b/arch/sparc/mm/highmem.c
> @@ -54,7 +54,7 @@ void __init kmap_init(void)
> kmap_prot = __pgprot(SRMMU_ET_PTE | SRMMU_PRIV | SRMMU_CACHE);
> }
>
> -void *kmap_atomic_high(struct page *page)
> +void *kmap_atomic_high_prot(struct page *page, pgprot_t prot)
> {
> unsigned long vaddr;
> long idx, type;
> @@ -73,7 +73,7 @@ void *kmap_atomic_high(struct page *page)
> #ifdef CONFIG_DEBUG_HIGHMEM
> BUG_ON(!pte_none(*(kmap_pte-idx)));
> #endif
> - set_pte(kmap_pte-idx, mk_pte(page, kmap_prot));
> + set_pte(kmap_pte-idx, mk_pte(page, prot));
> /* XXX Fix - Anton */
> #if 0
> __flush_tlb_one(vaddr);
> @@ -83,7 +83,7 @@ void *kmap_atomic_high(struct page *page)
>
> return (void*) vaddr;
> }
> -EXPORT_SYMBOL(kmap_atomic_high);
> +EXPORT_SYMBOL(kmap_atomic_high_prot);
>
> void kunmap_atomic_high(void *kvaddr)
> {
> diff --git a/arch/x86/include/asm/highmem.h b/arch/x86/include/asm/highmem.h
> index ff87aba96eee..009b8e22e906 100644
> --- a/arch/x86/include/asm/highmem.h
> +++ b/arch/x86/include/asm/highmem.h
> @@ -58,11 +58,7 @@ extern unsigned long highstart_pfn, highend_pfn;
> #define PKMAP_NR(virt) ((virt-PKMAP_BASE) >> PAGE_SHIFT)
> #define PKMAP_ADDR(nr) (PKMAP_BASE + ((nr) << PAGE_SHIFT))
>
> -extern void *kmap_atomic_prot(struct page *page, pgprot_t prot);
> -void *kmap_atomic_high(struct page *page)
> -{
> - return kmap_atomic_prot(page, kmap_prot);
> -}
> +void *kmap_atomic_high_prot(struct page *page, pgprot_t prot);
> void kunmap_atomic_high(void *kvaddr);
> void *kmap_atomic_pfn(unsigned long pfn);
> void *kmap_atomic_prot_pfn(unsigned long pfn, pgprot_t prot);
> diff --git a/arch/x86/mm/highmem_32.c b/arch/x86/mm/highmem_32.c
> index af07a6842743..075fe51317b0 100644
> --- a/arch/x86/mm/highmem_32.c
> +++ b/arch/x86/mm/highmem_32.c
> @@ -4,7 +4,7 @@
> #include <linux/swap.h> /* for totalram_pages */
> #include <linux/memblock.h>
>
> -void *kmap_atomic_prot(struct page *page, pgprot_t prot)
> +void *kmap_atomic_high_prot(struct page *page, pgprot_t prot)
> {
> unsigned long vaddr;
> int idx, type;
> @@ -18,7 +18,7 @@ void *kmap_atomic_prot(struct page *page, pgprot_t prot)
>
> return (void *)vaddr;
> }
> -EXPORT_SYMBOL(kmap_atomic_prot);
> +EXPORT_SYMBOL(kmap_atomic_high_prot);
>
> /*
> * This is the same as kmap_atomic() but can map memory that doesn't
> diff --git a/arch/xtensa/include/asm/highmem.h b/arch/xtensa/include/asm/highmem.h
> index a60a02dc68f6..7152aeb1e3a4 100644
> --- a/arch/xtensa/include/asm/highmem.h
> +++ b/arch/xtensa/include/asm/highmem.h
> @@ -68,7 +68,7 @@ static inline void flush_cache_kmaps(void)
> flush_cache_all();
> }
>
> -void *kmap_atomic_high(struct page *page);
> +void *kmap_atomic_high_prot(struct page *page, pgprot_t prot);
> void kunmap_atomic_high(void *kvaddr);
>
> void kmap_init(void);
> diff --git a/arch/xtensa/mm/highmem.c b/arch/xtensa/mm/highmem.c
> index 8c58c4c37033..fe56644d7b23 100644
> --- a/arch/xtensa/mm/highmem.c
> +++ b/arch/xtensa/mm/highmem.c
> @@ -37,7 +37,7 @@ static inline enum fixed_addresses kmap_idx(int type, unsigned long color)
> color;
> }
>
> -void *kmap_atomic_high(struct page *page)
> +void *kmap_atomic_high_prot(struct page *page, pgprot_t prot)
> {
> enum fixed_addresses idx;
> unsigned long vaddr;
> @@ -48,11 +48,11 @@ void *kmap_atomic_high(struct page *page)
> #ifdef CONFIG_DEBUG_HIGHMEM
> BUG_ON(!pte_none(*(kmap_pte + idx)));
> #endif
> - set_pte(kmap_pte + idx, mk_pte(page, kmap_prot));
> + set_pte(kmap_pte + idx, mk_pte(page, prot));
>
> return (void *)vaddr;
> }
> -EXPORT_SYMBOL(kmap_atomic_high);
> +EXPORT_SYMBOL(kmap_atomic_high_prot);
>
> void kunmap_atomic_high(void *kvaddr)
> {
> diff --git a/include/linux/highmem.h b/include/linux/highmem.h
> index 601df07607a4..b10e8a39ae60 100644
> --- a/include/linux/highmem.h
> +++ b/include/linux/highmem.h
> @@ -74,14 +74,15 @@ static inline void kunmap(struct page *page)
> * be used in IRQ contexts, so in some (very limited) cases we need
> * it.
> */
> -static inline void *kmap_atomic(struct page *page)
> +static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot)
> {
> preempt_disable();
> pagefault_disable();
> if (!PageHighMem(page))
> return page_address(page);
> - return kmap_atomic_high(page);
> + return kmap_atomic_high_prot(page, prot);
> }
> +#define kmap_atomic(page) kmap_atomic_prot(page, kmap_prot)
>
> /* declarations for linux/mm/highmem.c */
> unsigned int nr_free_highpages(void);
> --
> 2.25.1
>
---end quoted text---
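
For callers, the kmap_atomic_prot() added above reduces to the following
pattern (a minimal usage sketch, not taken from the patch; page, prot and
src stand in for whatever the caller already has):

        /* Map a possibly-highmem page with an explicit protection. */
        void *vaddr = kmap_atomic_prot(page, prot);

        memcpy(vaddr, src, PAGE_SIZE);
        kunmap_atomic(vaddr);

kmap_atomic(page) keeps working unchanged; it now simply expands to
kmap_atomic_prot(page, kmap_prot).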

2020-05-01 08:56:41

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH V1 00/10] Remove duplicated kmap code

In addition to the work already in the series, it seems like
LAST_PKMAP_MASK, PKMAP_ADDR and PKMAP_NR can also be consolidated
to common code.

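For instance, the copies visible in the per-arch headers above all have the
same shape, so a common version might look roughly like this (a sketch,
assuming each architecture still provides PKMAP_BASE and LAST_PKMAP):

        #define LAST_PKMAP_MASK (LAST_PKMAP - 1)
        #define PKMAP_NR(virt)  (((virt) - PKMAP_BASE) >> PAGE_SHIFT)
        #define PKMAP_ADDR(nr)  (PKMAP_BASE + ((nr) << PAGE_SHIFT))
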
Also kmap_atomic_high_prot / kmap_atomic_pfn could move into common
code, maybe keyed off a symbol selected by the actual users that
need it. It also seems like it doesn't actually ever need to be
exported.

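Keying the common code off such a symbol could look roughly like this (the
config symbol name is purely illustrative, not something the series defines):

        /*
         * Common highmem code, only built where an arch opts in by
         * selecting the (illustrative) symbol in its Kconfig.
         */
        #ifdef CONFIG_ARCH_HAS_KMAP_ATOMIC_PFN
        void *kmap_atomic_pfn(unsigned long pfn);
        #endif
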
This in turn would allow io_mapping_map_atomic_wc() on all architectures,
which might make nouveau and qxl happy, but maybe that can be left for
another series.

2020-05-01 09:04:50

by Max Filippov

[permalink] [raw]
Subject: Re: xtensa question, was Re: [PATCH V1 00/10] Remove duplicated kmap code

Hi Christoph,

On Fri, May 1, 2020 at 1:46 AM Christoph Hellwig <[email protected]> wrote:
> any idea why xtensa uses PAGE_KERNEL_EXEC instead of PAGE_KERNEL
> for kmap_prot? Mapping all mapped highmem as executable seems rather
> dangerous.

I sure do: to allow instruction cache flushing when writing to high user
pages temporarily mapped with kmap. Instruction cache management
opcodes that operate on virtual addresses would raise an exception if
the address is not executable.

--
Thanks.
-- Max

2020-05-01 09:22:44

by Christoph Hellwig

[permalink] [raw]
Subject: Re: xtensa question, was Re: [PATCH V1 00/10] Remove duplicated kmap code

On Fri, May 01, 2020 at 02:02:19AM -0700, Max Filippov wrote:
> Hi Christoph,
>
> On Fri, May 1, 2020 at 1:46 AM Christoph Hellwig <[email protected]> wrote:
> > any idea why xtensa uses PAGE_KERNEL_EXEC instead of PAGE_KERNEL
> > for kmap_prot? Mapping all mapped highmem as executable seems rather
> > dangerous.
>
> I sure do: to allow instruction cache flushing when writing to high user
> pages temporarily mapped with kmap. Instruction cache management
> opcodes that operate on virtual addresses would raise an exception if
> the address is not executable.

Seems like this should use kmap_atomic_prot() with PAGE_KERNEL_EXEC just
for that case. kmap_atomic_prot() of course didn't exist on xtensa so far,
but with this series it will.
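
Concretely, such a caller could then read something like this (a rough
sketch, not code from the series; page, offset and len stand in for the
caller's values):

        /* Map executable only for the icache maintenance on this page. */
        void *vaddr = kmap_atomic_prot(page, PAGE_KERNEL_EXEC);

        flush_icache_range((unsigned long)vaddr + offset,
                           (unsigned long)vaddr + offset + len);
        kunmap_atomic(vaddr);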

2020-05-01 09:52:30

by Max Filippov

[permalink] [raw]
Subject: Re: xtensa question, was Re: [PATCH V1 00/10] Remove duplicated kmap code

On Fri, May 1, 2020 at 2:19 AM Christoph Hellwig <[email protected]> wrote:
>
> On Fri, May 01, 2020 at 02:02:19AM -0700, Max Filippov wrote:
> > Hi Christoph,
> >
> > On Fri, May 1, 2020 at 1:46 AM Christoph Hellwig <[email protected]> wrote:
> > > any idea why xtensa uses PAGE_KERNEL_EXEC instead of PAGE_KERNEL
> > > for kmap_prot? Mapping all mapped highmem as executable seems rather
> > > dangerous.
> >
> > I sure do: to allow instruction cache flushing when writing to high user
> > pages temporarily mapped with kmap. Instruction cache management
> > opcodes that operate on virtual addresses would raise an exception if
> > the address is not executable.
>
> Seems like this should use kmap_atomic_prot with PAGE_KERNEL_EXEC just
> for that case. Which of course didn't exist on xtensa so far, but with
> this series will.

Yeah, except it's __access_remote_vm() that does the kmap and then
calls copy_to_user_page()...
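
The path in question looks roughly like this (paraphrased and heavily
abbreviated from the write side of __access_remote_vm() in mm/memory.c):

        maddr = kmap(page);
        copy_to_user_page(vma, page, addr, maddr + offset, buf, bytes);
        set_page_dirty_lock(page);
        kunmap(page);

i.e. the mapping is created by generic mm code, which has no idea the arch
needs an executable mapping for the icache ops inside copy_to_user_page().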

--
Thanks.
-- Max

2020-05-01 15:38:06

by Ira Weiny

[permalink] [raw]
Subject: Re: sparc-related comment, to Re: [PATCH V1 07/10] arch/kmap: Ensure kmap_prot visibility

On Fri, May 01, 2020 at 01:44:46AM -0700, Christoph Hellwig wrote:
> > --- a/arch/sparc/mm/highmem.c
> > +++ b/arch/sparc/mm/highmem.c
> > @@ -33,6 +33,7 @@
> > #include <asm/vaddrs.h>
> >
> > pgprot_t kmap_prot;
> > +EXPORT_SYMBOL(kmap_prot);
>
> Btw, I don't see why sparc needs this as a variable, as there is just
> a single assignment to it.

Because sparc uses non-standard defines which I'm not familiar with.

kmap_prot = __pgprot(SRMMU_ET_PTE | SRMMU_PRIV | SRMMU_CACHE);

SRMMU_ET_PTE and friends are defined in

arch/sparc/include/asm/pgtsrmmu.h

Since I can't readily test sparc, this was easier to put out than letting
0-day crank on the entire series to check whether including that header in
the common header chain would be an issue.

>
> If sparc is sorted out we can always make it a define, and use a define
> for kmap_prot that defaults to PAGE_KERNEL, avoiding a little
> more duplication.

Agreed. But it seems easier as a follow-up (for me with 0-day). Perhaps
someone from sparc can weigh in on the specifics of those defines and why
they are different from the normal ones? Or even provide a follow-on patch?

Ira
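
For reference, the define-based default Christoph describes would look
roughly like this (a sketch only, untested, and modulo the header-chain
concern above about pulling the SRMMU_* definitions in early enough):

        /* include/linux/highmem.h: default for architectures without one */
        #ifndef kmap_prot
        #define kmap_prot PAGE_KERNEL
        #endif

        /* arch/sparc/include/asm/highmem.h: arch-specific override */
        #define kmap_prot __pgprot(SRMMU_ET_PTE | SRMMU_PRIV | SRMMU_CACHE)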

2020-05-01 17:20:16

by Ira Weiny

[permalink] [raw]
Subject: Re: [PATCH V1 00/10] Remove duplicated kmap code

On Fri, May 01, 2020 at 01:54:56AM -0700, Christoph Hellwig wrote:
> In addition to the work already in the series, it seems like
> LAST_PKMAP_MASK, PKMAP_ADDR and PKMAP_NR can also be consolidated
> to common code.

Agreed, I mentioned in the cover letter there are similarities...

>
> Also kmap_atomic_high_prot / kmap_atomic_pfn could move into common
> code, maybe keyed off a symbol selected by the actual users that
> need it. It also seems like it doesn't actually ever need to be
> exported.

... but these are not as readily obvious, at least to me. I do see a pattern,
but the differences seem subtle enough that ensuring correctness would take a
while. So I'd like to see this series go in first and build on it afterwards.

>
> This in turn would allow io_mapping_map_atomic_wc() on all architectures,
> which might make nouveau and qxl happy, but maybe that can be left for
> another series.

I agree that this should be follow-on patches. I still need to fix the
bisectability, and I don't want to bog down 0-day with a longer series.

Thanks for the review!
Ira

2020-05-03 03:16:24

by Ira Weiny

[permalink] [raw]
Subject: Re: [PATCH V1 09/10] arch/kmap: Define kmap_atomic_prot() for all arch's

On Fri, May 01, 2020 at 04:20:20AM +0100, Al Viro wrote:
> On Fri, May 01, 2020 at 03:37:34AM +0100, Al Viro wrote:
> > On Thu, Apr 30, 2020 at 01:38:44PM -0700, [email protected] wrote:
> >
> > > -static inline void *kmap_atomic(struct page *page)
> > > +static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot)
> > > {
> > > preempt_disable();
> > > pagefault_disable();
> > > if (!PageHighMem(page))
> > > return page_address(page);
> > > - return kmap_atomic_high(page);
> > > + return kmap_atomic_high_prot(page, prot);
> > > }
> > > +#define kmap_atomic(page) kmap_atomic_prot(page, kmap_prot)
> >
> > OK, so it *was* just a bisect hazard - you return to original semantics
> > wrt preempt_disable()...
>
> FWIW, how about doing the following: just before #5/10 have a patch
> that would touch only microblaze, ppc and x86 splitting their
> kmap_atomic_prot() into an inline helper + kmap_atomic_high_prot().
> Then your #5 would leave their kmap_atomic_prot() as-is (it would
> use kmap_atomic_high_prot() instead). The rest of the series plays
> out pretty much the same way it does now, and wrappers on those
> 3 architectures would go away when an identical generic one is
> introduced in this commit (#9/10).
>
> AFAICS, that would avoid the bisect hazard and might even end
> up with less noise in the patches...

This works. V2 coming out shortly.

Thanks for catching this,
Ira
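
For the record, the per-arch split Al describes has the same shape as the
generic wrapper patch 9 eventually introduces; on microblaze, ppc and x86 it
would look roughly like this (a sketch only):

        /* arch header: kmap_atomic_prot() becomes a thin inline ... */
        static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot)
        {
                preempt_disable();
                pagefault_disable();
                if (!PageHighMem(page))
                        return page_address(page);
                return kmap_atomic_high_prot(page, prot);
        }

        /*
         * ... while the existing arch-specific body moves into
         * kmap_atomic_high_prot() in the arch's highmem.c, exported as before.
         */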