2022-08-01 15:12:30

by Baoquan He

Subject: [PATCH 00/11] mm: ioremap: Convert architectures to take GENERIC_IOREMAP way

Currently, many architectures haven't taken the standard GENERIC_IOREMAP
way to implement ioremap_prot(), iounmap(), and the ioremap_xx() variants,
but instead define these functions under each arch's own folder. This
causes a lot of duplicated ioremap()/iounmap() code.

In this patchset, firstly adapt the hooks io[re|un]map_allowed(), then
make use of them to convert those architectures to the GENERIC_IOREMAP
method. With these changes, the duplicated ioremap()/iounmap() code under
each architecture is removed.
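
For context, the GENERIC_IOREMAP pattern this series relies on is small:
an architecture selects GENERIC_IOREMAP, provides a _PAGE_IOREMAP
protection value, and optionally defines the two hooks. The sketch below
is a simplified illustration of that pattern (hook prototypes as they
look after patch 2, not code copied verbatim from asm-generic/io.h):

/*
 * Generic side, simplified: ioremap() just forwards to ioremap_prot()
 * with the protection value supplied by the architecture, and
 * mm/ioremap.c does the vmalloc-area bookkeeping.
 */
static inline void __iomem *ioremap(phys_addr_t addr, size_t size)
{
	return ioremap_prot(addr, size, _PAGE_IOREMAP);
}

/*
 * Arch side: select GENERIC_IOREMAP in Kconfig, define _PAGE_IOREMAP
 * with the arch's own protection bits, and optionally provide the two
 * hooks in asm/io.h.
 */
void __iomem *
ioremap_allowed(phys_addr_t *paddr, size_t size, unsigned long *prot_val);
#define ioremap_allowed ioremap_allowed

int iounmap_allowed(void *addr);
#define iounmap_allowed iounmap_allowed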

This was suggested by Christoph in the discussion below:
https://lore.kernel.org/all/[email protected]/T/#u

It is basically a follow-up to the arm64 conversion to GENERIC_IOREMAP
in the patchset below.
[PATCH v5 0/6] arm64: Cleanup ioremap() and support ioremap_prot()
https://lore.kernel.org/all/[email protected]/T/#u

Some changes to io[re|un]map_allowed() are borrowed from v3 of the
arm64 conversion patch.
[PATCH v3 4/6] mm: ioremap: Add arch_ioremap/iounmap()
https://lore.kernel.org/all/[email protected]/T/#u

Patches 1~3 are not merged into one because I made them in different
rounds of changes, and splitting them makes it easier to describe the
intention and to review. I can merge them after the v1 review if anyone
thinks they should be merged.

Testing:
The series passes testing on arm64 and s390. For the other architectures,
I only verified that the build passes with the existing cross-compiler
RPMs. Architectures like openrisc and parisc were not build-tested
because of the lack of cross-compiler RPMs.

Baoquan He (11):
mm/ioremap: change the return value of io[re|un]map_allowed
mm: ioremap: fixup the physical address
mm: ioremap: allow ARCH to have its own ioremap definition
arc: mm: Convert to GENERIC_IOREMAP
hexagon: mm: Convert to GENERIC_IOREMAP
ia64: mm: Convert to GENERIC_IOREMAP
openrisc: mm: Convert to GENERIC_IOREMAP
parisc: mm: Convert to GENERIC_IOREMAP
s390: mm: Convert to GENERIC_IOREMAP
sh: mm: Convert to GENERIC_IOREMAP
xtensa: mm: Convert to GENERIC_IOREMAP

arch/arc/Kconfig | 1 +
arch/arc/include/asm/io.h | 19 +++++++---
arch/arc/mm/ioremap.c | 60 +++++--------------------------
arch/arm64/include/asm/io.h | 3 +-
arch/arm64/mm/ioremap.c | 17 ++++++---
arch/hexagon/Kconfig | 1 +
arch/hexagon/include/asm/io.h | 9 +++--
arch/hexagon/mm/ioremap.c | 44 -----------------------
arch/ia64/Kconfig | 1 +
arch/ia64/include/asm/io.h | 26 ++++++++------
arch/ia64/mm/ioremap.c | 50 ++++++--------------------
arch/openrisc/Kconfig | 1 +
arch/openrisc/include/asm/io.h | 16 ++++++---
arch/openrisc/mm/ioremap.c | 49 ++++++++-----------------
arch/parisc/include/asm/io.h | 16 ++++++---
arch/parisc/mm/ioremap.c | 66 ++++------------------------------
arch/s390/Kconfig | 1 +
arch/s390/include/asm/io.h | 26 +++++++++-----
arch/s390/pci/pci.c | 60 +++++--------------------------
arch/sh/Kconfig | 1 +
arch/sh/include/asm/io.h | 47 ++++++++----------------
arch/sh/mm/ioremap.c | 62 +++++++-------------------------
arch/xtensa/Kconfig | 1 +
arch/xtensa/include/asm/io.h | 39 ++++++++++----------
arch/xtensa/mm/ioremap.c | 56 +++++++----------------------
include/asm-generic/io.h | 24 +++++++------
mm/ioremap.c | 21 ++++++-----
27 files changed, 232 insertions(+), 485 deletions(-)
delete mode 100644 arch/hexagon/mm/ioremap.c

--
2.34.1



2022-08-01 15:13:38

by Baoquan He

Subject: [PATCH 10/11] sh: mm: Convert to GENERIC_IOREMAP

Add the hook ioremap_allowed() for sh's special handling in ioremap();
ioremap_cache() is then converted to use ioremap_prot() from
GENERIC_IOREMAP.

Signed-off-by: Baoquan He <[email protected]>
Cc: Yoshinori Sato <[email protected]>
Cc: Rich Felker <[email protected]>
Cc: [email protected]
---
arch/sh/Kconfig | 1 +
arch/sh/include/asm/io.h | 47 +++++++++---------------------
arch/sh/mm/ioremap.c | 62 ++++++++--------------------------------
3 files changed, 27 insertions(+), 83 deletions(-)

diff --git a/arch/sh/Kconfig b/arch/sh/Kconfig
index 5f220e903e5a..b63ad4698cf8 100644
--- a/arch/sh/Kconfig
+++ b/arch/sh/Kconfig
@@ -25,6 +25,7 @@ config SUPERH
select GENERIC_SCHED_CLOCK
select GENERIC_SMP_IDLE_THREAD
select GUP_GET_PTE_LOW_HIGH if X2TLB
+ select GENERIC_IOREMAP if MMU
select HAVE_ARCH_AUDITSYSCALL
select HAVE_ARCH_KGDB
select HAVE_ARCH_SECCOMP_FILTER
diff --git a/arch/sh/include/asm/io.h b/arch/sh/include/asm/io.h
index fba90e670ed4..b2cba511a73b 100644
--- a/arch/sh/include/asm/io.h
+++ b/arch/sh/include/asm/io.h
@@ -242,45 +242,26 @@ unsigned long long poke_real_address_q(unsigned long long addr,
#define phys_to_virt(address) (__va(address))
#endif

-#ifdef CONFIG_MMU
-void iounmap(void __iomem *addr);
-void __iomem *__ioremap_caller(phys_addr_t offset, unsigned long size,
- pgprot_t prot, void *caller);
-
-static inline void __iomem *ioremap(phys_addr_t offset, unsigned long size)
-{
- return __ioremap_caller(offset, size, PAGE_KERNEL_NOCACHE,
- __builtin_return_address(0));
-}
-
-static inline void __iomem *
-ioremap_cache(phys_addr_t offset, unsigned long size)
-{
- return __ioremap_caller(offset, size, PAGE_KERNEL,
- __builtin_return_address(0));
-}
-#define ioremap_cache ioremap_cache
+/*
+ * I/O memory mapping functions.
+ */
+void __iomem *
+ioremap_allowed(phys_addr_t *paddr, size_t size, unsigned long *prot_val);
+#define ioremap_allowed ioremap_allowed

-#ifdef CONFIG_HAVE_IOREMAP_PROT
-static inline void __iomem *ioremap_prot(phys_addr_t offset, unsigned long size,
- unsigned long flags)
-{
- return __ioremap_caller(offset, size, __pgprot(flags),
- __builtin_return_address(0));
-}
-#endif /* CONFIG_HAVE_IOREMAP_PROT */
+int iounmap_allowed(void *addr);
+#define iounmap_allowed iounmap_allowed

-#else /* CONFIG_MMU */
-static inline void __iomem *ioremap(phys_addr_t offset, size_t size)
-{
- return (void __iomem *)(unsigned long)offset;
-}
+#define _PAGE_IOREMAP pgprot_val(PAGE_KERNEL_NOCACHE)

-static inline void iounmap(volatile void __iomem *addr) { }
-#endif /* CONFIG_MMU */
+#define ioremap_cache(addr, size) \
+ ioremap_prot((addr), (size), pgprot_val(PAGE_KERNEL))
+#define ioremap_cache ioremap_cache

#define ioremap_uc ioremap

+#include <asm-generic/io.h>
+
/*
* Convert a physical pointer to a virtual kernel pointer for /dev/mem
* access
diff --git a/arch/sh/mm/ioremap.c b/arch/sh/mm/ioremap.c
index 21342581144d..66637a6cc6f8 100644
--- a/arch/sh/mm/ioremap.c
+++ b/arch/sh/mm/ioremap.c
@@ -72,22 +72,13 @@ __ioremap_29bit(phys_addr_t offset, unsigned long size, pgprot_t prot)
#define __ioremap_29bit(offset, size, prot) NULL
#endif /* CONFIG_29BIT */

-/*
- * Remap an arbitrary physical address space into the kernel virtual
- * address space. Needed when the kernel wants to access high addresses
- * directly.
- *
- * NOTE! We need to allow non-page-aligned mappings too: we will obviously
- * have to convert them into an offset in a page-aligned mapping, but the
- * caller shouldn't need to know that small detail.
- */
-void __iomem * __ref
-__ioremap_caller(phys_addr_t phys_addr, unsigned long size,
- pgprot_t pgprot, void *caller)
+void __iomem *
+ioremap_allowed(phys_addr_t *paddr, size_t size, unsigned long *prot_val)
{
- struct vm_struct *area;
- unsigned long offset, last_addr, addr, orig_addr;
+ unsigned long last_addr, phys_addr = *paddr;
void __iomem *mapped;
+ pgprot_t pgprot = __pgprot(*prot_val);
+ int ret = -EINVAL;

mapped = __ioremap_trapped(phys_addr, size);
if (mapped)
@@ -100,7 +91,7 @@ __ioremap_caller(phys_addr_t phys_addr, unsigned long size,
/* Don't allow wraparound or zero size */
last_addr = phys_addr + size - 1;
if (!size || last_addr < phys_addr)
- return NULL;
+ return IOMEM_ERR_PTR(ret);

/*
* If we can't yet use the regular approach, go the fixmap route.
@@ -116,30 +107,8 @@ __ioremap_caller(phys_addr_t phys_addr, unsigned long size,
if (mapped && !IS_ERR(mapped))
return mapped;

- /*
- * Mappings have to be page-aligned
- */
- offset = phys_addr & ~PAGE_MASK;
- phys_addr &= PAGE_MASK;
- size = PAGE_ALIGN(last_addr+1) - phys_addr;
-
- /*
- * Ok, go for it..
- */
- area = get_vm_area_caller(size, VM_IOREMAP, caller);
- if (!area)
- return NULL;
- area->phys_addr = phys_addr;
- orig_addr = addr = (unsigned long)area->addr;
-
- if (ioremap_page_range(addr, addr + size, phys_addr, pgprot)) {
- vunmap((void *)orig_addr);
- return NULL;
- }
-
- return (void __iomem *)(offset + (char *)orig_addr);
+ return NULL;
}
-EXPORT_SYMBOL(__ioremap_caller);

/*
* Simple checks for non-translatable mappings.
@@ -158,7 +127,7 @@ static inline int iomapping_nontranslatable(unsigned long offset)
return 0;
}

-void iounmap(void __iomem *addr)
+int iounmap_allowed(void __iomem *addr)
{
unsigned long vaddr = (unsigned long __force)addr;
struct vm_struct *p;
@@ -167,26 +136,19 @@ void iounmap(void __iomem *addr)
* Nothing to do if there is no translatable mapping.
*/
if (iomapping_nontranslatable(vaddr))
- return;
+ return -EINVAL;

/*
* There's no VMA if it's from an early fixed mapping.
*/
if (iounmap_fixed(addr) == 0)
- return;
+ return -EINVAL;

/*
* If the PMB handled it, there's nothing else to do.
*/
if (pmb_unmap(addr) == 0)
- return;
+ return -EINVAL;

- p = remove_vm_area((void *)(vaddr & PAGE_MASK));
- if (!p) {
- printk(KERN_ERR "%s: bad address %p\n", __func__, addr);
- return;
- }
-
- kfree(p);
+ return 0;
}
-EXPORT_SYMBOL(iounmap);
--
2.34.1


2022-08-01 15:13:48

by Baoquan He

Subject: [PATCH 05/11] hexagon: mm: Convert to GENERIC_IOREMAP

With this, the old ioremap() and iounmap() can be removed entirely
since they duplicate the standard functions.

Signed-off-by: Baoquan He <[email protected]>
Cc: Brian Cain <[email protected]>
Cc: [email protected]
---
arch/hexagon/Kconfig | 1 +
arch/hexagon/include/asm/io.h | 9 +++++--
arch/hexagon/mm/ioremap.c | 44 -----------------------------------
3 files changed, 8 insertions(+), 46 deletions(-)
delete mode 100644 arch/hexagon/mm/ioremap.c

diff --git a/arch/hexagon/Kconfig b/arch/hexagon/Kconfig
index 54eadf265178..17afffde1a7f 100644
--- a/arch/hexagon/Kconfig
+++ b/arch/hexagon/Kconfig
@@ -25,6 +25,7 @@ config HEXAGON
select NEED_SG_DMA_LENGTH
select NO_IOPORT_MAP
select GENERIC_IOMAP
+ select GENERIC_IOREMAP
select GENERIC_SMP_IDLE_THREAD
select STACKTRACE_SUPPORT
select GENERIC_CLOCKEVENTS_BROADCAST
diff --git a/arch/hexagon/include/asm/io.h b/arch/hexagon/include/asm/io.h
index c33241425a5c..e2d3091ec9d6 100644
--- a/arch/hexagon/include/asm/io.h
+++ b/arch/hexagon/include/asm/io.h
@@ -170,8 +170,13 @@ static inline void writel(u32 data, volatile void __iomem *addr)
#define writew_relaxed __raw_writew
#define writel_relaxed __raw_writel

-void __iomem *ioremap(unsigned long phys_addr, unsigned long size);
-#define ioremap_uc(X, Y) ioremap((X), (Y))
+/*
+ * I/O memory mapping functions.
+ */
+#define _PAGE_IOREMAP (_PAGE_PRESENT | _PAGE_READ | _PAGE_WRITE | \
+ (__HEXAGON_C_DEV << 6))
+
+#define ioremap_uc(addr, size) ioremap((addr), (size))


#define __raw_writel writel
diff --git a/arch/hexagon/mm/ioremap.c b/arch/hexagon/mm/ioremap.c
deleted file mode 100644
index 255c5b1ee1a7..000000000000
--- a/arch/hexagon/mm/ioremap.c
+++ /dev/null
@@ -1,44 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0-only
-/*
- * I/O remap functions for Hexagon
- *
- * Copyright (c) 2010-2011, The Linux Foundation. All rights reserved.
- */
-
-#include <linux/io.h>
-#include <linux/vmalloc.h>
-#include <linux/mm.h>
-
-void __iomem *ioremap(unsigned long phys_addr, unsigned long size)
-{
- unsigned long last_addr, addr;
- unsigned long offset = phys_addr & ~PAGE_MASK;
- struct vm_struct *area;
-
- pgprot_t prot = __pgprot(_PAGE_PRESENT|_PAGE_READ|_PAGE_WRITE
- |(__HEXAGON_C_DEV << 6));
-
- last_addr = phys_addr + size - 1;
-
- /* Wrapping not allowed */
- if (!size || (last_addr < phys_addr))
- return NULL;
-
- /* Rounds up to next page size, including whole-page offset */
- size = PAGE_ALIGN(offset + size);
-
- area = get_vm_area(size, VM_IOREMAP);
- addr = (unsigned long)area->addr;
-
- if (ioremap_page_range(addr, addr+size, phys_addr, prot)) {
- vunmap((void *)addr);
- return NULL;
- }
-
- return (void __iomem *) (offset + addr);
-}
-
-void iounmap(const volatile void __iomem *addr)
-{
- vunmap((void *) ((unsigned long) addr & PAGE_MASK));
-}
--
2.34.1


2022-08-01 15:18:55

by Baoquan He

Subject: [PATCH 11/11] xtensa: mm: Convert to GENERIC_IOREMAP

Add the hooks ioremap_allowed() and iounmap_allowed() for xtensa's
special handling in ioremap() and iounmap(). Then define and implement
xtensa's own ioremap() and ioremap_cache().

Signed-off-by: Baoquan He <[email protected]>
Cc: Chris Zankel <[email protected]>
Cc: Max Filippov <[email protected]>
Cc: [email protected]
---
arch/xtensa/Kconfig | 1 +
arch/xtensa/include/asm/io.h | 39 ++++++++++++-------------
arch/xtensa/mm/ioremap.c | 56 +++++++++---------------------------
3 files changed, 32 insertions(+), 64 deletions(-)

diff --git a/arch/xtensa/Kconfig b/arch/xtensa/Kconfig
index 0b0f0172cced..8edc1a049673 100644
--- a/arch/xtensa/Kconfig
+++ b/arch/xtensa/Kconfig
@@ -27,6 +27,7 @@ config XTENSA
select GENERIC_LIB_UCMPDI2
select GENERIC_PCI_IOMAP
select GENERIC_SCHED_CLOCK
+ select GENERIC_IOREMAP if MMU
select HAVE_ARCH_AUDITSYSCALL
select HAVE_ARCH_JUMP_LABEL if !XIP_KERNEL
select HAVE_ARCH_KASAN if MMU && !XIP_KERNEL
diff --git a/arch/xtensa/include/asm/io.h b/arch/xtensa/include/asm/io.h
index 54188e69b988..ca890b7d9f91 100644
--- a/arch/xtensa/include/asm/io.h
+++ b/arch/xtensa/include/asm/io.h
@@ -16,6 +16,7 @@
#include <asm/vectors.h>
#include <linux/bug.h>
#include <linux/kernel.h>
+#include <linux/pgtable.h>

#include <linux/types.h>

@@ -23,23 +24,29 @@
#define IO_SPACE_LIMIT ~0
#define PCI_IOBASE ((void __iomem *)XCHAL_KIO_BYPASS_VADDR)

-#ifdef CONFIG_MMU
-
-void __iomem *xtensa_ioremap_nocache(unsigned long addr, unsigned long size);
-void __iomem *xtensa_ioremap_cache(unsigned long addr, unsigned long size);
-void xtensa_iounmap(volatile void __iomem *addr);
-
/*
- * Return the virtual address for the specified bus memory.
+ * I/O memory mapping functions.
*/
+void __iomem *
+ioremap_allowed(phys_addr_t *paddr, size_t size, unsigned long *prot_val);
+#define ioremap_allowed ioremap_allowed
+
+int iounmap_allowed(void *addr);
+#define iounmap_allowed iounmap_allowed
+
+void __iomem *ioremap_prot(phys_addr_t paddr, size_t size,
+ unsigned long prot);
+
static inline void __iomem *ioremap(unsigned long offset, unsigned long size)
{
if (offset >= XCHAL_KIO_PADDR
&& offset - XCHAL_KIO_PADDR < XCHAL_KIO_SIZE)
return (void*)(offset-XCHAL_KIO_PADDR+XCHAL_KIO_BYPASS_VADDR);
else
- return xtensa_ioremap_nocache(offset, size);
+ return ioremap_prot(offset, size,
+ pgprot_val(pgprot_noncached(PAGE_KERNEL)));
}
+#define ioremap ioremap

static inline void __iomem *ioremap_cache(unsigned long offset,
unsigned long size)
@@ -48,24 +55,14 @@ static inline void __iomem *ioremap_cache(unsigned long offset,
&& offset - XCHAL_KIO_PADDR < XCHAL_KIO_SIZE)
return (void*)(offset-XCHAL_KIO_PADDR+XCHAL_KIO_CACHED_VADDR);
else
- return xtensa_ioremap_cache(offset, size);
-}
-#define ioremap_cache ioremap_cache
-
-static inline void iounmap(volatile void __iomem *addr)
-{
- unsigned long va = (unsigned long) addr;
+ return ioremap_prot(offset, size, pgprot_val(PAGE_KERNEL));

- if (!(va >= XCHAL_KIO_CACHED_VADDR &&
- va - XCHAL_KIO_CACHED_VADDR < XCHAL_KIO_SIZE) &&
- !(va >= XCHAL_KIO_BYPASS_VADDR &&
- va - XCHAL_KIO_BYPASS_VADDR < XCHAL_KIO_SIZE))
- xtensa_iounmap(addr);
}
+#define ioremap_cache ioremap_cache

+#ifdef CONFIG_MMU
#define virt_to_bus virt_to_phys
#define bus_to_virt phys_to_virt
-
#endif /* CONFIG_MMU */

#include <asm-generic/io.h>
diff --git a/arch/xtensa/mm/ioremap.c b/arch/xtensa/mm/ioremap.c
index a400188c16b9..76733528f06d 100644
--- a/arch/xtensa/mm/ioremap.c
+++ b/arch/xtensa/mm/ioremap.c
@@ -6,60 +6,30 @@
*/

#include <linux/io.h>
-#include <linux/vmalloc.h>
#include <linux/pgtable.h>
#include <asm/cacheflush.h>
#include <asm/io.h>

-static void __iomem *xtensa_ioremap(unsigned long paddr, unsigned long size,
- pgprot_t prot)
+void __iomem *
+ioremap_allowed(phys_addr_t *paddr, size_t size, unsigned long *prot_val)
{
- unsigned long offset = paddr & ~PAGE_MASK;
- unsigned long pfn = __phys_to_pfn(paddr);
- struct vm_struct *area;
- unsigned long vaddr;
- int err;
-
- paddr &= PAGE_MASK;
+ unsigned long phys_addr = *paddr;
+ unsigned long pfn = __phys_to_pfn(phys_addr);

WARN_ON(pfn_valid(pfn));

- size = PAGE_ALIGN(offset + size);
-
- area = get_vm_area(size, VM_IOREMAP);
- if (!area)
- return NULL;
-
- vaddr = (unsigned long)area->addr;
- area->phys_addr = paddr;
-
- err = ioremap_page_range(vaddr, vaddr + size, paddr, prot);
-
- if (err) {
- vunmap((void *)vaddr);
- return NULL;
- }
-
- flush_cache_vmap(vaddr, vaddr + size);
- return (void __iomem *)(offset + vaddr);
+ return NULL;
}

-void __iomem *xtensa_ioremap_nocache(unsigned long addr, unsigned long size)
+int iounmap_allowed(void __iomem *addr)
{
- return xtensa_ioremap(addr, size, pgprot_noncached(PAGE_KERNEL));
-}
-EXPORT_SYMBOL(xtensa_ioremap_nocache);
+ unsigned long va = (unsigned long) addr;

-void __iomem *xtensa_ioremap_cache(unsigned long addr, unsigned long size)
-{
- return xtensa_ioremap(addr, size, PAGE_KERNEL);
-}
-EXPORT_SYMBOL(xtensa_ioremap_cache);
-
-void xtensa_iounmap(volatile void __iomem *io_addr)
-{
- void *addr = (void *)(PAGE_MASK & (unsigned long)io_addr);
+ if ((va >= XCHAL_KIO_CACHED_VADDR &&
+ va - XCHAL_KIO_CACHED_VADDR < XCHAL_KIO_SIZE) ||
+ (va >= XCHAL_KIO_BYPASS_VADDR &&
+ va - XCHAL_KIO_BYPASS_VADDR < XCHAL_KIO_SIZE))
+ return -EINVAL;

- vunmap(addr);
+ return 0;
}
-EXPORT_SYMBOL(xtensa_iounmap);
--
2.34.1


2022-08-01 15:34:23

by Baoquan He

Subject: [PATCH 07/11] openrisc: mm: Convert to GENERIC_IOREMAP

Add the hooks ioremap_allowed() and iounmap_allowed() for openrisc's
special handling in ioremap() and iounmap().

Signed-off-by: Baoquan He <[email protected]>
Cc: Jonas Bonn <[email protected]>
Cc: Stefan Kristiansson <[email protected]>
Cc: Stafford Horne <[email protected]>
Cc: [email protected]
---
arch/openrisc/Kconfig | 1 +
arch/openrisc/include/asm/io.h | 16 ++++++++---
arch/openrisc/mm/ioremap.c | 49 +++++++++++-----------------------
3 files changed, 28 insertions(+), 38 deletions(-)

diff --git a/arch/openrisc/Kconfig b/arch/openrisc/Kconfig
index e814df4c483c..77505f2a2767 100644
--- a/arch/openrisc/Kconfig
+++ b/arch/openrisc/Kconfig
@@ -21,6 +21,7 @@ config OPENRISC
select GENERIC_IRQ_PROBE
select GENERIC_IRQ_SHOW
select GENERIC_IOMAP
+ select GENERIC_IOREMAP
select GENERIC_CPU_DEVICES
select HAVE_UID16
select GENERIC_ATOMIC64
diff --git a/arch/openrisc/include/asm/io.h b/arch/openrisc/include/asm/io.h
index c298061c70a7..8948df501464 100644
--- a/arch/openrisc/include/asm/io.h
+++ b/arch/openrisc/include/asm/io.h
@@ -15,6 +15,8 @@
#define __ASM_OPENRISC_IO_H

#include <linux/types.h>
+#include <asm/pgtable.h>
+#include <asm/pgalloc.h>

/*
* PCI: can we really do 0 here if we have no port IO?
@@ -27,11 +29,17 @@
#define PIO_OFFSET 0
#define PIO_MASK 0

-#define ioremap ioremap
-void __iomem *ioremap(phys_addr_t offset, unsigned long size);
+/*
+ * I/O memory mapping functions.
+ */
+void __iomem *
+ioremap_allowed(phys_addr_t *paddr, size_t size, unsigned long *prot_val);
+#define ioremap_allowed ioremap_allowed
+
+int iounmap_allowed(void *addr);
+#define iounmap_allowed iounmap_allowed

-#define iounmap iounmap
-extern void iounmap(void __iomem *addr);
+#define _PAGE_IOREMAP (pgprot_val(PAGE_KERNEL) | _PAGE_CI)

#include <asm-generic/io.h>

diff --git a/arch/openrisc/mm/ioremap.c b/arch/openrisc/mm/ioremap.c
index daae13a76743..d30b6cc65548 100644
--- a/arch/openrisc/mm/ioremap.c
+++ b/arch/openrisc/mm/ioremap.c
@@ -24,26 +24,18 @@ extern int mem_init_done;

static unsigned int fixmaps_used __initdata;

-/*
- * Remap an arbitrary physical address space into the kernel virtual
- * address space. Needed when the kernel wants to access high addresses
- * directly.
- *
- * NOTE! We need to allow non-page-aligned mappings too: we will obviously
- * have to convert them into an offset in a page-aligned mapping, but the
- * caller shouldn't need to know that small detail.
- */
-void __iomem *__ref ioremap(phys_addr_t addr, unsigned long size)
+void __iomem *
+ioremap_allowed(phys_addr_t *paddr, size_t size, unsigned long *prot_val)
{
phys_addr_t p;
unsigned long v;
unsigned long offset, last_addr;
- struct vm_struct *area = NULL;
+ int ret = -EINVAL;

/* Don't allow wraparound or zero size */
last_addr = addr + size - 1;
if (!size || last_addr < addr)
- return NULL;
+ return IOMEM_ERR_PTR(ret);

/*
* Mappings have to be page-aligned
@@ -52,32 +44,24 @@ void __iomem *__ref ioremap(phys_addr_t addr, unsigned long size)
p = addr & PAGE_MASK;
size = PAGE_ALIGN(last_addr + 1) - p;

- if (likely(mem_init_done)) {
- area = get_vm_area(size, VM_IOREMAP);
- if (!area)
- return NULL;
- v = (unsigned long)area->addr;
- } else {
+ if (unlikely(!mem_init_done)) {
if ((fixmaps_used + (size >> PAGE_SHIFT)) > FIX_N_IOREMAPS)
- return NULL;
+ return IOMEM_ERR_PTR(ret);
v = fix_to_virt(FIX_IOREMAP_BEGIN + fixmaps_used);
fixmaps_used += (size >> PAGE_SHIFT);
- }

- if (ioremap_page_range(v, v + size, p,
- __pgprot(pgprot_val(PAGE_KERNEL) | _PAGE_CI))) {
- if (likely(mem_init_done))
- vfree(area->addr);
- else
+ if (ioremap_page_range(v, v + size, p, __pgprot(*prot_val))) {
fixmaps_used -= (size >> PAGE_SHIFT);
- return NULL;
+ return IOMEM_ERR_PTR(ret);
+ }
+
+ return (void __iomem *)(offset + (char *)v);
}

- return (void __iomem *)(offset + (char *)v);
+ return NULL;
}
-EXPORT_SYMBOL(ioremap);

-void iounmap(void __iomem *addr)
+int iounmap_allowed(void __iomem *addr)
{
/* If the page is from the fixmap pool then we just clear out
* the fixmap mapping.
@@ -97,13 +81,10 @@ void iounmap(void __iomem *addr)
* ii) invalid accesses to the freed areas aren't made
*/
flush_tlb_all();
- return;
+ return -EINVAL;
}
-
- return vfree((void *)(PAGE_MASK & (unsigned long)addr));
+ return 0;
}
-EXPORT_SYMBOL(iounmap);
-
/**
* OK, this one's a bit tricky... ioremap can get called before memory is
* initialized (early serial console does this) and will want to alloc a page
--
2.34.1


2022-08-01 15:38:47

by Baoquan He

Subject: [PATCH 08/11] parisc: mm: Convert to GENERIC_IOREMAP

Add the hook ioremap_allowed() for parisc's special handling in
ioremap(); ioremap_[wc|uc]() are then converted to use ioremap_prot()
from GENERIC_IOREMAP.

Signed-off-by: Baoquan He <[email protected]>
Cc: "James E.J. Bottomley" <[email protected]>
Cc: Helge Deller <[email protected]>
Cc: [email protected]
---
arch/parisc/include/asm/io.h | 16 ++++++---
arch/parisc/mm/ioremap.c | 66 ++++--------------------------------
2 files changed, 18 insertions(+), 64 deletions(-)

diff --git a/arch/parisc/include/asm/io.h b/arch/parisc/include/asm/io.h
index 837ddddbac6a..4183f05ca1c4 100644
--- a/arch/parisc/include/asm/io.h
+++ b/arch/parisc/include/asm/io.h
@@ -125,13 +125,19 @@ static inline void gsc_writeq(unsigned long long val, unsigned long addr)
}

/*
- * The standard PCI ioremap interfaces
+ * I/O memory mapping functions.
*/
-void __iomem *ioremap(unsigned long offset, unsigned long size);
-#define ioremap_wc ioremap
-#define ioremap_uc ioremap
+void __iomem *
+ioremap_allowed(phys_addr_t *paddr, size_t size, unsigned long prot);
+#define ioremap_allowed ioremap_allowed

-extern void iounmap(const volatile void __iomem *addr);
+#define _PAGE_IOREMAP (_PAGE_PRESENT | _PAGE_RW | _PAGE_DIRTY | \
+ _PAGE_ACCESSED | _PAGE_NO_CACHE)
+
+#define ioremap_wc(addr, size) \
+ ioremap_prot((addr), (size), _PAGE_IOREMAP)
+#define ioremap_uc(addr, size) \
+ ioremap_prot((addr), (size), _PAGE_IOREMAP)

static inline unsigned char __raw_readb(const volatile void __iomem *addr)
{
diff --git a/arch/parisc/mm/ioremap.c b/arch/parisc/mm/ioremap.c
index 345ff0b66499..1a6f928e8339 100644
--- a/arch/parisc/mm/ioremap.c
+++ b/arch/parisc/mm/ioremap.c
@@ -13,38 +13,20 @@
#include <linux/io.h>
#include <linux/mm.h>

-/*
- * Generic mapping function (not visible outside):
- */
-
-/*
- * Remap an arbitrary physical address space into the kernel virtual
- * address space.
- *
- * NOTE! We need to allow non-page-aligned mappings too: we will obviously
- * have to convert them into an offset in a page-aligned mapping, but the
- * caller shouldn't need to know that small detail.
- */
-void __iomem *ioremap(unsigned long phys_addr, unsigned long size)
+void __iomem *
+ioremap_allowed(phys_addr_t *paddr, size_t size, unsigned long *prot_val)
{
- void __iomem *addr;
- struct vm_struct *area;
- unsigned long offset, last_addr;
- pgprot_t pgprot;
+ phys_addr_t phys_addr = *paddr;
+ int ret = -EINVAL;

#ifdef CONFIG_EISA
unsigned long end = phys_addr + size - 1;
/* Support EISA addresses */
if ((phys_addr >= 0x00080000 && end < 0x000fffff) ||
(phys_addr >= 0x00500000 && end < 0x03bfffff))
- phys_addr |= F_EXTEND(0xfc000000);
+ *paddr = phys_addr |= F_EXTEND(0xfc000000);
#endif

- /* Don't allow wraparound or zero size */
- last_addr = phys_addr + size - 1;
- if (!size || last_addr < phys_addr)
- return NULL;
-
/*
* Don't allow anybody to remap normal RAM that we're using..
*/
@@ -58,43 +40,9 @@ void __iomem *ioremap(unsigned long phys_addr, unsigned long size)
for (page = virt_to_page(t_addr);
page <= virt_to_page(t_end); page++) {
if(!PageReserved(page))
- return NULL;
+ return IOMEM_ERR_PTR(ret);
}
}

- pgprot = __pgprot(_PAGE_PRESENT | _PAGE_RW | _PAGE_DIRTY |
- _PAGE_ACCESSED | _PAGE_NO_CACHE);
-
- /*
- * Mappings have to be page-aligned
- */
- offset = phys_addr & ~PAGE_MASK;
- phys_addr &= PAGE_MASK;
- size = PAGE_ALIGN(last_addr + 1) - phys_addr;
-
- /*
- * Ok, go for it..
- */
- area = get_vm_area(size, VM_IOREMAP);
- if (!area)
- return NULL;
-
- addr = (void __iomem *) area->addr;
- if (ioremap_page_range((unsigned long)addr, (unsigned long)addr + size,
- phys_addr, pgprot)) {
- vunmap(addr);
- return NULL;
- }
-
- return (void __iomem *) (offset + (char __iomem *)addr);
-}
-EXPORT_SYMBOL(ioremap);
-
-void iounmap(const volatile void __iomem *io_addr)
-{
- unsigned long addr = (unsigned long)io_addr & PAGE_MASK;
-
- if (is_vmalloc_addr((void *)addr))
- vunmap((void *)addr);
+ return NULL;
}
-EXPORT_SYMBOL(iounmap);
--
2.34.1


2022-08-01 15:38:47

by Baoquan He

Subject: [PATCH 01/11] mm/ioremap: change the return value of io[re|un]map_allowed

Some architectures have arch-specific I/O address mapping handling when
calling ioremap() or ioremap_prot(), e.g. arc, ia64, openrisc, s390, sh.

In order to convert them to the GENERIC_IOREMAP method, we need to change
the return values of ioremap_allowed() and iounmap_allowed() as below.

===
ioremap_allowed() returns a void __iomem pointer:
- an IS_ERR() pointer means an error, return NULL directly
- NULL means continue with the generic remapping
- any other non-NULL, non-IS_ERR pointer is returned directly
iounmap_allowed() returns an int:
- 0 means continue with the generic vunmap
- an error code means skip the vunmap and return directly
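
To make the three ioremap_allowed() cases concrete, a hypothetical arch
hook (the example_* helpers are invented for this illustration and are
not part of the series) could look roughly like this, using the by-value
prototype this patch introduces:

void __iomem *ioremap_allowed(phys_addr_t phys_addr, size_t size,
			      unsigned long prot)
{
	/* error: ioremap_prot() will return NULL */
	if (example_range_is_ram(phys_addr, size))
		return IOMEM_ERR_PTR(-EINVAL);

	/* already addressable: hand the cookie back directly and skip
	 * the vmalloc-based mapping entirely */
	if (example_range_is_direct_mapped(phys_addr))
		return (void __iomem *)(unsigned long)phys_addr;

	/* nothing special: let the generic remapping continue */
	return NULL;
}

int iounmap_allowed(void *addr)
{
	/* skip vunmap() for cookies handed back directly above */
	if (example_addr_is_direct_mapped(addr))
		return -EINVAL;

	return 0;
}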

This is taken from Kefeng's earlier patch below. Christoph suggested
these return values because he foresaw the possibility of converting
more architectures to GENERIC_IOREMAP.
- [PATCH v3 4/6] mm: ioremap: Add arch_ioremap/iounmap()
- https://lore.kernel.org/all/[email protected]/T/#u

This is preparation for later patches; no functionality change.

Signed-off-by: Baoquan He <[email protected]>
---
arch/arm64/include/asm/io.h | 2 +-
arch/arm64/mm/ioremap.c | 9 +++++----
include/asm-generic/io.h | 17 +++++++++--------
mm/ioremap.c | 10 +++++++---
4 files changed, 22 insertions(+), 16 deletions(-)

diff --git a/arch/arm64/include/asm/io.h b/arch/arm64/include/asm/io.h
index 87dd42d74afe..9c56a077b687 100644
--- a/arch/arm64/include/asm/io.h
+++ b/arch/arm64/include/asm/io.h
@@ -164,7 +164,7 @@ extern void __memset_io(volatile void __iomem *, int, size_t);
* I/O memory mapping functions.
*/

-bool ioremap_allowed(phys_addr_t phys_addr, size_t size, unsigned long prot);
+void __iomem *ioremap_allowed(phys_addr_t phys_addr, size_t size, unsigned long prot);
#define ioremap_allowed ioremap_allowed

#define _PAGE_IOREMAP PROT_DEVICE_nGnRE
diff --git a/arch/arm64/mm/ioremap.c b/arch/arm64/mm/ioremap.c
index c5af103d4ad4..49467c781283 100644
--- a/arch/arm64/mm/ioremap.c
+++ b/arch/arm64/mm/ioremap.c
@@ -3,19 +3,20 @@
#include <linux/mm.h>
#include <linux/io.h>

-bool ioremap_allowed(phys_addr_t phys_addr, size_t size, unsigned long prot)
+void __iomem *ioremap_allowed(phys_addr_t phys_addr, size_t size, unsigned long prot)
{
unsigned long last_addr = phys_addr + size - 1;
+ int ret = -EINVAL;

/* Don't allow outside PHYS_MASK */
if (last_addr & ~PHYS_MASK)
- return false;
+ return IOMEM_ERR_PTR(ret);

/* Don't allow RAM to be mapped. */
if (WARN_ON(pfn_is_map_memory(__phys_to_pfn(phys_addr))))
- return false;
+ return IOMEM_ERR_PTR(ret);

- return true;
+ return NULL;
}

/*
diff --git a/include/asm-generic/io.h b/include/asm-generic/io.h
index 72974cb81343..d72eb310fb3c 100644
--- a/include/asm-generic/io.h
+++ b/include/asm-generic/io.h
@@ -967,26 +967,27 @@ static inline void iounmap(volatile void __iomem *addr)
/*
* Arch code can implement the following two hooks when using GENERIC_IOREMAP
* ioremap_allowed() return a bool,
- * - true means continue to remap
- * - false means skip remap and return directly
+ * - IS_ERR means return an error
+ * - NULL means continue to remap
+ * - a non-NULL, non-IS_ERR pointer is returned directly
* iounmap_allowed() return a bool,
- * - true means continue to vunmap
- * - false means skip vunmap and return directly
+ * - 0 means continue to vunmap
+ * - error code means skip vunmap and return directly
*/
#ifndef ioremap_allowed
#define ioremap_allowed ioremap_allowed
-static inline bool ioremap_allowed(phys_addr_t phys_addr, size_t size,
+static inline void __iomem *ioremap_allowed(phys_addr_t phys_addr, size_t size,
unsigned long prot)
{
- return true;
+ return NULL;
}
#endif

#ifndef iounmap_allowed
#define iounmap_allowed iounmap_allowed
-static inline bool iounmap_allowed(void *addr)
+static inline int iounmap_allowed(void *addr)
{
- return true;
+ return 0;
}
#endif

diff --git a/mm/ioremap.c b/mm/ioremap.c
index 8652426282cc..b16ee9debea2 100644
--- a/mm/ioremap.c
+++ b/mm/ioremap.c
@@ -17,6 +17,7 @@ void __iomem *ioremap_prot(phys_addr_t phys_addr, size_t size,
unsigned long offset, vaddr;
phys_addr_t last_addr;
struct vm_struct *area;
+ void __iomem *base;

/* Disallow wrap-around or zero size */
last_addr = phys_addr + size - 1;
@@ -28,8 +29,11 @@ void __iomem *ioremap_prot(phys_addr_t phys_addr, size_t size,
phys_addr -= offset;
size = PAGE_ALIGN(size + offset);

- if (!ioremap_allowed(phys_addr, size, prot))
+ base = ioremap_allowed(phys_addr, size, prot);
+ if (IS_ERR(base))
return NULL;
+ else if (base)
+ return base;

area = get_vm_area_caller(size, VM_IOREMAP,
__builtin_return_address(0));
@@ -50,9 +54,9 @@ EXPORT_SYMBOL(ioremap_prot);

void iounmap(volatile void __iomem *addr)
{
- void *vaddr = (void *)((unsigned long)addr & PAGE_MASK);
+ void __iomem *vaddr = (void __iomem *)((unsigned long)addr & PAGE_MASK);

- if (!iounmap_allowed(vaddr))
+ if (iounmap_allowed(vaddr))
return;

if (is_vmalloc_addr(vaddr))
--
2.34.1


2022-08-01 15:40:23

by Baoquan He

Subject: [PATCH 04/11] arc: mm: Convert to GENERIC_IOREMAP

Add the hooks ioremap_allowed() and iounmap_allowed() for arc's special
handling in ioremap_prot() and iounmap(). Meanwhile, define and implement
arc's own ioremap() because arc needs special handling beyond the
standard ioremap().

Signed-off-by: Baoquan He <[email protected]>
Cc: Vineet Gupta <[email protected]>
Cc: [email protected]
---
arch/arc/Kconfig | 1 +
arch/arc/include/asm/io.h | 19 +++++++++----
arch/arc/mm/ioremap.c | 60 ++++++---------------------------------
3 files changed, 24 insertions(+), 56 deletions(-)

diff --git a/arch/arc/Kconfig b/arch/arc/Kconfig
index 9e3653253ef2..a08d2abfaf61 100644
--- a/arch/arc/Kconfig
+++ b/arch/arc/Kconfig
@@ -26,6 +26,7 @@ config ARC
select GENERIC_PENDING_IRQ if SMP
select GENERIC_SCHED_CLOCK
select GENERIC_SMP_IDLE_THREAD
+ select GENERIC_IOREMAP
select HAVE_ARCH_KGDB
select HAVE_ARCH_TRACEHOOK
select HAVE_ARCH_TRANSPARENT_HUGEPAGE if ARC_MMU_V4
diff --git a/arch/arc/include/asm/io.h b/arch/arc/include/asm/io.h
index 8f777d6441a5..10d7ed9dfa61 100644
--- a/arch/arc/include/asm/io.h
+++ b/arch/arc/include/asm/io.h
@@ -20,9 +20,20 @@
#define __iowmb() do { } while (0)
#endif

-extern void __iomem *ioremap(phys_addr_t paddr, unsigned long size);
-extern void __iomem *ioremap_prot(phys_addr_t paddr, unsigned long size,
- unsigned long flags);
+/*
+ * I/O memory mapping functions.
+ */
+
+void __iomem *
+ioremap_allowed(phys_addr_t *paddr, size_t size, unsigned long *prot_val);
+#define ioremap_allowed ioremap_allowed
+
+int iounmap_allowed(void __iomem *addr);
+#define iounmap_allowed iounmap_allowed
+
+void __iomem *ioremap(phys_addr_t paddr, unsigned long size);
+#define ioremap ioremap
+
static inline void __iomem *ioport_map(unsigned long port, unsigned int nr)
{
return (void __iomem *)port;
@@ -32,8 +43,6 @@ static inline void ioport_unmap(void __iomem *addr)
{
}

-extern void iounmap(const void __iomem *addr);
-
/*
* io{read,write}{16,32}be() macros
*/
diff --git a/arch/arc/mm/ioremap.c b/arch/arc/mm/ioremap.c
index 0ee75aca6e10..7be73e3645da 100644
--- a/arch/arc/mm/ioremap.c
+++ b/arch/arc/mm/ioremap.c
@@ -25,13 +25,6 @@ static inline bool arc_uncached_addr_space(phys_addr_t paddr)

void __iomem *ioremap(phys_addr_t paddr, unsigned long size)
{
- phys_addr_t end;
-
- /* Don't allow wraparound or zero size */
- end = paddr + size - 1;
- if (!size || (end < paddr))
- return NULL;
-
/*
* If the region is h/w uncached, MMU mapping can be elided as optim
* The cast to u32 is fine as this region can only be inside 4GB
@@ -44,62 +37,27 @@ void __iomem *ioremap(phys_addr_t paddr, unsigned long size)
}
EXPORT_SYMBOL(ioremap);

-/*
- * ioremap with access flags
- * Cache semantics wise it is same as ioremap - "forced" uncached.
- * However unlike vanilla ioremap which bypasses ARC MMU for addresses in
- * ARC hardware uncached region, this one still goes thru the MMU as caller
- * might need finer access control (R/W/X)
- */
-void __iomem *ioremap_prot(phys_addr_t paddr, unsigned long size,
- unsigned long flags)
+void __iomem *
+ioremap_allowed(phys_addr_t *paddr, size_t size, unsigned long *prot_val)
{
- unsigned int off;
- unsigned long vaddr;
- struct vm_struct *area;
- phys_addr_t end;
- pgprot_t prot = __pgprot(flags);
-
- /* Don't allow wraparound, zero size */
- end = paddr + size - 1;
- if ((!size) || (end < paddr))
- return NULL;
+ int ret = -EINVAL;

/* An early platform driver might end up here */
if (!slab_is_available())
- return NULL;
+ return IOMEM_ERR_PTR(ret);

/* force uncached */
- prot = pgprot_noncached(prot);
+ *prot_val = pgprot_val(pgprot_noncached(__pgprot(*prot_val)));

- /* Mappings have to be page-aligned */
- off = paddr & ~PAGE_MASK;
- paddr &= PAGE_MASK_PHYS;
- size = PAGE_ALIGN(end + 1) - paddr;
+ return NULL;

- /*
- * Ok, go for it..
- */
- area = get_vm_area(size, VM_IOREMAP);
- if (!area)
- return NULL;
- area->phys_addr = paddr;
- vaddr = (unsigned long)area->addr;
- if (ioremap_page_range(vaddr, vaddr + size, paddr, prot)) {
- vunmap((void __force *)vaddr);
- return NULL;
- }
- return (void __iomem *)(off + (char __iomem *)vaddr);
}
-EXPORT_SYMBOL(ioremap_prot);
-

-void iounmap(const void __iomem *addr)
+int iounmap_allowed(void __iomem *addr)
{
/* weird double cast to handle phys_addr_t > 32 bits */
if (arc_uncached_addr_space((phys_addr_t)(u32)addr))
- return;
+ return -EINVAL;

- vfree((void *)(PAGE_MASK & (unsigned long __force)addr));
+ return 0;
}
-EXPORT_SYMBOL(iounmap);
--
2.34.1


2022-08-01 15:42:24

by Baoquan He

Subject: [PATCH 02/11] mm: ioremap: fixup the physical address

On some architectures, the I/O address is fixed up before being mapped,
e.g. parisc, mips.

In order to convert them to the GENERIC_IOREMAP method, we need to wrap
the address fixup code into ioremap_allowed() and pass the new address
out for ioremap_prot() to handle.

This is a preparation patch; no functionality change.
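
To make the new by-reference calling convention concrete, a hypothetical
hook might adjust the address and protection in place like this (the
EXAMPLE_* constants are invented for illustration and do not exist in
the series):

void __iomem *
ioremap_allowed(phys_addr_t *paddr, size_t size, unsigned long *prot_val)
{
	/* fix up the physical address in place, similar to what parisc
	 * does for EISA addresses in a later patch */
	if (*paddr >= EXAMPLE_LEGACY_START && *paddr < EXAMPLE_LEGACY_END)
		*paddr |= EXAMPLE_EXTEND_BITS;

	/* optionally adjust the protection value as well, as arc does */
	*prot_val = pgprot_val(pgprot_noncached(__pgprot(*prot_val)));

	/* return NULL so ioremap_prot() continues with the updated values */
	return NULL;
}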

Signed-off-by: Baoquan He <[email protected]>
---
arch/arm64/include/asm/io.h | 3 ++-
arch/arm64/mm/ioremap.c | 10 ++++++++--
include/asm-generic/io.h | 6 +++---
mm/ioremap.c | 19 ++++++++++---------
4 files changed, 23 insertions(+), 15 deletions(-)

diff --git a/arch/arm64/include/asm/io.h b/arch/arm64/include/asm/io.h
index 9c56a077b687..7f0c5f60d946 100644
--- a/arch/arm64/include/asm/io.h
+++ b/arch/arm64/include/asm/io.h
@@ -164,7 +164,8 @@ extern void __memset_io(volatile void __iomem *, int, size_t);
* I/O memory mapping functions.
*/

-void __iomem *ioremap_allowed(phys_addr_t phys_addr, size_t size, unsigned long prot);
+void __iomem *
+ioremap_allowed(phys_addr_t *paddr, size_t size, unsigned long *prot_val);
#define ioremap_allowed ioremap_allowed

#define _PAGE_IOREMAP PROT_DEVICE_nGnRE
diff --git a/arch/arm64/mm/ioremap.c b/arch/arm64/mm/ioremap.c
index 49467c781283..b9febcf0f5f4 100644
--- a/arch/arm64/mm/ioremap.c
+++ b/arch/arm64/mm/ioremap.c
@@ -3,11 +3,17 @@
#include <linux/mm.h>
#include <linux/io.h>

-void __iomem *ioremap_allowed(phys_addr_t phys_addr, size_t size, unsigned long prot)
+void __iomem *
+ioremap_allowed(phys_addr_t *paddr, size_t size, unsigned long *prot_val)
{
- unsigned long last_addr = phys_addr + size - 1;
+ unsigned long last_addr, offset, phys_addr = *paddr;
int ret = -EINVAL;

+ offset = phys_addr & (~PAGE_MASK);
+ phys_addr -= offset;
+ size = PAGE_ALIGN(size + offset);
+ last_addr = phys_addr + size - 1;
+
/* Don't allow outside PHYS_MASK */
if (last_addr & ~PHYS_MASK)
return IOMEM_ERR_PTR(ret);
diff --git a/include/asm-generic/io.h b/include/asm-generic/io.h
index d72eb310fb3c..0b5cd3cef99d 100644
--- a/include/asm-generic/io.h
+++ b/include/asm-generic/io.h
@@ -976,8 +976,8 @@ static inline void iounmap(volatile void __iomem *addr)
*/
#ifndef ioremap_allowed
#define ioremap_allowed ioremap_allowed
-static inline void __iomem *ioremap_allowed(phys_addr_t phys_addr, size_t size,
- unsigned long prot)
+static inline void __iomem *ioremap_allowed(phys_addr_t *paddr, size_t size,
+ unsigned long *prot_val)
{
return NULL;
}
@@ -991,7 +991,7 @@ static inline int iounmap_allowed(void *addr)
}
#endif

-void __iomem *ioremap_prot(phys_addr_t phys_addr, size_t size,
+void __iomem *ioremap_prot(phys_addr_t paddr, size_t size,
unsigned long prot);
void iounmap(volatile void __iomem *addr);

diff --git a/mm/ioremap.c b/mm/ioremap.c
index b16ee9debea2..01439882112e 100644
--- a/mm/ioremap.c
+++ b/mm/ioremap.c
@@ -11,13 +11,20 @@
#include <linux/io.h>
#include <linux/export.h>

-void __iomem *ioremap_prot(phys_addr_t phys_addr, size_t size,
+void __iomem *ioremap_prot(phys_addr_t paddr, size_t size,
unsigned long prot)
{
unsigned long offset, vaddr;
- phys_addr_t last_addr;
+ phys_addr_t last_addr, phys_addr = paddr;
struct vm_struct *area;
void __iomem *base;
+ unsigned long prot_val = prot;
+
+ base = ioremap_allowed(&phys_addr, size, &prot_val);
+ if (IS_ERR(base))
+ return NULL;
+ else if (base)
+ return base;

/* Disallow wrap-around or zero size */
last_addr = phys_addr + size - 1;
@@ -29,12 +36,6 @@ void __iomem *ioremap_prot(phys_addr_t phys_addr, size_t size,
phys_addr -= offset;
size = PAGE_ALIGN(size + offset);

- base = ioremap_allowed(phys_addr, size, prot);
- if (IS_ERR(base))
- return NULL;
- else if (base)
- return base;
-
area = get_vm_area_caller(size, VM_IOREMAP,
__builtin_return_address(0));
if (!area)
@@ -43,7 +44,7 @@ void __iomem *ioremap_prot(phys_addr_t phys_addr, size_t size,
area->phys_addr = phys_addr;

if (ioremap_page_range(vaddr, vaddr + size, phys_addr,
- __pgprot(prot))) {
+ __pgprot(prot_val))) {
free_vm_area(area);
return NULL;
}
--
2.34.1


2022-08-01 15:42:29

by Baoquan He

Subject: [PATCH 09/11] s390: mm: Convert to GENERIC_IOREMAP

Add the hooks ioremap_allowed() and iounmap_allowed() for s390's special
handling in ioremap() and iounmap(); ioremap_[wc|wt]() are then converted
to use ioremap_prot() from GENERIC_IOREMAP.

Signed-off-by: Baoquan He <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: Vasily Gorbik <[email protected]>
Cc: Alexander Gordeev <[email protected]>
Cc: Christian Borntraeger <[email protected]>
Cc: Sven Schnelle <[email protected]>
Cc: [email protected]
---
arch/s390/Kconfig | 1 +
arch/s390/include/asm/io.h | 26 +++++++++++------
arch/s390/pci/pci.c | 60 +++++---------------------------------
3 files changed, 26 insertions(+), 61 deletions(-)

diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
index 5a1a8dfda6f8..60ed181dfba5 100644
--- a/arch/s390/Kconfig
+++ b/arch/s390/Kconfig
@@ -135,6 +135,7 @@ config S390
select GENERIC_SMP_IDLE_THREAD
select GENERIC_TIME_VSYSCALL
select GENERIC_VDSO_TIME_NS
+ select GENERIC_IOREMAP
select HAVE_ALIGNED_STRUCT_PAGE if SLUB
select HAVE_ARCH_AUDITSYSCALL
select HAVE_ARCH_JUMP_LABEL
diff --git a/arch/s390/include/asm/io.h b/arch/s390/include/asm/io.h
index e3882b012bfa..b0f823850ef1 100644
--- a/arch/s390/include/asm/io.h
+++ b/arch/s390/include/asm/io.h
@@ -22,11 +22,23 @@ void unxlate_dev_mem_ptr(phys_addr_t phys, void *addr);

#define IO_SPACE_LIMIT 0

-void __iomem *ioremap_prot(phys_addr_t addr, size_t size, unsigned long prot);
-void __iomem *ioremap(phys_addr_t addr, size_t size);
-void __iomem *ioremap_wc(phys_addr_t addr, size_t size);
-void __iomem *ioremap_wt(phys_addr_t addr, size_t size);
-void iounmap(volatile void __iomem *addr);
+
+/*
+ * I/O memory mapping functions.
+ */
+void __iomem *
+ioremap_allowed(phys_addr_t *paddr, size_t size, unsigned long *prot_val);
+#define ioremap_allowed ioremap_allowed
+
+int iounmap_allowed(void *addr);
+#define iounmap_allowed iounmap_allowed
+
+#define _PAGE_IOREMAP pgprot_val(PAGE_KERNEL)
+
+#define ioremap_wc(addr, size) \
+ ioremap_prot((addr), (size), pgprot_val(pgprot_writecombine(PAGE_KERNEL)))
+#define ioremap_wt(addr, size) \
+ ioremap_prot((addr), (size), pgprot_val(pgprot_writethrough(PAGE_KERNEL)))

static inline void __iomem *ioport_map(unsigned long port, unsigned int nr)
{
@@ -51,10 +63,6 @@ static inline void ioport_unmap(void __iomem *p)
#define pci_iomap_wc pci_iomap_wc
#define pci_iomap_wc_range pci_iomap_wc_range

-#define ioremap ioremap
-#define ioremap_wt ioremap_wt
-#define ioremap_wc ioremap_wc
-
#define memcpy_fromio(dst, src, count) zpci_memcpy_fromio(dst, src, count)
#define memcpy_toio(dst, src, count) zpci_memcpy_toio(dst, src, count)
#define memset_io(dst, val, count) zpci_memset_io(dst, val, count)
diff --git a/arch/s390/pci/pci.c b/arch/s390/pci/pci.c
index bc980fd313d5..dcd17366a064 100644
--- a/arch/s390/pci/pci.c
+++ b/arch/s390/pci/pci.c
@@ -231,64 +231,20 @@ void __iowrite64_copy(void __iomem *to, const void *from, size_t count)
zpci_memcpy_toio(to, from, count);
}

-static void __iomem *__ioremap(phys_addr_t addr, size_t size, pgprot_t prot)
+void __iomem *
+ioremap_allowed(phys_addr_t *paddr, size_t size, unsigned long *prot_val)
{
- unsigned long offset, vaddr;
- struct vm_struct *area;
- phys_addr_t last_addr;
-
- last_addr = addr + size - 1;
- if (!size || last_addr < addr)
- return NULL;
-
if (!static_branch_unlikely(&have_mio))
- return (void __iomem *) addr;
-
- offset = addr & ~PAGE_MASK;
- addr &= PAGE_MASK;
- size = PAGE_ALIGN(size + offset);
- area = get_vm_area(size, VM_IOREMAP);
- if (!area)
- return NULL;
-
- vaddr = (unsigned long) area->addr;
- if (ioremap_page_range(vaddr, vaddr + size, addr, prot)) {
- free_vm_area(area);
- return NULL;
- }
- return (void __iomem *) ((unsigned long) area->addr + offset);
-}
-
-void __iomem *ioremap_prot(phys_addr_t addr, size_t size, unsigned long prot)
-{
- return __ioremap(addr, size, __pgprot(prot));
+ return (void __iomem *) *paddr;
+ return NULL;
}
-EXPORT_SYMBOL(ioremap_prot);

-void __iomem *ioremap(phys_addr_t addr, size_t size)
+int iounmap_allowed(void __iomem *addr)
{
- return __ioremap(addr, size, PAGE_KERNEL);
-}
-EXPORT_SYMBOL(ioremap);
-
-void __iomem *ioremap_wc(phys_addr_t addr, size_t size)
-{
- return __ioremap(addr, size, pgprot_writecombine(PAGE_KERNEL));
-}
-EXPORT_SYMBOL(ioremap_wc);
-
-void __iomem *ioremap_wt(phys_addr_t addr, size_t size)
-{
- return __ioremap(addr, size, pgprot_writethrough(PAGE_KERNEL));
-}
-EXPORT_SYMBOL(ioremap_wt);
-
-void iounmap(volatile void __iomem *addr)
-{
- if (static_branch_likely(&have_mio))
- vunmap((__force void *) ((unsigned long) addr & PAGE_MASK));
+ if (!static_branch_likely(&have_mio))
+ return -EINVAL;
+ return 0;
}
-EXPORT_SYMBOL(iounmap);

/* Create a virtual mapping cookie for a PCI BAR */
static void __iomem *pci_iomap_range_fh(struct pci_dev *pdev, int bar,
--
2.34.1


2022-08-04 15:56:48

by Alexander Gordeev

Subject: Re: [PATCH 01/11] mm/ioremap: change the return value of io[re|un]map_allowed

On Mon, Aug 01, 2022 at 10:40:19PM +0800, Baoquan He wrote:

Hi Baoquan,

> --- a/arch/arm64/mm/ioremap.c
> +++ b/arch/arm64/mm/ioremap.c
> @@ -3,19 +3,20 @@
> #include <linux/mm.h>
> #include <linux/io.h>
>
> -bool ioremap_allowed(phys_addr_t phys_addr, size_t size, unsigned long prot)
> +void __iomem *ioremap_allowed(phys_addr_t phys_addr, size_t size, unsigned long prot)
> {
> unsigned long last_addr = phys_addr + size - 1;
> + int ret = -EINVAL;

Is the ret variable really needed?

> diff --git a/include/asm-generic/io.h b/include/asm-generic/io.h
> index 72974cb81343..d72eb310fb3c 100644
> --- a/include/asm-generic/io.h
> +++ b/include/asm-generic/io.h
> @@ -967,26 +967,27 @@ static inline void iounmap(volatile void __iomem *addr)
> /*
> * Arch code can implement the following two hooks when using GENERIC_IOREMAP
> * ioremap_allowed() return a bool,
> - * - true means continue to remap
> - * - false means skip remap and return directly
> + * - IS_ERR means return an error
> + * - NULL means continue to remap
> + * - a non-NULL, non-IS_ERR pointer is returned directly

If ioremap_allowed() returns a valid pointer, then the function name
is not as precise anymore.

> @@ -28,8 +29,11 @@ void __iomem *ioremap_prot(phys_addr_t phys_addr, size_t size,
> phys_addr -= offset;
> size = PAGE_ALIGN(size + offset);
>
> - if (!ioremap_allowed(phys_addr, size, prot))
> + base = ioremap_allowed(phys_addr, size, prot);
> + if (IS_ERR(base))
> return NULL;
> + else if (base)
> + return base;

It is probably just me, but the base name is a bit misleading here.

> @@ -50,9 +54,9 @@ EXPORT_SYMBOL(ioremap_prot);
>
> void iounmap(volatile void __iomem *addr)
> {
> - void *vaddr = (void *)((unsigned long)addr & PAGE_MASK);
> + void __iomem *vaddr = (void __iomem *)((unsigned long)addr & PAGE_MASK);
>
> - if (!iounmap_allowed(vaddr))
> + if (iounmap_allowed(vaddr))

I guess, iounmap_allowed() should accept void __iomem *, not void *.
Then addr needs to be passed to iounmap_allowed() not vaddr.

> return;

2022-08-04 16:20:20

by Alexander Gordeev

Subject: Re: [PATCH 02/11] mm: ioremap: fixup the physical address

On Mon, Aug 01, 2022 at 10:40:20PM +0800, Baoquan He wrote:
> This is a preparation patch, no functionality change.

There is, please see below.

> @@ -3,11 +3,17 @@
> #include <linux/mm.h>
> #include <linux/io.h>
>
> -void __iomem *ioremap_allowed(phys_addr_t phys_addr, size_t size, unsigned long prot)
> +void __iomem *
> +ioremap_allowed(phys_addr_t *paddr, size_t size, unsigned long *prot_val)
> {
> - unsigned long last_addr = phys_addr + size - 1;
> + unsigned long last_addr, offset, phys_addr = *paddr;
> int ret = -EINVAL;
>
> + offset = phys_addr & (~PAGE_MASK);
> + phys_addr -= offset;

FWIW, phys_addr &= PAGE_MASK looks much more usual.

> @@ -11,13 +11,20 @@
> #include <linux/io.h>
> #include <linux/export.h>
>
> -void __iomem *ioremap_prot(phys_addr_t phys_addr, size_t size,
> +void __iomem *ioremap_prot(phys_addr_t paddr, size_t size,
> unsigned long prot)
> {
> unsigned long offset, vaddr;
> - phys_addr_t last_addr;
> + phys_addr_t last_addr, phys_addr = paddr;
> struct vm_struct *area;
> void __iomem *base;
> + unsigned long prot_val = prot;

Why is prot_val needed?

> + base = ioremap_allowed(&phys_addr, size, &prot_val);
> + if (IS_ERR(base))
> + return NULL;
> + else if (base)
> + return base;

By moving ioremap_allowed() here you allow it to be called
before the wrap-around check, including architectures that
do not do fixups.

And now the semantics, prototype and name of ioremap_allowed() become
less than obvious. Why not introduce a separate fixup callback?
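
For illustration, one possible shape for such a split (the name is
invented here, not taken from any posted patch) would be a dedicated
fixup hook that only adjusts the address/protection and leaves
ioremap_allowed() as a pure allow/deny check:

/* default: no arch-specific fixup */
#ifndef ioremap_phys_fixup
#define ioremap_phys_fixup ioremap_phys_fixup
static inline void ioremap_phys_fixup(phys_addr_t *paddr,
				      unsigned long *prot_val)
{
}
#endif

ioremap_prot() would then call ioremap_phys_fixup(&phys_addr, &prot)
right after its wrap-around and alignment handling, before calling
ioremap_allowed().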

> /* Disallow wrap-around or zero size */
> last_addr = phys_addr + size - 1;
> @@ -29,12 +36,6 @@ void __iomem *ioremap_prot(phys_addr_t phys_addr, size_t size,
> phys_addr -= offset;
> size = PAGE_ALIGN(size + offset);
>
> - base = ioremap_allowed(phys_addr, size, prot);
> - if (IS_ERR(base))
> - return NULL;
> - else if (base)
> - return base;
> -
> area = get_vm_area_caller(size, VM_IOREMAP,
> __builtin_return_address(0));
> if (!area)

2022-08-06 02:42:35

by Kefeng Wang

[permalink] [raw]
Subject: Re: [PATCH 01/11] mm/ioremap: change the return value of io[re|un]map_allowed


On 2022/8/4 23:42, Alexander Gordeev wrote:
> On Mon, Aug 01, 2022 at 10:40:19PM +0800, Baoquan He wrote:
>
> Hi Baoquan,
>
>> --- a/arch/arm64/mm/ioremap.c
>> +++ b/arch/arm64/mm/ioremap.c
>> @@ -3,19 +3,20 @@
>> #include <linux/mm.h>
>> #include <linux/io.h>
>>
>> -bool ioremap_allowed(phys_addr_t phys_addr, size_t size, unsigned long prot)
>> +void __iomem *ioremap_allowed(phys_addr_t phys_addr, size_t size, unsigned long prot)
>> {
>> unsigned long last_addr = phys_addr + size - 1;
>> + int ret = -EINVAL;
> If ret variable is really needed?
>
>> diff --git a/include/asm-generic/io.h b/include/asm-generic/io.h
>> index 72974cb81343..d72eb310fb3c 100644
>> --- a/include/asm-generic/io.h
>> +++ b/include/asm-generic/io.h
>> @@ -967,26 +967,27 @@ static inline void iounmap(volatile void __iomem *addr)
>> /*
>> * Arch code can implement the following two hooks when using GENERIC_IOREMAP
>> * ioremap_allowed() return a bool,
>> - * - true means continue to remap
>> - * - false means skip remap and return directly
>> + * - IS_ERR means return an error
>> + * - NULL means continue to remap
>> + * - a non-NULL, non-IS_ERR pointer is returned directly
> If ioremap_allowed() returns a valid pointer, then the function name
> is not as precise anymore.

Maybe use arch_ioremap/unmap as before, or some better name.

>
>> @@ -28,8 +29,11 @@ void __iomem *ioremap_prot(phys_addr_t phys_addr, size_t size,
>> phys_addr -= offset;
>> size = PAGE_ALIGN(size + offset);
>>
>> - if (!ioremap_allowed(phys_addr, size, prot))
>> + base = ioremap_allowed(phys_addr, size, prot);
>> + if (IS_ERR(base))
>> return NULL;
>> + else if (base)
>> + return base;
> It is probably just me, but the base name bit misleading here.
We could reuse vaddr, not add new base.
>
>> @@ -50,9 +54,9 @@ EXPORT_SYMBOL(ioremap_prot);
>>
>> void iounmap(volatile void __iomem *addr)
>> {
>> - void *vaddr = (void *)((unsigned long)addr & PAGE_MASK);
>> + void __iomem *vaddr = (void __iomem *)((unsigned long)addr & PAGE_MASK);
>>
>> - if (!iounmap_allowed(vaddr))
>> + if (iounmap_allowed(vaddr))
> I guess, iounmap_allowed() should accept void __iomem *, not void *.
> Then addr needs to be passed to iounmap_allowed() not vaddr.

The following is_vmalloc_addr()  and vunmap() in iounmap() use void *,

so we could simply use void* for iounmap_allowed().

>
>> return;
> .

2022-08-06 08:56:34

by Alexander Gordeev

[permalink] [raw]
Subject: Re: [PATCH 01/11] mm/ioremap: change the return value of io[re|un]map_allowed

On Sat, Aug 06, 2022 at 10:29:03AM +0800, Kefeng Wang wrote:
>
> On 2022/8/4 23:42, Alexander Gordeev wrote:
> > On Mon, Aug 01, 2022 at 10:40:19PM +0800, Baoquan He wrote:
> >
> > Hi Baoquan,
> >
> > > --- a/arch/arm64/mm/ioremap.c
> > > +++ b/arch/arm64/mm/ioremap.c
> > > @@ -3,19 +3,20 @@
> > > #include <linux/mm.h>
> > > #include <linux/io.h>
> > > -bool ioremap_allowed(phys_addr_t phys_addr, size_t size, unsigned long prot)
> > > +void __iomem *ioremap_allowed(phys_addr_t phys_addr, size_t size, unsigned long prot)
> > > {
> > > unsigned long last_addr = phys_addr + size - 1;
> > > + int ret = -EINVAL;
> > If ret variable is really needed?
> >
> > > diff --git a/include/asm-generic/io.h b/include/asm-generic/io.h
> > > index 72974cb81343..d72eb310fb3c 100644
> > > --- a/include/asm-generic/io.h
> > > +++ b/include/asm-generic/io.h
> > > @@ -967,26 +967,27 @@ static inline void iounmap(volatile void __iomem *addr)
> > > /*
> > > * Arch code can implement the following two hooks when using GENERIC_IOREMAP
> > > * ioremap_allowed() return a bool,
> > > - * - true means continue to remap
> > > - * - false means skip remap and return directly
> > > + * - IS_ERR means return an error
> > > + * - NULL means continue to remap
> > > + * - a non-NULL, non-IS_ERR pointer is returned directly
> > If ioremap_allowed() returns a valid pointer, then the function name
> > is not as precise anymore.
>
> Maybe use arch_ioremap/unmap as before, or some better name.
>
> >
> > > @@ -28,8 +29,11 @@ void __iomem *ioremap_prot(phys_addr_t phys_addr, size_t size,
> > > phys_addr -= offset;
> > > size = PAGE_ALIGN(size + offset);
> > > - if (!ioremap_allowed(phys_addr, size, prot))
> > > + base = ioremap_allowed(phys_addr, size, prot);
> > > + if (IS_ERR(base))
> > > return NULL;
> > > + else if (base)
> > > + return base;
> > It is probably just me, but the base name bit misleading here.
> We could reuse vaddr, not add new base.

The vaddr name is wrong AFAICT. ioremap_allowed() returns an __iomem
address, not a plain virtual one.

> >
> > > @@ -50,9 +54,9 @@ EXPORT_SYMBOL(ioremap_prot);
> > > void iounmap(volatile void __iomem *addr)
> > > {
> > > - void *vaddr = (void *)((unsigned long)addr & PAGE_MASK);
> > > + void __iomem *vaddr = (void __iomem *)((unsigned long)addr & PAGE_MASK);

Same here.

> > > - if (!iounmap_allowed(vaddr))
> > > + if (iounmap_allowed(vaddr))
> > I guess, iounmap_allowed() should accept void __iomem *, not void *.
> > Then addr needs to be passed to iounmap_allowed() not vaddr.
>
> The following is_vmalloc_addr() and vunmap() in iounmap() use void *,
>
> so we could simply use void* for iounmap_allowed().

iounmap_allowed() accepts void __iomem *, and that looks correct to me.

Passing void * on the other hand means you pass a pointer that
in theory differs from what the architecture previously returned
from ioremap_allowed() and "knows" nothing about.

I think you need to pass addr to iounmap_allowed() as is.

2022-08-07 01:56:25

by Baoquan He

Subject: Re: [PATCH 01/11] mm/ioremap: change the return value of io[re|un]map_allowed

On 08/06/22 at 10:29am, Alexander Gordeev wrote:
> On Sat, Aug 06, 2022 at 10:29:03AM +0800, Kefeng Wang wrote:
> >
> > On 2022/8/4 23:42, Alexander Gordeev wrote:
> > > On Mon, Aug 01, 2022 at 10:40:19PM +0800, Baoquan He wrote:
> > >
> > > Hi Baoquan,
> > >
> > > > --- a/arch/arm64/mm/ioremap.c
> > > > +++ b/arch/arm64/mm/ioremap.c
> > > > @@ -3,19 +3,20 @@
> > > > #include <linux/mm.h>
> > > > #include <linux/io.h>
> > > > -bool ioremap_allowed(phys_addr_t phys_addr, size_t size, unsigned long prot)
> > > > +void __iomem *ioremap_allowed(phys_addr_t phys_addr, size_t size, unsigned long prot)
> > > > {
> > > > unsigned long last_addr = phys_addr + size - 1;
> > > > + int ret = -EINVAL;
> > > If ret variable is really needed?
> > >
> > > > diff --git a/include/asm-generic/io.h b/include/asm-generic/io.h
> > > > index 72974cb81343..d72eb310fb3c 100644
> > > > --- a/include/asm-generic/io.h
> > > > +++ b/include/asm-generic/io.h
> > > > @@ -967,26 +967,27 @@ static inline void iounmap(volatile void __iomem *addr)
> > > > /*
> > > > * Arch code can implement the following two hooks when using GENERIC_IOREMAP
> > > > * ioremap_allowed() return a bool,
> > > > - * - true means continue to remap
> > > > - * - false means skip remap and return directly
> > > > + * - IS_ERR means return an error
> > > > + * - NULL means continue to remap
> > > > + * - a non-NULL, non-IS_ERR pointer is returned directly
> > > If ioremap_allowed() returns a valid pointer, then the function name
> > > is not as precise anymore.
> >
> > Maybe use arch_ioremap/unmap as before, or some better name.
> >
> > >
> > > > @@ -28,8 +29,11 @@ void __iomem *ioremap_prot(phys_addr_t phys_addr, size_t size,
> > > > phys_addr -= offset;
> > > > size = PAGE_ALIGN(size + offset);
> > > > - if (!ioremap_allowed(phys_addr, size, prot))
> > > > + base = ioremap_allowed(phys_addr, size, prot);
> > > > + if (IS_ERR(base))
> > > > return NULL;
> > > > + else if (base)
> > > > + return base;
> > > It is probably just me, but the base name is a bit misleading here.
> > We could reuse vaddr instead of adding a new base.
>
> The vaddr name is wrong AFAICT. ioremap_allowed() returns an __iomem
> address, not the virtual one.

Thanks a lot for reviewing, both.

Here, I tend to agree with Alexander. ioremap_allowed() returns a
__iomem pointer, so how about naming it io_addr here? That said, I
don't have a strong opinion about it; reusing vaddr and casting it to
(void __iomem *) on return is also OK with me.
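
With io_addr, the hunk would look roughly like below (untested, only
to show the naming):

	void __iomem *io_addr;
	...
	/* error -> fail, non-NULL -> arch already mapped it, NULL -> go on */
	io_addr = ioremap_allowed(phys_addr, size, prot);
	if (IS_ERR(io_addr))
		return NULL;
	else if (io_addr)
		return io_addr;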

>
> > >
> > > > @@ -50,9 +54,9 @@ EXPORT_SYMBOL(ioremap_prot);
> > > > void iounmap(volatile void __iomem *addr)
> > > > {
> > > > - void *vaddr = (void *)((unsigned long)addr & PAGE_MASK);
> > > > + void __iomem *vaddr = (void __iomem *)((unsigned long)addr & PAGE_MASK);
>
> Same here.


>
> > > > - if (!iounmap_allowed(vaddr))
> > > > + if (iounmap_allowed(vaddr))
> > > I guess, iounmap_allowed() should accept void __iomem *, not void *.
> > > Then addr needs to be passed to iounmap_allowed() not vaddr.
> >
> > The following is_vmalloc_addr() and vunmap() in iounmap() use void *,
> > so we could simply use void * for iounmap_allowed().
>
> iounmap_allowed() accepts void __iomem *, and that looks correct to me.
>
> Passing void *, on the other hand, means you pass a pointer that in
> theory differs from what the architecture previously returned from
> ioremap_allowed() and that it "knows" nothing about.
>
> I think you need to pass addr to iounmap_allowed() as is.

OK, I will change it to pass the (void __iomem *) address to
iounmap_allowed() directly, roughly as in the sketch below. Thanks.
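
Untested, only to show the type flow (iounmap_allowed() keeps the int
return from patch 1; the cast only drops the volatile qualifier):

void iounmap(volatile void __iomem *addr)
{
	void __iomem *vaddr = (void __iomem *)((unsigned long)addr & PAGE_MASK);

	/* give the arch hook the same __iomem cookie it handed out */
	if (iounmap_allowed((void __iomem *)addr))
		return;

	if (is_vmalloc_addr((void __force *)vaddr))
		vunmap((void __force *)vaddr);
}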

2022-08-07 02:27:25

by Baoquan He

[permalink] [raw]
Subject: Re: [PATCH 02/11] mm: ioremap: fixup the physical address

On 08/04/22 at 06:02pm, Alexander Gordeev wrote:
> On Mon, Aug 01, 2022 at 10:40:20PM +0800, Baoquan He wrote:
> > This is a preparation patch, no functionality change.
>
> There is, please see below.
>
> > @@ -3,11 +3,17 @@
> > #include <linux/mm.h>
> > #include <linux/io.h>
> >
> > -void __iomem *ioremap_allowed(phys_addr_t phys_addr, size_t size, unsigned long prot)
> > +void __iomem *
> > +ioremap_allowed(phys_addr_t *paddr, size_t size, unsigned long *prot_val)
> > {
> > - unsigned long last_addr = phys_addr + size - 1;
> > + unsigned long last_addr, offset, phys_addr = *paddr;
> > int ret = -EINVAL;
> >
> > + offset = phys_addr & (~PAGE_MASK);
> > + phys_addr -= offset;
>
> FWIW, phys_addr &= PAGE_MASK looks much more usual.

Sure, will change.
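
I.e. something like:

	offset = phys_addr & (~PAGE_MASK);
	phys_addr &= PAGE_MASK;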

>
> > @@ -11,13 +11,20 @@
> > #include <linux/io.h>
> > #include <linux/export.h>
> >
> > -void __iomem *ioremap_prot(phys_addr_t phys_addr, size_t size,
> > +void __iomem *ioremap_prot(phys_addr_t paddr, size_t size,
> > unsigned long prot)
> > {
> > unsigned long offset, vaddr;
> > - phys_addr_t last_addr;
> > + phys_addr_t last_addr, phys_addr = paddr;
> > struct vm_struct *area;
> > void __iomem *base;
> > + unsigned long prot_val = prot;
>
> Why prot_val is needed?

I will remove it and pass &prot to ioremap_allowed(). I could think too
much when I made change here. Thanks.
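
Untested, just the call site; prot is the function parameter itself,
so no local copy is needed:

	base = ioremap_allowed(&phys_addr, size, &prot);
	if (IS_ERR(base))
		return NULL;
	else if (base)
		return base;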

>
> > + base = ioremap_allowed(&phys_addr, size, &prot_val);
> > + if (IS_ERR(base))
> > + return NULL;
> > + else if (base)
> > + return base;
>
> By moving ioremap_allowed() here you allow it to be called
> before the wrap-around check, including architectures that
> do not do fixups.

Yes, just as you say.

>
> And now ioremap_allowed() semantics, prototype and name turn
> less than obvious. Why not introduce a separate fixup callback?

I can do that, though I am a little worried about introducing too many
hooks. If there is no other suggestion or concern, I will introduce a
separate fixup hook in v2, roughly like the sketch below. Thanks.
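
What I have in mind is roughly the below; arch_ioremap_fixup() is only
a placeholder name and the sketch is untested:

#ifndef arch_ioremap_fixup
#define arch_ioremap_fixup arch_ioremap_fixup
/*
 * Let the arch adjust the physical address and the pgprot value before
 * the generic wrap-around check and alignment handling; default no-op.
 */
static inline void arch_ioremap_fixup(phys_addr_t *paddr, size_t size,
				      unsigned long *prot)
{
}
#endif

and in ioremap_prot():

	arch_ioremap_fixup(&phys_addr, size, &prot);

	/* Disallow wrap-around or zero size */
	last_addr = phys_addr + size - 1;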


>
> > /* Disallow wrap-around or zero size */
> > last_addr = phys_addr + size - 1;
> > @@ -29,12 +36,6 @@ void __iomem *ioremap_prot(phys_addr_t phys_addr, size_t size,
> > phys_addr -= offset;
> > size = PAGE_ALIGN(size + offset);
> >
> > - base = ioremap_allowed(phys_addr, size, prot);
> > - if (IS_ERR(base))
> > - return NULL;
> > - else if (base)
> > - return base;
> > -
> > area = get_vm_area_caller(size, VM_IOREMAP,
> > __builtin_return_address(0));
> > if (!area)
>

2022-08-07 02:47:48

by Baoquan He

[permalink] [raw]
Subject: Re: [PATCH 01/11] mm/ioremap: change the return value of io[re|un]map_allowed

On 08/06/22 at 10:29am, Kefeng Wang wrote:
...snip...
> > > diff --git a/include/asm-generic/io.h b/include/asm-generic/io.h
> > > index 72974cb81343..d72eb310fb3c 100644
> > > --- a/include/asm-generic/io.h
> > > +++ b/include/asm-generic/io.h
> > > @@ -967,26 +967,27 @@ static inline void iounmap(volatile void __iomem *addr)
> > > /*
> > > * Arch code can implement the following two hooks when using GENERIC_IOREMAP
> > > * ioremap_allowed() return a bool,
> > > - * - true means continue to remap
> > > - * - false means skip remap and return directly
> > > + * - IS_ERR means return an error
> > > + * - NULL means continue to remap
> > > + * - a non-NULL, non-IS_ERR pointer is returned directly
> > If ioremap_allowed() returns a valid pointer, then the function name
> > is not as precise anymore.
>
> Maybe use arch_ioremap/unmap as before, or some better name.

Looks good to me. Or ioremap_check(), which is a generic name, and
usually xxx_check() can handle many things.

2022-08-07 02:48:11

by Baoquan He

[permalink] [raw]
Subject: Re: [PATCH 01/11] mm/ioremap: change the return value of io[re|un]map_allowed

On 08/04/22 at 05:42pm, Alexander Gordeev wrote:
> On Mon, Aug 01, 2022 at 10:40:19PM +0800, Baoquan He wrote:
>
> Hi Baoquan,
>
> > --- a/arch/arm64/mm/ioremap.c
> > +++ b/arch/arm64/mm/ioremap.c
> > @@ -3,19 +3,20 @@
> > #include <linux/mm.h>
> > #include <linux/io.h>
> >
> > -bool ioremap_allowed(phys_addr_t phys_addr, size_t size, unsigned long prot)
> > +void __iomem *ioremap_allowed(phys_addr_t phys_addr, size_t size, unsigned long prot)
> > {
> > unsigned long last_addr = phys_addr + size - 1;
> > + int ret = -EINVAL;
>
> If ret variable is really needed?

There are two places where -EINVAL needs to be returned. I can remove
the ret variable and return -EINVAL directly if that is a concern,
e.g. as in the sketch below.
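
Untested, based on the two checks currently in arm64's
ioremap_allowed():

void __iomem *
ioremap_allowed(phys_addr_t phys_addr, size_t size, unsigned long prot)
{
	unsigned long last_addr = phys_addr + size - 1;

	/* Don't allow outside PHYS_MASK */
	if (last_addr & ~PHYS_MASK)
		return ERR_PTR(-EINVAL);

	/* Don't allow RAM to be mapped. */
	if (WARN_ON(pfn_is_map_memory(__phys_to_pfn(phys_addr))))
		return ERR_PTR(-EINVAL);

	return NULL;
}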

Thanks again for looking into this series.

2022-08-20 01:53:07

by Baoquan He

[permalink] [raw]
Subject: Re: [PATCH 02/11] mm: ioremap: fixup the physical address

On 08/04/22 at 06:02pm, Alexander Gordeev wrote:
> On Mon, Aug 01, 2022 at 10:40:20PM +0800, Baoquan He wrote:
> > This is a preparation patch, no functionality change.
>
> There is, please see below.
>
> > @@ -3,11 +3,17 @@
> > #include <linux/mm.h>
> > #include <linux/io.h>
> >
> > -void __iomem *ioremap_allowed(phys_addr_t phys_addr, size_t size, unsigned long prot)
> > +void __iomem *
> > +ioremap_allowed(phys_addr_t *paddr, size_t size, unsigned long *prot_val)
> > {
> > - unsigned long last_addr = phys_addr + size - 1;
> > + unsigned long last_addr, offset, phys_addr = *paddr;
> > int ret = -EINVAL;
> >
> > + offset = phys_addr & (~PAGE_MASK);
> > + phys_addr -= offset;
>
> FWIW, phys_addr &= PAGE_MASK looks much more usual.
>
> > @@ -11,13 +11,20 @@
> > #include <linux/io.h>
> > #include <linux/export.h>
> >
> > -void __iomem *ioremap_prot(phys_addr_t phys_addr, size_t size,
> > +void __iomem *ioremap_prot(phys_addr_t paddr, size_t size,
> > unsigned long prot)
> > {
> > unsigned long offset, vaddr;
> > - phys_addr_t last_addr;
> > + phys_addr_t last_addr, phys_addr = paddr;
> > struct vm_struct *area;
> > void __iomem *base;
> > + unsigned long prot_val = prot;
>
> Why prot_val is needed?
>
> > + base = ioremap_allowed(&phys_addr, size, &prot_val);
> > + if (IS_ERR(base))
> > + return NULL;
> > + else if (base)
> > + return base;
>
> By moving ioremap_allowed() here you allow it to be called
> before the wrap-around check, including architectures that
> do not do fixups.
>
> And now ioremap_allowed() semantics, prototype and name turn
> less than obvious. Why not introduce a separate fixup callback?

I finally renamed ioremap_allowed()/iounmap_allowed() to arch_ioremap()
and arch_iounmap(). I didn't introduce a separate fixup callback, but
have added more explanation to the logs of patches 1~2; please check
that.
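
For reference, the default stubs in include/asm-generic/io.h then look
roughly like below (untested here; please see v2 for the exact
prototypes):

#ifndef arch_ioremap
#define arch_ioremap arch_ioremap
static inline void __iomem *
arch_ioremap(phys_addr_t *paddr, size_t size, unsigned long *prot_val)
{
	/* NULL means no arch handling, continue with the generic path */
	return NULL;
}
#endif

#ifndef arch_iounmap
#define arch_iounmap arch_iounmap
static inline int arch_iounmap(void __iomem *addr)
{
	/* 0 means continue with the generic vunmap path */
	return 0;
}
#endif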