Motivation and implementation:
==============================
Currently, many architectures haven't taken the standard GENERIC_IOREMAP
way to implement ioremap_prot(), iounmap(), and ioremap_xx(); instead
they carry their own implementations under each arch's folder. This
leads to a lot of duplicated ioremap() and iounmap() code.
In this patchset, generic_ioremap_prot() and generic_iounmap() are first
introduced to hold the common code for GENERIC_IOREMAP.
By taking the GENERIC_IOREMAP method, generic_ioremap_prot(),
generic_iounmap(), and their generic wrappers ioremap_prot(), ioremap()
and iounmap() all become visible and available to an arch. An arch only
needs to provide wrapper functions to override the generic versions if
it has arch-specific handling in its ioremap_prot(), ioremap() or
iounmap(). With these changes, the duplicated ioremap()/iounmap() code
under the various arches is removed, and the functionality is kept
equivalent to before.
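To illustrate the resulting pattern, here is a minimal sketch of what a
converted architecture provides. It is not taken from any single patch,
and foo_addr_is_special() is a made-up placeholder for whatever
arch-specific check may be needed:

/* arch/foo/include/asm/io.h (sketch) */
#define _PAGE_IOREMAP	pgprot_val(PAGE_KERNEL)	/* prot used by generic ioremap() */
#define ioremap_prot	ioremap_prot	/* arch overrides the generic wrapper */
#define iounmap		iounmap

/* arch/foo/mm/ioremap.c (sketch) */
void __iomem *ioremap_prot(phys_addr_t phys_addr, size_t size,
			   unsigned long prot)
{
	/* arch-specific filtering first, then the generic helper */
	if (foo_addr_is_special(phys_addr))
		return (void __iomem *)(unsigned long)phys_addr;

	return generic_ioremap_prot(phys_addr, size, __pgprot(prot));
}

void iounmap(volatile void __iomem *addr)
{
	if (foo_addr_is_special((phys_addr_t)(unsigned long)addr))
		return;

	generic_iounmap(addr);
}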
Background info:
================
1)
Converting more architectures to the GENERIC_IOREMAP way was suggested
by Christoph in the discussion below:
https://lore.kernel.org/all/[email protected]/T/#u
2)
The previous v1 to v3 were basically a follow-up to the patchset below,
which converted arm64 to the GENERIC_IOREMAP way. That was done by
adding the hooks ioremap_allowed() and iounmap_allowed() so that an arch
could insert arch-specific handling in the middle of ioremap_prot() and
iounmap().
[PATCH v5 0/6] arm64: Cleanup ioremap() and support ioremap_prot()
https://lore.kernel.org/all/[email protected]/T/#u
Later, during the v3 review, Christophe Leroy suggested introducing
generic_ioremap_prot() and generic_iounmap() into the generic code, with
each arch providing wrapper functions ioremap_prot(), ioremap() or
iounmap() if needed. Christophe posted the RFC patchset below to
demonstrate his idea, and that is what v4 and now v5 do.
[RFC PATCH 0/8] mm: ioremap: Convert architectures to take GENERIC_IOREMAP way
https://lore.kernel.org/all/[email protected]/T/#u
Testing:
========
The old v4 went through the tests below. In v5, patch 1 is newly added
to remove ARCH_HAS_IOREMAP_xx, and patch 13 ("parisc: mm: Convert to
GENERIC_IOREMAP") is impacted, so I only rebuilt the related x86_64,
m68k, mips64, ppc64le and parisc configurations; all builds passed.
------Old v4 testing
- It's running well on arm64, s390x, ppc64le with this patchset applied
on the latest upstream kernel 6.2-rc8+.
- Cross compiling passed on arc, ia64, parisc, sh, xtensa.
- Cross compiling was not tried on hexagon, openrisc and 32-bit powerpc
  because:
  - I didn't find cross compiling tools for hexagon and 32-bit ppc;
  - there's an error when compiling openrisc and I have no idea how to
    fix it. Please see the pasted log below:
---------------------------------------------------------------------
[root@intel-knightslanding-lb-02 linux]# make ARCH=openrisc defconfig
*** Default configuration is based on 'or1ksim_defconfig'
#
# configuration written to .config
#
[root@intel-knightslanding-lb-02 linux]# make ARCH=openrisc -j320 CROSS_COMPILE=/usr/bin/openrisc-linux-gnu-
SYNC include/config/auto.conf.cmd
CC scripts/mod/empty.o
./scripts/check-local-export: /usr/bin/openrisc-linux-gnu-nm failed
make[1]: *** [scripts/Makefile.build:250: scripts/mod/empty.o] Error 1
make[1]: *** Deleting file 'scripts/mod/empty.o'
make: *** [Makefile:1275: prepare0] Error 2
----------------------------------------------------------------------
History:
=======
v4->v5:
- Ard and Christophe suggested adding a preparation patch to remove the
  ARCH_HAS_IOREMAP_xx macros; this is done in the newly added patch 1.
- The current patch 13 ("parisc: mm: Convert to GENERIC_IOREMAP") is
  adjusted accordingly, so we don't need to add ARCH_HAS_IOREMAP_WC
  there.
v3->v4:
- Change to contain arch-specific handling in wrapper functions
  ioremap(), ioremap_prot() or iounmap(), replacing the old
  ioremap|iounmap_allowed() hook approach for each arch.
- Add two patches to convert powerpc to GENERIC_IOREMAP. They are
  picked from Christophe's RFC patchset above; I made some changes to
  make them formal.
v2->v3:
- Rewrite log of all patches to add more details as Christoph suggested.
- Merge the old patches 1 and 2, which adjust the return values and
  parameters of arch_ioremap(), into one patch, namely the current
  patch 3. Christoph suggested this.
- Change the return value of arch_iounmap() to bool type since we only
  do arch-specific address filtering or checking there, and a bool
  value reflects that better. This was pointed out by Niklas when he
  reviewed the s390 patch.
- Put the hexagon patch at the beginning of the patchset since hexagon
  has the same ioremap() and iounmap() as the standard ones and needs
  no arch_ioremap()/arch_iounmap() hooks, so the later arch_ioremap()
  and arch_iounmap() adjustments don't affect hexagon. Christophe
  suggested this.
- Remove the early ioremap code from openrisc's ioremap() first, since
  openrisc has no early ioremap handling in its arch code. This
  simplifies the later conversion to the GENERIC_IOREMAP method.
  Christoph and Stafford suggested this.
- Fix compile errors reported by lkp in the parisc and sh patches by
  adding macro definitions for the port/memory I/O functions in
  <asm/io.h> to avoid repeated definitions in <asm-generic/io.h>.
v1->v2:
- Rename io[re|un]map_allowed() to arch_io[re|un]map() and make some
  minor changes in patches 1~2 as per Alexander's and Kefeng's
  suggestions. Accordingly, adjust patches 4~11 because of the renamed
  arch_io[re|un]map().
Baoquan He (14):
asm-generic/iomap.h: remove ARCH_HAS_IOREMAP_xx macros
hexagon: mm: Convert to GENERIC_IOREMAP
openrisc: mm: remove unneeded early ioremap code
mm: ioremap: allow ARCH to have its own ioremap method definition
mm/ioremap: add slab availability checking in ioremap_prot
arc: mm: Convert to GENERIC_IOREMAP
ia64: mm: Convert to GENERIC_IOREMAP
openrisc: mm: Convert to GENERIC_IOREMAP
s390: mm: Convert to GENERIC_IOREMAP
sh: mm: Convert to GENERIC_IOREMAP
xtensa: mm: Convert to GENERIC_IOREMAP
parisc: mm: Convert to GENERIC_IOREMAP
  arm64: mm: add wrapper function ioremap_prot()
mm: ioremap: remove unneeded ioremap_allowed and iounmap_allowed
Christophe Leroy (3):
mm/ioremap: Define generic_ioremap_prot() and generic_iounmap()
mm/ioremap: Consider IOREMAP space in generic ioremap
powerpc: mm: Convert to GENERIC_IOREMAP
arch/arc/Kconfig | 1 +
arch/arc/include/asm/io.h | 7 ++--
arch/arc/mm/ioremap.c | 49 ++--------------------
arch/arm64/include/asm/io.h | 3 +-
arch/arm64/mm/ioremap.c | 10 +++--
arch/hexagon/Kconfig | 1 +
arch/hexagon/include/asm/io.h | 9 +++-
arch/hexagon/mm/ioremap.c | 44 -------------------
arch/ia64/Kconfig | 1 +
arch/ia64/include/asm/io.h | 13 +++---
arch/ia64/mm/ioremap.c | 41 +++---------------
arch/loongarch/include/asm/io.h | 2 -
arch/m68k/include/asm/io_mm.h | 2 -
arch/m68k/include/asm/kmap.h | 2 -
arch/mips/include/asm/io.h | 5 +--
arch/openrisc/Kconfig | 1 +
arch/openrisc/include/asm/io.h | 11 +++--
arch/openrisc/mm/ioremap.c | 58 +------------------------
arch/parisc/Kconfig | 1 +
arch/parisc/include/asm/io.h | 15 ++++---
arch/parisc/mm/ioremap.c | 62 ++-------------------------
arch/powerpc/Kconfig | 1 +
arch/powerpc/include/asm/io.h | 17 ++------
arch/powerpc/mm/ioremap.c | 26 +-----------
arch/powerpc/mm/ioremap_32.c | 19 ++++-----
arch/powerpc/mm/ioremap_64.c | 12 +-----
arch/s390/Kconfig | 1 +
arch/s390/include/asm/io.h | 21 ++++++----
arch/s390/pci/pci.c | 57 +++++--------------------
arch/sh/Kconfig | 1 +
arch/sh/include/asm/io.h | 65 +++++++++++++++--------------
arch/sh/include/asm/io_noioport.h | 7 ++++
arch/sh/mm/ioremap.c | 65 +++++------------------------
arch/x86/include/asm/io.h | 5 ---
arch/xtensa/Kconfig | 1 +
arch/xtensa/include/asm/io.h | 32 ++++++--------
arch/xtensa/mm/ioremap.c | 58 +++++++------------------
drivers/net/ethernet/sfc/io.h | 2 +-
drivers/net/ethernet/sfc/siena/io.h | 2 +-
include/asm-generic/io.h | 31 +++-----------
include/asm-generic/iomap.h | 6 +--
mm/ioremap.c | 41 ++++++++++++------
42 files changed, 220 insertions(+), 588 deletions(-)
delete mode 100644 arch/hexagon/mm/ioremap.c
--
2.34.1
The ARCH_HAS_IOREMAP_WC/WT/NP macros are used to tell
<asm-generic/iomap.h> and drivers whether the arch provides its own
ioremap_wc()/ioremap_wt()/ioremap_np(). Let's use '#define ioremap_xx'
and '#ifdef ioremap_xx' instead.
For architectures that define ARCH_HAS_IOREMAP_xx macros in <asm/io.h>,
the relevant adjustments are made to avoid compile errors:
loongarch:
- it doesn't include <asm-generic/iomap.h>, so defining
  ARCH_HAS_IOREMAP_WC is redundant; simply remove it.
m68k:
- it selects GENERIC_IOMAP, so <asm-generic/iomap.h> is already pulled
  in by <asm-generic/io.h>, and <asm/kmap.h> is included above
  <asm-generic/iomap.h>; simply remove the ARCH_HAS_IOREMAP_WT define.
mips:
- move "#include <asm-generic/iomap.h>" below the ioremap_wc()
  definition in <asm/io.h>.
powerpc:
- remove "#include <asm-generic/iomap.h>" from <asm/io.h> because it
  duplicates the one in <asm-generic/io.h>; rely on the latter.
x86:
- it selects GENERIC_IOMAP; remove the "#include <asm-generic/iomap.h>"
  in the middle of <asm/io.h> and rely on <asm-generic/io.h>.
This is a preparation patch so that later patches don't need to add
ARCH_HAS_IOREMAP_xx macros.
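Roughly, the availability of ioremap_wc()/ioremap_wt() is now detected
via the macro name itself rather than a separate flag, e.g. (sketch
based on the mips and sfc hunks below):

/* <asm/io.h>: the ioremap_wc() definition doubles as the feature flag */
#define ioremap_wc(offset, size) \
	ioremap_prot((offset), (size), boot_cpu_data.writecombine)

/* consumers test the macro instead of ARCH_HAS_IOREMAP_WC */
#ifdef ioremap_wc
#define EFX_USE_PIO 1
#endif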
Signed-off-by: Baoquan He <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
---
arch/loongarch/include/asm/io.h | 2 --
arch/m68k/include/asm/io_mm.h | 2 --
arch/m68k/include/asm/kmap.h | 2 --
arch/mips/include/asm/io.h | 5 ++---
arch/powerpc/include/asm/io.h | 9 +--------
arch/x86/include/asm/io.h | 5 -----
drivers/net/ethernet/sfc/io.h | 2 +-
drivers/net/ethernet/sfc/siena/io.h | 2 +-
include/asm-generic/iomap.h | 6 +++---
9 files changed, 8 insertions(+), 27 deletions(-)
diff --git a/arch/loongarch/include/asm/io.h b/arch/loongarch/include/asm/io.h
index 402a7d9e3a53..505e3b583463 100644
--- a/arch/loongarch/include/asm/io.h
+++ b/arch/loongarch/include/asm/io.h
@@ -5,8 +5,6 @@
#ifndef _ASM_IO_H
#define _ASM_IO_H
-#define ARCH_HAS_IOREMAP_WC
-
#include <linux/kernel.h>
#include <linux/types.h>
diff --git a/arch/m68k/include/asm/io_mm.h b/arch/m68k/include/asm/io_mm.h
index d41fa488453b..6a0abd4846c6 100644
--- a/arch/m68k/include/asm/io_mm.h
+++ b/arch/m68k/include/asm/io_mm.h
@@ -26,8 +26,6 @@
#include <asm/virtconvert.h>
#include <asm/kmap.h>
-#include <asm-generic/iomap.h>
-
#ifdef CONFIG_ATARI
#define atari_readb raw_inb
#define atari_writeb raw_outb
diff --git a/arch/m68k/include/asm/kmap.h b/arch/m68k/include/asm/kmap.h
index dec05743d426..4efb3efa593a 100644
--- a/arch/m68k/include/asm/kmap.h
+++ b/arch/m68k/include/asm/kmap.h
@@ -4,8 +4,6 @@
#ifdef CONFIG_MMU
-#define ARCH_HAS_IOREMAP_WT
-
/* Values for nocacheflag and cmode */
#define IOMAP_FULL_CACHING 0
#define IOMAP_NOCACHE_SER 1
diff --git a/arch/mips/include/asm/io.h b/arch/mips/include/asm/io.h
index e6d5ccaa309e..cec8347f0b85 100644
--- a/arch/mips/include/asm/io.h
+++ b/arch/mips/include/asm/io.h
@@ -12,8 +12,6 @@
#ifndef _ASM_IO_H
#define _ASM_IO_H
-#define ARCH_HAS_IOREMAP_WC
-
#include <linux/compiler.h>
#include <linux/kernel.h>
#include <linux/types.h>
@@ -25,7 +23,6 @@
#include <asm/byteorder.h>
#include <asm/cpu.h>
#include <asm/cpu-features.h>
-#include <asm-generic/iomap.h>
#include <asm/page.h>
#include <asm/pgtable-bits.h>
#include <asm/processor.h>
@@ -210,6 +207,8 @@ void iounmap(const volatile void __iomem *addr);
#define ioremap_wc(offset, size) \
ioremap_prot((offset), (size), boot_cpu_data.writecombine)
+#include <asm-generic/iomap.h>
+
#if defined(CONFIG_CPU_CAVIUM_OCTEON) || defined(CONFIG_CPU_LOONGSON64)
#define war_io_reorder_wmb() wmb()
#else
diff --git a/arch/powerpc/include/asm/io.h b/arch/powerpc/include/asm/io.h
index fc112a91d0c2..7a82e6f70ced 100644
--- a/arch/powerpc/include/asm/io.h
+++ b/arch/powerpc/include/asm/io.h
@@ -3,11 +3,6 @@
#define _ASM_POWERPC_IO_H
#ifdef __KERNEL__
-#define ARCH_HAS_IOREMAP_WC
-#ifdef CONFIG_PPC32
-#define ARCH_HAS_IOREMAP_WT
-#endif
-
/*
*/
@@ -695,9 +690,7 @@ static inline void name at \
#define writel_relaxed(v, addr) writel(v, addr)
#define writeq_relaxed(v, addr) writeq(v, addr)
-#ifdef CONFIG_GENERIC_IOMAP
-#include <asm-generic/iomap.h>
-#else
+#ifndef CONFIG_GENERIC_IOMAP
/*
* Here comes the implementation of the IOMAP interfaces.
*/
diff --git a/arch/x86/include/asm/io.h b/arch/x86/include/asm/io.h
index e9025640f634..76238842406a 100644
--- a/arch/x86/include/asm/io.h
+++ b/arch/x86/include/asm/io.h
@@ -35,9 +35,6 @@
* - Arnaldo Carvalho de Melo <[email protected]>
*/
-#define ARCH_HAS_IOREMAP_WC
-#define ARCH_HAS_IOREMAP_WT
-
#include <linux/string.h>
#include <linux/compiler.h>
#include <linux/cc_platform.h>
@@ -212,8 +209,6 @@ void memset_io(volatile void __iomem *, int, size_t);
#define memcpy_toio memcpy_toio
#define memset_io memset_io
-#include <asm-generic/iomap.h>
-
/*
* ISA space is 'always mapped' on a typical x86 system, no need to
* explicitly ioremap() it. The fact that the ISA IO space is mapped
diff --git a/drivers/net/ethernet/sfc/io.h b/drivers/net/ethernet/sfc/io.h
index 30439cc83a89..07f99ad14bf3 100644
--- a/drivers/net/ethernet/sfc/io.h
+++ b/drivers/net/ethernet/sfc/io.h
@@ -70,7 +70,7 @@
*/
#ifdef CONFIG_X86_64
/* PIO is a win only if write-combining is possible */
-#ifdef ARCH_HAS_IOREMAP_WC
+#ifdef ioremap_wc
#define EFX_USE_PIO 1
#endif
#endif
diff --git a/drivers/net/ethernet/sfc/siena/io.h b/drivers/net/ethernet/sfc/siena/io.h
index 30439cc83a89..07f99ad14bf3 100644
--- a/drivers/net/ethernet/sfc/siena/io.h
+++ b/drivers/net/ethernet/sfc/siena/io.h
@@ -70,7 +70,7 @@
*/
#ifdef CONFIG_X86_64
/* PIO is a win only if write-combining is possible */
-#ifdef ARCH_HAS_IOREMAP_WC
+#ifdef ioremap_wc
#define EFX_USE_PIO 1
#endif
#endif
diff --git a/include/asm-generic/iomap.h b/include/asm-generic/iomap.h
index 08237ae8b840..196087a8126e 100644
--- a/include/asm-generic/iomap.h
+++ b/include/asm-generic/iomap.h
@@ -93,15 +93,15 @@ extern void __iomem *ioport_map(unsigned long port, unsigned int nr);
extern void ioport_unmap(void __iomem *);
#endif
-#ifndef ARCH_HAS_IOREMAP_WC
+#ifndef ioremap_wc
#define ioremap_wc ioremap
#endif
-#ifndef ARCH_HAS_IOREMAP_WT
+#ifndef ioremap_wt
#define ioremap_wt ioremap
#endif
-#ifndef ARCH_HAS_IOREMAP_NP
+#ifndef ioremap_np
/* See the comment in asm-generic/io.h about ioremap_np(). */
#define ioremap_np ioremap_np
static inline void __iomem *ioremap_np(phys_addr_t offset, size_t size)
--
2.34.1
By taking the GENERIC_IOREMAP method, the generic ioremap_prot() and
iounmap() become visible and available to the arch. This change
simplifies the implementation by removing code duplicated with the
generic ioremap_prot() and iounmap(), while keeping equivalent
functionality.
For hexagon, the current ioremap() and iounmap() are the same as the
generic versions, so after taking the GENERIC_IOREMAP way the old
ioremap() and iounmap() can be removed completely.
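For reference, with GENERIC_IOREMAP selected, the ioremap() that hexagon
now picks up from <asm-generic/io.h> is roughly the following inline
wrapper, so the arch only has to supply _PAGE_IOREMAP:

static inline void __iomem *ioremap(phys_addr_t addr, size_t size)
{
	/* _PAGE_IOREMAP needs to be supplied by the architecture */
	return ioremap_prot(addr, size, _PAGE_IOREMAP);
}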
Signed-off-by: Baoquan He <[email protected]>
Cc: Brian Cain <[email protected]>
Cc: [email protected]
---
arch/hexagon/Kconfig | 1 +
arch/hexagon/include/asm/io.h | 9 +++++--
arch/hexagon/mm/ioremap.c | 44 -----------------------------------
3 files changed, 8 insertions(+), 46 deletions(-)
delete mode 100644 arch/hexagon/mm/ioremap.c
diff --git a/arch/hexagon/Kconfig b/arch/hexagon/Kconfig
index 54eadf265178..17afffde1a7f 100644
--- a/arch/hexagon/Kconfig
+++ b/arch/hexagon/Kconfig
@@ -25,6 +25,7 @@ config HEXAGON
select NEED_SG_DMA_LENGTH
select NO_IOPORT_MAP
select GENERIC_IOMAP
+ select GENERIC_IOREMAP
select GENERIC_SMP_IDLE_THREAD
select STACKTRACE_SUPPORT
select GENERIC_CLOCKEVENTS_BROADCAST
diff --git a/arch/hexagon/include/asm/io.h b/arch/hexagon/include/asm/io.h
index 46a099de85b7..dcd9cbbf5934 100644
--- a/arch/hexagon/include/asm/io.h
+++ b/arch/hexagon/include/asm/io.h
@@ -170,8 +170,13 @@ static inline void writel(u32 data, volatile void __iomem *addr)
#define writew_relaxed __raw_writew
#define writel_relaxed __raw_writel
-void __iomem *ioremap(unsigned long phys_addr, unsigned long size);
-#define ioremap_uc(X, Y) ioremap((X), (Y))
+/*
+ * I/O memory mapping functions.
+ */
+#define _PAGE_IOREMAP (_PAGE_PRESENT | _PAGE_READ | _PAGE_WRITE | \
+ (__HEXAGON_C_DEV << 6))
+
+#define ioremap_uc(addr, size) ioremap((addr), (size))
#define __raw_writel writel
diff --git a/arch/hexagon/mm/ioremap.c b/arch/hexagon/mm/ioremap.c
deleted file mode 100644
index 255c5b1ee1a7..000000000000
--- a/arch/hexagon/mm/ioremap.c
+++ /dev/null
@@ -1,44 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0-only
-/*
- * I/O remap functions for Hexagon
- *
- * Copyright (c) 2010-2011, The Linux Foundation. All rights reserved.
- */
-
-#include <linux/io.h>
-#include <linux/vmalloc.h>
-#include <linux/mm.h>
-
-void __iomem *ioremap(unsigned long phys_addr, unsigned long size)
-{
- unsigned long last_addr, addr;
- unsigned long offset = phys_addr & ~PAGE_MASK;
- struct vm_struct *area;
-
- pgprot_t prot = __pgprot(_PAGE_PRESENT|_PAGE_READ|_PAGE_WRITE
- |(__HEXAGON_C_DEV << 6));
-
- last_addr = phys_addr + size - 1;
-
- /* Wrapping not allowed */
- if (!size || (last_addr < phys_addr))
- return NULL;
-
- /* Rounds up to next page size, including whole-page offset */
- size = PAGE_ALIGN(offset + size);
-
- area = get_vm_area(size, VM_IOREMAP);
- addr = (unsigned long)area->addr;
-
- if (ioremap_page_range(addr, addr+size, phys_addr, prot)) {
- vunmap((void *)addr);
- return NULL;
- }
-
- return (void __iomem *) (offset + addr);
-}
-
-void iounmap(const volatile void __iomem *addr)
-{
- vunmap((void *) ((unsigned long) addr & PAGE_MASK));
-}
--
2.34.1
From: Christophe Leroy <[email protected]>
Define generic versions of ioremap_prot() and iounmap() that
architectures can call after they have performed any necessary
parameter alterations and/or verifications.
Signed-off-by: Christophe Leroy <[email protected]>
Signed-off-by: Baoquan He <[email protected]>
---
include/asm-generic/io.h | 4 ++++
mm/ioremap.c | 22 ++++++++++++++++------
2 files changed, 20 insertions(+), 6 deletions(-)
diff --git a/include/asm-generic/io.h b/include/asm-generic/io.h
index 4c44a29b5e8e..5a9cf16ee0c2 100644
--- a/include/asm-generic/io.h
+++ b/include/asm-generic/io.h
@@ -1073,9 +1073,13 @@ static inline bool iounmap_allowed(void *addr)
}
#endif
+void __iomem *generic_ioremap_prot(phys_addr_t phys_addr, size_t size,
+ pgprot_t prot);
+
void __iomem *ioremap_prot(phys_addr_t phys_addr, size_t size,
unsigned long prot);
void iounmap(volatile void __iomem *addr);
+void generic_iounmap(volatile void __iomem *addr);
static inline void __iomem *ioremap(phys_addr_t addr, size_t size)
{
diff --git a/mm/ioremap.c b/mm/ioremap.c
index 8652426282cc..db6234b9db59 100644
--- a/mm/ioremap.c
+++ b/mm/ioremap.c
@@ -11,8 +11,8 @@
#include <linux/io.h>
#include <linux/export.h>
-void __iomem *ioremap_prot(phys_addr_t phys_addr, size_t size,
- unsigned long prot)
+void __iomem *generic_ioremap_prot(phys_addr_t phys_addr, size_t size,
+ pgprot_t prot)
{
unsigned long offset, vaddr;
phys_addr_t last_addr;
@@ -28,7 +28,7 @@ void __iomem *ioremap_prot(phys_addr_t phys_addr, size_t size,
phys_addr -= offset;
size = PAGE_ALIGN(size + offset);
- if (!ioremap_allowed(phys_addr, size, prot))
+ if (!ioremap_allowed(phys_addr, size, pgprot_val(prot)))
return NULL;
area = get_vm_area_caller(size, VM_IOREMAP,
@@ -38,17 +38,22 @@ void __iomem *ioremap_prot(phys_addr_t phys_addr, size_t size,
vaddr = (unsigned long)area->addr;
area->phys_addr = phys_addr;
- if (ioremap_page_range(vaddr, vaddr + size, phys_addr,
- __pgprot(prot))) {
+ if (ioremap_page_range(vaddr, vaddr + size, phys_addr, prot)) {
free_vm_area(area);
return NULL;
}
return (void __iomem *)(vaddr + offset);
}
+
+void __iomem *ioremap_prot(phys_addr_t phys_addr, size_t size,
+ unsigned long prot)
+{
+ return generic_ioremap_prot(phys_addr, size, __pgprot(prot));
+}
EXPORT_SYMBOL(ioremap_prot);
-void iounmap(volatile void __iomem *addr)
+void generic_iounmap(volatile void __iomem *addr)
{
void *vaddr = (void *)((unsigned long)addr & PAGE_MASK);
@@ -58,4 +63,9 @@ void iounmap(volatile void __iomem *addr)
if (is_vmalloc_addr(vaddr))
vunmap(vaddr);
}
+
+void iounmap(volatile void __iomem *addr)
+{
+ generic_iounmap(addr);
+}
EXPORT_SYMBOL(iounmap);
--
2.34.1
Under arch/openrisc/, there isn't any place where ioremap() is called.
That means no early ioremap handling is needed on openrisc, so the early
ioremap code in ioremap() in arch/openrisc/mm/ioremap.c is unnecessary
and can be removed.
Link: https://lore.kernel.org/linux-mm/YwxfxKrTUtAuejKQ@oscomms1/
Signed-off-by: Baoquan He <[email protected]>
Acked-by: Stafford Horne <[email protected]>
Cc: Jonas Bonn <[email protected]>
Cc: Stefan Kristiansson <[email protected]>
Cc: Stafford Horne <[email protected]>
Cc: [email protected]
---
arch/openrisc/mm/ioremap.c | 22 +++++-----------------
1 file changed, 5 insertions(+), 17 deletions(-)
diff --git a/arch/openrisc/mm/ioremap.c b/arch/openrisc/mm/ioremap.c
index 8ec0dafecf25..90b59bc53c8c 100644
--- a/arch/openrisc/mm/ioremap.c
+++ b/arch/openrisc/mm/ioremap.c
@@ -22,8 +22,6 @@
extern int mem_init_done;
-static unsigned int fixmaps_used __initdata;
-
/*
* Remap an arbitrary physical address space into the kernel virtual
* address space. Needed when the kernel wants to access high addresses
@@ -52,24 +50,14 @@ void __iomem *__ref ioremap(phys_addr_t addr, unsigned long size)
p = addr & PAGE_MASK;
size = PAGE_ALIGN(last_addr + 1) - p;
- if (likely(mem_init_done)) {
- area = get_vm_area(size, VM_IOREMAP);
- if (!area)
- return NULL;
- v = (unsigned long)area->addr;
- } else {
- if ((fixmaps_used + (size >> PAGE_SHIFT)) > FIX_N_IOREMAPS)
- return NULL;
- v = fix_to_virt(FIX_IOREMAP_BEGIN + fixmaps_used);
- fixmaps_used += (size >> PAGE_SHIFT);
- }
+ area = get_vm_area(size, VM_IOREMAP);
+ if (!area)
+ return NULL;
+ v = (unsigned long)area->addr;
if (ioremap_page_range(v, v + size, p,
__pgprot(pgprot_val(PAGE_KERNEL) | _PAGE_CI))) {
- if (likely(mem_init_done))
- vfree(area->addr);
- else
- fixmaps_used -= (size >> PAGE_SHIFT);
+ vfree(area->addr);
return NULL;
}
--
2.34.1
Architectures can be converted to GENERIC_IOREMAP to take the standard
ioremap_xxx() and iounmap() way. But some architectures need specific
handling in ioremap_prot(), ioremap() or iounmap() beyond the standard
methods.
In order to convert these architectures to the GENERIC_IOREMAP method,
allow them to have their own ioremap_prot(), ioremap() and iounmap()
definitions.
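With this in place, an arch that wants its own version simply claims the
name in its <asm/io.h>, e.g. (sketch mirroring what the arc patch later
in this series does):

void __iomem *ioremap(phys_addr_t paddr, unsigned long size);
#define ioremap ioremap		/* asm-generic/io.h skips its inline ioremap() */
#define iounmap iounmap		/* mm/ioremap.c skips the default iounmap() */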
Signed-off-by: Baoquan He <[email protected]>
Acked-by: Arnd Bergmann <[email protected]>
Cc: [email protected]
Cc: Kefeng Wang <[email protected]>
---
include/asm-generic/io.h | 3 +++
mm/ioremap.c | 4 ++++
2 files changed, 7 insertions(+)
diff --git a/include/asm-generic/io.h b/include/asm-generic/io.h
index 5a9cf16ee0c2..29ee791164ac 100644
--- a/include/asm-generic/io.h
+++ b/include/asm-generic/io.h
@@ -1081,11 +1081,14 @@ void __iomem *ioremap_prot(phys_addr_t phys_addr, size_t size,
void iounmap(volatile void __iomem *addr);
void generic_iounmap(volatile void __iomem *addr);
+#ifndef ioremap
+#define ioremap ioremap
static inline void __iomem *ioremap(phys_addr_t addr, size_t size)
{
/* _PAGE_IOREMAP needs to be supplied by the architecture */
return ioremap_prot(addr, size, _PAGE_IOREMAP);
}
+#endif
#endif /* !CONFIG_MMU || CONFIG_GENERIC_IOREMAP */
#ifndef ioremap_wc
diff --git a/mm/ioremap.c b/mm/ioremap.c
index db6234b9db59..9f34a8f90b58 100644
--- a/mm/ioremap.c
+++ b/mm/ioremap.c
@@ -46,12 +46,14 @@ void __iomem *generic_ioremap_prot(phys_addr_t phys_addr, size_t size,
return (void __iomem *)(vaddr + offset);
}
+#ifndef ioremap_prot
void __iomem *ioremap_prot(phys_addr_t phys_addr, size_t size,
unsigned long prot)
{
return generic_ioremap_prot(phys_addr, size, __pgprot(prot));
}
EXPORT_SYMBOL(ioremap_prot);
+#endif
void generic_iounmap(volatile void __iomem *addr)
{
@@ -64,8 +66,10 @@ void generic_iounmap(volatile void __iomem *addr)
vunmap(vaddr);
}
+#ifndef iounmap
void iounmap(volatile void __iomem *addr)
{
generic_iounmap(addr);
}
EXPORT_SYMBOL(iounmap);
+#endif
--
2.34.1
By taking the GENERIC_IOREMAP method, the generic generic_ioremap_prot(),
generic_iounmap(), and their generic wrappers ioremap_prot(), ioremap()
and iounmap() become visible and available to the arch. The arch only
needs to provide wrapper functions to override the generic versions if
it has arch-specific handling in its ioremap_prot(), ioremap() or
iounmap().
This change simplifies the implementation by removing code duplicated
with generic_ioremap_prot() and generic_iounmap(), while keeping the
functionality equivalent to before.
Here, add wrapper functions ioremap_prot() and iounmap() to handle arc's
special operations in ioremap_prot() and iounmap().
Signed-off-by: Baoquan He <[email protected]>
Cc: Vineet Gupta <[email protected]>
Cc: [email protected]
---
arch/arc/Kconfig | 1 +
arch/arc/include/asm/io.h | 7 +++---
arch/arc/mm/ioremap.c | 49 ++++-----------------------------------
3 files changed, 8 insertions(+), 49 deletions(-)
diff --git a/arch/arc/Kconfig b/arch/arc/Kconfig
index d9a13ccf89a3..37da34ac7abf 100644
--- a/arch/arc/Kconfig
+++ b/arch/arc/Kconfig
@@ -26,6 +26,7 @@ config ARC
select GENERIC_PENDING_IRQ if SMP
select GENERIC_SCHED_CLOCK
select GENERIC_SMP_IDLE_THREAD
+ select GENERIC_IOREMAP
select HAVE_ARCH_KGDB
select HAVE_ARCH_TRACEHOOK
select HAVE_ARCH_TRANSPARENT_HUGEPAGE if ARC_MMU_V4
diff --git a/arch/arc/include/asm/io.h b/arch/arc/include/asm/io.h
index 80347382a380..4fdb7350636c 100644
--- a/arch/arc/include/asm/io.h
+++ b/arch/arc/include/asm/io.h
@@ -21,8 +21,9 @@
#endif
extern void __iomem *ioremap(phys_addr_t paddr, unsigned long size);
-extern void __iomem *ioremap_prot(phys_addr_t paddr, unsigned long size,
- unsigned long flags);
+#define ioremap ioremap
+#define ioremap_prot ioremap_prot
+#define iounmap iounmap
static inline void __iomem *ioport_map(unsigned long port, unsigned int nr)
{
return (void __iomem *)port;
@@ -32,8 +33,6 @@ static inline void ioport_unmap(void __iomem *addr)
{
}
-extern void iounmap(const volatile void __iomem *addr);
-
/*
* io{read,write}{16,32}be() macros
*/
diff --git a/arch/arc/mm/ioremap.c b/arch/arc/mm/ioremap.c
index 712c2311daef..b07004d53267 100644
--- a/arch/arc/mm/ioremap.c
+++ b/arch/arc/mm/ioremap.c
@@ -8,7 +8,6 @@
#include <linux/module.h>
#include <linux/io.h>
#include <linux/mm.h>
-#include <linux/slab.h>
#include <linux/cache.h>
static inline bool arc_uncached_addr_space(phys_addr_t paddr)
@@ -25,13 +24,6 @@ static inline bool arc_uncached_addr_space(phys_addr_t paddr)
void __iomem *ioremap(phys_addr_t paddr, unsigned long size)
{
- phys_addr_t end;
-
- /* Don't allow wraparound or zero size */
- end = paddr + size - 1;
- if (!size || (end < paddr))
- return NULL;
-
/*
* If the region is h/w uncached, MMU mapping can be elided as optim
* The cast to u32 is fine as this region can only be inside 4GB
@@ -51,55 +43,22 @@ EXPORT_SYMBOL(ioremap);
* ARC hardware uncached region, this one still goes thru the MMU as caller
* might need finer access control (R/W/X)
*/
-void __iomem *ioremap_prot(phys_addr_t paddr, unsigned long size,
+void __iomem *ioremap_prot(phys_addr_t paddr, size_t size,
unsigned long flags)
{
- unsigned int off;
- unsigned long vaddr;
- struct vm_struct *area;
- phys_addr_t end;
pgprot_t prot = __pgprot(flags);
- /* Don't allow wraparound, zero size */
- end = paddr + size - 1;
- if ((!size) || (end < paddr))
- return NULL;
-
- /* An early platform driver might end up here */
- if (!slab_is_available())
- return NULL;
-
/* force uncached */
- prot = pgprot_noncached(prot);
-
- /* Mappings have to be page-aligned */
- off = paddr & ~PAGE_MASK;
- paddr &= PAGE_MASK_PHYS;
- size = PAGE_ALIGN(end + 1) - paddr;
-
- /*
- * Ok, go for it..
- */
- area = get_vm_area(size, VM_IOREMAP);
- if (!area)
- return NULL;
- area->phys_addr = paddr;
- vaddr = (unsigned long)area->addr;
- if (ioremap_page_range(vaddr, vaddr + size, paddr, prot)) {
- vunmap((void __force *)vaddr);
- return NULL;
- }
- return (void __iomem *)(off + (char __iomem *)vaddr);
+ return generic_ioremap_prot(paddr, size, pgprot_noncached(prot));
}
EXPORT_SYMBOL(ioremap_prot);
-
-void iounmap(const volatile void __iomem *addr)
+void iounmap(volatile void __iomem *addr)
{
/* weird double cast to handle phys_addr_t > 32 bits */
if (arc_uncached_addr_space((phys_addr_t)(u32)addr))
return;
- vfree((void *)(PAGE_MASK & (unsigned long __force)addr));
+ generic_iounmap(addr);
}
EXPORT_SYMBOL(iounmap);
--
2.34.1
Several architectures check whether the slab allocator is available in
ioremap_prot(). In fact this should be done in the generic
ioremap_prot(), since on any architecture the slab allocator must be
available before get_vm_area_caller() and vunmap() can be used.
Add the check to generic_ioremap_prot().
Suggested-by: Christophe Leroy <[email protected]>
Signed-off-by: Baoquan He <[email protected]>
---
mm/ioremap.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/mm/ioremap.c b/mm/ioremap.c
index 9f34a8f90b58..2fbe6b9bc50e 100644
--- a/mm/ioremap.c
+++ b/mm/ioremap.c
@@ -18,6 +18,10 @@ void __iomem *generic_ioremap_prot(phys_addr_t phys_addr, size_t size,
phys_addr_t last_addr;
struct vm_struct *area;
+ /* An early platform driver might end up here */
+ if (!slab_is_available())
+ return NULL;
+
/* Disallow wrap-around or zero size */
last_addr = phys_addr + size - 1;
if (!size || last_addr < phys_addr)
--
2.34.1
By taking the GENERIC_IOREMAP method, the generic generic_ioremap_prot(),
generic_iounmap(), and their generic wrappers ioremap_prot(), ioremap()
and iounmap() become visible and available to the arch. The arch only
needs to provide wrapper functions to override the generic versions if
it has arch-specific handling in its ioremap_prot(), ioremap() or
iounmap().
This change simplifies the implementation by removing code duplicated
with generic_ioremap_prot() and generic_iounmap(), while keeping the
functionality equivalent to before.
Here, add wrapper functions ioremap_prot() and iounmap() to handle
ia64's special operations in ioremap() and iounmap().
Signed-off-by: Baoquan He <[email protected]>
Cc: [email protected]
---
arch/ia64/Kconfig | 1 +
arch/ia64/include/asm/io.h | 13 +++++-------
arch/ia64/mm/ioremap.c | 41 ++++++--------------------------------
3 files changed, 12 insertions(+), 43 deletions(-)
diff --git a/arch/ia64/Kconfig b/arch/ia64/Kconfig
index d7e4a24e8644..74568cb73d87 100644
--- a/arch/ia64/Kconfig
+++ b/arch/ia64/Kconfig
@@ -45,6 +45,7 @@ config IA64
select GENERIC_IRQ_LEGACY
select ARCH_HAVE_NMI_SAFE_CMPXCHG
select GENERIC_IOMAP
+ select GENERIC_IOREMAP
select GENERIC_SMP_IDLE_THREAD
select ARCH_TASK_STRUCT_ON_STACK
select ARCH_TASK_STRUCT_ALLOCATOR
diff --git a/arch/ia64/include/asm/io.h b/arch/ia64/include/asm/io.h
index 83a492c8d298..eedc0afa8cad 100644
--- a/arch/ia64/include/asm/io.h
+++ b/arch/ia64/include/asm/io.h
@@ -243,15 +243,12 @@ static inline void outsl(unsigned long port, const void *src,
# ifdef __KERNEL__
-extern void __iomem * ioremap(unsigned long offset, unsigned long size);
+#define _PAGE_IOREMAP pgprot_val(PAGE_KERNEL)
+
extern void __iomem * ioremap_uc(unsigned long offset, unsigned long size);
-extern void iounmap (volatile void __iomem *addr);
-static inline void __iomem * ioremap_cache (unsigned long phys_addr, unsigned long size)
-{
- return ioremap(phys_addr, size);
-}
-#define ioremap ioremap
-#define ioremap_cache ioremap_cache
+
+#define ioremap_prot ioremap_prot
+#define ioremap_cache ioremap
#define ioremap_uc ioremap_uc
#define iounmap iounmap
diff --git a/arch/ia64/mm/ioremap.c b/arch/ia64/mm/ioremap.c
index 55fd3eb753ff..35e75e9c878b 100644
--- a/arch/ia64/mm/ioremap.c
+++ b/arch/ia64/mm/ioremap.c
@@ -29,13 +29,9 @@ early_ioremap (unsigned long phys_addr, unsigned long size)
return __ioremap_uc(phys_addr);
}
-void __iomem *
-ioremap (unsigned long phys_addr, unsigned long size)
+void __iomem *ioremap_prot(phys_addr_t phys_addr, size_t size,
+ unsigned long flags)
{
- void __iomem *addr;
- struct vm_struct *area;
- unsigned long offset;
- pgprot_t prot;
u64 attr;
unsigned long gran_base, gran_size;
unsigned long page_base;
@@ -68,36 +64,12 @@ ioremap (unsigned long phys_addr, unsigned long size)
*/
page_base = phys_addr & PAGE_MASK;
size = PAGE_ALIGN(phys_addr + size) - page_base;
- if (efi_mem_attribute(page_base, size) & EFI_MEMORY_WB) {
- prot = PAGE_KERNEL;
-
- /*
- * Mappings have to be page-aligned
- */
- offset = phys_addr & ~PAGE_MASK;
- phys_addr &= PAGE_MASK;
-
- /*
- * Ok, go for it..
- */
- area = get_vm_area(size, VM_IOREMAP);
- if (!area)
- return NULL;
-
- area->phys_addr = phys_addr;
- addr = (void __iomem *) area->addr;
- if (ioremap_page_range((unsigned long) addr,
- (unsigned long) addr + size, phys_addr, prot)) {
- vunmap((void __force *) addr);
- return NULL;
- }
-
- return (void __iomem *) (offset + (char __iomem *)addr);
- }
+ if (efi_mem_attribute(page_base, size) & EFI_MEMORY_WB)
+ return generic_ioremap_prot(phys_addr, size, __pgprot(flags));
return __ioremap_uc(phys_addr);
}
-EXPORT_SYMBOL(ioremap);
+EXPORT_SYMBOL(ioremap_prot);
void __iomem *
ioremap_uc(unsigned long phys_addr, unsigned long size)
@@ -114,8 +86,7 @@ early_iounmap (volatile void __iomem *addr, unsigned long size)
{
}
-void
-iounmap (volatile void __iomem *addr)
+void iounmap(volatile void __iomem *addr)
{
if (REGION_NUMBER(addr) == RGN_GATE)
vunmap((void *) ((unsigned long) addr & PAGE_MASK));
--
2.34.1
By taking the GENERIC_IOREMAP method, the generic generic_ioremap_prot(),
generic_iounmap(), and their generic wrappers ioremap_prot(), ioremap()
and iounmap() become visible and available to the arch. The arch only
needs to provide wrapper functions to override the generic versions if
it has arch-specific handling in its ioremap_prot(), ioremap() or
iounmap().
This change simplifies the implementation by removing code duplicated
with generic_ioremap_prot() and generic_iounmap(), while keeping the
functionality equivalent to before.
Here, add a wrapper function iounmap() to handle openrisc's special
operation in iounmap().
Signed-off-by: Baoquan He <[email protected]>
Cc: Stafford Horne <[email protected]>
Cc: Jonas Bonn <[email protected]>
Cc: Stefan Kristiansson <[email protected]>
Cc: [email protected]
---
arch/openrisc/Kconfig | 1 +
arch/openrisc/include/asm/io.h | 11 +++++---
arch/openrisc/mm/ioremap.c | 46 +---------------------------------
3 files changed, 9 insertions(+), 49 deletions(-)
diff --git a/arch/openrisc/Kconfig b/arch/openrisc/Kconfig
index c7f282f60f64..fd9bb76a610b 100644
--- a/arch/openrisc/Kconfig
+++ b/arch/openrisc/Kconfig
@@ -21,6 +21,7 @@ config OPENRISC
select GENERIC_IRQ_PROBE
select GENERIC_IRQ_SHOW
select GENERIC_PCI_IOMAP
+ select GENERIC_IOREMAP
select GENERIC_CPU_DEVICES
select HAVE_PCI
select HAVE_UID16
diff --git a/arch/openrisc/include/asm/io.h b/arch/openrisc/include/asm/io.h
index ee6043a03173..e640960c26c2 100644
--- a/arch/openrisc/include/asm/io.h
+++ b/arch/openrisc/include/asm/io.h
@@ -15,6 +15,8 @@
#define __ASM_OPENRISC_IO_H
#include <linux/types.h>
+#include <asm/pgalloc.h>
+#include <asm/pgtable.h>
/*
* PCI: We do not use IO ports in OpenRISC
@@ -27,11 +29,12 @@
#define PIO_OFFSET 0
#define PIO_MASK 0
-#define ioremap ioremap
-void __iomem *ioremap(phys_addr_t offset, unsigned long size);
-
+/*
+ * I/O memory mapping functions.
+ */
#define iounmap iounmap
-extern void iounmap(volatile void __iomem *addr);
+
+#define _PAGE_IOREMAP (pgprot_val(PAGE_KERNEL) | _PAGE_CI)
#include <asm-generic/io.h>
diff --git a/arch/openrisc/mm/ioremap.c b/arch/openrisc/mm/ioremap.c
index 90b59bc53c8c..9f9941df7d4c 100644
--- a/arch/openrisc/mm/ioremap.c
+++ b/arch/openrisc/mm/ioremap.c
@@ -22,49 +22,6 @@
extern int mem_init_done;
-/*
- * Remap an arbitrary physical address space into the kernel virtual
- * address space. Needed when the kernel wants to access high addresses
- * directly.
- *
- * NOTE! We need to allow non-page-aligned mappings too: we will obviously
- * have to convert them into an offset in a page-aligned mapping, but the
- * caller shouldn't need to know that small detail.
- */
-void __iomem *__ref ioremap(phys_addr_t addr, unsigned long size)
-{
- phys_addr_t p;
- unsigned long v;
- unsigned long offset, last_addr;
- struct vm_struct *area = NULL;
-
- /* Don't allow wraparound or zero size */
- last_addr = addr + size - 1;
- if (!size || last_addr < addr)
- return NULL;
-
- /*
- * Mappings have to be page-aligned
- */
- offset = addr & ~PAGE_MASK;
- p = addr & PAGE_MASK;
- size = PAGE_ALIGN(last_addr + 1) - p;
-
- area = get_vm_area(size, VM_IOREMAP);
- if (!area)
- return NULL;
- v = (unsigned long)area->addr;
-
- if (ioremap_page_range(v, v + size, p,
- __pgprot(pgprot_val(PAGE_KERNEL) | _PAGE_CI))) {
- vfree(area->addr);
- return NULL;
- }
-
- return (void __iomem *)(offset + (char *)v);
-}
-EXPORT_SYMBOL(ioremap);
-
void iounmap(volatile void __iomem *addr)
{
/* If the page is from the fixmap pool then we just clear out
@@ -88,9 +45,8 @@ void iounmap(volatile void __iomem *addr)
return;
}
- return vfree((void *)(PAGE_MASK & (unsigned long)addr));
+ generic_iounmap(addr);
}
-EXPORT_SYMBOL(iounmap);
/**
* OK, this one's a bit tricky... ioremap can get called before memory is
--
2.34.1
By taking the GENERIC_IOREMAP method, the generic generic_ioremap_prot(),
generic_iounmap(), and their generic wrappers ioremap_prot(), ioremap()
and iounmap() become visible and available to the arch. The arch only
needs to provide wrapper functions to override the generic versions if
it has arch-specific handling in its ioremap_prot(), ioremap() or
iounmap().
This change simplifies the implementation by removing code duplicated
with generic_ioremap_prot() and generic_iounmap(), while keeping the
functionality equivalent to before.
Here, add wrapper functions ioremap_prot() and iounmap() to handle
s390's special operations in ioremap() and iounmap().
Signed-off-by: Baoquan He <[email protected]>
Reviewed-by: Niklas Schnelle <[email protected]>
Tested-by: Niklas Schnelle <[email protected]>
Cc: Gerald Schaefer <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: Vasily Gorbik <[email protected]>
Cc: Alexander Gordeev <[email protected]>
Cc: Christian Borntraeger <[email protected]>
Cc: Sven Schnelle <[email protected]>
Cc: [email protected]
---
arch/s390/Kconfig | 1 +
arch/s390/include/asm/io.h | 21 ++++++++------
arch/s390/pci/pci.c | 57 +++++++-------------------------------
3 files changed, 23 insertions(+), 56 deletions(-)
diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
index 078cd1a773a3..74505a5de3ba 100644
--- a/arch/s390/Kconfig
+++ b/arch/s390/Kconfig
@@ -141,6 +141,7 @@ config S390
select GENERIC_SMP_IDLE_THREAD
select GENERIC_TIME_VSYSCALL
select GENERIC_VDSO_TIME_NS
+ select GENERIC_IOREMAP if PCI
select HAVE_ALIGNED_STRUCT_PAGE if SLUB
select HAVE_ARCH_AUDITSYSCALL
select HAVE_ARCH_JUMP_LABEL
diff --git a/arch/s390/include/asm/io.h b/arch/s390/include/asm/io.h
index e3882b012bfa..4453ad7c11ac 100644
--- a/arch/s390/include/asm/io.h
+++ b/arch/s390/include/asm/io.h
@@ -22,11 +22,18 @@ void unxlate_dev_mem_ptr(phys_addr_t phys, void *addr);
#define IO_SPACE_LIMIT 0
-void __iomem *ioremap_prot(phys_addr_t addr, size_t size, unsigned long prot);
-void __iomem *ioremap(phys_addr_t addr, size_t size);
-void __iomem *ioremap_wc(phys_addr_t addr, size_t size);
-void __iomem *ioremap_wt(phys_addr_t addr, size_t size);
-void iounmap(volatile void __iomem *addr);
+/*
+ * I/O memory mapping functions.
+ */
+#define ioremap_prot ioremap_prot
+#define iounmap iounmap
+
+#define _PAGE_IOREMAP pgprot_val(PAGE_KERNEL)
+
+#define ioremap_wc(addr, size) \
+ ioremap_prot((addr), (size), pgprot_val(pgprot_writecombine(PAGE_KERNEL)))
+#define ioremap_wt(addr, size) \
+ ioremap_prot((addr), (size), pgprot_val(pgprot_writethrough(PAGE_KERNEL)))
static inline void __iomem *ioport_map(unsigned long port, unsigned int nr)
{
@@ -51,10 +58,6 @@ static inline void ioport_unmap(void __iomem *p)
#define pci_iomap_wc pci_iomap_wc
#define pci_iomap_wc_range pci_iomap_wc_range
-#define ioremap ioremap
-#define ioremap_wt ioremap_wt
-#define ioremap_wc ioremap_wc
-
#define memcpy_fromio(dst, src, count) zpci_memcpy_fromio(dst, src, count)
#define memcpy_toio(dst, src, count) zpci_memcpy_toio(dst, src, count)
#define memset_io(dst, val, count) zpci_memset_io(dst, val, count)
diff --git a/arch/s390/pci/pci.c b/arch/s390/pci/pci.c
index ef38b1514c77..9590bf2c0d88 100644
--- a/arch/s390/pci/pci.c
+++ b/arch/s390/pci/pci.c
@@ -244,62 +244,25 @@ void __iowrite64_copy(void __iomem *to, const void *from, size_t count)
zpci_memcpy_toio(to, from, count);
}
-static void __iomem *__ioremap(phys_addr_t addr, size_t size, pgprot_t prot)
+void __iomem *ioremap_prot(phys_addr_t phys_addr, size_t size,
+ unsigned long prot)
{
- unsigned long offset, vaddr;
- struct vm_struct *area;
- phys_addr_t last_addr;
-
- last_addr = addr + size - 1;
- if (!size || last_addr < addr)
- return NULL;
-
+ /*
+ * When PCI MIO instructions are unavailable the "physical" address
+ * encodes a hint for accessing the PCI memory space it represents.
+ * Just pass it unchanged such that ioread/iowrite can decode it.
+ */
if (!static_branch_unlikely(&have_mio))
- return (void __iomem *) addr;
+ return (void __iomem *)phys_addr;
- offset = addr & ~PAGE_MASK;
- addr &= PAGE_MASK;
- size = PAGE_ALIGN(size + offset);
- area = get_vm_area(size, VM_IOREMAP);
- if (!area)
- return NULL;
-
- vaddr = (unsigned long) area->addr;
- if (ioremap_page_range(vaddr, vaddr + size, addr, prot)) {
- free_vm_area(area);
- return NULL;
- }
- return (void __iomem *) ((unsigned long) area->addr + offset);
-}
-
-void __iomem *ioremap_prot(phys_addr_t addr, size_t size, unsigned long prot)
-{
- return __ioremap(addr, size, __pgprot(prot));
+ return generic_ioremap_prot(phys_addr, size, __pgprot(prot));
}
EXPORT_SYMBOL(ioremap_prot);
-void __iomem *ioremap(phys_addr_t addr, size_t size)
-{
- return __ioremap(addr, size, PAGE_KERNEL);
-}
-EXPORT_SYMBOL(ioremap);
-
-void __iomem *ioremap_wc(phys_addr_t addr, size_t size)
-{
- return __ioremap(addr, size, pgprot_writecombine(PAGE_KERNEL));
-}
-EXPORT_SYMBOL(ioremap_wc);
-
-void __iomem *ioremap_wt(phys_addr_t addr, size_t size)
-{
- return __ioremap(addr, size, pgprot_writethrough(PAGE_KERNEL));
-}
-EXPORT_SYMBOL(ioremap_wt);
-
void iounmap(volatile void __iomem *addr)
{
if (static_branch_likely(&have_mio))
- vunmap((__force void *) ((unsigned long) addr & PAGE_MASK));
+ generic_iounmap(addr);
}
EXPORT_SYMBOL(iounmap);
--
2.34.1
By taking the GENERIC_IOREMAP method, the generic generic_ioremap_prot(),
generic_iounmap(), and their generic wrappers ioremap_prot(), ioremap()
and iounmap() become visible and available to the arch. The arch only
needs to provide wrapper functions to override the generic versions if
it has arch-specific handling in its ioremap_prot(), ioremap() or
iounmap().
This change simplifies the implementation by removing code duplicated
with generic_ioremap_prot() and generic_iounmap(), while keeping the
functionality equivalent to before.
Here, add wrapper functions ioremap_prot() and iounmap() to handle
SuperH's special operations in ioremap() and iounmap().
Meanwhile, add macro definitions for the port/memory I/O functions,
since SuperH has its own implementations in arch/sh/kernel/iomap.c and
arch/sh/include/asm/io_noioport.h. These would otherwise conflict with
the port/memory I/O function definitions in include/asm-generic/io.h
and cause compile errors like the following:
====
CC arch/sh/kernel/asm-offsets.s
In file included from ./arch/sh/include/asm/io.h:294,
from ./include/linux/io.h:13,
......
from arch/sh/kernel/asm-offsets.c:16:
./include/asm-generic/io.h:792:17: error: conflicting types for ‘ioread8’
792 | #define ioread8 ioread8
| ^~~~~~~
./include/asm-generic/io.h:793:18: note: in expansion of macro ‘ioread8’
793 | static inline u8 ioread8(const volatile void __iomem *addr)
| ^~~~~~~
In file included from ./arch/sh/include/asm/io.h:22,
from ./include/linux/io.h:13,
......
from arch/sh/kernel/asm-offsets.c:16:
./include/asm-generic/iomap.h:29:21: note: previous declaration of ‘ioread8’ was here
29 | extern unsigned int ioread8(const void __iomem *);
====
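The added defines work because <asm-generic/io.h> only supplies its
fallback helpers when the name isn't already defined, roughly following
this pattern (abridged sketch of the generic header; exact bodies may
differ):

#ifndef ioread8
#define ioread8 ioread8
static inline u8 ioread8(const volatile void __iomem *addr)
{
	return readb(addr);
}
#endif

Pre-defining e.g. "#define ioread8 ioread8" in <asm/io.h> therefore makes
the generic header skip its version and lets SuperH's own declarations in
<asm-generic/iomap.h> and <asm/io_noioport.h> stand.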
Signed-off-by: Baoquan He <[email protected]>
Cc: Yoshinori Sato <[email protected]>
Cc: Rich Felker <[email protected]>
Cc: [email protected]
---
arch/sh/Kconfig | 1 +
arch/sh/include/asm/io.h | 65 ++++++++++++++++---------------
arch/sh/include/asm/io_noioport.h | 7 ++++
arch/sh/mm/ioremap.c | 65 ++++++-------------------------
4 files changed, 52 insertions(+), 86 deletions(-)
diff --git a/arch/sh/Kconfig b/arch/sh/Kconfig
index 0665ac0add0b..9ab627f97c4a 100644
--- a/arch/sh/Kconfig
+++ b/arch/sh/Kconfig
@@ -25,6 +25,7 @@ config SUPERH
select GENERIC_SCHED_CLOCK
select GENERIC_SMP_IDLE_THREAD
select GUP_GET_PXX_LOW_HIGH if X2TLB
+ select GENERIC_IOREMAP if MMU
select HAVE_ARCH_AUDITSYSCALL
select HAVE_ARCH_KGDB
select HAVE_ARCH_SECCOMP_FILTER
diff --git a/arch/sh/include/asm/io.h b/arch/sh/include/asm/io.h
index fba90e670ed4..b3a26b405c8d 100644
--- a/arch/sh/include/asm/io.h
+++ b/arch/sh/include/asm/io.h
@@ -119,6 +119,26 @@ void __raw_readsl(const void __iomem *addr, void *data, int longlen);
__BUILD_MEMORY_STRING(__raw_, q, u64)
+#define ioread8 ioread8
+#define ioread16 ioread16
+#define ioread16be ioread16be
+#define ioread32 ioread32
+#define ioread32be ioread32be
+
+#define iowrite8 iowrite8
+#define iowrite16 iowrite16
+#define iowrite16be iowrite16be
+#define iowrite32 iowrite32
+#define iowrite32be iowrite32be
+
+#define ioread8_rep ioread8_rep
+#define ioread16_rep ioread16_rep
+#define ioread32_rep ioread32_rep
+
+#define iowrite8_rep iowrite8_rep
+#define iowrite16_rep iowrite16_rep
+#define iowrite32_rep iowrite32_rep
+
#ifdef CONFIG_HAS_IOPORT_MAP
/*
@@ -225,6 +245,9 @@ __BUILD_IOPORT_STRING(q, u64)
#define IO_SPACE_LIMIT 0xffffffff
/* We really want to try and get these to memcpy etc */
+#define memset_io memset_io
+#define memcpy_fromio memcpy_fromio
+#define memcpy_toio memcpy_toio
void memcpy_fromio(void *, const volatile void __iomem *, unsigned long);
void memcpy_toio(volatile void __iomem *, const void *, unsigned long);
void memset_io(volatile void __iomem *, int, unsigned long);
@@ -243,40 +266,16 @@ unsigned long long poke_real_address_q(unsigned long long addr,
#endif
#ifdef CONFIG_MMU
-void iounmap(void __iomem *addr);
-void __iomem *__ioremap_caller(phys_addr_t offset, unsigned long size,
- pgprot_t prot, void *caller);
-
-static inline void __iomem *ioremap(phys_addr_t offset, unsigned long size)
-{
- return __ioremap_caller(offset, size, PAGE_KERNEL_NOCACHE,
- __builtin_return_address(0));
-}
-
-static inline void __iomem *
-ioremap_cache(phys_addr_t offset, unsigned long size)
-{
- return __ioremap_caller(offset, size, PAGE_KERNEL,
- __builtin_return_address(0));
-}
-#define ioremap_cache ioremap_cache
-
-#ifdef CONFIG_HAVE_IOREMAP_PROT
-static inline void __iomem *ioremap_prot(phys_addr_t offset, unsigned long size,
- unsigned long flags)
-{
- return __ioremap_caller(offset, size, __pgprot(flags),
- __builtin_return_address(0));
-}
-#endif /* CONFIG_HAVE_IOREMAP_PROT */
+/*
+ * I/O memory mapping functions.
+ */
+#define ioremap_prot ioremap_prot
+#define iounmap iounmap
-#else /* CONFIG_MMU */
-static inline void __iomem *ioremap(phys_addr_t offset, size_t size)
-{
- return (void __iomem *)(unsigned long)offset;
-}
+#define _PAGE_IOREMAP pgprot_val(PAGE_KERNEL_NOCACHE)
-static inline void iounmap(volatile void __iomem *addr) { }
+#define ioremap_cache(addr, size) \
+ ioremap_prot((addr), (size), pgprot_val(PAGE_KERNEL))
#endif /* CONFIG_MMU */
#define ioremap_uc ioremap
@@ -287,6 +286,8 @@ static inline void iounmap(volatile void __iomem *addr) { }
*/
#define xlate_dev_mem_ptr(p) __va(p)
+#include <asm-generic/io.h>
+
#define ARCH_HAS_VALID_PHYS_ADDR_RANGE
int valid_phys_addr_range(phys_addr_t addr, size_t size);
int valid_mmap_phys_addr_range(unsigned long pfn, size_t size);
diff --git a/arch/sh/include/asm/io_noioport.h b/arch/sh/include/asm/io_noioport.h
index f7938fe0f911..5ba4116b4265 100644
--- a/arch/sh/include/asm/io_noioport.h
+++ b/arch/sh/include/asm/io_noioport.h
@@ -53,6 +53,13 @@ static inline void ioport_unmap(void __iomem *addr)
#define outw_p(x, addr) outw((x), (addr))
#define outl_p(x, addr) outl((x), (addr))
+#define insb insb
+#define insw insw
+#define insl insl
+#define outsb outsb
+#define outsw outsw
+#define outsl outsl
+
static inline void insb(unsigned long port, void *dst, unsigned long count)
{
BUG();
diff --git a/arch/sh/mm/ioremap.c b/arch/sh/mm/ioremap.c
index 21342581144d..c33b3daa4ad1 100644
--- a/arch/sh/mm/ioremap.c
+++ b/arch/sh/mm/ioremap.c
@@ -72,22 +72,11 @@ __ioremap_29bit(phys_addr_t offset, unsigned long size, pgprot_t prot)
#define __ioremap_29bit(offset, size, prot) NULL
#endif /* CONFIG_29BIT */
-/*
- * Remap an arbitrary physical address space into the kernel virtual
- * address space. Needed when the kernel wants to access high addresses
- * directly.
- *
- * NOTE! We need to allow non-page-aligned mappings too: we will obviously
- * have to convert them into an offset in a page-aligned mapping, but the
- * caller shouldn't need to know that small detail.
- */
-void __iomem * __ref
-__ioremap_caller(phys_addr_t phys_addr, unsigned long size,
- pgprot_t pgprot, void *caller)
+void __iomem *ioremap_prot(phys_addr_t phys_addr, size_t size,
+ unsigned long prot)
{
- struct vm_struct *area;
- unsigned long offset, last_addr, addr, orig_addr;
void __iomem *mapped;
+ pgprot_t pgprot = __pgprot(prot);
mapped = __ioremap_trapped(phys_addr, size);
if (mapped)
@@ -97,11 +86,6 @@ __ioremap_caller(phys_addr_t phys_addr, unsigned long size,
if (mapped)
return mapped;
- /* Don't allow wraparound or zero size */
- last_addr = phys_addr + size - 1;
- if (!size || last_addr < phys_addr)
- return NULL;
-
/*
* If we can't yet use the regular approach, go the fixmap route.
*/
@@ -112,34 +96,14 @@ __ioremap_caller(phys_addr_t phys_addr, unsigned long size,
* First try to remap through the PMB.
* PMB entries are all pre-faulted.
*/
- mapped = pmb_remap_caller(phys_addr, size, pgprot, caller);
+ mapped = pmb_remap_caller(phys_addr, size, pgprot,
+ __builtin_return_address(0));
if (mapped && !IS_ERR(mapped))
return mapped;
- /*
- * Mappings have to be page-aligned
- */
- offset = phys_addr & ~PAGE_MASK;
- phys_addr &= PAGE_MASK;
- size = PAGE_ALIGN(last_addr+1) - phys_addr;
-
- /*
- * Ok, go for it..
- */
- area = get_vm_area_caller(size, VM_IOREMAP, caller);
- if (!area)
- return NULL;
- area->phys_addr = phys_addr;
- orig_addr = addr = (unsigned long)area->addr;
-
- if (ioremap_page_range(addr, addr + size, phys_addr, pgprot)) {
- vunmap((void *)orig_addr);
- return NULL;
- }
-
- return (void __iomem *)(offset + (char *)orig_addr);
+ return generic_ioremap_prot(phys_addr, size, pgprot);
}
-EXPORT_SYMBOL(__ioremap_caller);
+EXPORT_SYMBOL(ioremap_prot);
/*
* Simple checks for non-translatable mappings.
@@ -158,10 +122,9 @@ static inline int iomapping_nontranslatable(unsigned long offset)
return 0;
}
-void iounmap(void __iomem *addr)
+void iounmap(volatile void __iomem *addr)
{
unsigned long vaddr = (unsigned long __force)addr;
- struct vm_struct *p;
/*
* Nothing to do if there is no translatable mapping.
@@ -172,21 +135,15 @@ void iounmap(void __iomem *addr)
/*
* There's no VMA if it's from an early fixed mapping.
*/
- if (iounmap_fixed(addr) == 0)
+ if (iounmap_fixed((void __iomem *)addr) == 0)
return;
/*
* If the PMB handled it, there's nothing else to do.
*/
- if (pmb_unmap(addr) == 0)
+ if (pmb_unmap((void __iomem *)addr) == 0)
return;
- p = remove_vm_area((void *)(vaddr & PAGE_MASK));
- if (!p) {
- printk(KERN_ERR "%s: bad address %p\n", __func__, addr);
- return;
- }
-
- kfree(p);
+ generic_iounmap(addr);
}
EXPORT_SYMBOL(iounmap);
--
2.34.1
By taking the GENERIC_IOREMAP method, the generic generic_ioremap_prot(),
generic_iounmap(), and their generic wrappers ioremap_prot(), ioremap()
and iounmap() become visible and available to the arch. The arch only
needs to provide wrapper functions to override the generic versions if
it has arch-specific handling in its ioremap_prot(), ioremap() or
iounmap().
This change simplifies the implementation by removing code duplicated
with generic_ioremap_prot() and generic_iounmap(), while keeping the
functionality equivalent to before.
Here, add wrapper functions ioremap_prot(), ioremap() and iounmap() to
handle xtensa's special operations in ioremap() and iounmap().
Signed-off-by: Baoquan He <[email protected]>
---
arch/xtensa/Kconfig | 1 +
arch/xtensa/include/asm/io.h | 32 ++++++++------------
arch/xtensa/mm/ioremap.c | 58 +++++++++---------------------------
3 files changed, 27 insertions(+), 64 deletions(-)
diff --git a/arch/xtensa/Kconfig b/arch/xtensa/Kconfig
index bcb0c5d2abc2..465d5981082a 100644
--- a/arch/xtensa/Kconfig
+++ b/arch/xtensa/Kconfig
@@ -29,6 +29,7 @@ config XTENSA
select GENERIC_LIB_UCMPDI2
select GENERIC_PCI_IOMAP
select GENERIC_SCHED_CLOCK
+ select GENERIC_IOREMAP if MMU
select HAVE_ARCH_AUDITSYSCALL
select HAVE_ARCH_JUMP_LABEL if !XIP_KERNEL
select HAVE_ARCH_KASAN if MMU && !XIP_KERNEL
diff --git a/arch/xtensa/include/asm/io.h b/arch/xtensa/include/asm/io.h
index a5b707e1c0f4..934e58399c8c 100644
--- a/arch/xtensa/include/asm/io.h
+++ b/arch/xtensa/include/asm/io.h
@@ -16,6 +16,7 @@
#include <asm/vectors.h>
#include <linux/bug.h>
#include <linux/kernel.h>
+#include <linux/pgtable.h>
#include <linux/types.h>
@@ -24,22 +25,24 @@
#define PCI_IOBASE ((void __iomem *)XCHAL_KIO_BYPASS_VADDR)
#ifdef CONFIG_MMU
-
-void __iomem *xtensa_ioremap_nocache(unsigned long addr, unsigned long size);
-void __iomem *xtensa_ioremap_cache(unsigned long addr, unsigned long size);
-void xtensa_iounmap(volatile void __iomem *addr);
-
/*
- * Return the virtual address for the specified bus memory.
+ * I/O memory mapping functions.
*/
+void __iomem *ioremap_prot(phys_addr_t phys_addr, size_t size,
+ unsigned long prot);
+#define ioremap_prot ioremap_prot
+#define iounmap iounmap
+
static inline void __iomem *ioremap(unsigned long offset, unsigned long size)
{
if (offset >= XCHAL_KIO_PADDR
&& offset - XCHAL_KIO_PADDR < XCHAL_KIO_SIZE)
return (void*)(offset-XCHAL_KIO_PADDR+XCHAL_KIO_BYPASS_VADDR);
else
- return xtensa_ioremap_nocache(offset, size);
+ return ioremap_prot(offset, size,
+ pgprot_val(pgprot_noncached(PAGE_KERNEL)));
}
+#define ioremap ioremap
static inline void __iomem *ioremap_cache(unsigned long offset,
unsigned long size)
@@ -48,21 +51,10 @@ static inline void __iomem *ioremap_cache(unsigned long offset,
&& offset - XCHAL_KIO_PADDR < XCHAL_KIO_SIZE)
return (void*)(offset-XCHAL_KIO_PADDR+XCHAL_KIO_CACHED_VADDR);
else
- return xtensa_ioremap_cache(offset, size);
-}
-#define ioremap_cache ioremap_cache
+ return ioremap_prot(offset, size, pgprot_val(PAGE_KERNEL));
-static inline void iounmap(volatile void __iomem *addr)
-{
- unsigned long va = (unsigned long) addr;
-
- if (!(va >= XCHAL_KIO_CACHED_VADDR &&
- va - XCHAL_KIO_CACHED_VADDR < XCHAL_KIO_SIZE) &&
- !(va >= XCHAL_KIO_BYPASS_VADDR &&
- va - XCHAL_KIO_BYPASS_VADDR < XCHAL_KIO_SIZE))
- xtensa_iounmap(addr);
}
-
+#define ioremap_cache ioremap_cache
#endif /* CONFIG_MMU */
#include <asm-generic/io.h>
diff --git a/arch/xtensa/mm/ioremap.c b/arch/xtensa/mm/ioremap.c
index a400188c16b9..8ca660b7ab49 100644
--- a/arch/xtensa/mm/ioremap.c
+++ b/arch/xtensa/mm/ioremap.c
@@ -6,60 +6,30 @@
*/
#include <linux/io.h>
-#include <linux/vmalloc.h>
#include <linux/pgtable.h>
#include <asm/cacheflush.h>
#include <asm/io.h>
-static void __iomem *xtensa_ioremap(unsigned long paddr, unsigned long size,
- pgprot_t prot)
+void __iomem *ioremap_prot(phys_addr_t phys_addr, size_t size,
+ unsigned long prot)
{
- unsigned long offset = paddr & ~PAGE_MASK;
- unsigned long pfn = __phys_to_pfn(paddr);
- struct vm_struct *area;
- unsigned long vaddr;
- int err;
-
- paddr &= PAGE_MASK;
-
+ unsigned long pfn = __phys_to_pfn((phys_addr));
WARN_ON(pfn_valid(pfn));
- size = PAGE_ALIGN(offset + size);
-
- area = get_vm_area(size, VM_IOREMAP);
- if (!area)
- return NULL;
-
- vaddr = (unsigned long)area->addr;
- area->phys_addr = paddr;
-
- err = ioremap_page_range(vaddr, vaddr + size, paddr, prot);
-
- if (err) {
- vunmap((void *)vaddr);
- return NULL;
- }
-
- flush_cache_vmap(vaddr, vaddr + size);
- return (void __iomem *)(offset + vaddr);
-}
-
-void __iomem *xtensa_ioremap_nocache(unsigned long addr, unsigned long size)
-{
- return xtensa_ioremap(addr, size, pgprot_noncached(PAGE_KERNEL));
+ return generic_ioremap_prot(phys_addr, size, __pgprot(prot));
}
-EXPORT_SYMBOL(xtensa_ioremap_nocache);
+EXPORT_SYMBOL(ioremap_prot);
-void __iomem *xtensa_ioremap_cache(unsigned long addr, unsigned long size)
+void iounmap(volatile void __iomem *addr)
{
- return xtensa_ioremap(addr, size, PAGE_KERNEL);
-}
-EXPORT_SYMBOL(xtensa_ioremap_cache);
+ unsigned long va = (unsigned long) addr;
-void xtensa_iounmap(volatile void __iomem *io_addr)
-{
- void *addr = (void *)(PAGE_MASK & (unsigned long)io_addr);
+ if ((va >= XCHAL_KIO_CACHED_VADDR &&
+ va - XCHAL_KIO_CACHED_VADDR < XCHAL_KIO_SIZE) ||
+ (va >= XCHAL_KIO_BYPASS_VADDR &&
+ va - XCHAL_KIO_BYPASS_VADDR < XCHAL_KIO_SIZE))
+ return;
- vunmap(addr);
+ generic_iounmap(addr);
}
-EXPORT_SYMBOL(xtensa_iounmap);
+EXPORT_SYMBOL(iounmap);
--
2.34.1
By taking the GENERIC_IOREMAP method, the generic generic_ioremap_prot(),
generic_iounmap(), and their generic wrappers ioremap_prot(), ioremap()
and iounmap() are all visible and available to the arch. The arch only needs
to provide wrapper functions to override the generic versions if there is
arch-specific handling in its ioremap_prot(), ioremap() or iounmap().
This change simplifies the implementation by removing duplicated code in
favour of generic_ioremap_prot() and generic_iounmap(), while keeping the
functionality equivalent to before.
Here, add a wrapper function ioremap_prot() to hold parisc's special handling
when doing ioremap() (a condensed sketch of this wrapper pattern follows the
patch below).
Signed-off-by: Baoquan He <[email protected]>
Cc: "James E.J. Bottomley" <[email protected]>
Cc: Helge Deller <[email protected]>
Cc: [email protected]
---
arch/parisc/Kconfig | 1 +
arch/parisc/include/asm/io.h | 15 ++++++---
arch/parisc/mm/ioremap.c | 62 +++---------------------------------
3 files changed, 15 insertions(+), 63 deletions(-)
diff --git a/arch/parisc/Kconfig b/arch/parisc/Kconfig
index a98940e64243..0ed18e673aba 100644
--- a/arch/parisc/Kconfig
+++ b/arch/parisc/Kconfig
@@ -36,6 +36,7 @@ config PARISC
select GENERIC_ATOMIC64 if !64BIT
select GENERIC_IRQ_PROBE
select GENERIC_PCI_IOMAP
+ select GENERIC_IOREMAP
select ARCH_HAVE_NMI_SAFE_CMPXCHG
select GENERIC_SMP_IDLE_THREAD
select GENERIC_ARCH_TOPOLOGY if SMP
diff --git a/arch/parisc/include/asm/io.h b/arch/parisc/include/asm/io.h
index c05e781be2f5..366537042465 100644
--- a/arch/parisc/include/asm/io.h
+++ b/arch/parisc/include/asm/io.h
@@ -125,12 +125,17 @@ static inline void gsc_writeq(unsigned long long val, unsigned long addr)
/*
* The standard PCI ioremap interfaces
*/
-void __iomem *ioremap(unsigned long offset, unsigned long size);
-#define ioremap_wc ioremap
-#define ioremap_uc ioremap
-#define pci_iounmap pci_iounmap
+#define ioremap_prot ioremap_prot
+
+#define _PAGE_IOREMAP (_PAGE_PRESENT | _PAGE_RW | _PAGE_DIRTY | \
+ _PAGE_ACCESSED | _PAGE_NO_CACHE)
-extern void iounmap(const volatile void __iomem *addr);
+#define ioremap_wc(addr, size) \
+ ioremap_prot((addr), (size), _PAGE_IOREMAP)
+#define ioremap_uc(addr, size) \
+ ioremap_prot((addr), (size), _PAGE_IOREMAP)
+
+#define pci_iounmap pci_iounmap
void memset_io(volatile void __iomem *addr, unsigned char val, int count);
void memcpy_fromio(void *dst, const volatile void __iomem *src, int count);
diff --git a/arch/parisc/mm/ioremap.c b/arch/parisc/mm/ioremap.c
index 345ff0b66499..fd996472dfe7 100644
--- a/arch/parisc/mm/ioremap.c
+++ b/arch/parisc/mm/ioremap.c
@@ -13,25 +13,9 @@
#include <linux/io.h>
#include <linux/mm.h>
-/*
- * Generic mapping function (not visible outside):
- */
-
-/*
- * Remap an arbitrary physical address space into the kernel virtual
- * address space.
- *
- * NOTE! We need to allow non-page-aligned mappings too: we will obviously
- * have to convert them into an offset in a page-aligned mapping, but the
- * caller shouldn't need to know that small detail.
- */
-void __iomem *ioremap(unsigned long phys_addr, unsigned long size)
+void __iomem *ioremap_prot(phys_addr_t phys_addr, size_t size,
+ unsigned long prot)
{
- void __iomem *addr;
- struct vm_struct *area;
- unsigned long offset, last_addr;
- pgprot_t pgprot;
-
#ifdef CONFIG_EISA
unsigned long end = phys_addr + size - 1;
/* Support EISA addresses */
@@ -40,11 +24,6 @@ void __iomem *ioremap(unsigned long phys_addr, unsigned long size)
phys_addr |= F_EXTEND(0xfc000000);
#endif
- /* Don't allow wraparound or zero size */
- last_addr = phys_addr + size - 1;
- if (!size || last_addr < phys_addr)
- return NULL;
-
/*
* Don't allow anybody to remap normal RAM that we're using..
*/
@@ -62,39 +41,6 @@ void __iomem *ioremap(unsigned long phys_addr, unsigned long size)
}
}
- pgprot = __pgprot(_PAGE_PRESENT | _PAGE_RW | _PAGE_DIRTY |
- _PAGE_ACCESSED | _PAGE_NO_CACHE);
-
- /*
- * Mappings have to be page-aligned
- */
- offset = phys_addr & ~PAGE_MASK;
- phys_addr &= PAGE_MASK;
- size = PAGE_ALIGN(last_addr + 1) - phys_addr;
-
- /*
- * Ok, go for it..
- */
- area = get_vm_area(size, VM_IOREMAP);
- if (!area)
- return NULL;
-
- addr = (void __iomem *) area->addr;
- if (ioremap_page_range((unsigned long)addr, (unsigned long)addr + size,
- phys_addr, pgprot)) {
- vunmap(addr);
- return NULL;
- }
-
- return (void __iomem *) (offset + (char __iomem *)addr);
-}
-EXPORT_SYMBOL(ioremap);
-
-void iounmap(const volatile void __iomem *io_addr)
-{
- unsigned long addr = (unsigned long)io_addr & PAGE_MASK;
-
- if (is_vmalloc_addr((void *)addr))
- vunmap((void *)addr);
+ return generic_ioremap_prot(phys_addr, size, __pgprot(prot));
}
-EXPORT_SYMBOL(iounmap);
+EXPORT_SYMBOL(ioremap_prot);
--
2.34.1
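For reference, the override pattern that the conversions in this series rely
on can be condensed into a short sketch. The arch name "foo" and the reject
check are hypothetical placeholders; only the mechanism is taken from the
series: define the ioremap_prot macro so that <asm-generic/io.h> skips its
generic inline wrapper, then do any arch-specific filtering before handing
off to generic_ioremap_prot().

/* Hypothetical arch/foo/include/asm/io.h excerpt (sketch only) */
void __iomem *ioremap_prot(phys_addr_t phys_addr, size_t size,
			   unsigned long prot);
#define ioremap_prot ioremap_prot	/* suppress the generic inline wrapper */

/* Hypothetical arch/foo/mm/ioremap.c (sketch only) */
#include <linux/io.h>

void __iomem *ioremap_prot(phys_addr_t phys_addr, size_t size,
			   unsigned long prot)
{
	/* arch-specific filtering goes here; this check is a placeholder */
	if (arch_foo_ioremap_rejected(phys_addr, size))
		return NULL;

	/* everything else is shared with all GENERIC_IOREMAP users */
	return generic_ioremap_prot(phys_addr, size, __pgprot(prot));
}
EXPORT_SYMBOL(ioremap_prot);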
From: Christophe Leroy <[email protected]>
Architectures like powerpc have a dedicated space for IOREMAP mappings.
If so, use it in generic_ioremap_prot().
Signed-off-by: Christophe Leroy <[email protected]>
Signed-off-by: Baoquan He <[email protected]>
---
mm/ioremap.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/mm/ioremap.c b/mm/ioremap.c
index 2fbe6b9bc50e..4a7749d85044 100644
--- a/mm/ioremap.c
+++ b/mm/ioremap.c
@@ -35,8 +35,13 @@ void __iomem *generic_ioremap_prot(phys_addr_t phys_addr, size_t size,
if (!ioremap_allowed(phys_addr, size, pgprot_val(prot)))
return NULL;
+#ifdef IOREMAP_START
+ area = __get_vm_area_caller(size, VM_IOREMAP, IOREMAP_START,
+ IOREMAP_END, __builtin_return_address(0));
+#else
area = get_vm_area_caller(size, VM_IOREMAP,
__builtin_return_address(0));
+#endif
if (!area)
return NULL;
vaddr = (unsigned long)area->addr;
@@ -66,7 +71,7 @@ void generic_iounmap(volatile void __iomem *addr)
if (!iounmap_allowed(vaddr))
return;
- if (is_vmalloc_addr(vaddr))
+ if (is_ioremap_addr(vaddr))
vunmap(vaddr);
}
--
2.34.1
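A note on the second hunk: once mappings can be placed in a dedicated window
outside the vmalloc range, generic_iounmap() has to recognise them there as
well, which is why the is_vmalloc_addr() check becomes is_ioremap_addr().
The sketch below only illustrates that relationship; the real helper is
is_ioremap_addr() in <linux/mm.h>, the function name here is made up, and
the bounds are whatever the architecture defines.

#include <linux/mm.h>	/* is_vmalloc_addr() */

/* Sketch only -- shows the fallback relationship, not the real helper. */
static inline bool sketch_is_ioremap_addr(const void *x)
{
#ifdef IOREMAP_START
	unsigned long addr = (unsigned long)x;

	/* dedicated window: such a mapping is not a vmalloc address */
	return addr >= IOREMAP_START && addr < IOREMAP_END;
#else
	/* no dedicated window: ioremap mappings come from vmalloc space */
	return is_vmalloc_addr(x);
#endif
}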
From: Christophe Leroy <[email protected]>
By taking the GENERIC_IOREMAP method, the generic generic_ioremap_prot(),
generic_iounmap(), and their generic wrappers ioremap_prot(), ioremap()
and iounmap() are all visible and available to the arch. The arch only needs
to provide wrapper functions to override the generic versions if there is
arch-specific handling in its ioremap_prot(), ioremap() or iounmap().
This change simplifies the implementation by removing duplicated code in
favour of generic_ioremap_prot() and generic_iounmap(), while keeping the
functionality equivalent to before.
Here, add wrapper functions ioremap_prot() and iounmap() to hold powerpc's
special handling when doing ioremap() and iounmap().
Signed-off-by: Christophe Leroy <[email protected]>
Signed-off-by: Baoquan He <[email protected]>
---
arch/powerpc/Kconfig | 1 +
arch/powerpc/include/asm/io.h | 8 +++-----
arch/powerpc/mm/ioremap.c | 26 +-------------------------
arch/powerpc/mm/ioremap_32.c | 19 +++++++++----------
arch/powerpc/mm/ioremap_64.c | 12 ++----------
5 files changed, 16 insertions(+), 50 deletions(-)
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 2c9cdf1d8761..7441295b718e 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -181,6 +181,7 @@ config PPC
select GENERIC_CPU_VULNERABILITIES if PPC_BARRIER_NOSPEC
select GENERIC_EARLY_IOREMAP
select GENERIC_GETTIMEOFDAY
+ select GENERIC_IOREMAP
select GENERIC_IRQ_SHOW
select GENERIC_IRQ_SHOW_LEVEL
select GENERIC_PCI_IOMAP if PCI
diff --git a/arch/powerpc/include/asm/io.h b/arch/powerpc/include/asm/io.h
index 7a82e6f70ced..978d687edf32 100644
--- a/arch/powerpc/include/asm/io.h
+++ b/arch/powerpc/include/asm/io.h
@@ -852,8 +852,8 @@ static inline void iosync(void)
*
*/
extern void __iomem *ioremap(phys_addr_t address, unsigned long size);
-extern void __iomem *ioremap_prot(phys_addr_t address, unsigned long size,
- unsigned long flags);
+#define ioremap ioremap
+#define ioremap_prot ioremap_prot
extern void __iomem *ioremap_wc(phys_addr_t address, unsigned long size);
#define ioremap_wc ioremap_wc
@@ -867,14 +867,12 @@ void __iomem *ioremap_coherent(phys_addr_t address, unsigned long size);
#define ioremap_cache(addr, size) \
ioremap_prot((addr), (size), pgprot_val(PAGE_KERNEL))
-extern void iounmap(volatile void __iomem *addr);
+#define iounmap iounmap
void __iomem *ioremap_phb(phys_addr_t paddr, unsigned long size);
int early_ioremap_range(unsigned long ea, phys_addr_t pa,
unsigned long size, pgprot_t prot);
-void __iomem *do_ioremap(phys_addr_t pa, phys_addr_t offset, unsigned long size,
- pgprot_t prot, void *caller);
extern void __iomem *__ioremap_caller(phys_addr_t, unsigned long size,
pgprot_t prot, void *caller);
diff --git a/arch/powerpc/mm/ioremap.c b/arch/powerpc/mm/ioremap.c
index 4f12504fb405..705e8e8ffde4 100644
--- a/arch/powerpc/mm/ioremap.c
+++ b/arch/powerpc/mm/ioremap.c
@@ -41,7 +41,7 @@ void __iomem *ioremap_coherent(phys_addr_t addr, unsigned long size)
return __ioremap_caller(addr, size, prot, caller);
}
-void __iomem *ioremap_prot(phys_addr_t addr, unsigned long size, unsigned long flags)
+void __iomem *ioremap_prot(phys_addr_t addr, size_t size, unsigned long flags)
{
pte_t pte = __pte(flags);
void *caller = __builtin_return_address(0);
@@ -74,27 +74,3 @@ int early_ioremap_range(unsigned long ea, phys_addr_t pa,
return 0;
}
-
-void __iomem *do_ioremap(phys_addr_t pa, phys_addr_t offset, unsigned long size,
- pgprot_t prot, void *caller)
-{
- struct vm_struct *area;
- int ret;
- unsigned long va;
-
- area = __get_vm_area_caller(size, VM_IOREMAP, IOREMAP_START, IOREMAP_END, caller);
- if (area == NULL)
- return NULL;
-
- area->phys_addr = pa;
- va = (unsigned long)area->addr;
-
- ret = ioremap_page_range(va, va + size, pa, prot);
- if (!ret)
- return (void __iomem *)area->addr + offset;
-
- vunmap_range(va, va + size);
- free_vm_area(area);
-
- return NULL;
-}
diff --git a/arch/powerpc/mm/ioremap_32.c b/arch/powerpc/mm/ioremap_32.c
index 9d13143b8be4..ca5bc6be3e6f 100644
--- a/arch/powerpc/mm/ioremap_32.c
+++ b/arch/powerpc/mm/ioremap_32.c
@@ -21,6 +21,13 @@ __ioremap_caller(phys_addr_t addr, unsigned long size, pgprot_t prot, void *call
phys_addr_t p, offset;
int err;
+ /*
+ * If the address lies within the first 16 MB, assume it's in ISA
+ * memory space
+ */
+ if (addr < SZ_16M)
+ addr += _ISA_MEM_BASE;
+
/*
* Choose an address to map it to.
* Once the vmalloc system is running, we use it.
@@ -31,13 +38,6 @@ __ioremap_caller(phys_addr_t addr, unsigned long size, pgprot_t prot, void *call
offset = addr & ~PAGE_MASK;
size = PAGE_ALIGN(addr + size) - p;
- /*
- * If the address lies within the first 16 MB, assume it's in ISA
- * memory space
- */
- if (p < 16 * 1024 * 1024)
- p += _ISA_MEM_BASE;
-
#ifndef CONFIG_CRASH_DUMP
/*
* Don't allow anybody to remap normal RAM that we're using.
@@ -63,7 +63,7 @@ __ioremap_caller(phys_addr_t addr, unsigned long size, pgprot_t prot, void *call
return (void __iomem *)v + offset;
if (slab_is_available())
- return do_ioremap(p, offset, size, prot, caller);
+ return generic_ioremap_prot(addr, size, prot);
/*
* Should check if it is a candidate for a BAT mapping
@@ -87,7 +87,6 @@ void iounmap(volatile void __iomem *addr)
if (v_block_mapped((unsigned long)addr))
return;
- if (addr > high_memory && (unsigned long)addr < ioremap_bot)
- vunmap((void *)(PAGE_MASK & (unsigned long)addr));
+ generic_iounmap(addr);
}
EXPORT_SYMBOL(iounmap);
diff --git a/arch/powerpc/mm/ioremap_64.c b/arch/powerpc/mm/ioremap_64.c
index 3acece00b33e..d24e5f166723 100644
--- a/arch/powerpc/mm/ioremap_64.c
+++ b/arch/powerpc/mm/ioremap_64.c
@@ -29,7 +29,7 @@ void __iomem *__ioremap_caller(phys_addr_t addr, unsigned long size,
return NULL;
if (slab_is_available())
- return do_ioremap(paligned, offset, size, prot, caller);
+ return generic_ioremap_prot(addr, size, prot);
pr_warn("ioremap() called early from %pS. Use early_ioremap() instead\n", caller);
@@ -49,17 +49,9 @@ void __iomem *__ioremap_caller(phys_addr_t addr, unsigned long size,
*/
void iounmap(volatile void __iomem *token)
{
- void *addr;
-
if (!slab_is_available())
return;
- addr = (void *)((unsigned long __force)PCI_FIX_ADDR(token) & PAGE_MASK);
-
- if ((unsigned long)addr < ioremap_bot) {
- pr_warn("Attempt to iounmap early bolted mapping at 0x%p\n", addr);
- return;
- }
- vunmap(addr);
+ generic_iounmap(PCI_FIX_ADDR(token));
}
EXPORT_SYMBOL(iounmap);
--
2.34.1
Now that there are no users of ioremap_allowed() and iounmap_allowed(),
clean them up.
Signed-off-by: Baoquan He <[email protected]>
---
include/asm-generic/io.h | 26 --------------------------
mm/ioremap.c | 6 ------
2 files changed, 32 deletions(-)
diff --git a/include/asm-generic/io.h b/include/asm-generic/io.h
index 29ee791164ac..c13109d17dcb 100644
--- a/include/asm-generic/io.h
+++ b/include/asm-generic/io.h
@@ -1047,32 +1047,6 @@ static inline void iounmap(volatile void __iomem *addr)
#elif defined(CONFIG_GENERIC_IOREMAP)
#include <linux/pgtable.h>
-/*
- * Arch code can implement the following two hooks when using GENERIC_IOREMAP
- * ioremap_allowed() return a bool,
- * - true means continue to remap
- * - false means skip remap and return directly
- * iounmap_allowed() return a bool,
- * - true means continue to vunmap
- * - false means skip vunmap and return directly
- */
-#ifndef ioremap_allowed
-#define ioremap_allowed ioremap_allowed
-static inline bool ioremap_allowed(phys_addr_t phys_addr, size_t size,
- unsigned long prot)
-{
- return true;
-}
-#endif
-
-#ifndef iounmap_allowed
-#define iounmap_allowed iounmap_allowed
-static inline bool iounmap_allowed(void *addr)
-{
- return true;
-}
-#endif
-
void __iomem *generic_ioremap_prot(phys_addr_t phys_addr, size_t size,
pgprot_t prot);
diff --git a/mm/ioremap.c b/mm/ioremap.c
index 4a7749d85044..8cb337446bba 100644
--- a/mm/ioremap.c
+++ b/mm/ioremap.c
@@ -32,9 +32,6 @@ void __iomem *generic_ioremap_prot(phys_addr_t phys_addr, size_t size,
phys_addr -= offset;
size = PAGE_ALIGN(size + offset);
- if (!ioremap_allowed(phys_addr, size, pgprot_val(prot)))
- return NULL;
-
#ifdef IOREMAP_START
area = __get_vm_area_caller(size, VM_IOREMAP, IOREMAP_START,
IOREMAP_END, __builtin_return_address(0));
@@ -68,9 +65,6 @@ void generic_iounmap(volatile void __iomem *addr)
{
void *vaddr = (void *)((unsigned long)addr & PAGE_MASK);
- if (!iounmap_allowed(vaddr))
- return;
-
if (is_ioremap_addr(vaddr))
vunmap(vaddr);
}
--
2.34.1
Since the hook functions ioremap_allowed() and iounmap_allowed() will be
removed, add a wrapper function ioremap_prot() that carries arm64's specific
handling in addition to the generic_ioremap_prot() invocation.
Signed-off-by: Baoquan He <[email protected]>
---
arch/arm64/include/asm/io.h | 3 +--
arch/arm64/mm/ioremap.c | 10 ++++++----
2 files changed, 7 insertions(+), 6 deletions(-)
diff --git a/arch/arm64/include/asm/io.h b/arch/arm64/include/asm/io.h
index 877495a0fd0c..97dd4ff1253b 100644
--- a/arch/arm64/include/asm/io.h
+++ b/arch/arm64/include/asm/io.h
@@ -139,8 +139,7 @@ extern void __memset_io(volatile void __iomem *, int, size_t);
* I/O memory mapping functions.
*/
-bool ioremap_allowed(phys_addr_t phys_addr, size_t size, unsigned long prot);
-#define ioremap_allowed ioremap_allowed
+#define ioremap_prot ioremap_prot
#define _PAGE_IOREMAP PROT_DEVICE_nGnRE
diff --git a/arch/arm64/mm/ioremap.c b/arch/arm64/mm/ioremap.c
index c5af103d4ad4..269f2f63ab7d 100644
--- a/arch/arm64/mm/ioremap.c
+++ b/arch/arm64/mm/ioremap.c
@@ -3,20 +3,22 @@
#include <linux/mm.h>
#include <linux/io.h>
-bool ioremap_allowed(phys_addr_t phys_addr, size_t size, unsigned long prot)
+void __iomem *ioremap_prot(phys_addr_t phys_addr, size_t size,
+ unsigned long prot)
{
unsigned long last_addr = phys_addr + size - 1;
/* Don't allow outside PHYS_MASK */
if (last_addr & ~PHYS_MASK)
- return false;
+ return NULL;
/* Don't allow RAM to be mapped. */
if (WARN_ON(pfn_is_map_memory(__phys_to_pfn(phys_addr))))
- return false;
+ return NULL;
- return true;
+ return generic_ioremap_prot(phys_addr, size, __pgprot(prot));
}
+EXPORT_SYMBOL(ioremap_prot);
/*
* Must be called after early_fixmap_init
--
2.34.1
On 01/03/2023 03:42, Baoquan He wrote:
> diff --git a/drivers/net/ethernet/sfc/io.h b/drivers/net/ethernet/sfc/io.h
> index 30439cc83a89..07f99ad14bf3 100644
> --- a/drivers/net/ethernet/sfc/io.h
> +++ b/drivers/net/ethernet/sfc/io.h
> @@ -70,7 +70,7 @@
> */
> #ifdef CONFIG_X86_64
> /* PIO is a win only if write-combining is possible */
> -#ifdef ARCH_HAS_IOREMAP_WC
> +#ifdef ioremap_wc
> #define EFX_USE_PIO 1
> #endif
> #endif
So I don't know how valid what we're doing here is...
> diff --git a/include/asm-generic/iomap.h b/include/asm-generic/iomap.h
> index 08237ae8b840..196087a8126e 100644
> --- a/include/asm-generic/iomap.h
> +++ b/include/asm-generic/iomap.h
> @@ -93,15 +93,15 @@ extern void __iomem *ioport_map(unsigned long port, unsigned int nr);
> extern void ioport_unmap(void __iomem *);
> #endif
>
> -#ifndef ARCH_HAS_IOREMAP_WC
> +#ifndef ioremap_wc
> #define ioremap_wc ioremap
> #endif
... but it looks like this will break it, since in sfc/io.h
`#ifdef ioremap_wc` will always be true (if I'm correctly
understanding what we get via #include <linux/io.h>, which I'm
probably not because asm includes always confuse me).
I.e. we're not just interested in "can code that calls ioremap_wc
compile?", we care about whether we actually get WC, because
we're making an optimisation decision based on it.
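To spell the concern out (illustration only, not code from the series), and
assuming, as Edward suspects, that <linux/io.h> always ends up providing one
of the ioremap_wc fallbacks:

#include <linux/io.h>	/* pulls in <asm/io.h> plus the generic fallbacks */

/*
 * With the fallback "#define ioremap_wc ioremap" quoted above in place,
 * this test is satisfied even on architectures where ioremap_wc()
 * silently degrades to an uncached ioremap(), so it can no longer gate
 * a write-combining-only optimisation such as EFX_USE_PIO.
 */
#ifdef ioremap_wc
#define EFX_USE_PIO 1
#endif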
On Wed, Mar 01, 2023 at 04:38:10AM +0000, Edward Cree wrote:
> On 01/03/2023 03:42, Baoquan He wrote:
> > diff --git a/drivers/net/ethernet/sfc/io.h b/drivers/net/ethernet/sfc/io.h
> > index 30439cc83a89..07f99ad14bf3 100644
> > --- a/drivers/net/ethernet/sfc/io.h
> > +++ b/drivers/net/ethernet/sfc/io.h
> > @@ -70,7 +70,7 @@
> > */
> > #ifdef CONFIG_X86_64
> > /* PIO is a win only if write-combining is possible */
> > -#ifdef ARCH_HAS_IOREMAP_WC
> > +#ifdef ioremap_wc
> > #define EFX_USE_PIO 1
> > #endif
> > #endif
>
> So I don't know how valid what we're doing here is...
Well, x86 defines ARCH_HAS_IOREMAP_WC unconditionally, so it doesn't
affect you ... but you raise a good question about how a driver can
determine if it's actually getting WC memory.
On 03/01/23 at 05:11am, Matthew Wilcox wrote:
> On Wed, Mar 01, 2023 at 04:38:10AM +0000, Edward Cree wrote:
> > On 01/03/2023 03:42, Baoquan He wrote:
> > > diff --git a/drivers/net/ethernet/sfc/io.h b/drivers/net/ethernet/sfc/io.h
> > > index 30439cc83a89..07f99ad14bf3 100644
> > > --- a/drivers/net/ethernet/sfc/io.h
> > > +++ b/drivers/net/ethernet/sfc/io.h
> > > @@ -70,7 +70,7 @@
> > > */
> > > #ifdef CONFIG_X86_64
> > > /* PIO is a win only if write-combining is possible */
> > > -#ifdef ARCH_HAS_IOREMAP_WC
> > > +#ifdef ioremap_wc
> > > #define EFX_USE_PIO 1
> > > #endif
> > > #endif
> >
> > So I don't know how valid what we're doing here is...
>
> Well, x86 defines ARCH_HAS_IOREMAP_WC unconditionally, so it doesn't
> affect you ... but you raise a good question about how a driver can
> determine if it's actually getting WC memory.
Yeah, this change doesn't affect sfc. ARCH_HAS_IOREMAP_WC was only used to
let the ioremap_wc defined in <asm/io.h> override the default one in
<asm-generic/iomap.h>, and this patch keeps that behaviour.
Besides, I still have a question in mind, though it is unrelated to this
patch.
In commit 38d9029a652c ("parisc: Define ioremap_uc and ioremap_wc"), an
ioremap_wc definition was added to arch/parisc/include/asm/io.h without
adding an ARCH_HAS_IOREMAP_WC definition. Yet this does not cause a
redefinition of ioremap_wc, even though "#include <asm-generic/iomap.h>"
appears below it. I may just be confused by the interplay of these io.h
and iomap.h headers.
When I added ioremap_wt and ioremap_np to debug this, ioremap_np caused a
redefinition while ioremap_wt didn't. Does anyone know what I am missing?
diff --git a/arch/parisc/include/asm/io.h b/arch/parisc/include/asm/io.h
index c05e781be2f5..20d566eec3b3 100644
--- a/arch/parisc/include/asm/io.h
+++ b/arch/parisc/include/asm/io.h
@@ -127,6 +127,8 @@ static inline void gsc_writeq(unsigned long long val, unsigned long addr)
*/
void __iomem *ioremap(unsigned long offset, unsigned long size);
#define ioremap_wc ioremap
+#define ioremap_wt ioremap
+#define ioremap_np ioremap
#define ioremap_uc ioremap
#define pci_iounmap pci_iounmap