2022-06-06 07:58:28

by Kefeng Wang

Subject: [PATCH v4 0/6] arm64: Cleanup ioremap() and support ioremap_prot()

1. Enhance the generic ioremap to make it more useful.
2. Let arm64 use GENERIC_IOREMAP to clean up code.
3. Support HAVE_IOREMAP_PROT on arm64, which enables generic_access_phys();
this is useful when debugging device memory (e.g. with gdb) via the
access_process_vm() infrastructure, as sketched below.
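
A minimal sketch (not part of this series) of what that enables: a driver
that mmaps device memory can set generic_access_phys() as its
vm_operations_struct .access hook, and access_process_vm() (and therefore
gdb) can then read and write the mapping once the architecture selects
HAVE_IOREMAP_PROT. The driver and function names below are illustrative only.

#include <linux/fs.h>
#include <linux/mm.h>

/* generic_access_phys() is only built with CONFIG_HAVE_IOREMAP_PROT. */
static const struct vm_operations_struct example_dev_vm_ops = {
        .access = generic_access_phys,
};

static int example_dev_mmap(struct file *file, struct vm_area_struct *vma)
{
        /* io_remap_pfn_range() of the device region would go here. */
        vma->vm_ops = &example_dev_vm_ops;
        return 0;
}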

v4:
- update based on v5.19-rc1
- add generic arch_ioremap/arch_iounmap defines, per Andrew Morton
- simply return an int from arch_ioremap and rename arch_ioremap/arch_iounmap
to the better names ioremap_allowed/iounmap_allowed, per Arnd Bergmann
- add __force annotation to silence the sparse warning in vunmap()

Note,
1) after the renaming, the ARM change (patch 1) is no longer a necessary
dependency for the following changes, but as a cleanup it is still posted
here; hopefully it can go in via the arm64 tree together with the rest of
the series if there is no objection.
2) the changes in this version only affect patches 4/5, so the ack/review
tags are retained.

v3:
- add a cleanup patch to kill ARM's unused arch_iounmap() (the name would be
used by GENERIC_IOREMAP) and add comments for the arch_ioremap/arch_iounmap
hooks, per Anshuman Khandual
- collect ack/review

v2:
- s/addr/phys_addr in ioremap_prot, suggested by Andrew Morton
- rename arch_ioremap/iounmap_check to arch_ioremap/iounmap
and change the return value, per Christoph Hellwig and Andrew Morton
- use 'ifndef arch_ioremap' instead of a weak function, per Arnd Bergmann
- collect ack/review

Kefeng Wang (6):
ARM: mm: kill unused runtime hook arch_iounmap()
mm: ioremap: Use more sensible name in ioremap_prot()
mm: ioremap: Setup phys_addr of struct vm_struct
mm: ioremap: Add ioremap/iounmap_allowed()
arm64: mm: Convert to GENERIC_IOREMAP
arm64: Add HAVE_IOREMAP_PROT support

.../features/vm/ioremap_prot/arch-support.txt | 2 +-
arch/arm/include/asm/io.h | 4 +-
arch/arm/mm/ioremap.c | 9 +-
arch/arm/mm/nommu.c | 9 +-
arch/arm64/Kconfig | 2 +
arch/arm64/include/asm/io.h | 20 +++--
arch/arm64/include/asm/pgtable.h | 10 +++
arch/arm64/kernel/acpi.c | 2 +-
arch/arm64/mm/hugetlbpage.c | 10 ---
arch/arm64/mm/ioremap.c | 84 +++----------------
include/asm-generic/io.h | 27 +++++-
mm/ioremap.c | 26 ++++--
12 files changed, 84 insertions(+), 121 deletions(-)

--
2.35.3


2022-06-06 07:58:56

by Kefeng Wang

Subject: [PATCH v4 4/6] mm: ioremap: Add ioremap/iounmap_allowed()

Add special hooks for architectures to verify the addr, size or prot
passed to ioremap() or iounmap(), which makes the generic ioremap
more useful.

ioremap_allowed() returns an int,
- 0 means continue to remap
- an error code means skip the remap and return directly
iounmap_allowed() returns an int,
- 0 means continue to vunmap
- an error code means skip the vunmap and return directly

Acked-by: Andrew Morton <[email protected]>
Signed-off-by: Kefeng Wang <[email protected]>
---
include/asm-generic/io.h | 25 +++++++++++++++++++++++++
mm/ioremap.c | 13 ++++++++++---
2 files changed, 35 insertions(+), 3 deletions(-)

diff --git a/include/asm-generic/io.h b/include/asm-generic/io.h
index e6ffa2519f08..9429387a3e65 100644
--- a/include/asm-generic/io.h
+++ b/include/asm-generic/io.h
@@ -964,6 +964,31 @@ static inline void iounmap(volatile void __iomem *addr)
#elif defined(CONFIG_GENERIC_IOREMAP)
#include <linux/pgtable.h>

+/*
+ * Arch code can implement the following two special hooks when using GENERIC_IOREMAP
+ * ioremap_allowed() return an int,
+ * - 0 means continue to remap
+ * - error code means skip remap and return directly
+ * iounmap_allowed() return an int,
+ * - 0 means continue to vunmap
+ * - error code means skip vunmap and return directly
+ */
+#ifndef ioremap_allowed
+#define ioremap_allowed ioremap_allowed
+static inline int ioremap_allowed(phys_addr_t phys_addr, size_t size, unsigned long prot)
+{
+ return 0;
+}
+#endif
+
+#ifndef iounmap_allowed
+#define iounmap_allowed iounmap_allowed
+static inline int iounmap_allowed(void __iomem *addr)
+{
+ return 0;
+}
+#endif
+
void __iomem *ioremap_prot(phys_addr_t phys_addr, size_t size, unsigned long prot);
void iounmap(volatile void __iomem *addr);

diff --git a/mm/ioremap.c b/mm/ioremap.c
index 7cb9996b0c12..196c93c0beb8 100644
--- a/mm/ioremap.c
+++ b/mm/ioremap.c
@@ -27,8 +27,10 @@ void __iomem *ioremap_prot(phys_addr_t phys_addr, size_t size, unsigned long pro
phys_addr -= offset;
size = PAGE_ALIGN(size + offset);

- area = get_vm_area_caller(size, VM_IOREMAP,
- __builtin_return_address(0));
+ if (ioremap_allowed(phys_addr, size, prot))
+ return NULL;
+
+ area = get_vm_area_caller(size, VM_IOREMAP, __builtin_return_address(0));
if (!area)
return NULL;
vaddr = (unsigned long)area->addr;
@@ -45,6 +47,11 @@ EXPORT_SYMBOL(ioremap_prot);

void iounmap(volatile void __iomem *addr)
{
- vunmap((void *)((unsigned long)addr & PAGE_MASK));
+ void __iomem *vaddr = (void __iomem *)((unsigned long)addr & PAGE_MASK);
+
+ if (iounmap_allowed(vaddr))
+ return;
+
+ vunmap((void __force *)vaddr);
}
EXPORT_SYMBOL(iounmap);
--
2.35.3

2022-06-06 08:00:14

by Kefeng Wang

Subject: [PATCH v4 6/6] arm64: Add HAVE_IOREMAP_PROT support

With the ioremap_prot() definition from the generic ioremap, and with
pte_pgprot() moved from hugetlbpage.c into pgtable.h, arm64 can now
select HAVE_IOREMAP_PROT, which enables the generic_access_phys()
code; this is useful for debugging, e.g. with gdb.

Acked-by: Catalin Marinas <[email protected]>
Reviewed-by: Anshuman Khandual <[email protected]>
Signed-off-by: Kefeng Wang <[email protected]>
---
.../features/vm/ioremap_prot/arch-support.txt | 2 +-
arch/arm64/Kconfig | 1 +
arch/arm64/include/asm/pgtable.h | 10 ++++++++++
arch/arm64/mm/hugetlbpage.c | 10 ----------
4 files changed, 12 insertions(+), 11 deletions(-)

diff --git a/Documentation/features/vm/ioremap_prot/arch-support.txt b/Documentation/features/vm/ioremap_prot/arch-support.txt
index 205a90e82050..a710bd99c32e 100644
--- a/Documentation/features/vm/ioremap_prot/arch-support.txt
+++ b/Documentation/features/vm/ioremap_prot/arch-support.txt
@@ -9,7 +9,7 @@
| alpha: | TODO |
| arc: | ok |
| arm: | TODO |
- | arm64: | TODO |
+ | arm64: | ok |
| csky: | TODO |
| hexagon: | TODO |
| ia64: | TODO |
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index ac160aa26126..1267f325d32b 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -189,6 +189,7 @@ config ARM64
select HAVE_FUNCTION_GRAPH_TRACER
select HAVE_GCC_PLUGINS
select HAVE_HW_BREAKPOINT if PERF_EVENTS
+ select HAVE_IOREMAP_PROT
select HAVE_IRQ_TIME_ACCOUNTING
select HAVE_KVM
select HAVE_NMI
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 0b6632f18364..5a2eb6232e69 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -427,6 +427,16 @@ static inline pte_t pte_swp_clear_exclusive(pte_t pte)
return clear_pte_bit(pte, __pgprot(PTE_SWP_EXCLUSIVE));
}

+/*
+ * Select all bits except the pfn
+ */
+static inline pgprot_t pte_pgprot(pte_t pte)
+{
+ unsigned long pfn = pte_pfn(pte);
+
+ return __pgprot(pte_val(pfn_pte(pfn, __pgprot(0))) ^ pte_val(pte));
+}
+
#ifdef CONFIG_NUMA_BALANCING
/*
* See the comment in include/linux/pgtable.h
diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
index e2a5ec9fdc0d..8eab05367549 100644
--- a/arch/arm64/mm/hugetlbpage.c
+++ b/arch/arm64/mm/hugetlbpage.c
@@ -100,16 +100,6 @@ int pud_huge(pud_t pud)
#endif
}

-/*
- * Select all bits except the pfn
- */
-static inline pgprot_t pte_pgprot(pte_t pte)
-{
- unsigned long pfn = pte_pfn(pte);
-
- return __pgprot(pte_val(pfn_pte(pfn, __pgprot(0))) ^ pte_val(pte));
-}
-
static int find_num_contig(struct mm_struct *mm, unsigned long addr,
pte_t *ptep, size_t *pgsize)
{
--
2.35.3

2022-06-06 08:00:29

by Kefeng Wang

Subject: [PATCH v4 5/6] arm64: mm: Convert to GENERIC_IOREMAP

Add hooks for arm64's special handling of ioremap() and iounmap(),
convert ioremap_wc/np/cache to use ioremap_prot() from
GENERIC_IOREMAP, update the copyright and kill the unused
inclusions.

Reviewed-by: Anshuman Khandual <[email protected]>
Acked-by: Catalin Marinas <[email protected]>
Signed-off-by: Kefeng Wang <[email protected]>
---
arch/arm64/Kconfig | 1 +
arch/arm64/include/asm/io.h | 20 ++++++---
arch/arm64/kernel/acpi.c | 2 +-
arch/arm64/mm/ioremap.c | 84 +++++--------------------------------
4 files changed, 26 insertions(+), 81 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 1652a9800ebe..ac160aa26126 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -126,6 +126,7 @@ config ARM64
select GENERIC_CPU_VULNERABILITIES
select GENERIC_EARLY_IOREMAP
select GENERIC_IDLE_POLL_SETUP
+ select GENERIC_IOREMAP
select GENERIC_IRQ_IPI
select GENERIC_IRQ_PROBE
select GENERIC_IRQ_SHOW
diff --git a/arch/arm64/include/asm/io.h b/arch/arm64/include/asm/io.h
index 3995652daf81..02aa5a5e9190 100644
--- a/arch/arm64/include/asm/io.h
+++ b/arch/arm64/include/asm/io.h
@@ -163,13 +163,21 @@ extern void __memset_io(volatile void __iomem *, int, size_t);
/*
* I/O memory mapping functions.
*/
-extern void __iomem *__ioremap(phys_addr_t phys_addr, size_t size, pgprot_t prot);
-extern void iounmap(volatile void __iomem *addr);
-extern void __iomem *ioremap_cache(phys_addr_t phys_addr, size_t size);

-#define ioremap(addr, size) __ioremap((addr), (size), __pgprot(PROT_DEVICE_nGnRE))
-#define ioremap_wc(addr, size) __ioremap((addr), (size), __pgprot(PROT_NORMAL_NC))
-#define ioremap_np(addr, size) __ioremap((addr), (size), __pgprot(PROT_DEVICE_nGnRnE))
+int ioremap_allowed(phys_addr_t phys_addr, size_t size, unsigned long prot);
+#define ioremap_allowed ioremap_allowed
+
+int iounmap_allowed(void __iomem *addr);
+#define iounmap_allowed iounmap_allowed
+
+#define _PAGE_IOREMAP PROT_DEVICE_nGnRE
+
+#define ioremap_wc(addr, size) ioremap_prot((addr), (size), PROT_NORMAL_NC)
+#define ioremap_np(addr, size) ioremap_prot((addr), (size), PROT_DEVICE_nGnRnE)
+#define ioremap_cache(addr, size) ({ \
+ pfn_is_map_memory(__phys_to_pfn(addr)) ? \
+ (void __iomem *)__phys_to_virt(addr) : ioremap_prot(addr, size, PROT_NORMAL); \
+})

/*
* io{read,write}{16,32,64}be() macros
diff --git a/arch/arm64/kernel/acpi.c b/arch/arm64/kernel/acpi.c
index e4dea8db6924..a5a256e3f9fe 100644
--- a/arch/arm64/kernel/acpi.c
+++ b/arch/arm64/kernel/acpi.c
@@ -351,7 +351,7 @@ void __iomem *acpi_os_ioremap(acpi_physical_address phys, acpi_size size)
prot = __acpi_get_writethrough_mem_attribute();
}
}
- return __ioremap(phys, size, prot);
+ return ioremap_prot(phys, size, pgprot_val(prot));
}

/*
diff --git a/arch/arm64/mm/ioremap.c b/arch/arm64/mm/ioremap.c
index b21f91cd830d..4a3f526c6057 100644
--- a/arch/arm64/mm/ioremap.c
+++ b/arch/arm64/mm/ioremap.c
@@ -1,96 +1,32 @@
// SPDX-License-Identifier: GPL-2.0-only
-/*
- * Based on arch/arm/mm/ioremap.c
- *
- * (C) Copyright 1995 1996 Linus Torvalds
- * Hacked for ARM by Phil Blundell <[email protected]>
- * Hacked to allow all architectures to build, and various cleanups
- * by Russell King
- * Copyright (C) 2012 ARM Ltd.
- */

-#include <linux/export.h>
#include <linux/mm.h>
#include <linux/vmalloc.h>
#include <linux/io.h>

-#include <asm/fixmap.h>
-#include <asm/tlbflush.h>
-
-static void __iomem *__ioremap_caller(phys_addr_t phys_addr, size_t size,
- pgprot_t prot, void *caller)
+int ioremap_allowed(phys_addr_t phys_addr, size_t size, unsigned long prot)
{
- unsigned long last_addr;
- unsigned long offset = phys_addr & ~PAGE_MASK;
- int err;
- unsigned long addr;
- struct vm_struct *area;
-
- /*
- * Page align the mapping address and size, taking account of any
- * offset.
- */
- phys_addr &= PAGE_MASK;
- size = PAGE_ALIGN(size + offset);
+ unsigned long last_addr = phys_addr + size - 1;

- /*
- * Don't allow wraparound, zero size or outside PHYS_MASK.
- */
- last_addr = phys_addr + size - 1;
- if (!size || last_addr < phys_addr || (last_addr & ~PHYS_MASK))
- return NULL;
+ /* Don't allow outside PHYS_MASK */
+ if (last_addr & ~PHYS_MASK)
+ return -EINVAL;

- /*
- * Don't allow RAM to be mapped.
- */
+ /* Don't allow RAM to be mapped. */
if (WARN_ON(pfn_is_map_memory(__phys_to_pfn(phys_addr))))
- return NULL;
+ return -EINVAL;

- area = get_vm_area_caller(size, VM_IOREMAP, caller);
- if (!area)
- return NULL;
- addr = (unsigned long)area->addr;
- area->phys_addr = phys_addr;
-
- err = ioremap_page_range(addr, addr + size, phys_addr, prot);
- if (err) {
- vunmap((void *)addr);
- return NULL;
- }
-
- return (void __iomem *)(offset + addr);
-}
-
-void __iomem *__ioremap(phys_addr_t phys_addr, size_t size, pgprot_t prot)
-{
- return __ioremap_caller(phys_addr, size, prot,
- __builtin_return_address(0));
+ return 0;
}
-EXPORT_SYMBOL(__ioremap);

-void iounmap(volatile void __iomem *io_addr)
+int iounmap_allowed(void __iomem *addr)
{
- unsigned long addr = (unsigned long)io_addr & PAGE_MASK;
-
/*
* We could get an address outside vmalloc range in case
* of ioremap_cache() reusing a RAM mapping.
*/
- if (is_vmalloc_addr((void *)addr))
- vunmap((void *)addr);
-}
-EXPORT_SYMBOL(iounmap);
-
-void __iomem *ioremap_cache(phys_addr_t phys_addr, size_t size)
-{
- /* For normal memory we already have a cacheable mapping. */
- if (pfn_is_map_memory(__phys_to_pfn(phys_addr)))
- return (void __iomem *)__phys_to_virt(phys_addr);
-
- return __ioremap_caller(phys_addr, size, __pgprot(PROT_NORMAL),
- __builtin_return_address(0));
+ return is_vmalloc_addr(addr) ? 0 : -EINVAL;
}
-EXPORT_SYMBOL(ioremap_cache);

/*
* Must be called after early_fixmap_init
--
2.35.3

2022-06-06 08:09:24

by Christoph Hellwig

Subject: Re: [PATCH v4 4/6] mm: ioremap: Add ioremap/iounmap_allowed()

Please fix the > 80 character lines.

2022-06-06 08:13:24

by Christoph Hellwig

Subject: Re: [PATCH v4 5/6] arm64: mm: Convert to GENERIC_IOREMAP

> +#define ioremap_wc(addr, size) ioremap_prot((addr), (size), PROT_NORMAL_NC)
> +#define ioremap_np(addr, size) ioremap_prot((addr), (size), PROT_DEVICE_nGnRnE)

Please avoid the overly long lines here. Independent of that, having
a non-trivial body on a separate line tends to generally be a lot more
readable anyway.

> +#define ioremap_cache(addr, size) ({ \
> + pfn_is_map_memory(__phys_to_pfn(addr)) ? \
> + (void __iomem *)__phys_to_virt(addr) : ioremap_prot(addr, size, PROT_NORMAL); \
> +})

And this really should be an inline function.

> +int iounmap_allowed(void __iomem *addr)
> {
> /*
> * We could get an address outside vmalloc range in case
> * of ioremap_cache() reusing a RAM mapping.
> */
> + return is_vmalloc_addr(addr) ? 0 : -EINVAL;

As the generic ioremap only returns vmalloc addresses, this check
really should go into common code.
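
If that check does move into common code, the generic iounmap() from patch 4
might end up looking roughly like this (a sketch only, not necessarily the
final form):

void iounmap(volatile void __iomem *addr)
{
        void __iomem *vaddr = (void __iomem *)((unsigned long)addr & PAGE_MASK);

        if (iounmap_allowed(vaddr))
                return;

        /*
         * The generic ioremap() only hands out vmalloc addresses, so the
         * vmalloc-range check can live here instead of in every
         * architecture's iounmap_allowed() hook.
         */
        if (is_vmalloc_addr((void __force *)vaddr))
                vunmap((void __force *)vaddr);
}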

2022-06-06 13:40:16

by Kefeng Wang

Subject: Re: [PATCH v4 5/6] arm64: mm: Convert to GENERIC_IOREMAP


On 2022/6/6 15:54, Christoph Hellwig wrote:
>> +#define ioremap_wc(addr, size) ioremap_prot((addr), (size), PROT_NORMAL_NC)
>> +#define ioremap_np(addr, size) ioremap_prot((addr), (size), PROT_DEVICE_nGnRnE)
> Please avoid the overly long lines here. Independent of that, having
> a non-trivial body on a separate line tends to generally be a lot more
> readable anyway.

Hi Christoph,

As commit bdc48fa11e46 ("checkpatch/coding-style: deprecate 80-column
warning") increased the limit to 100 columns, I don't get a warning when
using checkpatch, and it is not a hard limit; but if this is a mandatory
requirement, I will resend them with line breaks.

>> +#define ioremap_cache(addr, size) ({ \
>> + pfn_is_map_memory(__phys_to_pfn(addr)) ? \
>> + (void __iomem *)__phys_to_virt(addr) : ioremap_prot(addr, size, PROT_NORMAL); \
>> +})
> And this really should be an inline function.

We still need a define, see kernel/iomem.c,

#ifndef ioremap_cache
__weak void __iomem *ioremap_cache(resource_size_t offset, unsigned long size)
{
        return ioremap(offset, size);
}
#endif

>
>> +int iounmap_allowed(void __iomem *addr)
>> {
>> /*
>> * We could get an address outside vmalloc range in case
>> * of ioremap_cache() reusing a RAM mapping.
>> */
>> + return is_vmalloc_addr(addr) ? 0 : -EINVAL;
> As the generic ioremap only returns vmalloc addresses, this check
> really should go into common code.

Good point, will move into generic ioremap, thanks.


2022-06-06 15:28:00

by Baoquan He

Subject: Re: [PATCH v4 4/6] mm: ioremap: Add ioremap/iounmap_allowed()

On 06/06/22 at 03:48pm, Kefeng Wang wrote:
> Add special hooks for architectures to verify the addr, size or prot
> passed to ioremap() or iounmap(), which makes the generic ioremap
> more useful.
>
> ioremap_allowed() returns an int,
> - 0 means continue to remap
> - an error code means skip the remap and return directly
> iounmap_allowed() returns an int,
> - 0 means continue to vunmap
> - an error code means skip the vunmap and return directly

Aren't they bool-type functions, and wouldn't it be better to return a bool value?

>
> Acked-by: Andrew Morton <[email protected]>
> Signed-off-by: Kefeng Wang <[email protected]>
> ---
> include/asm-generic/io.h | 25 +++++++++++++++++++++++++
> mm/ioremap.c | 13 ++++++++++---
> 2 files changed, 35 insertions(+), 3 deletions(-)
>
> diff --git a/include/asm-generic/io.h b/include/asm-generic/io.h
> index e6ffa2519f08..9429387a3e65 100644
> --- a/include/asm-generic/io.h
> +++ b/include/asm-generic/io.h
> @@ -964,6 +964,31 @@ static inline void iounmap(volatile void __iomem *addr)
> #elif defined(CONFIG_GENERIC_IOREMAP)
> #include <linux/pgtable.h>
>
> +/*
> + * Arch code can implement the following two special hooks when using GENERIC_IOREMAP
> + * ioremap_allowed() return an int,
> + * - 0 means continue to remap
> + * - error code means skip remap and return directly
> + * iounmap_allowed() return an int,
> + * - 0 means continue to vunmap
> + * - error code means skip vunmap and return directly
> + */
> +#ifndef ioremap_allowed
> +#define ioremap_allowed ioremap_allowed
> +static inline int ioremap_allowed(phys_addr_t phys_addr, size_t size, unsigned long prot)
> +{
> + return 0;
> +}
> +#endif
> +
> +#ifndef iounmap_allowed
> +#define iounmap_allowed iounmap_allowed
> +static inline int iounmap_allowed(void __iomem *addr)
> +{
> + return 0;
> +}
> +#endif
> +
> void __iomem *ioremap_prot(phys_addr_t phys_addr, size_t size, unsigned long prot);
> void iounmap(volatile void __iomem *addr);
>
> diff --git a/mm/ioremap.c b/mm/ioremap.c
> index 7cb9996b0c12..196c93c0beb8 100644
> --- a/mm/ioremap.c
> +++ b/mm/ioremap.c
> @@ -27,8 +27,10 @@ void __iomem *ioremap_prot(phys_addr_t phys_addr, size_t size, unsigned long pro
> phys_addr -= offset;
> size = PAGE_ALIGN(size + offset);
>
> - area = get_vm_area_caller(size, VM_IOREMAP,
> - __builtin_return_address(0));
> + if (ioremap_allowed(phys_addr, size, prot))
> + return NULL;
> +
> + area = get_vm_area_caller(size, VM_IOREMAP, __builtin_return_address(0));
> if (!area)
> return NULL;
> vaddr = (unsigned long)area->addr;
> @@ -45,6 +47,11 @@ EXPORT_SYMBOL(ioremap_prot);
>
> void iounmap(volatile void __iomem *addr)
> {
> - vunmap((void *)((unsigned long)addr & PAGE_MASK));
> + void __iomem *vaddr = (void __iomem *)((unsigned long)addr & PAGE_MASK);
> +
> + if (iounmap_allowed(vaddr))
> + return;
> +
> + vunmap((void __force *)vaddr);
> }
> EXPORT_SYMBOL(iounmap);
> --
> 2.35.3
>
>

2022-06-06 16:47:12

by Christoph Hellwig

Subject: Re: [PATCH v4 5/6] arm64: mm: Convert to GENERIC_IOREMAP

On Mon, Jun 06, 2022 at 09:28:22PM +0800, Kefeng Wang wrote:
> As commit bdc48fa11e46 ("checkpatch/coding-style: deprecate 80-column
> warning") increased the limit to 100 columns, I don't get a warning when
> using checkpatch, and it is not a hard limit;

No, it did not increase the general limit, just for exception cases.
Block comments certainly don't fall under that, and while the mileage
on the defines may vary, it generally is more readable to have them
on the next line.

checkpatch is unfortunately totally broken these days :(

>
> but if this is a mandatory requirement, I will resend them with line breaks.
>
> > > +#define ioremap_cache(addr, size) ({ \
> > > + pfn_is_map_memory(__phys_to_pfn(addr)) ? \
> > > + (void __iomem *)__phys_to_virt(addr) : ioremap_prot(addr, size, PROT_NORMAL); \
> > > +})
> > And this really should be an inline function.
>
> We still need a define, see kernel/iomem.c,

You can just define it to the same name after the inline.
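
i.e. roughly the following (a sketch only, reusing the constants from the
macro it would replace):

static inline void __iomem *ioremap_cache(phys_addr_t addr, size_t size)
{
        /* For normal memory we already have a cacheable mapping. */
        if (pfn_is_map_memory(__phys_to_pfn(addr)))
                return (void __iomem *)__phys_to_virt(addr);

        return ioremap_prot(addr, size, PROT_NORMAL);
}
/* Keeps the #ifndef in kernel/iomem.c from picking the __weak fallback. */
#define ioremap_cache ioremap_cache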

2022-06-06 19:42:04

by kernel test robot

Subject: Re: [PATCH v4 5/6] arm64: mm: Convert to GENERIC_IOREMAP

Hi Kefeng,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on linus/master]
[also build test WARNING on v5.19-rc1 next-20220606]
[cannot apply to arm64/for-next/core arnd-asm-generic/master akpm-mm/mm-everything]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url: https://github.com/intel-lab-lkp/linux/commits/Kefeng-Wang/arm64-Cleanup-ioremap-and-support-ioremap_prot/20220606-154131
base: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git f2906aa863381afb0015a9eb7fefad885d4e5a56
config: arm64-randconfig-s032-20220605 (https://download.01.org/0day-ci/archive/20220607/[email protected]/config)
compiler: aarch64-linux-gcc (GCC) 11.3.0
reproduce:
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# apt-get install sparse
# sparse version: v0.6.4-18-g56afb504-dirty
# https://github.com/intel-lab-lkp/linux/commit/3fd9e961045f81f0ee188501228f8425c3d868a8
git remote add linux-review https://github.com/intel-lab-lkp/linux
git fetch --no-tags linux-review Kefeng-Wang/arm64-Cleanup-ioremap-and-support-ioremap_prot/20220606-154131
git checkout 3fd9e961045f81f0ee188501228f8425c3d868a8
# save the config file
mkdir build_dir && cp config build_dir/.config
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-11.3.0 make.cross C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__' O=build_dir ARCH=arm64 SHELL=/bin/bash arch/arm64/mm/

If you fix the issue, kindly add following tag where applicable
Reported-by: kernel test robot <[email protected]>


sparse warnings: (new ones prefixed by >>)
>> arch/arm64/mm/ioremap.c:28:32: sparse: sparse: incorrect type in argument 1 (different address spaces) @@ expected void const *x @@ got void [noderef] __iomem *addr @@
arch/arm64/mm/ioremap.c:28:32: sparse: expected void const *x
arch/arm64/mm/ioremap.c:28:32: sparse: got void [noderef] __iomem *addr

vim +28 arch/arm64/mm/ioremap.c

21
22 int iounmap_allowed(void __iomem *addr)
23 {
24 /*
25 * We could get an address outside vmalloc range in case
26 * of ioremap_cache() reusing a RAM mapping.
27 */
> 28 return is_vmalloc_addr(addr) ? 0 : -EINVAL;
29 }
30

--
0-DAY CI Kernel Test Service
https://01.org/lkp