2019-12-23 11:01:05

by Mike Rapoport

Subject: [PATCH 0/2] fix recent nds32 build breakage

From: Mike Rapoport <[email protected]>

Hi,

The kbuild robot reported build breakage of nds32 architecture [1] that
happens with CONFIG_CPU_CACHE_ALIASING=n and CONFIG_HIGHMEM=y.

There are two issues: a macro that went missing during the page table
folding conversion, and a conflict between the cacheflush.h definitions in
arch/nds32 and asm-generic.

[1] https://lore.kernel.org/lkml/201912212139.yptX8CsV%[email protected]/

Mike Rapoport (2):
asm-generic/nds32: don't redefine cacheflush primitives
nds32: fix build failure caused by page table folding updates

arch/nds32/include/asm/cacheflush.h | 11 ++++++----
arch/nds32/include/asm/pgtable.h | 2 +-
include/asm-generic/cacheflush.h | 33 ++++++++++++++++++++++++++++-
3 files changed, 40 insertions(+), 6 deletions(-)

--
2.24.0


2019-12-23 11:01:08

by Mike Rapoport

Subject: [PATCH 1/2] asm-generic/nds32: don't redefine cacheflush primitives

From: Mike Rapoport <[email protected]>

The commit c296d4dc13ae ("asm-generic: fix a compilation warning") changed
asm-generic/cacheflush.h to use static inlines instead of macros and as a
result the nds32 build with CONFIG_CPU_CACHE_ALIASING=n fails:

CC init/main.o
In file included from arch/nds32/include/asm/cacheflush.h:43,
from include/linux/highmem.h:12,
from include/linux/pagemap.h:11,
from include/linux/blkdev.h:16,
from include/linux/blk-cgroup.h:23,
from include/linux/writeback.h:14,
from init/main.c:44:
include/asm-generic/cacheflush.h:50:20: error: static declaration of 'flush_icache_range' follows non-static declaration
static inline void flush_icache_range(unsigned long start, unsigned long end)
^~~~~~~~~~~~~~~~~~
In file included from include/linux/highmem.h:12,
from include/linux/pagemap.h:11,
from include/linux/blkdev.h:16,
from include/linux/blk-cgroup.h:23,
from include/linux/writeback.h:14,
from init/main.c:44:
arch/nds32/include/asm/cacheflush.h:11:6: note: previous declaration of 'flush_icache_range' was here
void flush_icache_range(unsigned long start, unsigned long end);
^~~~~~~~~~~~~~~~~~

Wrap the inline functions in asm-generic/cacheflush.h with ifdefs so that
architectures can override them, and add the required overrides to nds32.

Fixes: c296d4dc13ae ("asm-generic: fix a compilation warning")
Link: https://lore.kernel.org/lkml/201912212139.yptX8CsV%[email protected]/
Reported-by: kbuild test robot <[email protected]>
Signed-off-by: Mike Rapoport <[email protected]>
---
arch/nds32/include/asm/cacheflush.h | 11 ++++++----
include/asm-generic/cacheflush.h | 33 ++++++++++++++++++++++++++++-
2 files changed, 39 insertions(+), 5 deletions(-)

diff --git a/arch/nds32/include/asm/cacheflush.h b/arch/nds32/include/asm/cacheflush.h
index d9ac7e6408ef..caddded56e77 100644
--- a/arch/nds32/include/asm/cacheflush.h
+++ b/arch/nds32/include/asm/cacheflush.h
@@ -9,7 +9,11 @@
#define PG_dcache_dirty PG_arch_1

void flush_icache_range(unsigned long start, unsigned long end);
+#define flush_icache_range flush_icache_range
+
void flush_icache_page(struct vm_area_struct *vma, struct page *page);
+#define flush_icache_page flush_icache_page
+
#ifdef CONFIG_CPU_CACHE_ALIASING
void flush_cache_mm(struct mm_struct *mm);
void flush_cache_dup_mm(struct mm_struct *mm);
@@ -40,12 +44,11 @@ void invalidate_kernel_vmap_range(void *addr, int size);
#define flush_dcache_mmap_unlock(mapping) xa_unlock_irq(&(mapping)->i_pages)

#else
-#include <asm-generic/cacheflush.h>
-#undef flush_icache_range
-#undef flush_icache_page
-#undef flush_icache_user_range
void flush_icache_user_range(struct vm_area_struct *vma, struct page *page,
unsigned long addr, int len);
+#define flush_icache_user_range flush_icache_user_range
+
+#include <asm-generic/cacheflush.h>
#endif

#endif /* __NDS32_CACHEFLUSH_H__ */
diff --git a/include/asm-generic/cacheflush.h b/include/asm-generic/cacheflush.h
index a950a22c4890..cac7404b2bdd 100644
--- a/include/asm-generic/cacheflush.h
+++ b/include/asm-generic/cacheflush.h
@@ -11,71 +11,102 @@
* The cache doesn't need to be flushed when TLB entries change when
* the cache is mapped to physical memory, not virtual memory
*/
+#ifndef flush_cache_all
static inline void flush_cache_all(void)
{
}
+#endif

+#ifndef flush_cache_mm
static inline void flush_cache_mm(struct mm_struct *mm)
{
}
+#endif

+#ifndef flush_cache_dup_mm
static inline void flush_cache_dup_mm(struct mm_struct *mm)
{
}
+#endif

+#ifndef flush_cache_range
static inline void flush_cache_range(struct vm_area_struct *vma,
unsigned long start,
unsigned long end)
{
}
+#endif

+#ifndef flush_cache_page
static inline void flush_cache_page(struct vm_area_struct *vma,
unsigned long vmaddr,
unsigned long pfn)
{
}
+#endif

+#ifndef flush_dcache_page
static inline void flush_dcache_page(struct page *page)
{
}
+#endif

+#ifndef flush_dcache_mmap_lock
static inline void flush_dcache_mmap_lock(struct address_space *mapping)
{
}
+#endif

+#ifndef flush_dcache_mmap_unlock
static inline void flush_dcache_mmap_unlock(struct address_space *mapping)
{
}
+#endif

+#ifndef flush_icache_range
static inline void flush_icache_range(unsigned long start, unsigned long end)
{
}
+#endif

+#ifndef flush_icache_page
static inline void flush_icache_page(struct vm_area_struct *vma,
struct page *page)
{
}
+#endif

+#ifndef flush_icache_user_range
static inline void flush_icache_user_range(struct vm_area_struct *vma,
struct page *page,
unsigned long addr, int len)
{
}
+#endif

+#ifndef flush_cache_vmap
static inline void flush_cache_vmap(unsigned long start, unsigned long end)
{
}
+#endif

+#ifndef flush_cache_vunmap
static inline void flush_cache_vunmap(unsigned long start, unsigned long end)
{
}
+#endif

-#define copy_to_user_page(vma, page, vaddr, dst, src, len) \
+#ifndef copy_to_user_page
+#define copy_to_user_page(vma, page, vaddr, dst, src, len) \
do { \
memcpy(dst, src, len); \
flush_icache_user_range(vma, page, vaddr, len); \
} while (0)
+#endif
+
+#ifndef copy_from_user_page
#define copy_from_user_page(vma, page, vaddr, dst, src, len) \
memcpy(dst, src, len)
+#endif

#endif /* __ASM_CACHEFLUSH_H */
--
2.24.0

2019-12-23 11:01:36

by Mike Rapoport

Subject: [PATCH 2/2] nds32: fix build failure caused by page table folding updates

From: Mike Rapoport <[email protected]>

The commit 7c2763c42326 ("nds32: use pgtable-nopmd instead of
4level-fixup") missed the pmd_off_k() macro, which caused the following
build error:

CC arch/nds32/mm/highmem.o
In file included from arch/nds32/include/asm/page.h:57,
from include/linux/mm_types_task.h:16,
from include/linux/mm_types.h:5,
from include/linux/mmzone.h:21,
from include/linux/gfp.h:6,
from include/linux/xarray.h:14,
from include/linux/radix-tree.h:18,
from include/linux/fs.h:15,
from include/linux/highmem.h:5,
from arch/nds32/mm/highmem.c:5:
arch/nds32/mm/highmem.c: In function 'kmap_atomic':
arch/nds32/include/asm/pgtable.h:360:44: error: passing argument 1 of 'pmd_offset' from incompatible pointer type [-Werror=incompatible-pointer-types]
#define pgd_offset(mm, address) ((mm)->pgd + pgd_index(address))
~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~
arch/nds32/include/asm/memory.h:33:29: note: in definition of macro '__phys_to_virt'
#define __phys_to_virt(x) ((x) - PHYS_OFFSET + PAGE_OFFSET)
^
arch/nds32/include/asm/pgtable.h:193:55: note: in expansion of macro '__va'
#define pmd_page_kernel(pmd) ((unsigned long) __va(pmd_val(pmd) & PAGE_MASK))
^~~~
include/asm-generic/pgtable-nop4d.h:41:24: note: in expansion of macro 'pgd_val'
#define p4d_val(x) (pgd_val((x).pgd))
^~~~~~~
include/asm-generic/pgtable-nopud.h:50:24: note: in expansion of macro 'p4d_val'
#define pud_val(x) (p4d_val((x).p4d))
^~~~~~~
include/asm-generic/pgtable-nopmd.h:49:24: note: in expansion of macro 'pud_val'
#define pmd_val(x) (pud_val((x).pud))
^~~~~~~
arch/nds32/include/asm/pgtable.h:193:60: note: in expansion of macro 'pmd_val'
#define pmd_page_kernel(pmd) ((unsigned long) __va(pmd_val(pmd) & PAGE_MASK))
^~~~~~~
arch/nds32/include/asm/pgtable.h:190:56: note: in expansion of macro 'pmd_page_kernel'
#define pte_offset_kernel(dir, address) ((pte_t *)pmd_page_kernel(*(dir)) + pte_index(address))
^~~~~~~~~~~~~~~
arch/nds32/mm/highmem.c:52:9: note: in expansion of macro 'pte_offset_kernel'
ptep = pte_offset_kernel(pmd_off_k(vaddr), vaddr);
^~~~~~~~~~~~~~~~~
arch/nds32/include/asm/pgtable.h:362:33: note: in expansion of macro 'pgd_offset'
#define pgd_offset_k(addr) pgd_offset(&init_mm, addr)
^~~~~~~~~~
arch/nds32/include/asm/pgtable.h:198:39: note: in expansion of macro 'pgd_offset_k'
#define pmd_off_k(address) pmd_offset(pgd_offset_k(address), address)
^~~~~~~~~~~~
arch/nds32/mm/highmem.c:52:27: note: in expansion of macro 'pmd_off_k'
ptep = pte_offset_kernel(pmd_off_k(vaddr), vaddr);
^~~~~~~~~
In file included from arch/nds32/include/asm/pgtable.h:7,
from include/linux/mm.h:99,
from include/linux/highmem.h:8,
from arch/nds32/mm/highmem.c:5:
include/asm-generic/pgtable-nopmd.h:44:42: note: expected 'pud_t *' {aka 'struct <anonymous> *'} but argument is of type 'pgd_t *' {aka 'long unsigned int *'}
static inline pmd_t * pmd_offset(pud_t * pud, unsigned long address)
~~~~~~~~^~~
In file included from arch/nds32/include/asm/page.h:57,
from include/linux/mm_types_task.h:16,
from include/linux/mm_types.h:5,
from include/linux/mmzone.h:21,
from include/linux/gfp.h:6,
from include/linux/xarray.h:14,
from include/linux/radix-tree.h:18,
from include/linux/fs.h:15,
from include/linux/highmem.h:5,
from arch/nds32/mm/highmem.c:5:

Updating the pmd_off_k() macro to traverse the folded p4d and pud levels
fixes the issue.

Fixes: 7c2763c42326 ("nds32: use pgtable-nopmd instead of 4level-fixup")
Link: https://lore.kernel.org/lkml/201912212139.yptX8CsV%[email protected]/
Reported-by: kbuild test robot <[email protected]>
Signed-off-by: Mike Rapoport <[email protected]>
---
arch/nds32/include/asm/pgtable.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/nds32/include/asm/pgtable.h b/arch/nds32/include/asm/pgtable.h
index 0214e4150539..6abc58ac406d 100644
--- a/arch/nds32/include/asm/pgtable.h
+++ b/arch/nds32/include/asm/pgtable.h
@@ -195,7 +195,7 @@ extern void paging_init(void);
#define pte_unmap(pte) do { } while (0)
#define pte_unmap_nested(pte) do { } while (0)

-#define pmd_off_k(address) pmd_offset(pgd_offset_k(address), address)
+#define pmd_off_k(address) pmd_offset(pud_offset(p4d_offset(pgd_offset_k(address), (address)), (address)), (address))

#define set_pte_at(mm,addr,ptep,pteval) set_pte(ptep,pteval)
/*
--
2.24.0

2019-12-24 02:15:38

by Greentime Hu

Subject: Re: [PATCH 1/2] asm-generic/nds32: don't redefine cacheflush primitives

Mike Rapoport <[email protected]> wrote on Monday, December 23, 2019 at 7:00 PM:
>
> From: Mike Rapoport <[email protected]>
>
> The commit c296d4dc13ae ("asm-generic: fix a compilation warning") changed
> asm-generic/cacheflush.h to use static inlines instead of macros and as a
> result the nds32 build with CONFIG_CPU_CACHE_ALIASING=n fails:
>
> [...]

Thank you, Mike.
Reviewed-by: Greentime Hu <[email protected]>

2019-12-24 02:15:52

by Greentime Hu

Subject: Re: [PATCH 2/2] nds32: fix build failure caused by page table folding updates

Mike Rapoport <[email protected]> wrote on Monday, December 23, 2019 at 7:00 PM:
>
> From: Mike Rapoport <[email protected]>
>
> The commit 7c2763c42326 ("nds32: use pgtable-nopmd instead of
> 4level-fixup") missed the pmd_off_k() macro which caused the following
> build error:
>
> [...]

Thank you, Mike.
Reviewed-by: Greentime Hu <[email protected]>

2019-12-27 11:08:33

by Mike Rapoport

Subject: Re: [PATCH 0/2] fix recent nds32 build breakage

Arnd,

Can you please take these via asm-generic tree?

On Mon, Dec 23, 2019 at 01:00:02PM +0200, Mike Rapoport wrote:
> The kbuild robot reported build breakage of nds32 architecture [1] that
> happens with CONFIG_CPU_CACHE_ALIASING=n and CONFIG_HIGHMEM=y.
>
> [...]

--
Sincerely yours,
Mike.

2019-12-30 10:22:30

by Arnd Bergmann

Subject: Re: [PATCH 0/2] fix recent nds32 build breakage

On Fri, Dec 27, 2019 at 12:07 PM Mike Rapoport <[email protected]> wrote:
> Can you please take these via asm-generic tree?

Merged into my asm-generic tree now, I'll send a pull request in a few days
after the build bots have had a chance to check for remaining problems.

Arnd