This change uses the original virt_to_page() (the one based on __pa())
so that the given virtual address is checked when CONFIG_DEBUG_VIRTUAL=y.
Recently, I worked on a bug where a driver passed a symbol (non-linear)
address to dma_map_single(), and the virt_to_page() called by
dma_map_single() no longer works for non-linear addresses after commit
9f2875912dac ("arm64: mm: restrict virt_to_page() to the linear mapping").
I tried to trap the bug by enabling CONFIG_DEBUG_VIRTUAL, but it
did not work because that commit removed __pa() from virt_to_page(),
while CONFIG_DEBUG_VIRTUAL performs its virtual address check in
__pa()/__virt_to_phys().
A simple solution is to use the original virt_to_page()
(the one with __pa()) if CONFIG_DEBUG_VIRTUAL=y.
Signed-off-by: Miles Chen <[email protected]>
Cc: Ard Biesheuvel <[email protected]>
---
arch/arm64/include/asm/memory.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 290195168bb3..2cb8248fa2c8 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -302,7 +302,7 @@ static inline void *phys_to_virt(phys_addr_t x)
*/
#define ARCH_PFN_OFFSET ((unsigned long)PHYS_PFN_OFFSET)
-#ifndef CONFIG_SPARSEMEM_VMEMMAP
+#if !defined(CONFIG_SPARSEMEM_VMEMMAP) || defined(CONFIG_DEBUG_VIRTUAL)
#define virt_to_page(kaddr) pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
#define _virt_addr_valid(kaddr) pfn_valid(__pa(kaddr) >> PAGE_SHIFT)
#else
--
2.18.0
On Tue, Apr 16, 2019 at 01:36:36AM +0800, Miles Chen wrote:
> This change uses the original virt_to_page() (the one with __pa()) to
> check the given virtual address if CONFIG_DEBUG_VIRTUAL=y.
>
> [...]
>
> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> index 290195168bb3..2cb8248fa2c8 100644
> --- a/arch/arm64/include/asm/memory.h
> +++ b/arch/arm64/include/asm/memory.h
> @@ -302,7 +302,7 @@ static inline void *phys_to_virt(phys_addr_t x)
> */
> #define ARCH_PFN_OFFSET ((unsigned long)PHYS_PFN_OFFSET)
>
> -#ifndef CONFIG_SPARSEMEM_VMEMMAP
> +#if !defined(CONFIG_SPARSEMEM_VMEMMAP) || defined(CONFIG_DEBUG_VIRTUAL)
> #define virt_to_page(kaddr) pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
> #define _virt_addr_valid(kaddr) pfn_valid(__pa(kaddr) >> PAGE_SHIFT)
> #else
IIUC, this shouldn't change the behaviour of virt_addr_valid(). The
patch looks fine to me.
Acked-by: Catalin Marinas <[email protected]>