2016-11-29 18:56:08

by Laura Abbott

Subject: [PATCHv4 00/10] CONFIG_DEBUG_VIRTUAL for arm64


Hi,

This is v4 of the series to add CONFIG_DEBUG_VIRTUAL for arm64. This version
mostly expands the __pa_symbol conversion, with a few newly found call sites.
There's also some reworking to avoid calling __va too early: __va relies on
memstart_addr being set, so very early code in early_fixmap_init and early
KASAN initialization can't just call __va(__pa_symbol(...)) to get the linear
map alias. I found this while testing with DEBUG_VM.
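
To illustrate the constraint, here is a rough sketch using names from this
series:

    /* Too early: lm_alias() expands to __va(__pa_symbol(...)), and
     * __va() needs memstart_addr, which is not yet set at this point. */
    pud_populate(&init_mm, pud, lm_alias(bm_pmd));

    /* Works: populate with the physical address directly; __pa_symbol()
     * only needs kimage_voffset, which is available very early. */
    __pud_populate(pud, __pa_symbol(bm_pmd), PMD_TYPE_TABLE);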

All of this could probably use more testing under more configurations;
KVM, Xen, kexec, and hibernate should all be tested.

Thanks,
Laura

Laura Abbott (10):
lib/Kconfig.debug: Add ARCH_HAS_DEBUG_VIRTUAL
mm/cma: Cleanup highmem check
arm64: Move some macros under #ifndef __ASSEMBLY__
arm64: Add cast for virt_to_pfn
arm64: Use __pa_symbol for kernel symbols
xen: Switch to using __pa_symbol
kexec: Switch to __pa_symbol
mm/kasan: Switch to using __pa_symbol and lm_alias
mm/usercopy: Switch to using lm_alias
arm64: Add support for CONFIG_DEBUG_VIRTUAL

arch/arm64/Kconfig | 1 +
arch/arm64/include/asm/kvm_mmu.h | 4 +-
arch/arm64/include/asm/memory.h | 67 ++++++++++++++++++++++---------
arch/arm64/include/asm/mmu_context.h | 6 +--
arch/arm64/include/asm/pgtable.h | 2 +-
arch/arm64/kernel/acpi_parking_protocol.c | 2 +-
arch/arm64/kernel/cpu-reset.h | 2 +-
arch/arm64/kernel/cpufeature.c | 2 +-
arch/arm64/kernel/hibernate.c | 13 +++---
arch/arm64/kernel/insn.c | 2 +-
arch/arm64/kernel/psci.c | 2 +-
arch/arm64/kernel/setup.c | 8 ++--
arch/arm64/kernel/smp_spin_table.c | 2 +-
arch/arm64/kernel/vdso.c | 4 +-
arch/arm64/mm/Makefile | 2 +
arch/arm64/mm/init.c | 11 ++---
arch/arm64/mm/kasan_init.c | 21 ++++++----
arch/arm64/mm/mmu.c | 32 +++++++++------
arch/arm64/mm/physaddr.c | 28 +++++++++++++
arch/x86/Kconfig | 1 +
drivers/firmware/psci.c | 2 +-
drivers/xen/xenbus/xenbus_dev_backend.c | 2 +-
drivers/xen/xenfs/xenstored.c | 2 +-
include/linux/mm.h | 4 ++
kernel/kexec_core.c | 2 +-
lib/Kconfig.debug | 5 ++-
mm/cma.c | 15 +++----
mm/kasan/kasan_init.c | 12 +++---
mm/usercopy.c | 4 +-
29 files changed, 167 insertions(+), 93 deletions(-)
create mode 100644 arch/arm64/mm/physaddr.c

--
2.7.4


2016-11-29 18:55:54

by Laura Abbott

Subject: [PATCHv4 01/10] lib/Kconfig.debug: Add ARCH_HAS_DEBUG_VIRTUAL


DEBUG_VIRTUAL currently depends on DEBUG_KERNEL && X86. arm64 is getting
the same support. Rather than add a list of architectures, switch this
to ARCH_HAS_DEBUG_VIRTUAL and let architectures select it as
appropriate.

Acked-by: Ingo Molnar <[email protected]>
Reviewed-by: Mark Rutland <[email protected]>
Tested-by: Mark Rutland <[email protected]>
Suggested-by: Mark Rutland <[email protected]>
Signed-off-by: Laura Abbott <[email protected]>
---
v4: No changes, just Acks
---
arch/x86/Kconfig | 1 +
lib/Kconfig.debug | 5 ++++-
2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index bada636..f533321 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -23,6 +23,7 @@ config X86
select ARCH_CLOCKSOURCE_DATA
select ARCH_DISCARD_MEMBLOCK
select ARCH_HAS_ACPI_TABLE_UPGRADE if ACPI
+ select ARCH_HAS_DEBUG_VIRTUAL
select ARCH_HAS_DEVMEM_IS_ALLOWED
select ARCH_HAS_ELF_RANDOMIZE
select ARCH_HAS_FAST_MULTIPLIER
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index a6c8db1..be65e04 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -603,9 +603,12 @@ config DEBUG_VM_PGFLAGS

If unsure, say N.

+config ARCH_HAS_DEBUG_VIRTUAL
+ bool
+
config DEBUG_VIRTUAL
bool "Debug VM translations"
- depends on DEBUG_KERNEL && X86
+ depends on DEBUG_KERNEL && ARCH_HAS_DEBUG_VIRTUAL
help
Enable some costly sanity checks in virtual to page code. This can
catch mistakes with virt_to_page() and friends.
--
2.7.4

2016-11-29 18:56:19

by Laura Abbott

Subject: [PATCHv4 04/10] arm64: Add cast for virt_to_pfn


virt_to_pfn lacks a cast at the top level. Don't rely on __virt_to_phys
and explicitly cast to unsigned long.
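
To make the need concrete, a sketch (buf here is a hypothetical caller's
pointer): DEBUG_VIRTUAL later turns __virt_to_phys into a real function
taking an unsigned long, at which point an uncast pointer argument would no
longer convert silently:

    extern phys_addr_t __virt_to_phys(unsigned long x); /* out-of-line form */

    void *buf;                            /* hypothetical pointer argument */
    unsigned long pfn = virt_to_pfn(buf); /* the macro's cast keeps this valid */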

Reviewed-by: Mark Rutland <[email protected]>
Tested-by: Mark Rutland <[email protected]>
Signed-off-by: Laura Abbott <[email protected]>
---
v4: No changes
---
arch/arm64/include/asm/memory.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index b4d2b32..d773e2c 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -204,7 +204,7 @@ static inline void *phys_to_virt(phys_addr_t x)
#define __pa(x) __virt_to_phys((unsigned long)(x))
#define __va(x) ((void *)__phys_to_virt((phys_addr_t)(x)))
#define pfn_to_kaddr(pfn) __va((pfn) << PAGE_SHIFT)
-#define virt_to_pfn(x) __phys_to_pfn(__virt_to_phys(x))
+#define virt_to_pfn(x) __phys_to_pfn(__virt_to_phys((unsigned long)(x)))

/*
* virt_to_page(k) convert a _valid_ virtual address to struct page *
--
2.7.4

2016-11-29 18:56:36

by Laura Abbott

Subject: [PATCHv4 10/10] arm64: Add support for CONFIG_DEBUG_VIRTUAL


x86 has an option CONFIG_DEBUG_VIRTUAL to do additional checks
on virt_to_phys calls. The goal is to catch users who are calling
virt_to_phys on non-linear addresses immediately. This includes callers
using virt_to_phys on image addresses instead of __pa_symbol. As features
such as CONFIG_VMAP_STACK get enabled for arm64, this becomes increasingly
important. Add checks to catch bad virt_to_phys usage.
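
As an illustrative example of what the checks catch, using the _text linker
symbol:

    phys_addr_t p1 = __pa_symbol(_text);            /* image address: correct */
    phys_addr_t p2 = virt_to_phys(_text);           /* triggers the new WARN */
    phys_addr_t p3 = virt_to_phys(lm_alias(_text)); /* linear alias: passes */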

Signed-off-by: Laura Abbott <[email protected]>
---
v4: Refactored virt_to_phys macros for better reuse per suggestions.
---
arch/arm64/Kconfig | 1 +
arch/arm64/include/asm/memory.h | 31 ++++++++++++++++++++++++++++---
arch/arm64/mm/Makefile | 2 ++
arch/arm64/mm/physaddr.c | 28 ++++++++++++++++++++++++++++
4 files changed, 59 insertions(+), 3 deletions(-)
create mode 100644 arch/arm64/mm/physaddr.c

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 969ef88..83b95bc 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -6,6 +6,7 @@ config ARM64
select ACPI_MCFG if ACPI
select ACPI_SPCR_TABLE if ACPI
select ARCH_CLOCKSOURCE_DATA
+ select ARCH_HAS_DEBUG_VIRTUAL
select ARCH_HAS_DEVMEM_IS_ALLOWED
select ARCH_HAS_ACPI_TABLE_UPGRADE if ACPI
select ARCH_HAS_ELF_RANDOMIZE
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index a219d3f..41ee96f 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -167,10 +167,33 @@ extern u64 kimage_voffset;
* private definitions which should NOT be used outside memory.h
* files. Use virt_to_phys/phys_to_virt/__pa/__va instead.
*/
-#define __virt_to_phys(x) ({ \
+
+
+/*
+ * The linear kernel range starts in the middle of the virtual address
+ * space. Testing the top bit for the start of the region is a
+ * sufficient check.
+ */
+#define __is_lm_address(addr) (!!((addr) & BIT(VA_BITS - 1)))
+
+#define __lm_to_phys(addr) (((addr) & ~PAGE_OFFSET) + PHYS_OFFSET)
+#define __kimg_to_phys(addr) ((addr) - kimage_voffset)
+
+#define __virt_to_phys_nodebug(x) ({ \
phys_addr_t __x = (phys_addr_t)(x); \
- __x & BIT(VA_BITS - 1) ? (__x & ~PAGE_OFFSET) + PHYS_OFFSET : \
- (__x - kimage_voffset); })
+ __is_lm_address(__x) ? __lm_to_phys(__x) : \
+ __kimg_to_phys(__x); \
+})
+
+#define __pa_symbol_nodebug(x) __kimg_to_phys((phys_addr_t)(x))
+
+#ifdef CONFIG_DEBUG_VIRTUAL
+extern phys_addr_t __virt_to_phys(unsigned long x);
+extern phys_addr_t __phys_addr_symbol(unsigned long x);
+#else
+#define __virt_to_phys(x) __virt_to_phys_nodebug(x)
+#define __phys_addr_symbol(x) __pa_symbol_nodebug(x)
+#endif

#define __phys_to_virt(x) ((unsigned long)((x) - PHYS_OFFSET) | PAGE_OFFSET)
#define __phys_to_kimg(x) ((unsigned long)((x) + kimage_voffset))
@@ -202,6 +225,8 @@ static inline void *phys_to_virt(phys_addr_t x)
* Drivers should NOT use these either.
*/
#define __pa(x) __virt_to_phys((unsigned long)(x))
+#define __pa_symbol(x) __phys_addr_symbol(RELOC_HIDE((unsigned long)(x), 0))
+#define __pa_nodebug(x) __virt_to_phys_nodebug((unsigned long)(x))
#define __va(x) ((void *)__phys_to_virt((phys_addr_t)(x)))
#define pfn_to_kaddr(pfn) __va((pfn) << PAGE_SHIFT)
#define virt_to_pfn(x) __phys_to_pfn(__virt_to_phys((unsigned long)(x)))
diff --git a/arch/arm64/mm/Makefile b/arch/arm64/mm/Makefile
index 54bb209..38d3811 100644
--- a/arch/arm64/mm/Makefile
+++ b/arch/arm64/mm/Makefile
@@ -5,6 +5,8 @@ obj-y := dma-mapping.o extable.o fault.o init.o \
obj-$(CONFIG_HUGETLB_PAGE) += hugetlbpage.o
obj-$(CONFIG_ARM64_PTDUMP) += dump.o
obj-$(CONFIG_NUMA) += numa.o
+obj-$(CONFIG_DEBUG_VIRTUAL) += physaddr.o
+KASAN_SANITIZE_physaddr.o += n

obj-$(CONFIG_KASAN) += kasan_init.o
KASAN_SANITIZE_kasan_init.o := n
diff --git a/arch/arm64/mm/physaddr.c b/arch/arm64/mm/physaddr.c
new file mode 100644
index 0000000..6684f43
--- /dev/null
+++ b/arch/arm64/mm/physaddr.c
@@ -0,0 +1,28 @@
+#include <linux/bug.h>
+#include <linux/export.h>
+#include <linux/types.h>
+#include <linux/mmdebug.h>
+#include <linux/mm.h>
+
+#include <asm/memory.h>
+
+phys_addr_t __virt_to_phys(unsigned long x)
+{
+ WARN(!__is_lm_address(x),
+ "virt_to_phys used for non-linear address :%pK\n", (void *)x);
+
+ return __virt_to_phys_nodebug(x);
+}
+EXPORT_SYMBOL(__virt_to_phys);
+
+phys_addr_t __phys_addr_symbol(unsigned long x)
+{
+ /*
+ * This is bounds checking against the kernel image only.
+ * __pa_symbol should only be used on kernel symbol addresses.
+ */
+ VIRTUAL_BUG_ON(x < (unsigned long) KERNEL_START ||
+ x > (unsigned long) KERNEL_END);
+ return __pa_symbol_nodebug(x);
+}
+EXPORT_SYMBOL(__phys_addr_symbol);
--
2.7.4

2016-11-29 18:56:28

by Laura Abbott

Subject: [PATCHv4 03/10] arm64: Move some macros under #ifndef __ASSEMBLY__


Several macros for various x_to_y exist outside the bounds of an
__ASSEMBLY__ guard. Move them in preparation for support for
CONFIG_DEBUG_VIRTUAL.
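
For context, a later patch in this series makes __virt_to_phys an
out-of-line C function under CONFIG_DEBUG_VIRTUAL, roughly:

    extern phys_addr_t __virt_to_phys(unsigned long x);

A prototype like this is meaningless to the assembler, so the definitions
need to sit under #ifndef __ASSEMBLY__.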

Reviewed-by: Mark Rutland <[email protected]>
Tested-by: Mark Rutland <[email protected]>
Signed-off-by: Laura Abbott <[email protected]>
---
v4: No changes
---
arch/arm64/include/asm/memory.h | 38 +++++++++++++++++++-------------------
1 file changed, 19 insertions(+), 19 deletions(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index b71086d..b4d2b32 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -102,25 +102,6 @@
#endif

/*
- * Physical vs virtual RAM address space conversion. These are
- * private definitions which should NOT be used outside memory.h
- * files. Use virt_to_phys/phys_to_virt/__pa/__va instead.
- */
-#define __virt_to_phys(x) ({ \
- phys_addr_t __x = (phys_addr_t)(x); \
- __x & BIT(VA_BITS - 1) ? (__x & ~PAGE_OFFSET) + PHYS_OFFSET : \
- (__x - kimage_voffset); })
-
-#define __phys_to_virt(x) ((unsigned long)((x) - PHYS_OFFSET) | PAGE_OFFSET)
-#define __phys_to_kimg(x) ((unsigned long)((x) + kimage_voffset))
-
-/*
- * Convert a page to/from a physical address
- */
-#define page_to_phys(page) (__pfn_to_phys(page_to_pfn(page)))
-#define phys_to_page(phys) (pfn_to_page(__phys_to_pfn(phys)))
-
-/*
* Memory types available.
*/
#define MT_DEVICE_nGnRnE 0
@@ -182,6 +163,25 @@ extern u64 kimage_voffset;
#define PHYS_PFN_OFFSET (PHYS_OFFSET >> PAGE_SHIFT)

/*
+ * Physical vs virtual RAM address space conversion. These are
+ * private definitions which should NOT be used outside memory.h
+ * files. Use virt_to_phys/phys_to_virt/__pa/__va instead.
+ */
+#define __virt_to_phys(x) ({ \
+ phys_addr_t __x = (phys_addr_t)(x); \
+ __x & BIT(VA_BITS - 1) ? (__x & ~PAGE_OFFSET) + PHYS_OFFSET : \
+ (__x - kimage_voffset); })
+
+#define __phys_to_virt(x) ((unsigned long)((x) - PHYS_OFFSET) | PAGE_OFFSET)
+#define __phys_to_kimg(x) ((unsigned long)((x) + kimage_voffset))
+
+/*
+ * Convert a page to/from a physical address
+ */
+#define page_to_phys(page) (__pfn_to_phys(page_to_pfn(page)))
+#define phys_to_page(phys) (pfn_to_page(__phys_to_pfn(phys)))
+
+/*
* Note: Drivers should NOT use these. They are the wrong
* translation for translating DMA addresses. Use the driver
* DMA support - see dma-mapping.h.
--
2.7.4

2016-11-29 18:57:11

by Laura Abbott

Subject: [PATCHv4 09/10] mm/usercopy: Switch to using lm_alias


The usercopy checking code currently calls __va(__pa(...)) to check for
aliases on symbols. Switch to using lm_alias instead.

Signed-off-by: Laura Abbott <[email protected]>
---
Found when reviewing the kernel. Tested.
---
mm/usercopy.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/usercopy.c b/mm/usercopy.c
index 3c8da0a..8345299 100644
--- a/mm/usercopy.c
+++ b/mm/usercopy.c
@@ -108,13 +108,13 @@ static inline const char *check_kernel_text_object(const void *ptr,
* __pa() is not just the reverse of __va(). This can be detected
* and checked:
*/
- textlow_linear = (unsigned long)__va(__pa(textlow));
+ textlow_linear = (unsigned long)lm_alias(textlow);
/* No different mapping: we're done. */
if (textlow_linear == textlow)
return NULL;

/* Check the secondary mapping... */
- texthigh_linear = (unsigned long)__va(__pa(texthigh));
+ texthigh_linear = (unsigned long)lm_alias(texthigh);
if (overlaps(ptr, n, textlow_linear, texthigh_linear))
return "<linear kernel text>";

--
2.7.4

2016-11-29 18:57:23

by Laura Abbott

Subject: [PATCHv4 08/10] mm/kasan: Switch to using __pa_symbol and lm_alias

__pa_symbol is the correct API to find the physical address of symbols.
Switch to it to allow for debugging APIs to work correctly. Other
functions such as p*d_populate may call __pa internally. Ensure that the
address passed is in the linear region by calling lm_alias.
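
For illustration, the pattern being avoided looks roughly like this:

    /* pud_populate() calls __pa() internally, so handing it a kernel image
     * symbol would trip DEBUG_VIRTUAL's linear map check: */
    pud_populate(&init_mm, pud, kasan_zero_pmd);           /* __pa() on an image address */
    pud_populate(&init_mm, pud, lm_alias(kasan_zero_pmd)); /* __pa() on the linear alias */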

Signed-off-by: Laura Abbott <[email protected]>
---
Pointed out during review/testing of v3.
---
mm/kasan/kasan_init.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/mm/kasan/kasan_init.c b/mm/kasan/kasan_init.c
index 3f9a41c..ff04721 100644
--- a/mm/kasan/kasan_init.c
+++ b/mm/kasan/kasan_init.c
@@ -49,7 +49,7 @@ static void __init zero_pte_populate(pmd_t *pmd, unsigned long addr,
pte_t *pte = pte_offset_kernel(pmd, addr);
pte_t zero_pte;

- zero_pte = pfn_pte(PFN_DOWN(__pa(kasan_zero_page)), PAGE_KERNEL);
+ zero_pte = pfn_pte(PFN_DOWN(__pa_symbol(kasan_zero_page)), PAGE_KERNEL);
zero_pte = pte_wrprotect(zero_pte);

while (addr + PAGE_SIZE <= end) {
@@ -69,7 +69,7 @@ static void __init zero_pmd_populate(pud_t *pud, unsigned long addr,
next = pmd_addr_end(addr, end);

if (IS_ALIGNED(addr, PMD_SIZE) && end - addr >= PMD_SIZE) {
- pmd_populate_kernel(&init_mm, pmd, kasan_zero_pte);
+ pmd_populate_kernel(&init_mm, pmd, lm_alias(kasan_zero_pte));
continue;
}

@@ -94,7 +94,7 @@ static void __init zero_pud_populate(pgd_t *pgd, unsigned long addr,

pud_populate(&init_mm, pud, kasan_zero_pmd);
pmd = pmd_offset(pud, addr);
- pmd_populate_kernel(&init_mm, pmd, kasan_zero_pte);
+ pmd_populate_kernel(&init_mm, pmd, lm_alias(kasan_zero_pte));
continue;
}

@@ -135,11 +135,11 @@ void __init kasan_populate_zero_shadow(const void *shadow_start,
* puds,pmds, so pgd_populate(), pud_populate()
* is noops.
*/
- pgd_populate(&init_mm, pgd, kasan_zero_pud);
+ pgd_populate(&init_mm, pgd, lm_alias(kasan_zero_pud));
pud = pud_offset(pgd, addr);
- pud_populate(&init_mm, pud, kasan_zero_pmd);
+ pud_populate(&init_mm, pud, lm_alias(kasan_zero_pmd));
pmd = pmd_offset(pud, addr);
- pmd_populate_kernel(&init_mm, pmd, kasan_zero_pte);
+ pmd_populate_kernel(&init_mm, pmd, lm_alias(kasan_zero_pte));
continue;
}

--
2.7.4

2016-11-29 18:57:42

by Laura Abbott

Subject: [PATCHv4 06/10] xen: Switch to using __pa_symbol

__pa_symbol is the correct macro to use on kernel
symbols. Switch to this from __pa.

Signed-off-by: Laura Abbott <[email protected]>
---
Found during a sweep of the kernel. Untested.
---
drivers/xen/xenbus/xenbus_dev_backend.c | 2 +-
drivers/xen/xenfs/xenstored.c | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/xen/xenbus/xenbus_dev_backend.c b/drivers/xen/xenbus/xenbus_dev_backend.c
index 4a41ac9..31ca2bf 100644
--- a/drivers/xen/xenbus/xenbus_dev_backend.c
+++ b/drivers/xen/xenbus/xenbus_dev_backend.c
@@ -99,7 +99,7 @@ static int xenbus_backend_mmap(struct file *file, struct vm_area_struct *vma)
return -EINVAL;

if (remap_pfn_range(vma, vma->vm_start,
- virt_to_pfn(xen_store_interface),
+ PHYS_PFN(__pa_symbol(xen_store_interface)),
size, vma->vm_page_prot))
return -EAGAIN;

diff --git a/drivers/xen/xenfs/xenstored.c b/drivers/xen/xenfs/xenstored.c
index fef20db..21009ea 100644
--- a/drivers/xen/xenfs/xenstored.c
+++ b/drivers/xen/xenfs/xenstored.c
@@ -38,7 +38,7 @@ static int xsd_kva_mmap(struct file *file, struct vm_area_struct *vma)
return -EINVAL;

if (remap_pfn_range(vma, vma->vm_start,
- virt_to_pfn(xen_store_interface),
+ PHYS_PFN(__pa_symbol(xen_store_interface)),
size, vma->vm_page_prot))
return -EAGAIN;

--
2.7.4

2016-11-29 18:57:54

by Laura Abbott

Subject: [PATCHv4 05/10] arm64: Use __pa_symbol for kernel symbols

__pa_symbol is technically the marco that should be used for kernel
symbols. Switch to this as a pre-requisite for DEBUG_VIRTUAL which
will do bounds checking. As part of this, introduce lm_alias, a
macro which wraps the __va(__pa(...)) idiom used a few places to
get the alias.
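
As a usage sketch (empty_zero_page is a kernel image symbol):

    void *zp = lm_alias(empty_zero_page); /* the symbol's linear map address */

This is equivalent to the old open-coded __va(__pa(empty_zero_page)), but
__pa_symbol() documents, and with DEBUG_VIRTUAL can check, that its input is
an image address.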

Signed-off-by: Laura Abbott <[email protected]>
---
v4: Stop calling __va early, conversion of a few more sites. I decided against
wrapping the __p*d_populate calls into new functions since the call sites
should be limited.
---
arch/arm64/include/asm/kvm_mmu.h | 4 ++--
arch/arm64/include/asm/memory.h | 2 ++
arch/arm64/include/asm/mmu_context.h | 6 +++---
arch/arm64/include/asm/pgtable.h | 2 +-
arch/arm64/kernel/acpi_parking_protocol.c | 2 +-
arch/arm64/kernel/cpu-reset.h | 2 +-
arch/arm64/kernel/cpufeature.c | 2 +-
arch/arm64/kernel/hibernate.c | 13 +++++--------
arch/arm64/kernel/insn.c | 2 +-
arch/arm64/kernel/psci.c | 2 +-
arch/arm64/kernel/setup.c | 8 ++++----
arch/arm64/kernel/smp_spin_table.c | 2 +-
arch/arm64/kernel/vdso.c | 4 ++--
arch/arm64/mm/init.c | 11 ++++++-----
arch/arm64/mm/kasan_init.c | 21 +++++++++++++-------
arch/arm64/mm/mmu.c | 32 +++++++++++++++++++------------
drivers/firmware/psci.c | 2 +-
include/linux/mm.h | 4 ++++
18 files changed, 70 insertions(+), 51 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 6f72fe8..55772c1 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -47,7 +47,7 @@
* If the page is in the bottom half, we have to use the top half. If
* the page is in the top half, we have to use the bottom half:
*
- * T = __virt_to_phys(__hyp_idmap_text_start)
+ * T = __pa_symbol(__hyp_idmap_text_start)
* if (T & BIT(VA_BITS - 1))
* HYP_VA_MIN = 0 //idmap in upper half
* else
@@ -271,7 +271,7 @@ static inline void __kvm_flush_dcache_pud(pud_t pud)
kvm_flush_dcache_to_poc(page_address(page), PUD_SIZE);
}

-#define kvm_virt_to_phys(x) __virt_to_phys((unsigned long)(x))
+#define kvm_virt_to_phys(x) __pa_symbol(x)

void kvm_set_way_flush(struct kvm_vcpu *vcpu);
void kvm_toggle_cache(struct kvm_vcpu *vcpu, bool was_enabled);
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index d773e2c..a219d3f 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -205,6 +205,8 @@ static inline void *phys_to_virt(phys_addr_t x)
#define __va(x) ((void *)__phys_to_virt((phys_addr_t)(x)))
#define pfn_to_kaddr(pfn) __va((pfn) << PAGE_SHIFT)
#define virt_to_pfn(x) __phys_to_pfn(__virt_to_phys((unsigned long)(x)))
+#define sym_to_pfn(x) __phys_to_pfn(__pa_symbol(x))
+#define lm_alias(x) __va(__pa_symbol(x))

/*
* virt_to_page(k) convert a _valid_ virtual address to struct page *
diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index a501853..ea0f969 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -44,7 +44,7 @@ static inline void contextidr_thread_switch(struct task_struct *next)
*/
static inline void cpu_set_reserved_ttbr0(void)
{
- unsigned long ttbr = virt_to_phys(empty_zero_page);
+ unsigned long ttbr = __pa_symbol(empty_zero_page);

write_sysreg(ttbr, ttbr0_el1);
isb();
@@ -113,7 +113,7 @@ static inline void cpu_install_idmap(void)
local_flush_tlb_all();
cpu_set_idmap_tcr_t0sz();

- cpu_switch_mm(idmap_pg_dir, &init_mm);
+ cpu_switch_mm(lm_alias(idmap_pg_dir), &init_mm);
}

/*
@@ -128,7 +128,7 @@ static inline void cpu_replace_ttbr1(pgd_t *pgd)

phys_addr_t pgd_phys = virt_to_phys(pgd);

- replace_phys = (void *)virt_to_phys(idmap_cpu_replace_ttbr1);
+ replace_phys = (void *)__pa_symbol(idmap_cpu_replace_ttbr1);

cpu_install_idmap();
replace_phys(pgd_phys);
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index ffbb9a5..090134c 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -52,7 +52,7 @@ extern void __pgd_error(const char *file, int line, unsigned long val);
* for zero-mapped memory areas etc..
*/
extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
-#define ZERO_PAGE(vaddr) pfn_to_page(PHYS_PFN(__pa(empty_zero_page)))
+#define ZERO_PAGE(vaddr) phys_to_page(__pa_symbol(empty_zero_page))

#define pte_ERROR(pte) __pte_error(__FILE__, __LINE__, pte_val(pte))

diff --git a/arch/arm64/kernel/acpi_parking_protocol.c b/arch/arm64/kernel/acpi_parking_protocol.c
index a32b401..df58310 100644
--- a/arch/arm64/kernel/acpi_parking_protocol.c
+++ b/arch/arm64/kernel/acpi_parking_protocol.c
@@ -109,7 +109,7 @@ static int acpi_parking_protocol_cpu_boot(unsigned int cpu)
* that read this address need to convert this address to the
* Boot-Loader's endianness before jumping.
*/
- writeq_relaxed(__pa(secondary_entry), &mailbox->entry_point);
+ writeq_relaxed(__pa_symbol(secondary_entry), &mailbox->entry_point);
writel_relaxed(cpu_entry->gic_cpu_id, &mailbox->cpu_id);

arch_send_wakeup_ipi_mask(cpumask_of(cpu));
diff --git a/arch/arm64/kernel/cpu-reset.h b/arch/arm64/kernel/cpu-reset.h
index d4e9ecb..6c2b1b4 100644
--- a/arch/arm64/kernel/cpu-reset.h
+++ b/arch/arm64/kernel/cpu-reset.h
@@ -24,7 +24,7 @@ static inline void __noreturn cpu_soft_restart(unsigned long el2_switch,

el2_switch = el2_switch && !is_kernel_in_hyp_mode() &&
is_hyp_mode_available();
- restart = (void *)virt_to_phys(__cpu_soft_restart);
+ restart = (void *)__pa_symbol(__cpu_soft_restart);

cpu_install_idmap();
restart(el2_switch, entry, arg0, arg1, arg2);
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index c02504e..6ccadf2 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -736,7 +736,7 @@ static bool runs_at_el2(const struct arm64_cpu_capabilities *entry, int __unused
static bool hyp_offset_low(const struct arm64_cpu_capabilities *entry,
int __unused)
{
- phys_addr_t idmap_addr = virt_to_phys(__hyp_idmap_text_start);
+ phys_addr_t idmap_addr = __pa_symbol(__hyp_idmap_text_start);

/*
* Activate the lower HYP offset only if:
diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
index d55a7b0..4f0c77d 100644
--- a/arch/arm64/kernel/hibernate.c
+++ b/arch/arm64/kernel/hibernate.c
@@ -50,9 +50,6 @@
*/
extern int in_suspend;

-/* Find a symbols alias in the linear map */
-#define LMADDR(x) phys_to_virt(virt_to_phys(x))
-
/* Do we need to reset el2? */
#define el2_reset_needed() (is_hyp_mode_available() && !is_kernel_in_hyp_mode())

@@ -102,8 +99,8 @@ static inline void arch_hdr_invariants(struct arch_hibernate_hdr_invariants *i)

int pfn_is_nosave(unsigned long pfn)
{
- unsigned long nosave_begin_pfn = virt_to_pfn(&__nosave_begin);
- unsigned long nosave_end_pfn = virt_to_pfn(&__nosave_end - 1);
+ unsigned long nosave_begin_pfn = sym_to_pfn(&__nosave_begin);
+ unsigned long nosave_end_pfn = sym_to_pfn(&__nosave_end - 1);

return (pfn >= nosave_begin_pfn) && (pfn <= nosave_end_pfn);
}
@@ -125,12 +122,12 @@ int arch_hibernation_header_save(void *addr, unsigned int max_size)
return -EOVERFLOW;

arch_hdr_invariants(&hdr->invariants);
- hdr->ttbr1_el1 = virt_to_phys(swapper_pg_dir);
+ hdr->ttbr1_el1 = __pa_symbol(swapper_pg_dir);
hdr->reenter_kernel = _cpu_resume;

/* We can't use __hyp_get_vectors() because kvm may still be loaded */
if (el2_reset_needed())
- hdr->__hyp_stub_vectors = virt_to_phys(__hyp_stub_vectors);
+ hdr->__hyp_stub_vectors = __pa_symbol(__hyp_stub_vectors);
else
hdr->__hyp_stub_vectors = 0;

@@ -484,7 +481,7 @@ int swsusp_arch_resume(void)
* Since we only copied the linear map, we need to find restore_pblist's
* linear map address.
*/
- lm_restore_pblist = LMADDR(restore_pblist);
+ lm_restore_pblist = lm_alias(restore_pblist);

/*
* We need a zero page that is zero before & after resume in order to
diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
index 6f2ac4f..f607b38 100644
--- a/arch/arm64/kernel/insn.c
+++ b/arch/arm64/kernel/insn.c
@@ -97,7 +97,7 @@ static void __kprobes *patch_map(void *addr, int fixmap)
if (module && IS_ENABLED(CONFIG_DEBUG_SET_MODULE_RONX))
page = vmalloc_to_page(addr);
else if (!module)
- page = pfn_to_page(PHYS_PFN(__pa(addr)));
+ page = phys_to_page(__pa_symbol(addr));
else
return addr;

diff --git a/arch/arm64/kernel/psci.c b/arch/arm64/kernel/psci.c
index 42816be..f0f2abb 100644
--- a/arch/arm64/kernel/psci.c
+++ b/arch/arm64/kernel/psci.c
@@ -45,7 +45,7 @@ static int __init cpu_psci_cpu_prepare(unsigned int cpu)

static int cpu_psci_cpu_boot(unsigned int cpu)
{
- int err = psci_ops.cpu_on(cpu_logical_map(cpu), __pa(secondary_entry));
+ int err = psci_ops.cpu_on(cpu_logical_map(cpu), __pa_symbol(secondary_entry));
if (err)
pr_err("failed to boot CPU%d (%d)\n", cpu, err);

diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
index f534f49..e2dbc02 100644
--- a/arch/arm64/kernel/setup.c
+++ b/arch/arm64/kernel/setup.c
@@ -199,10 +199,10 @@ static void __init request_standard_resources(void)
struct memblock_region *region;
struct resource *res;

- kernel_code.start = virt_to_phys(_text);
- kernel_code.end = virt_to_phys(__init_begin - 1);
- kernel_data.start = virt_to_phys(_sdata);
- kernel_data.end = virt_to_phys(_end - 1);
+ kernel_code.start = __pa_symbol(_text);
+ kernel_code.end = __pa_symbol(__init_begin - 1);
+ kernel_data.start = __pa_symbol(_sdata);
+ kernel_data.end = __pa_symbol(_end - 1);

for_each_memblock(memory, region) {
res = alloc_bootmem_low(sizeof(*res));
diff --git a/arch/arm64/kernel/smp_spin_table.c b/arch/arm64/kernel/smp_spin_table.c
index 9a00eee..25fccca 100644
--- a/arch/arm64/kernel/smp_spin_table.c
+++ b/arch/arm64/kernel/smp_spin_table.c
@@ -98,7 +98,7 @@ static int smp_spin_table_cpu_prepare(unsigned int cpu)
* boot-loader's endianess before jumping. This is mandated by
* the boot protocol.
*/
- writeq_relaxed(__pa(secondary_holding_pen), release_addr);
+ writeq_relaxed(__pa_symbol(secondary_holding_pen), release_addr);
__flush_dcache_area((__force void *)release_addr,
sizeof(*release_addr));

diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
index a2c2478..79cd86b 100644
--- a/arch/arm64/kernel/vdso.c
+++ b/arch/arm64/kernel/vdso.c
@@ -140,11 +140,11 @@ static int __init vdso_init(void)
return -ENOMEM;

/* Grab the vDSO data page. */
- vdso_pagelist[0] = pfn_to_page(PHYS_PFN(__pa(vdso_data)));
+ vdso_pagelist[0] = phys_to_page(__pa_symbol(vdso_data));

/* Grab the vDSO code pages. */
for (i = 0; i < vdso_pages; i++)
- vdso_pagelist[i + 1] = pfn_to_page(PHYS_PFN(__pa(&vdso_start)) + i);
+ vdso_pagelist[i + 1] = pfn_to_page(PHYS_PFN(__pa_symbol(&vdso_start)) + i);

vdso_spec[0].pages = &vdso_pagelist[0];
vdso_spec[1].pages = &vdso_pagelist[1];
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 212c4d1..95ef998 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -209,8 +209,8 @@ void __init arm64_memblock_init(void)
* linear mapping. Take care not to clip the kernel which may be
* high in memory.
*/
- memblock_remove(max_t(u64, memstart_addr + linear_region_size, __pa(_end)),
- ULLONG_MAX);
+ memblock_remove(max_t(u64, memstart_addr + linear_region_size,
+ __pa_symbol(_end)), ULLONG_MAX);
if (memstart_addr + linear_region_size < memblock_end_of_DRAM()) {
/* ensure that memstart_addr remains sufficiently aligned */
memstart_addr = round_up(memblock_end_of_DRAM() - linear_region_size,
@@ -225,7 +225,7 @@ void __init arm64_memblock_init(void)
*/
if (memory_limit != (phys_addr_t)ULLONG_MAX) {
memblock_mem_limit_remove_map(memory_limit);
- memblock_add(__pa(_text), (u64)(_end - _text));
+ memblock_add(__pa_symbol(_text), (u64)(_end - _text));
}

if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && initrd_start) {
@@ -278,7 +278,7 @@ void __init arm64_memblock_init(void)
* Register the kernel text, kernel data, initrd, and initial
* pagetables with memblock.
*/
- memblock_reserve(__pa(_text), _end - _text);
+ memblock_reserve(__pa_symbol(_text), _end - _text);
#ifdef CONFIG_BLK_DEV_INITRD
if (initrd_start) {
memblock_reserve(initrd_start, initrd_end - initrd_start);
@@ -483,7 +483,8 @@ void __init mem_init(void)

void free_initmem(void)
{
- free_reserved_area(__va(__pa(__init_begin)), __va(__pa(__init_end)),
+ free_reserved_area(lm_alias(__init_begin),
+ lm_alias(__init_end),
0, "unused kernel");
/*
* Unmap the __init region but leave the VM area in place. This
diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index 757009d..0fb8110 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -26,6 +26,13 @@

static pgd_t tmp_pg_dir[PTRS_PER_PGD] __initdata __aligned(PGD_SIZE);

+/*
+ * The p*d_populate functions call virt_to_phys implicitly so they can't be used
+ * directly on kernel symbols (bm_p*d). All the early functions are called too
+ * early to use lm_alias so __p*d_populate functions must be used to populate
+ * with the physical address from __pa_symbol.
+ */
+
static void __init kasan_early_pte_populate(pmd_t *pmd, unsigned long addr,
unsigned long end)
{
@@ -33,12 +40,12 @@ static void __init kasan_early_pte_populate(pmd_t *pmd, unsigned long addr,
unsigned long next;

if (pmd_none(*pmd))
- pmd_populate_kernel(&init_mm, pmd, kasan_zero_pte);
+ __pmd_populate(pmd, __pa_symbol(kasan_zero_pte), PMD_TYPE_TABLE);

pte = pte_offset_kimg(pmd, addr);
do {
next = addr + PAGE_SIZE;
- set_pte(pte, pfn_pte(virt_to_pfn(kasan_zero_page),
+ set_pte(pte, pfn_pte(sym_to_pfn(kasan_zero_page),
PAGE_KERNEL));
} while (pte++, addr = next, addr != end && pte_none(*pte));
}
@@ -51,7 +58,7 @@ static void __init kasan_early_pmd_populate(pud_t *pud,
unsigned long next;

if (pud_none(*pud))
- pud_populate(&init_mm, pud, kasan_zero_pmd);
+ __pud_populate(pud, __pa_symbol(kasan_zero_pmd), PMD_TYPE_TABLE);

pmd = pmd_offset_kimg(pud, addr);
do {
@@ -68,7 +75,7 @@ static void __init kasan_early_pud_populate(pgd_t *pgd,
unsigned long next;

if (pgd_none(*pgd))
- pgd_populate(&init_mm, pgd, kasan_zero_pud);
+ __pgd_populate(pgd, __pa_symbol(kasan_zero_pud), PUD_TYPE_TABLE);

pud = pud_offset_kimg(pgd, addr);
do {
@@ -148,7 +155,7 @@ void __init kasan_init(void)
*/
memcpy(tmp_pg_dir, swapper_pg_dir, sizeof(tmp_pg_dir));
dsb(ishst);
- cpu_replace_ttbr1(tmp_pg_dir);
+ cpu_replace_ttbr1(lm_alias(tmp_pg_dir));

clear_pgds(KASAN_SHADOW_START, KASAN_SHADOW_END);

@@ -199,10 +206,10 @@ void __init kasan_init(void)
*/
for (i = 0; i < PTRS_PER_PTE; i++)
set_pte(&kasan_zero_pte[i],
- pfn_pte(virt_to_pfn(kasan_zero_page), PAGE_KERNEL_RO));
+ pfn_pte(sym_to_pfn(kasan_zero_page), PAGE_KERNEL_RO));

memset(kasan_zero_page, 0, PAGE_SIZE);
- cpu_replace_ttbr1(swapper_pg_dir);
+ cpu_replace_ttbr1(lm_alias(swapper_pg_dir));

/* At this point kasan is fully initialized. Enable error messages */
init_task.kasan_depth = 0;
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 05615a3..7498ebd 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -319,8 +319,8 @@ static void create_mapping_late(phys_addr_t phys, unsigned long virt,

static void __init __map_memblock(pgd_t *pgd, phys_addr_t start, phys_addr_t end)
{
- unsigned long kernel_start = __pa(_text);
- unsigned long kernel_end = __pa(__init_begin);
+ unsigned long kernel_start = __pa_symbol(_text);
+ unsigned long kernel_end = __pa_symbol(__init_begin);

/*
* Take care not to create a writable alias for the
@@ -387,21 +387,21 @@ void mark_rodata_ro(void)
unsigned long section_size;

section_size = (unsigned long)_etext - (unsigned long)_text;
- create_mapping_late(__pa(_text), (unsigned long)_text,
+ create_mapping_late(__pa_symbol(_text), (unsigned long)_text,
section_size, PAGE_KERNEL_ROX);
/*
* mark .rodata as read only. Use __init_begin rather than __end_rodata
* to cover NOTES and EXCEPTION_TABLE.
*/
section_size = (unsigned long)__init_begin - (unsigned long)__start_rodata;
- create_mapping_late(__pa(__start_rodata), (unsigned long)__start_rodata,
+ create_mapping_late(__pa_symbol(__start_rodata), (unsigned long)__start_rodata,
section_size, PAGE_KERNEL_RO);
}

static void __init map_kernel_segment(pgd_t *pgd, void *va_start, void *va_end,
pgprot_t prot, struct vm_struct *vma)
{
- phys_addr_t pa_start = __pa(va_start);
+ phys_addr_t pa_start = __pa_symbol(va_start);
unsigned long size = va_end - va_start;

BUG_ON(!PAGE_ALIGNED(pa_start));
@@ -449,7 +449,7 @@ static void __init map_kernel(pgd_t *pgd)
*/
BUG_ON(!IS_ENABLED(CONFIG_ARM64_16K_PAGES));
set_pud(pud_set_fixmap_offset(pgd, FIXADDR_START),
- __pud(__pa(bm_pmd) | PUD_TYPE_TABLE));
+ __pud(__pa_symbol(bm_pmd) | PUD_TYPE_TABLE));
pud_clear_fixmap();
} else {
BUG();
@@ -480,7 +480,7 @@ void __init paging_init(void)
*/
cpu_replace_ttbr1(__va(pgd_phys));
memcpy(swapper_pg_dir, pgd, PAGE_SIZE);
- cpu_replace_ttbr1(swapper_pg_dir);
+ cpu_replace_ttbr1(lm_alias(swapper_pg_dir));

pgd_clear_fixmap();
memblock_free(pgd_phys, PAGE_SIZE);
@@ -489,7 +489,7 @@ void __init paging_init(void)
* We only reuse the PGD from the swapper_pg_dir, not the pud + pmd
* allocated with it.
*/
- memblock_free(__pa(swapper_pg_dir) + PAGE_SIZE,
+ memblock_free(__pa_symbol(swapper_pg_dir) + PAGE_SIZE,
SWAPPER_DIR_SIZE - PAGE_SIZE);
}

@@ -600,6 +600,12 @@ static inline pte_t * fixmap_pte(unsigned long addr)
return &bm_pte[pte_index(addr)];
}

+/*
+ * The p*d_populate functions call virt_to_phys implicitly so they can't be used
+ * directly on kernel symbols (bm_p*d). This function is called too early to use
+ * lm_alias so __p*d_populate functions must be used to populate with the
+ * physical address from __pa_symbol.
+ */
void __init early_fixmap_init(void)
{
pgd_t *pgd;
@@ -609,7 +615,7 @@ void __init early_fixmap_init(void)

pgd = pgd_offset_k(addr);
if (CONFIG_PGTABLE_LEVELS > 3 &&
- !(pgd_none(*pgd) || pgd_page_paddr(*pgd) == __pa(bm_pud))) {
+ !(pgd_none(*pgd) || pgd_page_paddr(*pgd) == __pa_symbol(bm_pud))) {
/*
* We only end up here if the kernel mapping and the fixmap
* share the top level pgd entry, which should only happen on
@@ -618,12 +624,14 @@ void __init early_fixmap_init(void)
BUG_ON(!IS_ENABLED(CONFIG_ARM64_16K_PAGES));
pud = pud_offset_kimg(pgd, addr);
} else {
- pgd_populate(&init_mm, pgd, bm_pud);
+ if (pgd_none(*pgd))
+ __pgd_populate(pgd, __pa_symbol(bm_pud), PUD_TYPE_TABLE);
pud = fixmap_pud(addr);
}
- pud_populate(&init_mm, pud, bm_pmd);
+ if (pud_none(*pud))
+ __pud_populate(pud, __pa_symbol(bm_pmd), PMD_TYPE_TABLE);
pmd = fixmap_pmd(addr);
- pmd_populate_kernel(&init_mm, pmd, bm_pte);
+ __pmd_populate(pmd, __pa_symbol(bm_pte), PMD_TYPE_TABLE);

/*
* The boot-ioremap range spans multiple pmds, for which
diff --git a/drivers/firmware/psci.c b/drivers/firmware/psci.c
index 8263429..9defbe2 100644
--- a/drivers/firmware/psci.c
+++ b/drivers/firmware/psci.c
@@ -383,7 +383,7 @@ static int psci_suspend_finisher(unsigned long index)
u32 *state = __this_cpu_read(psci_power_state);

return psci_ops.cpu_suspend(state[index - 1],
- virt_to_phys(cpu_resume));
+ __pa_symbol(cpu_resume));
}

int psci_cpu_suspend_enter(unsigned long index)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index a92c8d7..88556b8 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -76,6 +76,10 @@ extern int mmap_rnd_compat_bits __read_mostly;
#define page_to_virt(x) __va(PFN_PHYS(page_to_pfn(x)))
#endif

+#ifndef lm_alias
+#define lm_alias(x) __va(__pa_symbol(x))
+#endif
+
/*
* To prevent common memory management code establishing
* a zero page mapping on a read fault.
--
2.7.4

2016-11-29 18:58:04

by Laura Abbott

Subject: [PATCHv4 02/10] mm/cma: Cleanup highmem check



6b101e2a3ce4 ("mm/CMA: fix boot regression due to physical address of
high_memory") added checks to use __pa_nodebug on x86 since
CONFIG_DEBUG_VIRTUAL complains about high_memory not being linearly
mapped. arm64 is now getting support for CONFIG_DEBUG_VIRTUAL as well.
Rather than add an explosion of arches to the #ifdef, switch to an
alternate method to calculate the physical start of highmem using
the page before highmem starts. This avoids the need for the #ifdef and
extra __pa_nodebug calls.
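
A quick sanity check of the arithmetic, with a hypothetical 32-bit layout
(PAGE_OFFSET = 0xc0000000, 768MB of lowmem):

    /* high_memory = (void *)0xf0000000, the first VA past the direct map:
     *
     *   __pa(high_memory)         checks VA 0xf0000000: not direct-mapped
     *   __pa(high_memory - 1) + 1 = __pa(0xefffffff) + 1
     *                             = 0x2fffffff + 1 = 0x30000000
     *
     * i.e. the physical highmem boundary, without ever translating an
     * unmapped VA.
     */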

Reviewed-by: Mark Rutland <[email protected]>
Tested-by: Mark Rutland <[email protected]>
Signed-off-by: Laura Abbott <[email protected]>
---
v4: No changes
---
mm/cma.c | 15 +++++----------
1 file changed, 5 insertions(+), 10 deletions(-)

diff --git a/mm/cma.c b/mm/cma.c
index c960459..94b3460 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -235,18 +235,13 @@ int __init cma_declare_contiguous(phys_addr_t base,
phys_addr_t highmem_start;
int ret = 0;

-#ifdef CONFIG_X86
/*
- * high_memory isn't direct mapped memory so retrieving its physical
- * address isn't appropriate. But it would be useful to check the
- * physical address of the highmem boundary so it's justifiable to get
- * the physical address from it. On x86 there is a validation check for
- * this case, so the following workaround is needed to avoid it.
+ * We can't use __pa(high_memory) directly, since high_memory
+ * isn't a valid direct map VA, and DEBUG_VIRTUAL will (validly)
+ * complain. Find the boundary by adding one to the last valid
+ * address.
*/
- highmem_start = __pa_nodebug(high_memory);
-#else
- highmem_start = __pa(high_memory);
-#endif
+ highmem_start = __pa(high_memory - 1) + 1;
pr_debug("%s(size %pa, base %pa, limit %pa alignment %pa)\n",
__func__, &size, &base, &limit, &alignment);

--
2.7.4

2016-11-29 18:57:33

by Laura Abbott

Subject: [PATCHv4 07/10] kexec: Switch to __pa_symbol


__pa_symbol is the correct api to get the physical address of kernel
symbols. Switch to it to allow for better debug checking.

Signed-off-by: Laura Abbott <[email protected]>
---
Found during review of the kernel. Untested.
---
kernel/kexec_core.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
index 5616755..e1b625e 100644
--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -1397,7 +1397,7 @@ void __weak arch_crash_save_vmcoreinfo(void)

phys_addr_t __weak paddr_vmcoreinfo_note(void)
{
- return __pa((unsigned long)(char *)&vmcoreinfo_note);
+ return __pa_symbol((unsigned long)(char *)&vmcoreinfo_note);
}

static int __init crash_save_vmcoreinfo_init(void)
--
2.7.4

2016-11-29 19:39:55

by Kees Cook

Subject: Re: [PATCHv4 09/10] mm/usercopy: Switch to using lm_alias

On Tue, Nov 29, 2016 at 10:55 AM, Laura Abbott <[email protected]> wrote:
>
> The usercopy checking code currently calls __va(__pa(...)) to check for
> aliases on symbols. Switch to using lm_alias instead.
>
> Signed-off-by: Laura Abbott <[email protected]>

Acked-by: Kees Cook <[email protected]>

I should probably add a corresponding alias test to lkdtm...

-Kees

> ---
> Found when reviewing the kernel. Tested.
> ---
> mm/usercopy.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/mm/usercopy.c b/mm/usercopy.c
> index 3c8da0a..8345299 100644
> --- a/mm/usercopy.c
> +++ b/mm/usercopy.c
> @@ -108,13 +108,13 @@ static inline const char *check_kernel_text_object(const void *ptr,
> * __pa() is not just the reverse of __va(). This can be detected
> * and checked:
> */
> - textlow_linear = (unsigned long)__va(__pa(textlow));
> + textlow_linear = (unsigned long)lm_alias(textlow);
> /* No different mapping: we're done. */
> if (textlow_linear == textlow)
> return NULL;
>
> /* Check the secondary mapping... */
> - texthigh_linear = (unsigned long)__va(__pa(texthigh));
> + texthigh_linear = (unsigned long)lm_alias(texthigh);
> if (overlaps(ptr, n, textlow_linear, texthigh_linear))
> return "<linear kernel text>";
>
> --
> 2.7.4
>



--
Kees Cook
Nexus Security

2016-11-29 22:24:46

by Boris Ostrovsky

Subject: Re: [PATCHv4 06/10] xen: Switch to using __pa_symbol

On 11/29/2016 01:55 PM, Laura Abbott wrote:
> __pa_symbol is the correct macro to use on kernel
> symbols. Switch to this from __pa.
>
> Signed-off-by: Laura Abbott <[email protected]>
> ---
> Found during a sweep of the kernel. Untested.
> ---
> drivers/xen/xenbus/xenbus_dev_backend.c | 2 +-
> drivers/xen/xenfs/xenstored.c | 2 +-
> 2 files changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/xen/xenbus/xenbus_dev_backend.c b/drivers/xen/xenbus/xenbus_dev_backend.c
> index 4a41ac9..31ca2bf 100644
> --- a/drivers/xen/xenbus/xenbus_dev_backend.c
> +++ b/drivers/xen/xenbus/xenbus_dev_backend.c
> @@ -99,7 +99,7 @@ static int xenbus_backend_mmap(struct file *file, struct vm_area_struct *vma)
> return -EINVAL;
>
> if (remap_pfn_range(vma, vma->vm_start,
> - virt_to_pfn(xen_store_interface),
> + PHYS_PFN(__pa_symbol(xen_store_interface)),
> size, vma->vm_page_prot))
> return -EAGAIN;
>
> diff --git a/drivers/xen/xenfs/xenstored.c b/drivers/xen/xenfs/xenstored.c
> index fef20db..21009ea 100644
> --- a/drivers/xen/xenfs/xenstored.c
> +++ b/drivers/xen/xenfs/xenstored.c
> @@ -38,7 +38,7 @@ static int xsd_kva_mmap(struct file *file, struct vm_area_struct *vma)
> return -EINVAL;
>
> if (remap_pfn_range(vma, vma->vm_start,
> - virt_to_pfn(xen_store_interface),
> + PHYS_PFN(__pa_symbol(xen_store_interface)),
> size, vma->vm_page_prot))
> return -EAGAIN;
>


I suspect this won't work --- xen_store_interface doesn't point to a
kernel symbol.
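
For reference, it is a runtime-assigned pointer, roughly:

    struct xenstore_domain_interface *xen_store_interface; /* set at init to a mapped page */

so its value is wherever that page happens to live, not an address inside
the kernel image, and __pa_symbol()'s bounds check would (rightly) trip
on it.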

-boris

2016-11-29 22:43:11

by Laura Abbott

Subject: Re: [PATCHv4 06/10] xen: Switch to using __pa_symbol

On 11/29/2016 02:26 PM, Boris Ostrovsky wrote:
> On 11/29/2016 01:55 PM, Laura Abbott wrote:
>> __pa_symbol is the correct macro to use on kernel
>> symbols. Switch to this from __pa.
>>
>> Signed-off-by: Laura Abbott <[email protected]>
>> ---
>> Found during a sweep of the kernel. Untested.
>> ---
>> drivers/xen/xenbus/xenbus_dev_backend.c | 2 +-
>> drivers/xen/xenfs/xenstored.c | 2 +-
>> 2 files changed, 2 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/xen/xenbus/xenbus_dev_backend.c b/drivers/xen/xenbus/xenbus_dev_backend.c
>> index 4a41ac9..31ca2bf 100644
>> --- a/drivers/xen/xenbus/xenbus_dev_backend.c
>> +++ b/drivers/xen/xenbus/xenbus_dev_backend.c
>> @@ -99,7 +99,7 @@ static int xenbus_backend_mmap(struct file *file, struct vm_area_struct *vma)
>> return -EINVAL;
>>
>> if (remap_pfn_range(vma, vma->vm_start,
>> - virt_to_pfn(xen_store_interface),
>> + PHYS_PFN(__pa_symbol(xen_store_interface)),
>> size, vma->vm_page_prot))
>> return -EAGAIN;
>>
>> diff --git a/drivers/xen/xenfs/xenstored.c b/drivers/xen/xenfs/xenstored.c
>> index fef20db..21009ea 100644
>> --- a/drivers/xen/xenfs/xenstored.c
>> +++ b/drivers/xen/xenfs/xenstored.c
>> @@ -38,7 +38,7 @@ static int xsd_kva_mmap(struct file *file, struct vm_area_struct *vma)
>> return -EINVAL;
>>
>> if (remap_pfn_range(vma, vma->vm_start,
>> - virt_to_pfn(xen_store_interface),
>> + PHYS_PFN(__pa_symbol(xen_store_interface)),
>> size, vma->vm_page_prot))
>> return -EAGAIN;
>>
>
>
> I suspect this won't work --- xen_store_interface doesn't point to a
> kernel symbol.
>
> -boris
>

I reviewed this again and yes, you are right. I missed that this
was a pointer and not just a symbol, so I think this patch can
just be dropped.

Thanks,
Laura

2016-11-30 17:17:28

by Catalin Marinas

[permalink] [raw]
Subject: Re: [PATCHv4 05/10] arm64: Use __pa_symbol for kernel symbols

On Tue, Nov 29, 2016 at 10:55:24AM -0800, Laura Abbott wrote:
> --- a/arch/arm64/include/asm/memory.h
> +++ b/arch/arm64/include/asm/memory.h
> @@ -205,6 +205,8 @@ static inline void *phys_to_virt(phys_addr_t x)
> #define __va(x) ((void *)__phys_to_virt((phys_addr_t)(x)))
> #define pfn_to_kaddr(pfn) __va((pfn) << PAGE_SHIFT)
> #define virt_to_pfn(x) __phys_to_pfn(__virt_to_phys((unsigned long)(x)))
> +#define sym_to_pfn(x) __phys_to_pfn(__pa_symbol(x))
> +#define lm_alias(x) __va(__pa_symbol(x))
[...]
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -76,6 +76,10 @@ extern int mmap_rnd_compat_bits __read_mostly;
> #define page_to_virt(x) __va(PFN_PHYS(page_to_pfn(x)))
> #endif
>
> +#ifndef lm_alias
> +#define lm_alias(x) __va(__pa_symbol(x))
> +#endif

You can drop the arm64-specific lm_alias macro as it's the same as the
generic one you introduced in the same patch.

--
Catalin

2016-12-01 02:41:20

by Dave Young

Subject: Re: [PATCHv4 07/10] kexec: Switch to __pa_symbol

Hi, Laura
On 11/29/16 at 10:55am, Laura Abbott wrote:
>
> __pa_symbol is the correct api to get the physical address of kernel
> symbols. Switch to it to allow for better debug checking.
>

I assume __pa_symbol is faster than __pa, but it still needs some testing
on all arches which support kexec.

But it seems that long ago there was a commit, e3ebadd95cb, in whose log
I see the following:
"we should deprecate __pa_symbol(), and preferably __pa() too - and
just use "virt_to_phys()" instead, which is more readable and has
nicer semantics."

But maybe in modern code __pa_symbol is preferred; I may be missing background.
virt_to_phys still sounds more readable to me now, though.

> Signed-off-by: Laura Abbott <[email protected]>
> ---
> Found during review of the kernel. Untested.
> ---
> kernel/kexec_core.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
> index 5616755..e1b625e 100644
> --- a/kernel/kexec_core.c
> +++ b/kernel/kexec_core.c
> @@ -1397,7 +1397,7 @@ void __weak arch_crash_save_vmcoreinfo(void)
>
> phys_addr_t __weak paddr_vmcoreinfo_note(void)
> {
> - return __pa((unsigned long)(char *)&vmcoreinfo_note);
> + return __pa_symbol((unsigned long)(char *)&vmcoreinfo_note);
> }
>
> static int __init crash_save_vmcoreinfo_init(void)
> --
> 2.7.4
>
>
> _______________________________________________
> kexec mailing list
> [email protected]
> http://lists.infradead.org/mailman/listinfo/kexec

Thanks
Dave

2016-12-01 03:16:24

by Eric W. Biederman

Subject: Re: [PATCHv4 07/10] kexec: Switch to __pa_symbol

Dave Young <[email protected]> writes:

> Hi, Laura
> On 11/29/16 at 10:55am, Laura Abbott wrote:
>>
>> __pa_symbol is the correct api to get the physical address of kernel
>> symbols. Switch to it to allow for better debug checking.
>>
>
> I assume __pa_symbol is faster than __pa, but it still needs some testing
> on all arches which support kexec.
>
> But it seems that long ago there was a commit, e3ebadd95cb, in whose log
> I see the following:
> "we should deprecate __pa_symbol(), and preferably __pa() too - and
> just use "virt_to_phys()" instead, which is more readable and has
> nicer semantics."
>
> But maybe in modern code __pa_symbol is preferred; I may be missing background.
> virt_to_phys still sounds more readable to me now, though.

There has been a lot of history with the various definitions.
__pa_symbol used to be x86 specific.

Now what we have is that __pa_symbol is just __pa(RELOC_HIDE(x));

Now arguably that whole reloc hide thing should happen by architectures
having a non-inline version of __pa as was done in the commit you
mention. But at this point there appears to be nothing wrong with
changing a __pa to a __pa_symbol; it might make things a tad more
reliable depending on the implementation of __pa.
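
For reference, the generic fallback in include/linux/mm.h reads:

    #ifndef __pa_symbol
    #define __pa_symbol(x)  __pa(RELOC_HIDE((unsigned long)(x), 0))
    #endif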

Acked-by: "Eric W. Biederman" <[email protected]>


Eric

>> Signed-off-by: Laura Abbott <[email protected]>
>> ---
>> Found during review of the kernel. Untested.
>> ---
>> kernel/kexec_core.c | 2 +-
>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
>> index 5616755..e1b625e 100644
>> --- a/kernel/kexec_core.c
>> +++ b/kernel/kexec_core.c
>> @@ -1397,7 +1397,7 @@ void __weak arch_crash_save_vmcoreinfo(void)
>>
>> phys_addr_t __weak paddr_vmcoreinfo_note(void)
>> {
>> - return __pa((unsigned long)(char *)&vmcoreinfo_note);
>> + return __pa_symbol((unsigned long)(char *)&vmcoreinfo_note);
>> }
>>
>> static int __init crash_save_vmcoreinfo_init(void)
>> --
>> 2.7.4
>>
>>
>> _______________________________________________
>> kexec mailing list
>> [email protected]
>> http://lists.infradead.org/mailman/listinfo/kexec
>
> Thanks
> Dave

2016-12-01 04:27:19

by Dave Young

Subject: Re: [PATCHv4 07/10] kexec: Switch to __pa_symbol

On 11/30/16 at 09:13pm, Eric W. Biederman wrote:
> Dave Young <[email protected]> writes:
>
> > Hi, Laura
> > On 11/29/16 at 10:55am, Laura Abbott wrote:
> >>
> >> __pa_symbol is the correct api to get the physical address of kernel
> >> symbols. Switch to it to allow for better debug checking.
> >>
> >
> > I assume __pa_symbol is faster than __pa, but it still needs some testing
> > on all arches which support kexec.
> >
> > But it seems that long ago there was a commit, e3ebadd95cb, in whose log
> > I see the following:
> > "we should deprecate __pa_symbol(), and preferably __pa() too - and
> > just use "virt_to_phys()" instead, which is more readable and has
> > nicer semantics."
> >
> > But maybe in modern code __pa_symbol is preferred; I may be missing background.
> > virt_to_phys still sounds more readable to me now, though.
>
> There has been a lot of history with the various definitions.
> __pa_symbol used to be x86 specific.
>
> Now what we have is that __pa_symbol is just __pa(RELOC_HIDE(x));
>
> Now arguably that whole reloc hide thing should happen by architectures
> having a non-inline version of __pa as was done in the commit you
> mention. But at this point there appears to be nothing wrong with
> changing a __pa to a __pa_symbol; it might make things a tad more
> reliable depending on the implementation of __pa.

Then it is safe and reasonable, thanks for the clarification.

>
> Acked-by: "Eric W. Biederman" <[email protected]>
>
>
> Eric
>
> >> Signed-off-by: Laura Abbott <[email protected]>
> >> ---
> >> Found during review of the kernel. Untested.
> >> ---
> >> kernel/kexec_core.c | 2 +-
> >> 1 file changed, 1 insertion(+), 1 deletion(-)
> >>
> >> diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
> >> index 5616755..e1b625e 100644
> >> --- a/kernel/kexec_core.c
> >> +++ b/kernel/kexec_core.c
> >> @@ -1397,7 +1397,7 @@ void __weak arch_crash_save_vmcoreinfo(void)
> >>
> >> phys_addr_t __weak paddr_vmcoreinfo_note(void)
> >> {
> >> - return __pa((unsigned long)(char *)&vmcoreinfo_note);
> >> + return __pa_symbol((unsigned long)(char *)&vmcoreinfo_note);
> >> }
> >>
> >> static int __init crash_save_vmcoreinfo_init(void)
> >> --
> >> 2.7.4
> >>
> >>
> >> _______________________________________________
> >> kexec mailing list
> >> [email protected]
> >> http://lists.infradead.org/mailman/listinfo/kexec
> >
> > Thanks
> > Dave

2016-12-01 12:05:38

by James Morse

Subject: Re: [PATCHv4 05/10] arm64: Use __pa_symbol for kernel symbols

Hi Laura,

On 29/11/16 18:55, Laura Abbott wrote:
> __pa_symbol is technically the marco that should be used for kernel

macro

> symbols. Switch to this as a pre-requisite for DEBUG_VIRTUAL which
> will do bounds checking. As part of this, introduce lm_alias, a
> macro which wraps the __va(__pa(...)) idiom used a few places to
> get the alias.

> diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
> index d55a7b0..4f0c77d 100644
> --- a/arch/arm64/kernel/hibernate.c
> +++ b/arch/arm64/kernel/hibernate.c
> @@ -484,7 +481,7 @@ int swsusp_arch_resume(void)
> * Since we only copied the linear map, we need to find restore_pblist's
> * linear map address.
> */
> - lm_restore_pblist = LMADDR(restore_pblist);
> + lm_restore_pblist = lm_alias(restore_pblist);
>
> /*
> * We need a zero page that is zero before & after resume in order to

This change causes resume from hibernate to panic in:
> VIRTUAL_BUG_ON(x < (unsigned long) KERNEL_START ||
> x > (unsigned long) KERNEL_END);

It looks like kaslr's relocation code has already fixed restore_pblist, so your
debug virtual check catches this doing the wrong thing. My bug.

readelf -s vmlinux | grep ...
> 103495: ffff000008080000 0 NOTYPE GLOBAL DEFAULT 1 _text
> 92104: ffff000008e43860 8 OBJECT GLOBAL DEFAULT 24 restore_pblist
> 105442: ffff000008e85000 0 NOTYPE GLOBAL DEFAULT 24 _end

But restore_pblist == 0xffff800971b7f998 when passed to __phys_addr_symbol().

This fixes the problem:
----------------%<----------------
diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
index 4f0c77d2ff7a..8bed26a2d558 100644
--- a/arch/arm64/kernel/hibernate.c
+++ b/arch/arm64/kernel/hibernate.c
@@ -457,7 +457,6 @@ int swsusp_arch_resume(void)
void *zero_page;
size_t exit_size;
pgd_t *tmp_pg_dir;
- void *lm_restore_pblist;
phys_addr_t phys_hibernate_exit;
void __noreturn (*hibernate_exit)(phys_addr_t, phys_addr_t, void *,
void *, phys_addr_t, phys_addr_t);
@@ -478,12 +477,6 @@ int swsusp_arch_resume(void)
goto out;

/*
- * Since we only copied the linear map, we need to find restore_pblist's
- * linear map address.
- */
- lm_restore_pblist = lm_alias(restore_pblist);
-
- /*
* We need a zero page that is zero before & after resume in order to
* to break before make on the ttbr1 page tables.
*/
@@ -534,7 +527,7 @@ int swsusp_arch_resume(void)
}

hibernate_exit(virt_to_phys(tmp_pg_dir), resume_hdr.ttbr1_el1,
- resume_hdr.reenter_kernel, lm_restore_pblist,
+ resume_hdr.reenter_kernel, restore_pblist,
resume_hdr.__hyp_stub_vectors, virt_to_phys(zero_page));

out:
----------------%<----------------

I can post it as a separate fixes patch if you prefer.

I also tested kexec. FWIW:
Tested-by: James Morse <[email protected]>


Thanks,

James

[0] Trace
[ 4.191607] Freezing user space processes ... (elapsed 0.000 seconds) done.
[ 4.224251] random: fast init done
[ 4.243825] PM: Using 3 thread(s) for decompression.
[ 4.243825] PM: Loading and decompressing image data (90831 pages)...
[ 4.255257] hibernate: Hibernated on CPU 0 [mpidr:0x100]
[ 5.213469] PM: Image loading progress: 0%
[ 5.579886] PM: Image loading progress: 10%
[ 5.740234] ata2: SATA link down (SStatus 0 SControl 0)
[ 5.760435] PM: Image loading progress: 20%
[ 5.970647] PM: Image loading progress: 30%
[ 6.563108] PM: Image loading progress: 40%
[ 6.848389] PM: Image loading progress: 50%
[ 7.526272] PM: Image loading progress: 60%
[ 7.702727] PM: Image loading progress: 70%
[ 7.899754] PM: Image loading progress: 80%
[ 8.100703] PM: Image loading progress: 90%
[ 8.300978] PM: Image loading progress: 100%
[ 8.305441] PM: Image loading done.
[ 8.308975] PM: Read 363324 kbytes in 4.05 seconds (89.70 MB/s)
[ 8.344299] PM: quiesce of devices complete after 22.706 msecs
[ 8.350762] PM: late quiesce of devices complete after 0.596 msecs
[ 8.381334] PM: noirq quiesce of devices complete after 24.365 msecs
[ 8.387729] Disabling non-boot CPUs ...
[ 8.412500] CPU1: shutdown
[ 8.415211] psci: CPU1 killed.
[ 8.460465] CPU2: shutdown
[ 8.463175] psci: CPU2 killed.
[ 8.504447] CPU3: shutdown
[ 8.507156] psci: CPU3 killed.
[ 8.540375] CPU4: shutdown
[ 8.543084] psci: CPU4 killed.
[ 8.580333] CPU5: shutdown
[ 8.583043] psci: CPU5 killed.
[ 8.601206] ------------[ cut here ]------------
[ 8.601206] kernel BUG at ../arch/arm64/mm/physaddr.c:25!
[ 8.601206] Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
[ 8.601206] Modules linked in:
[ 8.601206] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.9.0-rc7-00010-g27c672
[ 8.601206] Hardware name: ARM Juno development board (r1) (DT)
[ 8.601206] task: ffff800976ca8000 task.stack: ffff800976c3c000
[ 8.601206] PC is at __phys_addr_symbol+0x30/0x34
[ 8.601206] LR is at swsusp_arch_resume+0x304/0x588
[ 8.601206] pc : [<ffff0000080992d8>] lr : [<ffff0000080938b4>] pstate: 20005
[ 8.601206] sp : ffff800976c3fca0
[ 8.601206] x29: ffff800976c3fca0 x28: ffff000008bee000
[ 8.601206] x27: 000000000e5ea000 x26: ffff000008e83000
[ 8.601206] x25: 0000000000000801 x24: 0000000040000000
[ 8.601206] x23: 0000000000000000 x22: 000000000e00d000
[ 8.601206] x21: ffff808000000000 x20: ffff800080000000
[ 8.601206] x19: 0000000000000000 x18: 4000000000000000
[ 8.601206] x17: 0000000000000000 x16: 0000000000000694
[ 8.601206] x15: ffff000008bee000 x14: 0000000000000008
[ 8.601206] x13: 0000000000000000 x12: 003d090000000000
[ 8.601206] x11: 0000000000000001 x10: fffffffff1a0f000
[ 8.601206] x9 : 0000000000000001 x8 : ffff800971a0aff8
[ 8.601206] x7 : 0000000000000001 x6 : 000000000000003f
[ 8.601206] x5 : 0000000000000040 x4 : 0000000000000000
[ 8.601206] x3 : ffff807fffffffff x2 : 0000000000000000
[ 8.601206] x1 : ffff000008e85000 x0 : ffff80097152b578
[ 8.601206]
[ 8.601206] Process swapper/0 (pid: 1, stack limit = 0xffff800976c3c020)
[ 8.601206] Stack: (0xffff800976c3fca0 to 0xffff800976c40000)

[ 8.601206] Call trace:
[ 8.601206] Exception stack(0xffff800976c3fad0 to 0xffff800976c3fc00)

[ 8.601206] [<ffff0000080992d8>] __phys_addr_symbol+0x30/0x34
[ 8.601206] [<ffff0000080fe340>] hibernation_restore+0xf8/0x130
[ 8.601206] [<ffff0000080fe3e4>] load_image_and_restore+0x6c/0x70
[ 8.601206] [<ffff0000080fe640>] software_resume+0x258/0x288
[ 8.601206] [<ffff0000080830b8>] do_one_initcall+0x38/0x128
[ 8.601206] [<ffff000008c60cf4>] kernel_init_freeable+0x1ac/0x250
[ 8.601206] [<ffff0000088acd10>] kernel_init+0x10/0x100
[ 8.601206] [<ffff000008082e80>] ret_from_fork+0x10/0x50
[ 8.601206] Code: b0005aa1 f9475c21 cb010000 d65f03c0 (d4210000)
[ 8.601206] ---[ end trace e15be9f4f989f0b5 ]---
[ 8.601206] Kernel panic - not syncing: Attempted to kill init! exitcode=0x0b
[ 8.601206]
[ 8.601206] Kernel Offset: disabled
[ 8.601206] Memory Limit: none
[ 8.601206] ---[ end Kernel panic - not syncing: Attempted to kill init! exib
[ 8.601206]

2016-12-01 14:09:39

by Andrey Ryabinin

[permalink] [raw]
Subject: Re: [PATCHv4 08/10] mm/kasan: Switch to using __pa_symbol and lm_alias

On 11/29/2016 09:55 PM, Laura Abbott wrote:
> __pa_symbol is the correct API to find the physical address of symbols.
> Switch to it to allow for debugging APIs to work correctly.

But __pa() is correct for symbols. I see how __pa_symbol() might be a little
faster than __pa(), but there is nothing wrong with using __pa() on symbols.

> Other
> functions such as p*d_populate may call __pa internally. Ensure that the
> address passed is in the linear region by calling lm_alias.

Why should it be a linear mapping address? __pa() translates kernel image addresses just fine.
This lm_alias() only obfuscates the source code. The generated code is probably worse too.


2016-12-01 19:10:45

by Laura Abbott

[permalink] [raw]
Subject: Re: [PATCHv4 08/10] mm/kasan: Switch to using __pa_symbol and lm_alias

On 12/01/2016 03:36 AM, Andrey Ryabinin wrote:
> On 11/29/2016 09:55 PM, Laura Abbott wrote:
>> __pa_symbol is the correct API to find the physical address of symbols.
>> Switch to it to allow for debugging APIs to work correctly.
>
> But __pa() is correct for symbols. I see how __pa_symbol() might be a little
> faster than __pa(), but there is nothing wrong with using __pa() on symbols.
>
>> Other
>> functions such as p*d_populate may call __pa internally. Ensure that the
>> address passed is in the linear region by calling lm_alias.
>
> Why should it be a linear mapping address? __pa() translates kernel image addresses just fine.
> This lm_alias() only obfuscates the source code. The generated code is probably worse too.
>
>

This is part of adding CONFIG_DEBUG_VIRTUAL for arm64. We want to
differentiate between __pa and __pa_symbol to enforce stronger
virtual checks and have __pa only be for linear map addresses.
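
As a minimal sketch of the intended split (the kmalloc() buffer is just an
illustrative linear map pointer, not code from the series):

  void *buf = kmalloc(PAGE_SIZE, GFP_KERNEL);   /* linear map address */
  phys_addr_t a = __pa(buf);                    /* OK: a linear map address */
  phys_addr_t b = __pa_symbol(_text);           /* kernel image symbol */
  phys_addr_t c = __pa(lm_alias(_text));        /* or go via the linear alias */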

Thanks,
Laura

2016-12-06 00:50:38

by Florian Fainelli

[permalink] [raw]
Subject: Re: [PATCHv4 05/10] arm64: Use __pa_symbol for kernel symbols

On 11/29/2016 10:55 AM, Laura Abbott wrote:
> __pa_symbol is technically the macro that should be used for kernel
> symbols. Switch to this as a pre-requisite for DEBUG_VIRTUAL which
> will do bounds checking. As part of this, introduce lm_alias, a
> macro which wraps the __va(__pa(...)) idiom used a few places to
> get the alias.
>
> Signed-off-by: Laura Abbott <[email protected]>
> ---
> v4: Stop calling __va early, conversion of a few more sites. I decided against
> wrapping the __p*d_populate calls into new functions since the call sites
> should be limited.
> ---


> - pud_populate(&init_mm, pud, bm_pmd);
> + if (pud_none(*pud))
> + __pud_populate(pud, __pa_symbol(bm_pmd), PMD_TYPE_TABLE);
> pmd = fixmap_pmd(addr);
> - pmd_populate_kernel(&init_mm, pmd, bm_pte);
> + __pmd_populate(pmd, __pa_symbol(bm_pte), PMD_TYPE_TABLE);

Is there a particular reason why pmd_populate_kernel() is not changed to
use __pa_symbol() instead of __pa()? The other user in the arm64
kernel is arch/arm64/kernel/hibernate.c, which seems to call this against
kernel symbols as well.
--
Florian

2016-12-06 11:47:15

by Catalin Marinas

[permalink] [raw]
Subject: Re: [PATCHv4 05/10] arm64: Use __pa_symbol for kernel symbols

On Mon, Dec 05, 2016 at 04:50:33PM -0800, Florian Fainelli wrote:
> On 11/29/2016 10:55 AM, Laura Abbott wrote:
> > __pa_symbol is technically the macro that should be used for kernel
> > symbols. Switch to this as a pre-requisite for DEBUG_VIRTUAL which
> > will do bounds checking. As part of this, introduce lm_alias, a
> > macro which wraps the __va(__pa(...)) idiom used a few places to
> > get the alias.
> >
> > Signed-off-by: Laura Abbott <[email protected]>
> > ---
> > v4: Stop calling __va early, conversion of a few more sites. I decided against
> > wrapping the __p*d_populate calls into new functions since the call sites
> > should be limited.
> > ---
>
>
> > - pud_populate(&init_mm, pud, bm_pmd);
> > + if (pud_none(*pud))
> > + __pud_populate(pud, __pa_symbol(bm_pmd), PMD_TYPE_TABLE);
> > pmd = fixmap_pmd(addr);
> > - pmd_populate_kernel(&init_mm, pmd, bm_pte);
> > + __pmd_populate(pmd, __pa_symbol(bm_pte), PMD_TYPE_TABLE);
>
> Is there a particular reason why pmd_populate_kernel() is not changed to
> use __pa_symbol() instead of __pa()? The other user in the arm64
> kernel is arch/arm64/kernel/hibernate.c, which seems to call this against
> kernel symbols as well.

create_safe_exec_page() may allocate a pte from the linear map and
passes such pointer to pmd_populate_kernel(). The copy_pte() function
does something similar. In addition, we have the generic
__pte_alloc_kernel() in mm/memory.c using linear addresses.

--
Catalin

2016-12-06 16:08:50

by Mark Rutland

[permalink] [raw]
Subject: Re: [PATCHv4 05/10] arm64: Use __pa_symbol for kernel symbols

On Thu, Dec 01, 2016 at 12:04:27PM +0000, James Morse wrote:
> On 29/11/16 18:55, Laura Abbott wrote:
> > diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
> > index d55a7b0..4f0c77d 100644
> > --- a/arch/arm64/kernel/hibernate.c
> > +++ b/arch/arm64/kernel/hibernate.c
> > @@ -484,7 +481,7 @@ int swsusp_arch_resume(void)
> > * Since we only copied the linear map, we need to find restore_pblist's
> > * linear map address.
> > */
> > - lm_restore_pblist = LMADDR(restore_pblist);
> > + lm_restore_pblist = lm_alias(restore_pblist);
> >
> > /*
> > * We need a zero page that is zero before & after resume in order to
>
> This change causes resume from hibernate to panic in:
> > VIRTUAL_BUG_ON(x < (unsigned long) KERNEL_START ||
> > x > (unsigned long) KERNEL_END);
>
> It looks like kaslr's relocation code has already fixed restore_pblist, so your
> debug virtual check catches this doing the wrong thing. My bug.
>
> readelf -s vmlinux | grep ...
> > 103495: ffff000008080000 0 NOTYPE GLOBAL DEFAULT 1 _text
> > 92104: ffff000008e43860 8 OBJECT GLOBAL DEFAULT 24 restore_pblist
> > 105442: ffff000008e85000 0 NOTYPE GLOBAL DEFAULT 24 _end
>
> But restore_pblist == 0xffff800971b7f998 when passed to __phys_addr_symbol().

I think KASLR's a red herring; it shouldn't change the distance between
the restore_pblist symbol and {_text,_end}.

Above, ffff000008e43860 is the location of the pointer in the kernel
image (i.e. it's &restore_pblist). 0xffff800971b7f998 is the pointer
that was assigned to restore_pblist. For KASLR, the low bits (at least
up to a page boundary) shouldn't change across relocation.

Assuming it's only ever assigned a dynamic allocation, which should fall
in the linear map, the LMADDR() dance doesn't appear to be necessary.

> This fixes the problem:
> ----------------%<----------------
> diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
> index 4f0c77d2ff7a..8bed26a2d558 100644
> --- a/arch/arm64/kernel/hibernate.c
> +++ b/arch/arm64/kernel/hibernate.c
> @@ -457,7 +457,6 @@ int swsusp_arch_resume(void)
> void *zero_page;
> size_t exit_size;
> pgd_t *tmp_pg_dir;
> - void *lm_restore_pblist;
> phys_addr_t phys_hibernate_exit;
> void __noreturn (*hibernate_exit)(phys_addr_t, phys_addr_t, void *,
> void *, phys_addr_t, phys_addr_t);
> @@ -478,12 +477,6 @@ int swsusp_arch_resume(void)
> goto out;
>
> /*
> - * Since we only copied the linear map, we need to find restore_pblist's
> - * linear map address.
> - */
> - lm_restore_pblist = lm_alias(restore_pblist);
> -
> - /*
> * We need a zero page that is zero before & after resume in order to
> * to break before make on the ttbr1 page tables.
> */
> @@ -534,7 +527,7 @@ int swsusp_arch_resume(void)
> }
>
> hibernate_exit(virt_to_phys(tmp_pg_dir), resume_hdr.ttbr1_el1,
> - resume_hdr.reenter_kernel, lm_restore_pblist,
> + resume_hdr.reenter_kernel, restore_pblist,
> resume_hdr.__hyp_stub_vectors, virt_to_phys(zero_page));
>
> out:
> ----------------%<----------------

Folding that in (or having it as a preparatory cleanup patch) makes
sense to me. AFAICT the logic was valid (albeit confused) until now, so
it's not strictly a fix.

Thanks,
Mark.

2016-12-06 17:03:13

by Mark Rutland

[permalink] [raw]
Subject: Re: [PATCHv4 05/10] arm64: Use __pa_symbol for kernel symbols

Hi,

As a heads-up, it looks like this got mangled somewhere. In the hunk at
arch/arm64/mm/kasan_init.c:68, 'do' in the context became 'edo'.
Deleting the 'e' makes it apply.

I think this is almost there; other than James's hibernate bug I only
see one real issue, and everything else is a minor nit.

On Tue, Nov 29, 2016 at 10:55:24AM -0800, Laura Abbott wrote:
> __pa_symbol is technically the macro that should be used for kernel
> symbols. Switch to this as a pre-requisite for DEBUG_VIRTUAL which
> will do bounds checking. As part of this, introduce lm_alias, a
> macro which wraps the __va(__pa(...)) idiom used a few places to
> get the alias.

I think the addition of the lm_alias() macro under include/mm should be
a separate preparatory patch. That way it's separate from the
arm64-specific parts, and more obvious to !arm64 people reviewing the
other parts.
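
For reference, the generic helper that would move there amounts to something
like this (sketch; the exact hunk is in the patch):

  #ifndef lm_alias
  #define lm_alias(x)	__va(__pa_symbol(x))
  #endif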

> Signed-off-by: Laura Abbott <[email protected]>
> ---
> v4: Stop calling __va early, conversion of a few more sites. I decided against
> wrapping the __p*d_populate calls into new functions since the call sites
> should be limited.
> ---
> arch/arm64/include/asm/kvm_mmu.h | 4 ++--
> arch/arm64/include/asm/memory.h | 2 ++
> arch/arm64/include/asm/mmu_context.h | 6 +++---
> arch/arm64/include/asm/pgtable.h | 2 +-
> arch/arm64/kernel/acpi_parking_protocol.c | 2 +-
> arch/arm64/kernel/cpu-reset.h | 2 +-
> arch/arm64/kernel/cpufeature.c | 2 +-
> arch/arm64/kernel/hibernate.c | 13 +++++--------
> arch/arm64/kernel/insn.c | 2 +-
> arch/arm64/kernel/psci.c | 2 +-
> arch/arm64/kernel/setup.c | 8 ++++----
> arch/arm64/kernel/smp_spin_table.c | 2 +-
> arch/arm64/kernel/vdso.c | 4 ++--
> arch/arm64/mm/init.c | 11 ++++++-----
> arch/arm64/mm/kasan_init.c | 21 +++++++++++++-------
> arch/arm64/mm/mmu.c | 32 +++++++++++++++++++------------
> drivers/firmware/psci.c | 2 +-
> include/linux/mm.h | 4 ++++
> 18 files changed, 70 insertions(+), 51 deletions(-)

It looks like we need to make sure these (directly) include <linux/mm.h>
for __pa_symbol() and lm_alias(), or there's some fragility, e.g.

[mark@leverpostej:~/src/linux]% uselinaro 15.08 make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- -j10 -s
arch/arm64/kernel/psci.c: In function 'cpu_psci_cpu_boot':
arch/arm64/kernel/psci.c:48:50: error: implicit declaration of function '__pa_symbol' [-Werror=implicit-function-declaration]
int err = psci_ops.cpu_on(cpu_logical_map(cpu), __pa_symbol(secondary_entry));
^
cc1: some warnings being treated as errors
make[1]: *** [arch/arm64/kernel/psci.o] Error 1
make: *** [arch/arm64/kernel] Error 2
make: *** Waiting for unfinished jobs....

> --- a/arch/arm64/include/asm/memory.h
> +++ b/arch/arm64/include/asm/memory.h
> @@ -205,6 +205,8 @@ static inline void *phys_to_virt(phys_addr_t x)
> #define __va(x) ((void *)__phys_to_virt((phys_addr_t)(x)))
> #define pfn_to_kaddr(pfn) __va((pfn) << PAGE_SHIFT)
> #define virt_to_pfn(x) __phys_to_pfn(__virt_to_phys((unsigned long)(x)))
> +#define sym_to_pfn(x) __phys_to_pfn(__pa_symbol(x))
> +#define lm_alias(x) __va(__pa_symbol(x))

As Catalin mentioned, we should be able to drop this copy of lm_alias(),
given we have the same in the core headers.

> diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
> index a2c2478..79cd86b 100644
> --- a/arch/arm64/kernel/vdso.c
> +++ b/arch/arm64/kernel/vdso.c
> @@ -140,11 +140,11 @@ static int __init vdso_init(void)
> return -ENOMEM;
>
> /* Grab the vDSO data page. */
> - vdso_pagelist[0] = pfn_to_page(PHYS_PFN(__pa(vdso_data)));
> + vdso_pagelist[0] = phys_to_page(__pa_symbol(vdso_data));
>
> /* Grab the vDSO code pages. */
> for (i = 0; i < vdso_pages; i++)
> - vdso_pagelist[i + 1] = pfn_to_page(PHYS_PFN(__pa(&vdso_start)) + i);
> + vdso_pagelist[i + 1] = pfn_to_page(PHYS_PFN(__pa_symbol(&vdso_start)) + i);

I see you added sym_to_pfn(), which we can use here to keep this short
and legible. It might also be worth using a temporary pfn_t, e.g.

pfn = sym_to_pfn(&vdso_start);

for (i = 0; i < vdso_pages; i++)
vdso_pagelist[i + 1] = pfn_to_page(pfn + i);

> diff --git a/drivers/firmware/psci.c b/drivers/firmware/psci.c
> index 8263429..9defbe2 100644
> --- a/drivers/firmware/psci.c
> +++ b/drivers/firmware/psci.c
> @@ -383,7 +383,7 @@ static int psci_suspend_finisher(unsigned long index)
> u32 *state = __this_cpu_read(psci_power_state);
>
> return psci_ops.cpu_suspend(state[index - 1],
> - virt_to_phys(cpu_resume));
> + __pa_symbol(cpu_resume));
> }
>
> int psci_cpu_suspend_enter(unsigned long index)

This should probably be its own patch since it's not under arch/arm64/.

I'm happy for this to go via the arm64 tree with the rest regardless
(assuming Lorenzo has no objections).

Thanks,
Mark.

2016-12-06 17:26:04

by Mark Rutland

[permalink] [raw]
Subject: Re: [PATCHv4 08/10] mm/kasan: Switch to using __pa_symbol and lm_alias

On Thu, Dec 01, 2016 at 02:36:05PM +0300, Andrey Ryabinin wrote:
> On 11/29/2016 09:55 PM, Laura Abbott wrote:
> > __pa_symbol is the correct API to find the physical address of symbols.
> > Switch to it to allow for debugging APIs to work correctly.
>
> But __pa() is correct for symbols. I see how __pa_symbol() might be a little
> faster than __pa(), but there is nothing wrong with using __pa() on symbols.

While it's true today that __pa() works on symbols, this is for
pragmatic reasons (allowing existing code to work until it is all
cleaned up), and __pa_symbol() is the correct API to use.

Relying on this means that __pa() can't be optimised for the (vastly
common) case of translating linear map addresses.

Consistent use of __pa_symbol() will allow for subsequent optimisation
of __pa() in the common case, in addition to being necessary for arm64's
DEBUG_VIRTUAL.
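
To illustrate, the current arm64 translation has to handle both ranges,
roughly like this (a paraphrase of asm/memory.h, not the exact code):

  #define __virt_to_phys(x) ({					\
	phys_addr_t __x = (phys_addr_t)(x);			\
	__x & BIT(VA_BITS - 1) ?				\
		(__x & ~PAGE_OFFSET) + PHYS_OFFSET :		\
		(__x - kimage_voffset); })

Consistent __pa_symbol() use is what would eventually let the
kimage_voffset branch drop out of the common path.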

> > Other functions such as p*d_populate may call __pa internally.
> > Ensure that the address passed is in the linear region by calling
> > lm_alias.
>
> Why should it be a linear mapping address? __pa() translates kernel
> image addresses just fine.

As above, that's true today, but it is something that we wish to
change.

> The generated code is probably worse too.

Even if that is the case, given this is code run once at boot on a debug
build, I think it's outweighed by the gain from DEBUG_VIRTUAL, and as a
step towards optimising __pa().

Thanks,
Mark.

2016-12-06 17:45:34

by Mark Rutland

[permalink] [raw]
Subject: Re: [PATCHv4 08/10] mm/kasan: Switch to using __pa_symbol and lm_alias

On Tue, Nov 29, 2016 at 10:55:27AM -0800, Laura Abbott wrote:
> __pa_symbol is the correct API to find the physical address of symbols.
> Switch to it to allow for debugging APIs to work correctly. Other
> functions such as p*d_populate may call __pa internally. Ensure that the
> address passed is in the linear region by calling lm_alias.

I've given this a go on Juno with CONFIG_KASAN_INLINE enabled, and
everything seems happy.

We'll need an include of <linux/mm.h> as that appears to be missing. I
guess we're getting lucky with transitive includes. Otherwise this looks
good to me.

With that fixed up:

Reviewed-by: Mark Rutland <[email protected]>
Tested-by: Mark Rutland <[email protected]>

Thanks,
Mark.

> Signed-off-by: Laura Abbott <[email protected]>
> ---
> Pointed out during review/testing of v3.
> ---
> mm/kasan/kasan_init.c | 12 ++++++------
> 1 file changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/mm/kasan/kasan_init.c b/mm/kasan/kasan_init.c
> index 3f9a41c..ff04721 100644
> --- a/mm/kasan/kasan_init.c
> +++ b/mm/kasan/kasan_init.c
> @@ -49,7 +49,7 @@ static void __init zero_pte_populate(pmd_t *pmd, unsigned long addr,
> pte_t *pte = pte_offset_kernel(pmd, addr);
> pte_t zero_pte;
>
> - zero_pte = pfn_pte(PFN_DOWN(__pa(kasan_zero_page)), PAGE_KERNEL);
> + zero_pte = pfn_pte(PFN_DOWN(__pa_symbol(kasan_zero_page)), PAGE_KERNEL);
> zero_pte = pte_wrprotect(zero_pte);
>
> while (addr + PAGE_SIZE <= end) {
> @@ -69,7 +69,7 @@ static void __init zero_pmd_populate(pud_t *pud, unsigned long addr,
> next = pmd_addr_end(addr, end);
>
> if (IS_ALIGNED(addr, PMD_SIZE) && end - addr >= PMD_SIZE) {
> - pmd_populate_kernel(&init_mm, pmd, kasan_zero_pte);
> + pmd_populate_kernel(&init_mm, pmd, lm_alias(kasan_zero_pte));
> continue;
> }
>
> @@ -94,7 +94,7 @@ static void __init zero_pud_populate(pgd_t *pgd, unsigned long addr,
>
> pud_populate(&init_mm, pud, kasan_zero_pmd);
> pmd = pmd_offset(pud, addr);
> - pmd_populate_kernel(&init_mm, pmd, kasan_zero_pte);
> + pmd_populate_kernel(&init_mm, pmd, lm_alias(kasan_zero_pte));
> continue;
> }
>
> @@ -135,11 +135,11 @@ void __init kasan_populate_zero_shadow(const void *shadow_start,
> * puds,pmds, so pgd_populate(), pud_populate()
> * is noops.
> */
> - pgd_populate(&init_mm, pgd, kasan_zero_pud);
> + pgd_populate(&init_mm, pgd, lm_alias(kasan_zero_pud));
> pud = pud_offset(pgd, addr);
> - pud_populate(&init_mm, pud, kasan_zero_pmd);
> + pud_populate(&init_mm, pud, lm_alias(kasan_zero_pmd));
> pmd = pmd_offset(pud, addr);
> - pmd_populate_kernel(&init_mm, pmd, kasan_zero_pte);
> + pmd_populate_kernel(&init_mm, pmd, lm_alias(kasan_zero_pte));
> continue;
> }
>
> --
> 2.7.4
>

2016-12-06 18:19:49

by Mark Rutland

[permalink] [raw]
Subject: Re: [PATCHv4 09/10] mm/usercopy: Switch to using lm_alias

On Tue, Nov 29, 2016 at 11:39:44AM -0800, Kees Cook wrote:
> On Tue, Nov 29, 2016 at 10:55 AM, Laura Abbott <[email protected]> wrote:
> >
> > The usercopy checking code currently calls __va(__pa(...)) to check for
> > aliases on symbols. Switch to using lm_alias instead.
> >
> > Signed-off-by: Laura Abbott <[email protected]>
>
> Acked-by: Kees Cook <[email protected]>
>
> I should probably add a corresponding alias test to lkdtm...
>
> -Kees

Something like the below?

It uses lm_alias(), so it depends on Laura's patches. We seem to do the
right thing, anyhow:

root@ribbensteg:/home/nanook# echo USERCOPY_KERNEL_ALIAS > /sys/kernel/debug/provoke-crash/DIRECT
[ 44.493400] usercopy: kernel memory exposure attempt detected from ffff80000031a730 (<linear kernel text>) (4096 bytes)
[ 44.504263] kernel BUG at mm/usercopy.c:75!

Thanks,
Mark.

---->8----
diff --git a/drivers/misc/lkdtm.h b/drivers/misc/lkdtm.h
index fdf954c..96d8d76 100644
--- a/drivers/misc/lkdtm.h
+++ b/drivers/misc/lkdtm.h
@@ -56,5 +56,6 @@
void lkdtm_USERCOPY_STACK_FRAME_FROM(void);
void lkdtm_USERCOPY_STACK_BEYOND(void);
void lkdtm_USERCOPY_KERNEL(void);
+void lkdtm_USERCOPY_KERNEL_ALIAS(void);

#endif
diff --git a/drivers/misc/lkdtm_core.c b/drivers/misc/lkdtm_core.c
index f9154b8..f6bc6d6 100644
--- a/drivers/misc/lkdtm_core.c
+++ b/drivers/misc/lkdtm_core.c
@@ -228,6 +228,7 @@ struct crashtype crashtypes[] = {
CRASHTYPE(USERCOPY_STACK_FRAME_FROM),
CRASHTYPE(USERCOPY_STACK_BEYOND),
CRASHTYPE(USERCOPY_KERNEL),
+ CRASHTYPE(USERCOPY_KERNEL_ALIAS),
};


diff --git a/drivers/misc/lkdtm_usercopy.c b/drivers/misc/lkdtm_usercopy.c
index 1dd6114..955f2dc 100644
--- a/drivers/misc/lkdtm_usercopy.c
+++ b/drivers/misc/lkdtm_usercopy.c
@@ -279,9 +279,16 @@ void lkdtm_USERCOPY_STACK_BEYOND(void)
do_usercopy_stack(true, false);
}

-void lkdtm_USERCOPY_KERNEL(void)
+static void do_usercopy_kernel(bool use_alias)
{
unsigned long user_addr;
+ const void *rodata = test_text;
+ void *text = vm_mmap;
+
+ if (use_alias) {
+ rodata = lm_alias(rodata);
+ text = lm_alias(text);
+ }

user_addr = vm_mmap(NULL, 0, PAGE_SIZE,
PROT_READ | PROT_WRITE | PROT_EXEC,
@@ -292,14 +299,14 @@ void lkdtm_USERCOPY_KERNEL(void)
}

pr_info("attempting good copy_to_user from kernel rodata\n");
- if (copy_to_user((void __user *)user_addr, test_text,
+ if (copy_to_user((void __user *)user_addr, rodata,
unconst + sizeof(test_text))) {
pr_warn("copy_to_user failed unexpectedly?!\n");
goto free_user;
}

pr_info("attempting bad copy_to_user from kernel text\n");
- if (copy_to_user((void __user *)user_addr, vm_mmap,
+ if (copy_to_user((void __user *)user_addr, text,
unconst + PAGE_SIZE)) {
pr_warn("copy_to_user failed, but lacked Oops\n");
goto free_user;
@@ -309,6 +316,16 @@ void lkdtm_USERCOPY_KERNEL(void)
vm_munmap(user_addr, PAGE_SIZE);
}

+void lkdtm_USERCOPY_KERNEL(void)
+{
+ do_usercopy_kernel(false);
+}
+
+void lkdtm_USERCOPY_KERNEL_ALIAS(void)
+{
+ do_usercopy_kernel(true);
+}
+
void __init lkdtm_usercopy_init(void)
{
/* Prepare cache that lacks SLAB_USERCOPY flag. */

2016-12-06 18:21:19

by Mark Rutland

[permalink] [raw]
Subject: Re: [PATCHv4 09/10] mm/usercopy: Switch to using lm_alias

On Tue, Nov 29, 2016 at 10:55:28AM -0800, Laura Abbott wrote:
>
> The usercopy checking code currently calls __va(__pa(...)) to check for
> aliases on symbols. Switch to using lm_alias instead.
>
> Signed-off-by: Laura Abbott <[email protected]>

I've given this a go on Juno, which boots happily. LKDTM triggers as
expected when copying from the kernel text and its alias.

Reviewed-by: Mark Rutland <[email protected]>
Tested-by: Mark Rutland <[email protected]>

Thanks,
Mark.

> ---
> Found when reviewing the kernel. Tested.
> ---
> mm/usercopy.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/mm/usercopy.c b/mm/usercopy.c
> index 3c8da0a..8345299 100644
> --- a/mm/usercopy.c
> +++ b/mm/usercopy.c
> @@ -108,13 +108,13 @@ static inline const char *check_kernel_text_object(const void *ptr,
> * __pa() is not just the reverse of __va(). This can be detected
> * and checked:
> */
> - textlow_linear = (unsigned long)__va(__pa(textlow));
> + textlow_linear = (unsigned long)lm_alias(textlow);
> /* No different mapping: we're done. */
> if (textlow_linear == textlow)
> return NULL;
>
> /* Check the secondary mapping... */
> - texthigh_linear = (unsigned long)__va(__pa(texthigh));
> + texthigh_linear = (unsigned long)lm_alias(texthigh);
> if (overlaps(ptr, n, textlow_linear, texthigh_linear))
> return "<linear kernel text>";
>
> --
> 2.7.4
>

2016-12-06 18:59:13

by Mark Rutland

[permalink] [raw]
Subject: Re: [PATCHv4 10/10] arm64: Add support for CONFIG_DEBUG_VIRTUAL

On Tue, Nov 29, 2016 at 10:55:29AM -0800, Laura Abbott wrote:
>
> + WARN(!__is_lm_address(x),
> + "virt_to_phys used for non-linear address :%pK\n", (void *)x);

Nit: s/ :/: /

It might be worth adding %pS too; i.e.

WARN(!__is_lm_address(x),
"virt_to_phys used for non-linear address: %pK (%pS)\n",
(void *)x, (void *)x);

... that way we might get a better idea before we have to resort to
grepping objdump output.

Other than that this looks good to me. This builds cleanly with and
without DEBUG_VIRTUAL enabled, and boots happily with DEBUG_VIRTUAL
disabled.

With both DEBUG_VIRTUAL and KASAN, I'm hitting a sea of warnings from
kasan_init at boot time, but I don't think that's a problem with this
patch as such, so FWIW:

Reviewed-by: Mark Rutland <[email protected]>
Tested-by: Mark Rutland <[email protected]>

Thanks,
Mark.

2016-12-06 19:12:26

by Laura Abbott

[permalink] [raw]
Subject: Re: [PATCHv4 05/10] arm64: Use __pa_symbol for kernel symbols

On 12/06/2016 09:02 AM, Mark Rutland wrote:
> Hi,
>
> As a heads-up, it looks like this got mangled somewhere. In the hunk at
> arch/arm64/mm/kasan_init.c:68, 'do' in the context became 'edo'.
> Deleting the 'e' makes it apply.
>

Argh, this must have come in while editing the .patch before e-mailing.
Sorry about that.

> I think this is almost there; other than James's hibernate bug I only
> see one real issue, and everything else is a minor nit.
>
> On Tue, Nov 29, 2016 at 10:55:24AM -0800, Laura Abbott wrote:
>> __pa_symbol is technically the macro that should be used for kernel
>> symbols. Switch to this as a pre-requisite for DEBUG_VIRTUAL which
>> will do bounds checking. As part of this, introduce lm_alias, a
>> macro which wraps the __va(__pa(...)) idiom used a few places to
>> get the alias.
>
> I think the addition of the lm_alias() macro under include/mm should be
> a separate preparatory patch. That way it's separate from the
> arm64-specific parts, and more obvious to !arm64 people reviewing the
> other parts.
>

I debated whether it was more obvious to show how it was used in context
vs. a stand-alone patch. I think you're right that for non-arm64 reviewers
the separate patch would be easier to find.

>> Signed-off-by: Laura Abbott <[email protected]>
>> ---
>> v4: Stop calling __va early, conversion of a few more sites. I decided against
>> wrapping the __p*d_populate calls into new functions since the call sites
>> should be limited.
>> ---
>> arch/arm64/include/asm/kvm_mmu.h | 4 ++--
>> arch/arm64/include/asm/memory.h | 2 ++
>> arch/arm64/include/asm/mmu_context.h | 6 +++---
>> arch/arm64/include/asm/pgtable.h | 2 +-
>> arch/arm64/kernel/acpi_parking_protocol.c | 2 +-
>> arch/arm64/kernel/cpu-reset.h | 2 +-
>> arch/arm64/kernel/cpufeature.c | 2 +-
>> arch/arm64/kernel/hibernate.c | 13 +++++--------
>> arch/arm64/kernel/insn.c | 2 +-
>> arch/arm64/kernel/psci.c | 2 +-
>> arch/arm64/kernel/setup.c | 8 ++++----
>> arch/arm64/kernel/smp_spin_table.c | 2 +-
>> arch/arm64/kernel/vdso.c | 4 ++--
>> arch/arm64/mm/init.c | 11 ++++++-----
>> arch/arm64/mm/kasan_init.c | 21 +++++++++++++-------
>> arch/arm64/mm/mmu.c | 32 +++++++++++++++++++------------
>> drivers/firmware/psci.c | 2 +-
>> include/linux/mm.h | 4 ++++
>> 18 files changed, 70 insertions(+), 51 deletions(-)
>
> It looks like we need to make sure these (directly) include <linux/mm.h>
> for __pa_symbol() and lm_alias(), or there's some fragility, e.g.
>
> [mark@leverpostej:~/src/linux]% uselinaro 15.08 make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- -j10 -s
> arch/arm64/kernel/psci.c: In function 'cpu_psci_cpu_boot':
> arch/arm64/kernel/psci.c:48:50: error: implicit declaration of function '__pa_symbol' [-Werror=implicit-function-declaration]
> int err = psci_ops.cpu_on(cpu_logical_map(cpu), __pa_symbol(secondary_entry));
> ^
> cc1: some warnings being treated as errors
> make[1]: *** [arch/arm64/kernel/psci.o] Error 1
> make: *** [arch/arm64/kernel] Error 2
> make: *** Waiting for unfinished jobs....
>

Right, I'll double check.

>> --- a/arch/arm64/include/asm/memory.h
>> +++ b/arch/arm64/include/asm/memory.h
>> @@ -205,6 +205,8 @@ static inline void *phys_to_virt(phys_addr_t x)
>> #define __va(x) ((void *)__phys_to_virt((phys_addr_t)(x)))
>> #define pfn_to_kaddr(pfn) __va((pfn) << PAGE_SHIFT)
>> #define virt_to_pfn(x) __phys_to_pfn(__virt_to_phys((unsigned long)(x)))
>> +#define sym_to_pfn(x) __phys_to_pfn(__pa_symbol(x))
>> +#define lm_alias(x) __va(__pa_symbol(x))
>
> As Catalin mentioned, we should be able to drop this copy of lm_alias(),
> given we have the same in the core headers.
>
>> diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
>> index a2c2478..79cd86b 100644
>> --- a/arch/arm64/kernel/vdso.c
>> +++ b/arch/arm64/kernel/vdso.c
>> @@ -140,11 +140,11 @@ static int __init vdso_init(void)
>> return -ENOMEM;
>>
>> /* Grab the vDSO data page. */
>> - vdso_pagelist[0] = pfn_to_page(PHYS_PFN(__pa(vdso_data)));
>> + vdso_pagelist[0] = phys_to_page(__pa_symbol(vdso_data));
>>
>> /* Grab the vDSO code pages. */
>> for (i = 0; i < vdso_pages; i++)
>> - vdso_pagelist[i + 1] = pfn_to_page(PHYS_PFN(__pa(&vdso_start)) + i);
>> + vdso_pagelist[i + 1] = pfn_to_page(PHYS_PFN(__pa_symbol(&vdso_start)) + i);
>
> I see you added sym_to_pfn(), which we can use here to keep this short
> and legible. It might also be worth using a temporary pfn_t, e.g.
>
> pfn = sym_to_pfn(&vdso_start);
>
> for (i = 0; i < vdso_pages; i++)
> vdso_pagelist[i + 1] = pfn_to_page(pfn + i);
>

Good idea.

>> diff --git a/drivers/firmware/psci.c b/drivers/firmware/psci.c
>> index 8263429..9defbe2 100644
>> --- a/drivers/firmware/psci.c
>> +++ b/drivers/firmware/psci.c
>> @@ -383,7 +383,7 @@ static int psci_suspend_finisher(unsigned long index)
>> u32 *state = __this_cpu_read(psci_power_state);
>>
>> return psci_ops.cpu_suspend(state[index - 1],
>> - virt_to_phys(cpu_resume));
>> + __pa_symbol(cpu_resume));
>> }
>>
>> int psci_cpu_suspend_enter(unsigned long index)
>
> This should probably be its own patch since it's not under arch/arm64/.
>

Fine by me.

> I'm happy for this to go via the arm64 tree with the rest regardless
> (assuming Lorenzo has no objections).
>
> Thanks,
> Mark.
>

Thanks,
Laura

2016-12-06 19:19:34

by Mark Rutland

[permalink] [raw]
Subject: Re: [PATCHv4 08/10] mm/kasan: Switch to using __pa_symbol and lm_alias

On Tue, Nov 29, 2016 at 10:55:27AM -0800, Laura Abbott wrote:
> @@ -94,7 +94,7 @@ static void __init zero_pud_populate(pgd_t *pgd, unsigned long addr,
>
> pud_populate(&init_mm, pud, kasan_zero_pmd);

We also need to lm_alias()-ify kasan_zero_pmd here, or we'll get a
stream of warnings at boot (example below).

I should have spotted that. :/

With that fixed up, I'm able to boot Juno with both KASAN_INLINE and
DEBUG_VIRTUAL without issues, so my previous Reviewed-by and Tested-by
stand.

Thanks,
Mark.

---->8----

[ 0.000000] virt_to_phys used for non-linear address :ffff20000a367000
[ 0.000000] ------------[ cut here ]------------
[ 0.000000] WARNING: CPU: 0 PID: 0 at arch/arm64/mm/physaddr.c:13 __virt_to_phys+0x48/0x68
[ 0.000000] Modules linked in:
[ 0.000000]
[ 0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 4.9.0-rc6-00012-gdcc0162-dirty #13
[ 0.000000] Hardware name: ARM Juno development board (r1) (DT)
[ 0.000000] task: ffff200009ec2200 task.stack: ffff200009eb0000
[ 0.000000] PC is at __virt_to_phys+0x48/0x68
[ 0.000000] LR is at __virt_to_phys+0x48/0x68
[ 0.000000] pc : [<ffff2000080af310>] lr : [<ffff2000080af310>] pstate: 600000c5
[ 0.000000] sp : ffff200009eb3c80
[ 0.000000] x29: ffff200009eb3c80 x28: ffff20000abdd000
[ 0.000000] x27: ffff200009ce1000 x26: ffff047fffffffff
[ 0.000000] x25: ffff200009ce1000 x24: ffff20000a366100
[ 0.000000] x23: ffff048000000000 x22: ffff20000a366000
[ 0.000000] x21: ffff040080000000 x20: ffff040040000000
[ 0.000000] x19: ffff20000a367000 x18: 000000000000005c
[ 0.000000] x17: 00000009ffec20e0 x16: 00000000fefff4b0
[ 0.000000] x15: ffffffffffffffff x14: 302b646d705f6f72
[ 0.000000] x13: 657a5f6e6173616b x12: 2820303030373633
[ 0.000000] x11: ffff20000a376ca0 x10: 0000000000000010
[ 0.000000] x9 : 646461207261656e x8 : 696c2d6e6f6e2072
[ 0.000000] x7 : 6f66206465737520 x6 : ffff20000a3741e5
[ 0.000000] x5 : 1fffe4000146ee0e x4 : 1fffe400013de704
[ 0.000000] x3 : 1fffe400013d6003 x2 : 1fffe400013d6003
[ 0.000000] x1 : 0000000000000000 x0 : 0000000000000056
[ 0.000000]
[ 0.000000] ---[ end trace 0000000000000000 ]---
[ 0.000000] Call trace:
[ 0.000000] Exception stack(0xffff200009eb3a50 to 0xffff200009eb3b80)
[ 0.000000] 3a40: ffff20000a367000 0001000000000000
[ 0.000000] 3a60: ffff200009eb3c80 ffff2000080af310 00000000600000c5 000000000000003d
[ 0.000000] 3a80: ffff200009ce1000 ffff2000081c4720 0000000041b58ab3 ffff200009c6cd98
[ 0.000000] 3aa0: ffff2000080818a0 ffff20000a366000 ffff048000000000 ffff20000a366100
[ 0.000000] 3ac0: ffff200009ce1000 ffff047fffffffff ffff200009ce1000 ffff20000abdd000
[ 0.000000] 3ae0: ffff0400013e3ccf ffff20000a3766c0 0000000000000000 0000000000000000
[ 0.000000] 3b00: ffff200009eb3c80 ffff200009eb3c80 ffff200009eb3c40 00000000ffffffc8
[ 0.000000] 3b20: ffff200009eb3b50 ffff2000082cbd3c ffff200009eb3c80 ffff200009eb3c80
[ 0.000000] 3b40: ffff200009eb3c40 00000000ffffffc8 0000000000000056 0000000000000000
[ 0.000000] 3b60: 1fffe400013d6003 1fffe400013d6003 1fffe400013de704 1fffe4000146ee0e
[ 0.000000] [<ffff2000080af310>] __virt_to_phys+0x48/0x68
[ 0.000000] [<ffff200009d734e8>] zero_pud_populate+0x88/0x138
[ 0.000000] [<ffff200009d736f8>] kasan_populate_zero_shadow+0x160/0x18c
[ 0.000000] [<ffff200009d5a048>] kasan_init+0x1f8/0x408
[ 0.000000] [<ffff200009d54000>] setup_arch+0x314/0x948
[ 0.000000] [<ffff200009d50c64>] start_kernel+0xb4/0x54c
[ 0.000000] [<ffff200009d501e0>] __primary_switched+0x64/0x74

[mark@leverpostej:~/src/linux]% uselinaro 15.08 aarch64-linux-gnu-readelf -s vmlinux | grep ffff20000a367000
108184: ffff20000a367000 4096 OBJECT GLOBAL DEFAULT 25 kasan_zero_pmd

[mark@leverpostej:~/src/linux]% uselinaro 15.08 aarch64-linux-gnu-addr2line -ife vmlinux ffff200009d734e8
set_pud
/home/mark/src/linux/./arch/arm64/include/asm/pgtable.h:435
__pud_populate
/home/mark/src/linux/./arch/arm64/include/asm/pgalloc.h:47
pud_populate
/home/mark/src/linux/./arch/arm64/include/asm/pgalloc.h:52
zero_pud_populate
/home/mark/src/linux/mm/kasan/kasan_init.c:95

2016-12-06 19:53:30

by Florian Fainelli

[permalink] [raw]
Subject: [PATCH 0/3] ARM: Add support for CONFIG_DEBUG_VIRTUAL

Hi all,

This patch series builds on top of Laura's [PATCHv4 00/10] CONFIG_DEBUG_VIRTUAL for arm64
to add support for CONFIG_DEBUG_VIRTUAL for ARM.

This was tested on a Brahma B15 platform (ARMv7 + HIGHMEM + LPAE).

There are a number of possible follow up/cleanup patches:

- all SMP implementations that pass the address of secondary_startup down to the SMP
bringup operations should use __pa_symbol() instead, since they reference in-kernel
symbols (see the sketch below)

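As a hypothetical sketch of the kind of conversion meant above (boot_reg
stands in for whatever register or mailbox a given platform uses):

  - writel(virt_to_phys(secondary_startup), boot_reg);
  + writel(__pa_symbol(secondary_startup), boot_reg);
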
Flames, critiques, rotten tomatoes welcome!

Florian Fainelli (3):
ARM: Define KERNEL_START and KERNEL_END
ARM: Utilize __pa_symbol in lieu of __pa
ARM: Add support for CONFIG_DEBUG_VIRTUAL

arch/arm/Kconfig | 1 +
arch/arm/include/asm/memory.h | 23 +++++++++++++++++--
arch/arm/mm/Makefile | 1 +
arch/arm/mm/init.c | 7 ++----
arch/arm/mm/mmu.c | 10 +++------
arch/arm/mm/physaddr.c | 51 +++++++++++++++++++++++++++++++++++++++++++
6 files changed, 79 insertions(+), 14 deletions(-)
create mode 100644 arch/arm/mm/physaddr.c

--
2.9.3

2016-12-06 19:53:35

by Florian Fainelli

[permalink] [raw]
Subject: [PATCH 2/3] ARM: Utilize __pa_symbol in lieu of __pa

Unfold pmd_populate_kernel() to make us use __pa_symbol() instead of
__pa(), pre-requisite to turning on CONFIG_DEBUG_VIRTUAL.

Signed-off-by: Florian Fainelli <[email protected]>
---
arch/arm/mm/mmu.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index 18ef688a796e..ab7e82085df9 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -394,7 +394,7 @@ void __init early_fixmap_init(void)
!= FIXADDR_TOP >> PMD_SHIFT);

pmd = fixmap_pmd(FIXADDR_TOP);
- pmd_populate_kernel(&init_mm, pmd, bm_pte);
+ __pmd_populate(pmd, __pa_symbol(bm_pte), _PAGE_KERNEL_TABLE);

pte_offset_fixmap = pte_offset_early_fixmap;
}
--
2.9.3

2016-12-06 19:53:44

by Florian Fainelli

[permalink] [raw]
Subject: [PATCH 3/3] ARM: Add support for CONFIG_DEBUG_VIRTUAL

x86 has an option: CONFIG_DEBUG_VIRTUAL to do additional checks on
virt_to_phys calls. The goal is to catch users who are calling
virt_to_phys on non-linear addresses immediately. This includes callers
using __virt_to_phys() on image addresses instead of __pa_symbol(). This
is a generally useful debug feature to spot bad code (particularly in
drivers).

Signed-off-by: Florian Fainelli <[email protected]>
---
arch/arm/Kconfig | 1 +
arch/arm/include/asm/memory.h | 16 ++++++++++++--
arch/arm/mm/Makefile | 1 +
arch/arm/mm/physaddr.c | 51 +++++++++++++++++++++++++++++++++++++++++++
4 files changed, 67 insertions(+), 2 deletions(-)
create mode 100644 arch/arm/mm/physaddr.c

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index b5d529fdffab..5e66173c5787 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -2,6 +2,7 @@ config ARM
bool
default y
select ARCH_CLOCKSOURCE_DATA
+ select ARCH_HAS_DEBUG_VIRTUAL
select ARCH_HAS_DEVMEM_IS_ALLOWED
select ARCH_HAS_ELF_RANDOMIZE
select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h
index bee7511c5098..46f192218be7 100644
--- a/arch/arm/include/asm/memory.h
+++ b/arch/arm/include/asm/memory.h
@@ -213,7 +213,7 @@ extern const void *__pv_table_begin, *__pv_table_end;
: "r" (x), "I" (__PV_BITS_31_24) \
: "cc")

-static inline phys_addr_t __virt_to_phys(unsigned long x)
+static inline phys_addr_t __virt_to_phys_nodebug(unsigned long x)
{
phys_addr_t t;

@@ -245,7 +245,7 @@ static inline unsigned long __phys_to_virt(phys_addr_t x)
#define PHYS_OFFSET PLAT_PHYS_OFFSET
#define PHYS_PFN_OFFSET ((unsigned long)(PHYS_OFFSET >> PAGE_SHIFT))

-static inline phys_addr_t __virt_to_phys(unsigned long x)
+static inline phys_addr_t __virt_to_phys_nodebug(unsigned long x)
{
return (phys_addr_t)x - PAGE_OFFSET + PHYS_OFFSET;
}
@@ -261,6 +261,16 @@ static inline unsigned long __phys_to_virt(phys_addr_t x)
((((unsigned long)(kaddr) - PAGE_OFFSET) >> PAGE_SHIFT) + \
PHYS_PFN_OFFSET)

+#define __pa_symbol_nodebug(x) ((x) - (unsigned long)KERNEL_START)
+
+#ifdef CONFIG_DEBUG_VIRTUAL
+extern phys_addr_t __virt_to_phys(unsigned long x);
+extern phys_addr_t __phys_addr_symbol(unsigned long x);
+#else
+#define __virt_to_phys(x) __virt_to_phys_nodebug(x)
+#define __phys_addr_symbol(x) __pa_symbol_nodebug(x)
+#endif
+
/*
* These are *only* valid on the kernel direct mapped RAM memory.
* Note: Drivers should NOT use these. They are the wrong
@@ -283,9 +293,11 @@ static inline void *phys_to_virt(phys_addr_t x)
* Drivers should NOT use these either.
*/
#define __pa(x) __virt_to_phys((unsigned long)(x))
+#define __pa_symbol(x) __phys_addr_symbol(RELOC_HIDE((unsigned long)(x), 0))
#define __va(x) ((void *)__phys_to_virt((phys_addr_t)(x)))
#define pfn_to_kaddr(pfn) __va((phys_addr_t)(pfn) << PAGE_SHIFT)

+
extern long long arch_phys_to_idmap_offset;

/*
diff --git a/arch/arm/mm/Makefile b/arch/arm/mm/Makefile
index e8698241ece9..b3dea80715b4 100644
--- a/arch/arm/mm/Makefile
+++ b/arch/arm/mm/Makefile
@@ -14,6 +14,7 @@ endif

obj-$(CONFIG_ARM_PTDUMP) += dump.o
obj-$(CONFIG_MODULES) += proc-syms.o
+obj-$(CONFIG_DEBUG_VIRTUAL) += physaddr.o

obj-$(CONFIG_ALIGNMENT_TRAP) += alignment.o
obj-$(CONFIG_HIGHMEM) += highmem.o
diff --git a/arch/arm/mm/physaddr.c b/arch/arm/mm/physaddr.c
new file mode 100644
index 000000000000..00f6dcffab8b
--- /dev/null
+++ b/arch/arm/mm/physaddr.c
@@ -0,0 +1,51 @@
+#include <linux/bug.h>
+#include <linux/export.h>
+#include <linux/types.h>
+#include <linux/mmdebug.h>
+#include <linux/mm.h>
+
+#include <asm/sections.h>
+#include <asm/memory.h>
+#include <asm/fixmap.h>
+
+#include "mm.h"
+
+static inline bool __virt_addr_valid(unsigned long x)
+{
+ if (x < PAGE_OFFSET)
+ return false;
+ if (arm_lowmem_limit && is_vmalloc_or_module_addr((void *)x))
+ return false;
+ if (x >= FIXADDR_START && x < FIXADDR_END)
+ return false;
+ return true;
+}
+
+phys_addr_t __virt_to_phys(unsigned long x)
+{
+ WARN(!__virt_addr_valid(x),
+ "virt_to_phys used for non-linear address :%pK\n", (void *)x);
+
+ return __virt_to_phys_nodebug(x);
+}
+EXPORT_SYMBOL(__virt_to_phys);
+
+static inline bool __phys_addr_valid(unsigned long x)
+{
+ /* This is bounds checking against the kernel image only.
+ * __pa_symbol should only be used on kernel symbol addresses.
+ */
+ if (x < (unsigned long)KERNEL_START ||
+ x > (unsigned long)KERNEL_END)
+ return false;
+
+ return true;
+}
+
+phys_addr_t __phys_addr_symbol(unsigned long x)
+{
+ VIRTUAL_BUG_ON(!__phys_addr_valid(x));
+
+ return __pa_symbol_nodebug(x);
+}
+EXPORT_SYMBOL(__phys_addr_symbol);
--
2.9.3

2016-12-06 19:54:14

by Florian Fainelli

[permalink] [raw]
Subject: [PATCH 1/3] ARM: Define KERNEL_START and KERNEL_END

In preparation for adding CONFIG_DEBUG_VIRTUAL support, define a set of
common constants: KERNEL_START and KERNEL_END which abstract
CONFIG_XIP_KERNEL vs. !CONFIG_XIP_KERNEL. Update the code where
relevant.

Signed-off-by: Florian Fainelli <[email protected]>
---
arch/arm/include/asm/memory.h | 7 +++++++
arch/arm/mm/init.c | 7 ++-----
arch/arm/mm/mmu.c | 8 ++------
3 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h
index 76cbd9c674df..bee7511c5098 100644
--- a/arch/arm/include/asm/memory.h
+++ b/arch/arm/include/asm/memory.h
@@ -111,6 +111,13 @@

#endif /* !CONFIG_MMU */

+#ifdef CONFIG_XIP_KERNEL
+#define KERNEL_START _sdata
+#else
+#define KERNEL_START _stext
+#endif
+#define KERNEL_END _end
+
/*
* We fix the TCM memories max 32 KiB ITCM resp DTCM at these
* locations
diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index 370581aeb871..c87d0d5b65f2 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -230,11 +230,8 @@ phys_addr_t __init arm_memblock_steal(phys_addr_t size, phys_addr_t align)
void __init arm_memblock_init(const struct machine_desc *mdesc)
{
/* Register the kernel text, kernel data and initrd with memblock. */
-#ifdef CONFIG_XIP_KERNEL
- memblock_reserve(__pa(_sdata), _end - _sdata);
-#else
- memblock_reserve(__pa(_stext), _end - _stext);
-#endif
+ memblock_reserve(__pa(KERNEL_START), _end - KERNEL_START);
+
#ifdef CONFIG_BLK_DEV_INITRD
/* FDT scan will populate initrd_start */
if (initrd_start && !phys_initrd_size) {
diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index 4001dd15818d..18ef688a796e 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -1437,12 +1437,8 @@ static void __init kmap_init(void)
static void __init map_lowmem(void)
{
struct memblock_region *reg;
-#ifdef CONFIG_XIP_KERNEL
- phys_addr_t kernel_x_start = round_down(__pa(_sdata), SECTION_SIZE);
-#else
- phys_addr_t kernel_x_start = round_down(__pa(_stext), SECTION_SIZE);
-#endif
- phys_addr_t kernel_x_end = round_up(__pa(__init_end), SECTION_SIZE);
+ phys_addr_t kernel_x_start = round_down(__pa(KERNEL_START), SECTION_SIZE);
+ phys_addr_t kernel_x_end = round_down(__pa(_end), SECTION_SIZE);

/* Map all the lowmem memory banks. */
for_each_memblock(memory, reg) {
--
2.9.3

2016-12-06 20:10:55

by Kees Cook

[permalink] [raw]
Subject: Re: [PATCHv4 09/10] mm/usercopy: Switch to using lm_alias

On Tue, Dec 6, 2016 at 10:18 AM, Mark Rutland <[email protected]> wrote:
> On Tue, Nov 29, 2016 at 11:39:44AM -0800, Kees Cook wrote:
>> On Tue, Nov 29, 2016 at 10:55 AM, Laura Abbott <[email protected]> wrote:
>> >
>> > The usercopy checking code currently calls __va(__pa(...)) to check for
>> > aliases on symbols. Switch to using lm_alias instead.
>> >
>> > Signed-off-by: Laura Abbott <[email protected]>
>>
>> Acked-by: Kees Cook <[email protected]>
>>
>> I should probably add a corresponding alias test to lkdtm...
>>
>> -Kees
>
> Something like the below?
>
> It uses lm_alias(), so it depends on Laura's patches. We seem to do the
> right thing, anyhow:

Cool, this looks good. What happens on systems without an alias?

Laura, feel free to add this to your series:

Acked-by: Kees Cook <[email protected]>

-Kees

>
> root@ribbensteg:/home/nanook# echo USERCOPY_KERNEL_ALIAS > /sys/kernel/debug/provoke-crash/DIRECT
> [ 44.493400] usercopy: kernel memory exposure attempt detected from ffff80000031a730 (<linear kernel text>) (4096 bytes)
> [ 44.504263] kernel BUG at mm/usercopy.c:75!
>
> Thanks,
> Mark.
>
> ---->8----
> diff --git a/drivers/misc/lkdtm.h b/drivers/misc/lkdtm.h
> index fdf954c..96d8d76 100644
> --- a/drivers/misc/lkdtm.h
> +++ b/drivers/misc/lkdtm.h
> @@ -56,5 +56,6 @@
> void lkdtm_USERCOPY_STACK_FRAME_FROM(void);
> void lkdtm_USERCOPY_STACK_BEYOND(void);
> void lkdtm_USERCOPY_KERNEL(void);
> +void lkdtm_USERCOPY_KERNEL_ALIAS(void);
>
> #endif
> diff --git a/drivers/misc/lkdtm_core.c b/drivers/misc/lkdtm_core.c
> index f9154b8..f6bc6d6 100644
> --- a/drivers/misc/lkdtm_core.c
> +++ b/drivers/misc/lkdtm_core.c
> @@ -228,6 +228,7 @@ struct crashtype crashtypes[] = {
> CRASHTYPE(USERCOPY_STACK_FRAME_FROM),
> CRASHTYPE(USERCOPY_STACK_BEYOND),
> CRASHTYPE(USERCOPY_KERNEL),
> + CRASHTYPE(USERCOPY_KERNEL_ALIAS),
> };
>
>
> diff --git a/drivers/misc/lkdtm_usercopy.c b/drivers/misc/lkdtm_usercopy.c
> index 1dd6114..955f2dc 100644
> --- a/drivers/misc/lkdtm_usercopy.c
> +++ b/drivers/misc/lkdtm_usercopy.c
> @@ -279,9 +279,16 @@ void lkdtm_USERCOPY_STACK_BEYOND(void)
> do_usercopy_stack(true, false);
> }
>
> -void lkdtm_USERCOPY_KERNEL(void)
> +static void do_usercopy_kernel(bool use_alias)
> {
> unsigned long user_addr;
> + const void *rodata = test_text;
> + void *text = vm_mmap;
> +
> + if (use_alias) {
> + rodata = lm_alias(rodata);
> + text = lm_alias(text);
> + }
>
> user_addr = vm_mmap(NULL, 0, PAGE_SIZE,
> PROT_READ | PROT_WRITE | PROT_EXEC,
> @@ -292,14 +299,14 @@ void lkdtm_USERCOPY_KERNEL(void)
> }
>
> pr_info("attempting good copy_to_user from kernel rodata\n");
> - if (copy_to_user((void __user *)user_addr, test_text,
> + if (copy_to_user((void __user *)user_addr, rodata,
> unconst + sizeof(test_text))) {
> pr_warn("copy_to_user failed unexpectedly?!\n");
> goto free_user;
> }
>
> pr_info("attempting bad copy_to_user from kernel text\n");
> - if (copy_to_user((void __user *)user_addr, vm_mmap,
> + if (copy_to_user((void __user *)user_addr, text,
> unconst + PAGE_SIZE)) {
> pr_warn("copy_to_user failed, but lacked Oops\n");
> goto free_user;
> @@ -309,6 +316,16 @@ void lkdtm_USERCOPY_KERNEL(void)
> vm_munmap(user_addr, PAGE_SIZE);
> }
>
> +void lkdtm_USERCOPY_KERNEL(void)
> +{
> + do_usercopy_kernel(false);
> +}
> +
> +void lkdtm_USERCOPY_KERNEL_ALIAS(void)
> +{
> + do_usercopy_kernel(true);
> +}
> +
> void __init lkdtm_usercopy_init(void)
> {
> /* Prepare cache that lacks SLAB_USERCOPY flag. */



--
Kees Cook
Nexus Security

2016-12-06 20:42:36

by Florian Fainelli

[permalink] [raw]
Subject: Re: [PATCH 3/3] ARM: Add support for CONFIG_DEBUG_VIRTUAL

On 12/06/2016 11:53 AM, Florian Fainelli wrote:
> x86 has an option: CONFIG_DEBUG_VIRTUAL to do additional checks on
> virt_to_phys calls. The goal is to catch users who are calling
> virt_to_phys on non-linear addresses immediately. This includes callers
> using __virt_to_phys() on image addresses instead of __pa_symbol(). This
> is a generally useful debug feature to spot bad code (particularly in
> drivers).
>
> Signed-off-by: Florian Fainelli <[email protected]>
> ---

> @@ -261,6 +261,16 @@ static inline unsigned long __phys_to_virt(phys_addr_t x)
> ((((unsigned long)(kaddr) - PAGE_OFFSET) >> PAGE_SHIFT) + \
> PHYS_PFN_OFFSET)
>
> +#define __pa_symbol_nodebug(x) ((x) - (unsigned long)KERNEL_START)

I don't think I got this one quite right, but I also assume that won't
be the only problem with this patch series.
--
Florian

2016-12-06 22:43:48

by Chris Brandt

[permalink] [raw]
Subject: RE: [PATCH 1/3] ARM: Define KERNEL_START and KERNEL_END

On 12/6/2016, Florian Fainelli wrote:
> diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
> index 4001dd15818d..18ef688a796e 100644
> --- a/arch/arm/mm/mmu.c
> +++ b/arch/arm/mm/mmu.c
> @@ -1437,12 +1437,8 @@ static void __init kmap_init(void)
> static void __init map_lowmem(void)
> {
> struct memblock_region *reg;
> -#ifdef CONFIG_XIP_KERNEL
> - phys_addr_t kernel_x_start = round_down(__pa(_sdata), SECTION_SIZE);
> -#else
> - phys_addr_t kernel_x_start = round_down(__pa(_stext), SECTION_SIZE);
> -#endif
> - phys_addr_t kernel_x_end = round_up(__pa(__init_end), SECTION_SIZE);
> + phys_addr_t kernel_x_start = round_down(__pa(KERNEL_START),
> SECTION_SIZE);
> + phys_addr_t kernel_x_end = round_down(__pa(_end), SECTION_SIZE);

Why are you changing the end of the executable kernel (hence the 'x' in
kernel_x_end) from __init_end to _end, which basically maps the entire
kernel image including text and data?

Doing so would then change data from MT_MEMORY_RW into MT_MEMORY_RWX.

I would think it would create some type of security risk to allow
data to be executable.

2016-12-06 22:47:06

by Florian Fainelli

[permalink] [raw]
Subject: Re: [PATCH 1/3] ARM: Define KERNEL_START and KERNEL_END

On 12/06/2016 02:43 PM, Chris Brandt wrote:
> On 12/6/2016, Florian Fainelli wrote:
>> diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
>> index 4001dd15818d..18ef688a796e 100644
>> --- a/arch/arm/mm/mmu.c
>> +++ b/arch/arm/mm/mmu.c
>> @@ -1437,12 +1437,8 @@ static void __init kmap_init(void)
>> static void __init map_lowmem(void)
>> {
>> struct memblock_region *reg;
>> -#ifdef CONFIG_XIP_KERNEL
>> - phys_addr_t kernel_x_start = round_down(__pa(_sdata), SECTION_SIZE);
>> -#else
>> - phys_addr_t kernel_x_start = round_down(__pa(_stext), SECTION_SIZE);
>> -#endif
>> - phys_addr_t kernel_x_end = round_up(__pa(__init_end), SECTION_SIZE);
>> + phys_addr_t kernel_x_start = round_down(__pa(KERNEL_START),
>> SECTION_SIZE);
>> + phys_addr_t kernel_x_end = round_down(__pa(_end), SECTION_SIZE);
>
> Why are you changing the end of the executable kernel (hence the 'x' in
> kernel_x_end) from __init_end to _end, which basically maps the entire
> kernel image including text and data?

That's a typo, was not intentional thanks for spotting it.
--
Florian

2016-12-07 02:01:30

by Laura Abbott

[permalink] [raw]
Subject: Re: [PATCH 3/3] ARM: Add support for CONFIG_DEBUG_VIRTUAL

On 12/06/2016 11:53 AM, Florian Fainelli wrote:
> x86 has an option: CONFIG_DEBUG_VIRTUAL to do additional checks on
> virt_to_phys calls. The goal is to catch users who are calling
> virt_to_phys on non-linear addresses immediately. This includes callers
> using __virt_to_phys() on image addresses instead of __pa_symbol(). This
> is a generally useful debug feature to spot bad code (particularly in
> drivers).
>
> Signed-off-by: Florian Fainelli <[email protected]>
> ---
> arch/arm/Kconfig | 1 +
> arch/arm/include/asm/memory.h | 16 ++++++++++++--
> arch/arm/mm/Makefile | 1 +
> arch/arm/mm/physaddr.c | 51 +++++++++++++++++++++++++++++++++++++++++++
> 4 files changed, 67 insertions(+), 2 deletions(-)
> create mode 100644 arch/arm/mm/physaddr.c
>
> diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
> index b5d529fdffab..5e66173c5787 100644
> --- a/arch/arm/Kconfig
> +++ b/arch/arm/Kconfig
> @@ -2,6 +2,7 @@ config ARM
> bool
> default y
> select ARCH_CLOCKSOURCE_DATA
> + select ARCH_HAS_DEBUG_VIRTUAL
> select ARCH_HAS_DEVMEM_IS_ALLOWED
> select ARCH_HAS_ELF_RANDOMIZE
> select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
> diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h
> index bee7511c5098..46f192218be7 100644
> --- a/arch/arm/include/asm/memory.h
> +++ b/arch/arm/include/asm/memory.h
> @@ -213,7 +213,7 @@ extern const void *__pv_table_begin, *__pv_table_end;
> : "r" (x), "I" (__PV_BITS_31_24) \
> : "cc")
>
> -static inline phys_addr_t __virt_to_phys(unsigned long x)
> +static inline phys_addr_t __virt_to_phys_nodebug(unsigned long x)
> {
> phys_addr_t t;
>
> @@ -245,7 +245,7 @@ static inline unsigned long __phys_to_virt(phys_addr_t x)
> #define PHYS_OFFSET PLAT_PHYS_OFFSET
> #define PHYS_PFN_OFFSET ((unsigned long)(PHYS_OFFSET >> PAGE_SHIFT))
>
> -static inline phys_addr_t __virt_to_phys(unsigned long x)
> +static inline phys_addr_t __virt_to_phys_nodebug(unsigned long x)
> {
> return (phys_addr_t)x - PAGE_OFFSET + PHYS_OFFSET;
> }
> @@ -261,6 +261,16 @@ static inline unsigned long __phys_to_virt(phys_addr_t x)
> ((((unsigned long)(kaddr) - PAGE_OFFSET) >> PAGE_SHIFT) + \
> PHYS_PFN_OFFSET)
>
> +#define __pa_symbol_nodebug(x) ((x) - (unsigned long)KERNEL_START)

On arm64 the kernel image lives at a separate offset from the linear map.
arm doesn't do anything like that, so __phys_addr_symbol should just be the
regular __virt_to_phys.
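
i.e., something like (sketch only):

  #define __pa_symbol_nodebug(x)	__virt_to_phys_nodebug((unsigned long)(x))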

> +
> +#ifdef CONFIG_DEBUG_VIRTUAL
> +extern phys_addr_t __virt_to_phys(unsigned long x);
> +extern phys_addr_t __phys_addr_symbol(unsigned long x);
> +#else
> +#define __virt_to_phys(x) __virt_to_phys_nodebug(x)
> +#define __phys_addr_symbol(x) __pa_symbol_nodebug(x)
> +#endif
> +
> /*
> * These are *only* valid on the kernel direct mapped RAM memory.
> * Note: Drivers should NOT use these. They are the wrong
> @@ -283,9 +293,11 @@ static inline void *phys_to_virt(phys_addr_t x)
> * Drivers should NOT use these either.
> */
> #define __pa(x) __virt_to_phys((unsigned long)(x))
> +#define __pa_symbol(x) __phys_addr_symbol(RELOC_HIDE((unsigned long)(x), 0))
> #define __va(x) ((void *)__phys_to_virt((phys_addr_t)(x)))
> #define pfn_to_kaddr(pfn) __va((phys_addr_t)(pfn) << PAGE_SHIFT)
>
> +
> extern long long arch_phys_to_idmap_offset;
>
> /*
> diff --git a/arch/arm/mm/Makefile b/arch/arm/mm/Makefile
> index e8698241ece9..b3dea80715b4 100644
> --- a/arch/arm/mm/Makefile
> +++ b/arch/arm/mm/Makefile
> @@ -14,6 +14,7 @@ endif
>
> obj-$(CONFIG_ARM_PTDUMP) += dump.o
> obj-$(CONFIG_MODULES) += proc-syms.o
> +obj-$(CONFIG_DEBUG_VIRTUAL) += physaddr.o
>
> obj-$(CONFIG_ALIGNMENT_TRAP) += alignment.o
> obj-$(CONFIG_HIGHMEM) += highmem.o
> diff --git a/arch/arm/mm/physaddr.c b/arch/arm/mm/physaddr.c
> new file mode 100644
> index 000000000000..00f6dcffab8b
> --- /dev/null
> +++ b/arch/arm/mm/physaddr.c
> @@ -0,0 +1,51 @@
> +#include <linux/bug.h>
> +#include <linux/export.h>
> +#include <linux/types.h>
> +#include <linux/mmdebug.h>
> +#include <linux/mm.h>
> +
> +#include <asm/sections.h>
> +#include <asm/memory.h>
> +#include <asm/fixmap.h>
> +
> +#include "mm.h"
> +
> +static inline bool __virt_addr_valid(unsigned long x)
> +{
> + if (x < PAGE_OFFSET)
> + return false;
> + if (arm_lowmem_limit && is_vmalloc_or_module_addr((void *)x))
> + return false;
> + if (x >= FIXADDR_START && x < FIXADDR_END)
> + return false;
> + return true;
> +}

I'd rather see this return true for only the linear range and
reject everything else. asm/memory.h already has

#define virt_addr_valid(kaddr) (((unsigned long)(kaddr) >= PAGE_OFFSET && (unsigned long)(kaddr) < (unsigned long)high_memory) \
&& pfn_valid(virt_to_pfn(kaddr)))

So we can make the check x >= PAGE_OFFSET && x < high_memory
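
i.e., roughly (a sketch of the simplified check):

  static inline bool __virt_addr_valid(unsigned long x)
  {
	/* accept only the linear map, reject everything else */
	return x >= PAGE_OFFSET && x < (unsigned long)high_memory;
  }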

> +
> +phys_addr_t __virt_to_phys(unsigned long x)
> +{
> + WARN(!__virt_addr_valid(x),
> + "virt_to_phys used for non-linear address :%pK\n", (void *)x);
> +
> + return __virt_to_phys_nodebug(x);
> +}
> +EXPORT_SYMBOL(__virt_to_phys);
> +
> +static inline bool __phys_addr_valid(unsigned long x)
> +{
> + /* This is bounds checking against the kernel image only.
> + * __pa_symbol should only be used on kernel symbol addresses.
> + */
> + if (x < (unsigned long)KERNEL_START ||
> + x > (unsigned long)KERNEL_END)
> + return false;
> +
> + return true;
> +}

This is a confusing name for this function, it's not checking if
a physical address is valid, it's checking if a virtual address
corresponding to a kernel symbol is valid.

> +
> +phys_addr_t __phys_addr_symbol(unsigned long x)
> +{
> + VIRTUAL_BUG_ON(!__phys_addr_valid(x));
> +
> + return __pa_symbol_nodebug(x);
> +}
> +EXPORT_SYMBOL(__phys_addr_symbol);
>

Thanks,
Laura

2016-12-07 02:24:39

by Florian Fainelli

[permalink] [raw]
Subject: Re: [PATCH 3/3] ARM: Add support for CONFIG_DEBUG_VIRTUAL

On 12/06/2016 06:00 PM, Laura Abbott wrote:
>> @@ -261,6 +261,16 @@ static inline unsigned long __phys_to_virt(phys_addr_t x)
>> ((((unsigned long)(kaddr) - PAGE_OFFSET) >> PAGE_SHIFT) + \
>> PHYS_PFN_OFFSET)
>>
>> +#define __pa_symbol_nodebug(x) ((x) - (unsigned long)KERNEL_START)
>
> On arm64 the kernel image lives at a separate offset from the linear map.
> arm doesn't do anything like that, so __phys_addr_symbol should just be the
> regular __virt_to_phys.

Yep, which is what I have queued locally now too, thanks!


>> +static inline bool __virt_addr_valid(unsigned long x)
>> +{
>> + if (x < PAGE_OFFSET)
>> + return false;
>> + if (arm_lowmem_limit && is_vmalloc_or_module_addr((void *)x))
>> + return false;
>> + if (x >= FIXADDR_START && x < FIXADDR_END)
>> + return false;
>> + return true;
>> +}
>
> I'd rather see this return true for only the linear range and
> reject everything else. asm/memory.h already has
>
> #define virt_addr_valid(kaddr) (((unsigned long)(kaddr) >= PAGE_OFFSET && (unsigned long)(kaddr) < (unsigned long)high_memory) \
> && pfn_valid(virt_to_pfn(kaddr)))
>
> So we can make the check x >= PAGE_OFFSET && x < high_memory

OK that's simpler indeed. I did the check this way because we have early
callers of __pa() from drivers/of/fdt.c, in particular MIN_MEMBLOCK_ADDR
there, and we also have pcpu_dfl_fc_alloc which uses MAX_DMA_ADDRESS
(which is 0xffff_ffff on my platform).
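
One way to keep the simpler bound and still tolerate those early
callers might be something like this (a sketch, assuming high_memory is
still NULL when the fdt code runs):

static inline bool __virt_addr_valid(unsigned long x)
{
	/* high_memory is not set yet for very early __pa() callers */
	if (!high_memory)
		return x >= PAGE_OFFSET;
	if (x >= PAGE_OFFSET && x < (unsigned long)high_memory)
		return true;
	/* MAX_DMA_ADDRESS is a virtual address that may not map to RAM */
	if (x == (unsigned long)MAX_DMA_ADDRESS)
		return true;
	return false;
}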

>> +static inline bool __phys_addr_valid(unsigned long x)
>> +{
>> + /* This is bounds checking against the kernel image only.
>> + * __pa_symbol should only be used on kernel symbol addresses.
>> + */
>> + if (x < (unsigned long)KERNEL_START ||
>> + x > (unsigned long)KERNEL_END)
>> + return false;
>> +
>> + return true;
>> +}
>
> This is a confusing name for this function: it's not checking whether
> a physical address is valid, it's checking whether a virtual address
> corresponding to a kernel symbol is valid.

I have removed it and just moved the check within VIRTUAL_BUG_ON().
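
i.e. something along these lines (a sketch of the reworked helper, not
the final patch):

phys_addr_t __phys_addr_symbol(unsigned long x)
{
	/* __pa_symbol is only valid for addresses inside the kernel image */
	VIRTUAL_BUG_ON(x < (unsigned long)KERNEL_START ||
		       x > (unsigned long)KERNEL_END);

	return __pa_symbol_nodebug(x);
}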

Thanks!
--
Florian

2016-12-07 06:12:42

by kernel test robot

[permalink] [raw]
Subject: Re: [PATCH 1/3] ARM: Define KERNEL_START and KERNEL_END

Hi Florian,

[auto build test WARNING on linus/master]
[also build test WARNING on v4.9-rc8 next-20161206]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url: https://github.com/0day-ci/linux/commits/Florian-Fainelli/ARM-Add-support-for-CONFIG_DEBUG_VIRTUAL/20161207-071442
config: arm-lart_defconfig (attached as .config)
compiler: arm-linux-gnueabi-gcc (Debian 6.1.1-9) 6.1.1 20160705
reproduce:
wget https://git.kernel.org/cgit/linux/kernel/git/wfg/lkp-tests.git/plain/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# save the attached .config to linux build tree
make.cross ARCH=arm

All warnings (new ones prefixed by >>):

>> drivers/mtd/devices/lart.c:83:0: warning: "KERNEL_START" redefined
#define KERNEL_START (BLOB_START + BLOB_LEN)

In file included from arch/arm/include/asm/page.h:163:0,
from arch/arm/include/asm/thread_info.h:17,
from include/linux/thread_info.h:58,
from include/asm-generic/preempt.h:4,
from ./arch/arm/include/generated/asm/preempt.h:1,
from include/linux/preempt.h:59,
from include/linux/spinlock.h:50,
from include/linux/seqlock.h:35,
from include/linux/time.h:5,
from include/linux/stat.h:18,
from include/linux/module.h:10,
from drivers/mtd/devices/lart.c:38:
arch/arm/include/asm/memory.h:117:0: note: this is the location of the previous definition
#define KERNEL_START _stext
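
The clash is between the generic KERNEL_START from asm/memory.h and the
driver's flash-layout macro of the same name. One hypothetical way out
(the rename below is purely illustrative, not a merged fix) is to prefix
the driver-local macros, e.g.:

#define PART_KERNEL_START	(BLOB_START + BLOB_LEN)
#define PART_KERNEL_LEN		(NUM_KERNEL_BLOCKS * FLASH_BLOCKSIZE_MAIN)

with the other blob/initrd defines renamed to match.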


vim +/KERNEL_START +83 drivers/mtd/devices/lart.c

^1da177e Linus Torvalds 2005-04-16 67
^1da177e Linus Torvalds 2005-04-16 68 /*
^1da177e Linus Torvalds 2005-04-16 69 * These values are specific to LART
^1da177e Linus Torvalds 2005-04-16 70 */
^1da177e Linus Torvalds 2005-04-16 71
^1da177e Linus Torvalds 2005-04-16 72 /* general */
^1da177e Linus Torvalds 2005-04-16 73 #define BUSWIDTH 4 /* don't change this - a lot of the code _will_ break if you change this */
^1da177e Linus Torvalds 2005-04-16 74 #define FLASH_OFFSET 0xe8000000 /* see linux/arch/arm/mach-sa1100/lart.c */
^1da177e Linus Torvalds 2005-04-16 75
^1da177e Linus Torvalds 2005-04-16 76 /* blob */
^1da177e Linus Torvalds 2005-04-16 77 #define NUM_BLOB_BLOCKS FLASH_NUMBLOCKS_16m_PARAM
^1da177e Linus Torvalds 2005-04-16 78 #define BLOB_START 0x00000000
^1da177e Linus Torvalds 2005-04-16 79 #define BLOB_LEN (NUM_BLOB_BLOCKS * FLASH_BLOCKSIZE_PARAM)
^1da177e Linus Torvalds 2005-04-16 80
^1da177e Linus Torvalds 2005-04-16 81 /* kernel */
^1da177e Linus Torvalds 2005-04-16 82 #define NUM_KERNEL_BLOCKS 7
^1da177e Linus Torvalds 2005-04-16 @83 #define KERNEL_START (BLOB_START + BLOB_LEN)
^1da177e Linus Torvalds 2005-04-16 84 #define KERNEL_LEN (NUM_KERNEL_BLOCKS * FLASH_BLOCKSIZE_MAIN)
^1da177e Linus Torvalds 2005-04-16 85
^1da177e Linus Torvalds 2005-04-16 86 /* initial ramdisk */
^1da177e Linus Torvalds 2005-04-16 87 #define NUM_INITRD_BLOCKS 24
^1da177e Linus Torvalds 2005-04-16 88 #define INITRD_START (KERNEL_START + KERNEL_LEN)
^1da177e Linus Torvalds 2005-04-16 89 #define INITRD_LEN (NUM_INITRD_BLOCKS * FLASH_BLOCKSIZE_MAIN)
^1da177e Linus Torvalds 2005-04-16 90
^1da177e Linus Torvalds 2005-04-16 91 /*

:::::: The code at line 83 was first introduced by commit
:::::: 1da177e4c3f41524e886b7f1b8a0c1fc7321cac2 Linux-2.6.12-rc2

:::::: TO: Linus Torvalds <[email protected]>
:::::: CC: Linus Torvalds <[email protected]>

---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all Intel Corporation


Attachments:
.config.gz (11.31 kB)

2016-12-07 13:58:30

by Mark Rutland

[permalink] [raw]
Subject: Re: [PATCHv4 09/10] mm/usercopy: Switch to using lm_alias

On Tue, Dec 06, 2016 at 12:10:50PM -0800, Kees Cook wrote:
> On Tue, Dec 6, 2016 at 10:18 AM, Mark Rutland <[email protected]> wrote:
> > On Tue, Nov 29, 2016 at 11:39:44AM -0800, Kees Cook wrote:
> >> On Tue, Nov 29, 2016 at 10:55 AM, Laura Abbott <[email protected]> wrote:
> >> >
> >> > The usercopy checking code currently calls __va(__pa(...)) to check for
> >> > aliases on symbols. Switch to using lm_alias instead.
> >> >
> >> > Signed-off-by: Laura Abbott <[email protected]>
> >>
> >> Acked-by: Kees Cook <[email protected]>
> >>
> >> I should probably add a corresponding alias test to lkdtm...
> >>
> >> -Kees
> >
> > Something like the below?
> >
> > It uses lm_alias(), so it depends on Laura's patches. We seem to do the
> > right thing, anyhow:
>
> Cool, this looks good. What happens on systems without an alias?

In that case, lm_alias() should be an identity function, and we'll just
hit the usual kernel address (i.e. it should be identical to
USERCOPY_KERNEL).
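
(For context, the lm_alias() helper the series adds to
include/linux/mm.h is roughly:

#ifndef lm_alias
#define lm_alias(x)	__va(__pa_symbol(x))
#endif

so with no kernel/linear split it comes out to __va(__pa(x)), i.e. the
original kernel address.)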

> Laura, feel free to add this to your series:
>
> Acked-by: Kees Cook <[email protected]>

I'm happy with that, or I can resend this as a proper patch once the
rest is in.

Thanks,
Mark.