2014-02-12 20:57:31

by Mark Salter

Subject: [PATCH v4 0/6] generic early_ioremap support

This patch series takes the common bits from the x86 early ioremap
implementation and creates a generic implementation which may be used
by other architectures. The early ioremap interfaces are intended for
situations where boot code needs to make temporary virtual mappings
before the normal ioremap interfaces are available. Typically, this
means before paging_init() has run.

These patches are layered on top of the generic fixmap patches, which
were pulled into 3.14-rc with the exception of the ARM patch:

https://lkml.org/lkml/2013/11/25/477

The arm fixmap patch is currently in the akpm tree and has been
part of linux-next for a while.

This is version 4 of the patch series. These patches (and underlying
fixmap patches) may be found at:

git://github.com/mosalter/linux.git (early-ioremap-v4 branch)

Changes from version 3:

* Removed dependency on MMU. In the case of no-MMU, the early remap
functions return the address passed in. This helps simplify use of
early_ioremap functions on architectures such as ARM which have
optional MMU support.

* Added L_PTE_XN to arm page flags so mappings are non-executable.

* Include linux/io.h rather than asm/io.h in arm setup.c

* Moved early_ioremap_init() before setup_machine_fdt() in arm
setup_arch().

* Fixed misspelling in config EARLY_IOREMAP help text.

Changes from version 2:

* Added some Acks

* Incorporated a patch from Dave Young to change the signature
of early_memremap() (dropping __iomem from the returned pointer)
which is the first patch in a larger series:

https://lkml.org/lkml/2013/12/22/69

This allows the change of just the x86 function signature
to be bisected.

Changes from version 1:

* Moved the generic code into linux/mm instead of linux/lib

* Have early_memremap() return a normal pointer instead of an __iomem one.
This is in response to sparse warning cleanups being made in
an unrelated patch series:

https://lkml.org/lkml/2013/12/22/69

* Added arm64 patch to call init_mem_pgprot() earlier so that
the pgprot macros are valid in time for early_ioremap use

* Added validity checking for early_ioremap pgd, pud, and pmd
in arm64

Dave Young (1):
x86/mm: sparse warning fix for early_memremap

Mark Salter (5):
mm: create generic early_ioremap() support
x86: use generic early_ioremap
arm: add early_ioremap support
arm64: initialize pgprot info earlier in boot
arm64: add early_ioremap support

Documentation/arm64/memory.txt | 4 +-
arch/arm/Kconfig | 10 ++
arch/arm/include/asm/Kbuild | 1 +
arch/arm/include/asm/fixmap.h | 20 +++
arch/arm/include/asm/io.h | 1 +
arch/arm/kernel/setup.c | 2 +
arch/arm/mm/Makefile | 4 +
arch/arm/mm/early_ioremap.c | 93 +++++++++++++
arch/arm/mm/mmu.c | 2 +
arch/arm64/Kconfig | 1 +
arch/arm64/include/asm/Kbuild | 1 +
arch/arm64/include/asm/fixmap.h | 67 +++++++++
arch/arm64/include/asm/io.h | 1 +
arch/arm64/include/asm/memory.h | 2 +-
arch/arm64/include/asm/mmu.h | 1 +
arch/arm64/kernel/early_printk.c | 8 +-
arch/arm64/kernel/head.S | 9 +-
arch/arm64/kernel/setup.c | 4 +
arch/arm64/mm/ioremap.c | 85 +++++++++++
arch/arm64/mm/mmu.c | 44 +-----
arch/x86/Kconfig | 1 +
arch/x86/include/asm/Kbuild | 1 +
arch/x86/include/asm/fixmap.h | 6 +
arch/x86/include/asm/io.h | 14 +-
arch/x86/mm/ioremap.c | 224 +----------------------------
arch/x86/mm/pgtable_32.c | 2 +-
include/asm-generic/early_ioremap.h | 42 ++++++
mm/Kconfig | 3 +
mm/Makefile | 1 +
mm/early_ioremap.c | 271 ++++++++++++++++++++++++++++++++++++
30 files changed, 636 insertions(+), 289 deletions(-)
create mode 100644 arch/arm/mm/early_ioremap.c
create mode 100644 arch/arm64/include/asm/fixmap.h
create mode 100644 include/asm-generic/early_ioremap.h
create mode 100644 mm/early_ioremap.c

--
1.8.5.3


2014-02-12 20:56:57

by Mark Salter

Subject: [PATCH v4 1/6] x86/mm: sparse warning fix for early_memremap

From: Dave Young <[email protected]>

There are a lot of sparse warnings for code like the example below:
void *a = early_memremap(phys_addr, size);

early_memremap() is intended to map kernel memory using the ioremap
facility, so the returned pointer should be a kernel RAM pointer rather
than an iomem one.

To make the function's intent clearer and to suppress the sparse
warnings, this patch does two things:
1. casts the return value of early_memremap() to (__force void *)
2. adds an early_memunmap() function which passes (__force void __iomem *)
to early_iounmap()

From Boris:
> Ingo told me yesterday, it makes sense too. I'd guess we can try it.
> FWIW, all callers of early_memremap use the memory they get remapped as
> normal memory so we should be safe.

Signed-off-by: Dave Young <[email protected]>
Signed-off-by: Mark Salter <[email protected]>
---
arch/x86/include/asm/io.h | 3 ++-
arch/x86/mm/ioremap.c | 10 +++++++---
2 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/io.h b/arch/x86/include/asm/io.h
index 34f69cb..1db414f 100644
--- a/arch/x86/include/asm/io.h
+++ b/arch/x86/include/asm/io.h
@@ -325,9 +325,10 @@ extern void early_ioremap_init(void);
extern void early_ioremap_reset(void);
extern void __iomem *early_ioremap(resource_size_t phys_addr,
unsigned long size);
-extern void __iomem *early_memremap(resource_size_t phys_addr,
+extern void *early_memremap(resource_size_t phys_addr,
unsigned long size);
extern void early_iounmap(void __iomem *addr, unsigned long size);
+extern void early_memunmap(void *addr, unsigned long size);
extern void fixup_early_ioremap(void);
extern bool is_early_ioremap_ptep(pte_t *ptep);

diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index 799580c..bbb4504 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -562,10 +562,9 @@ early_ioremap(resource_size_t phys_addr, unsigned long size)
}

/* Remap memory */
-void __init __iomem *
-early_memremap(resource_size_t phys_addr, unsigned long size)
+void __init *early_memremap(resource_size_t phys_addr, unsigned long size)
{
- return __early_ioremap(phys_addr, size, PAGE_KERNEL);
+ return (__force void *)__early_ioremap(phys_addr, size, PAGE_KERNEL);
}

void __init early_iounmap(void __iomem *addr, unsigned long size)
@@ -620,3 +619,8 @@ void __init early_iounmap(void __iomem *addr, unsigned long size)
}
prev_map[slot] = NULL;
}
+
+void __init early_memunmap(void *addr, unsigned long size)
+{
+ early_iounmap((__force void __iomem *)addr, size);
+}
--
1.8.5.3

2014-02-12 20:57:11

by Mark Salter

Subject: [PATCH v4 3/6] x86: use generic early_ioremap

Move x86 over to the generic early ioremap implementation.

Signed-off-by: Mark Salter <[email protected]>
---
arch/x86/Kconfig | 1 +
arch/x86/include/asm/Kbuild | 1 +
arch/x86/include/asm/fixmap.h | 6 ++
arch/x86/include/asm/io.h | 15 +--
arch/x86/mm/ioremap.c | 228 +-----------------------------------------
arch/x86/mm/pgtable_32.c | 2 +-
6 files changed, 13 insertions(+), 240 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 0af5250..fb479bc 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -127,6 +127,7 @@ config X86
select HAVE_DEBUG_STACKOVERFLOW
select HAVE_IRQ_EXIT_ON_IRQ_STACK if X86_64
select HAVE_CC_STACKPROTECTOR
+ select GENERIC_EARLY_IOREMAP

config INSTRUCTION_DECODER
def_bool y
diff --git a/arch/x86/include/asm/Kbuild b/arch/x86/include/asm/Kbuild
index 7f66985..203f5f9 100644
--- a/arch/x86/include/asm/Kbuild
+++ b/arch/x86/include/asm/Kbuild
@@ -5,3 +5,4 @@ genhdr-y += unistd_64.h
genhdr-y += unistd_x32.h

generic-y += clkdev.h
+generic-y += early_ioremap.h
diff --git a/arch/x86/include/asm/fixmap.h b/arch/x86/include/asm/fixmap.h
index 7252cd3..e5f236d 100644
--- a/arch/x86/include/asm/fixmap.h
+++ b/arch/x86/include/asm/fixmap.h
@@ -177,5 +177,11 @@ static inline void __set_fixmap(enum fixed_addresses idx,

#include <asm-generic/fixmap.h>

+#define __late_set_fixmap(idx, phys, flags) __set_fixmap(idx, phys, flags)
+#define __late_clear_fixmap(idx) __set_fixmap(idx, 0, __pgprot(0))
+
+void __early_set_fixmap(enum fixed_addresses idx,
+ phys_addr_t phys, pgprot_t flags);
+
#endif /* !__ASSEMBLY__ */
#endif /* _ASM_X86_FIXMAP_H */
diff --git a/arch/x86/include/asm/io.h b/arch/x86/include/asm/io.h
index 1db414f..aae7010 100644
--- a/arch/x86/include/asm/io.h
+++ b/arch/x86/include/asm/io.h
@@ -39,6 +39,7 @@
#include <linux/string.h>
#include <linux/compiler.h>
#include <asm/page.h>
+#include <asm/early_ioremap.h>

#define build_mmio_read(name, size, type, reg, barrier) \
static inline type name(const volatile void __iomem *addr) \
@@ -316,20 +317,6 @@ extern int ioremap_change_attr(unsigned long vaddr, unsigned long size,
unsigned long prot_val);
extern void __iomem *ioremap_wc(resource_size_t offset, unsigned long size);

-/*
- * early_ioremap() and early_iounmap() are for temporary early boot-time
- * mappings, before the real ioremap() is functional.
- * A boot-time mapping is currently limited to at most 16 pages.
- */
-extern void early_ioremap_init(void);
-extern void early_ioremap_reset(void);
-extern void __iomem *early_ioremap(resource_size_t phys_addr,
- unsigned long size);
-extern void *early_memremap(resource_size_t phys_addr,
- unsigned long size);
-extern void early_iounmap(void __iomem *addr, unsigned long size);
-extern void early_memunmap(void *addr, unsigned long size);
-extern void fixup_early_ioremap(void);
extern bool is_early_ioremap_ptep(pte_t *ptep);

#ifdef CONFIG_XEN
diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index bbb4504..597ac15 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -328,17 +328,6 @@ void unxlate_dev_mem_ptr(unsigned long phys, void *addr)
return;
}

-static int __initdata early_ioremap_debug;
-
-static int __init early_ioremap_debug_setup(char *str)
-{
- early_ioremap_debug = 1;
-
- return 0;
-}
-early_param("early_ioremap_debug", early_ioremap_debug_setup);
-
-static __initdata int after_paging_init;
static pte_t bm_pte[PAGE_SIZE/sizeof(pte_t)] __page_aligned_bss;

static inline pmd_t * __init early_ioremap_pmd(unsigned long addr)
@@ -362,18 +351,11 @@ bool __init is_early_ioremap_ptep(pte_t *ptep)
return ptep >= &bm_pte[0] && ptep < &bm_pte[PAGE_SIZE/sizeof(pte_t)];
}

-static unsigned long slot_virt[FIX_BTMAPS_SLOTS] __initdata;
-
void __init early_ioremap_init(void)
{
pmd_t *pmd;
- int i;

- if (early_ioremap_debug)
- printk(KERN_INFO "early_ioremap_init()\n");
-
- for (i = 0; i < FIX_BTMAPS_SLOTS; i++)
- slot_virt[i] = __fix_to_virt(FIX_BTMAP_BEGIN - NR_FIX_BTMAPS*i);
+ early_ioremap_setup();

pmd = early_ioremap_pmd(fix_to_virt(FIX_BTMAP_BEGIN));
memset(bm_pte, 0, sizeof(bm_pte));
@@ -402,13 +384,8 @@ void __init early_ioremap_init(void)
}
}

-void __init early_ioremap_reset(void)
-{
- after_paging_init = 1;
-}
-
-static void __init __early_set_fixmap(enum fixed_addresses idx,
- phys_addr_t phys, pgprot_t flags)
+void __init __early_set_fixmap(enum fixed_addresses idx,
+ phys_addr_t phys, pgprot_t flags)
{
unsigned long addr = __fix_to_virt(idx);
pte_t *pte;
@@ -425,202 +402,3 @@ static void __init __early_set_fixmap(enum fixed_addresses idx,
pte_clear(&init_mm, addr, pte);
__flush_tlb_one(addr);
}
-
-static inline void __init early_set_fixmap(enum fixed_addresses idx,
- phys_addr_t phys, pgprot_t prot)
-{
- if (after_paging_init)
- __set_fixmap(idx, phys, prot);
- else
- __early_set_fixmap(idx, phys, prot);
-}
-
-static inline void __init early_clear_fixmap(enum fixed_addresses idx)
-{
- if (after_paging_init)
- clear_fixmap(idx);
- else
- __early_set_fixmap(idx, 0, __pgprot(0));
-}
-
-static void __iomem *prev_map[FIX_BTMAPS_SLOTS] __initdata;
-static unsigned long prev_size[FIX_BTMAPS_SLOTS] __initdata;
-
-void __init fixup_early_ioremap(void)
-{
- int i;
-
- for (i = 0; i < FIX_BTMAPS_SLOTS; i++) {
- if (prev_map[i]) {
- WARN_ON(1);
- break;
- }
- }
-
- early_ioremap_init();
-}
-
-static int __init check_early_ioremap_leak(void)
-{
- int count = 0;
- int i;
-
- for (i = 0; i < FIX_BTMAPS_SLOTS; i++)
- if (prev_map[i])
- count++;
-
- if (!count)
- return 0;
- WARN(1, KERN_WARNING
- "Debug warning: early ioremap leak of %d areas detected.\n",
- count);
- printk(KERN_WARNING
- "please boot with early_ioremap_debug and report the dmesg.\n");
-
- return 1;
-}
-late_initcall(check_early_ioremap_leak);
-
-static void __init __iomem *
-__early_ioremap(resource_size_t phys_addr, unsigned long size, pgprot_t prot)
-{
- unsigned long offset;
- resource_size_t last_addr;
- unsigned int nrpages;
- enum fixed_addresses idx;
- int i, slot;
-
- WARN_ON(system_state != SYSTEM_BOOTING);
-
- slot = -1;
- for (i = 0; i < FIX_BTMAPS_SLOTS; i++) {
- if (!prev_map[i]) {
- slot = i;
- break;
- }
- }
-
- if (slot < 0) {
- printk(KERN_INFO "%s(%08llx, %08lx) not found slot\n",
- __func__, (u64)phys_addr, size);
- WARN_ON(1);
- return NULL;
- }
-
- if (early_ioremap_debug) {
- printk(KERN_INFO "%s(%08llx, %08lx) [%d] => ",
- __func__, (u64)phys_addr, size, slot);
- dump_stack();
- }
-
- /* Don't allow wraparound or zero size */
- last_addr = phys_addr + size - 1;
- if (!size || last_addr < phys_addr) {
- WARN_ON(1);
- return NULL;
- }
-
- prev_size[slot] = size;
- /*
- * Mappings have to be page-aligned
- */
- offset = phys_addr & ~PAGE_MASK;
- phys_addr &= PAGE_MASK;
- size = PAGE_ALIGN(last_addr + 1) - phys_addr;
-
- /*
- * Mappings have to fit in the FIX_BTMAP area.
- */
- nrpages = size >> PAGE_SHIFT;
- if (nrpages > NR_FIX_BTMAPS) {
- WARN_ON(1);
- return NULL;
- }
-
- /*
- * Ok, go for it..
- */
- idx = FIX_BTMAP_BEGIN - NR_FIX_BTMAPS*slot;
- while (nrpages > 0) {
- early_set_fixmap(idx, phys_addr, prot);
- phys_addr += PAGE_SIZE;
- --idx;
- --nrpages;
- }
- if (early_ioremap_debug)
- printk(KERN_CONT "%08lx + %08lx\n", offset, slot_virt[slot]);
-
- prev_map[slot] = (void __iomem *)(offset + slot_virt[slot]);
- return prev_map[slot];
-}
-
-/* Remap an IO device */
-void __init __iomem *
-early_ioremap(resource_size_t phys_addr, unsigned long size)
-{
- return __early_ioremap(phys_addr, size, PAGE_KERNEL_IO);
-}
-
-/* Remap memory */
-void __init *early_memremap(resource_size_t phys_addr, unsigned long size)
-{
- return (__force void *)__early_ioremap(phys_addr, size, PAGE_KERNEL);
-}
-
-void __init early_iounmap(void __iomem *addr, unsigned long size)
-{
- unsigned long virt_addr;
- unsigned long offset;
- unsigned int nrpages;
- enum fixed_addresses idx;
- int i, slot;
-
- slot = -1;
- for (i = 0; i < FIX_BTMAPS_SLOTS; i++) {
- if (prev_map[i] == addr) {
- slot = i;
- break;
- }
- }
-
- if (slot < 0) {
- printk(KERN_INFO "early_iounmap(%p, %08lx) not found slot\n",
- addr, size);
- WARN_ON(1);
- return;
- }
-
- if (prev_size[slot] != size) {
- printk(KERN_INFO "early_iounmap(%p, %08lx) [%d] size not consistent %08lx\n",
- addr, size, slot, prev_size[slot]);
- WARN_ON(1);
- return;
- }
-
- if (early_ioremap_debug) {
- printk(KERN_INFO "early_iounmap(%p, %08lx) [%d]\n", addr,
- size, slot);
- dump_stack();
- }
-
- virt_addr = (unsigned long)addr;
- if (virt_addr < fix_to_virt(FIX_BTMAP_BEGIN)) {
- WARN_ON(1);
- return;
- }
- offset = virt_addr & ~PAGE_MASK;
- nrpages = PAGE_ALIGN(offset + size) >> PAGE_SHIFT;
-
- idx = FIX_BTMAP_BEGIN - NR_FIX_BTMAPS*slot;
- while (nrpages > 0) {
- early_clear_fixmap(idx);
- --idx;
- --nrpages;
- }
- prev_map[slot] = NULL;
-}
-
-void __init early_memunmap(void *addr, unsigned long size)
-{
- early_iounmap((__force void __iomem *)addr, size);
-}
diff --git a/arch/x86/mm/pgtable_32.c b/arch/x86/mm/pgtable_32.c
index a69bcb8..4dd8cf6 100644
--- a/arch/x86/mm/pgtable_32.c
+++ b/arch/x86/mm/pgtable_32.c
@@ -127,7 +127,7 @@ static int __init parse_reservetop(char *arg)

address = memparse(arg, &arg);
reserve_top_address(address);
- fixup_early_ioremap();
+ early_ioremap_init();
return 0;
}
early_param("reservetop", parse_reservetop);
--
1.8.5.3

2014-02-12 20:57:35

by Mark Salter

Subject: [PATCH v4 2/6] mm: create generic early_ioremap() support

This patch creates a generic implementation of early_ioremap() support
based on the existing x86 implementation. early_ioremap() is useful for
early boot code which needs to temporarily map I/O or memory regions
before normal mapping functions such as ioremap() are available.

Some architectures have an optional MMU. In the no-MMU case, the remap
functions simply return the physical address passed in and the unmap
functions do nothing.

Signed-off-by: Mark Salter <[email protected]>
Acked-by: Catalin Marinas <[email protected]>
---
include/asm-generic/early_ioremap.h | 42 ++++++
mm/Kconfig | 3 +
mm/Makefile | 1 +
mm/early_ioremap.c | 271 ++++++++++++++++++++++++++++++++++++
4 files changed, 317 insertions(+)
create mode 100644 include/asm-generic/early_ioremap.h
create mode 100644 mm/early_ioremap.c

diff --git a/include/asm-generic/early_ioremap.h b/include/asm-generic/early_ioremap.h
new file mode 100644
index 0000000..a5de55c
--- /dev/null
+++ b/include/asm-generic/early_ioremap.h
@@ -0,0 +1,42 @@
+#ifndef _ASM_EARLY_IOREMAP_H_
+#define _ASM_EARLY_IOREMAP_H_
+
+#include <linux/types.h>
+
+/*
+ * early_ioremap() and early_iounmap() are for temporary early boot-time
+ * mappings, before the real ioremap() is functional.
+ */
+extern void __iomem *early_ioremap(resource_size_t phys_addr,
+ unsigned long size);
+extern void *early_memremap(resource_size_t phys_addr,
+ unsigned long size);
+extern void early_iounmap(void __iomem *addr, unsigned long size);
+extern void early_memunmap(void *addr, unsigned long size);
+
+/*
+ * Weak function called by early_ioremap_reset(). It does nothing, but
+ * architectures may provide their own version to do any needed cleanups.
+ */
+extern void early_ioremap_shutdown(void);
+
+#if defined(CONFIG_GENERIC_EARLY_IOREMAP) && defined(CONFIG_MMU)
+/* Arch-specific initialization */
+extern void early_ioremap_init(void);
+
+/* Generic initialization called by architecture code */
+extern void early_ioremap_setup(void);
+
+/*
+ * Called as last step in paging_init() so library can act
+ * accordingly for subsequent map/unmap requests.
+ */
+extern void early_ioremap_reset(void);
+
+#else
+static inline void early_ioremap_init(void) { }
+static inline void early_ioremap_setup(void) { }
+static inline void early_ioremap_reset(void) { }
+#endif
+
+#endif /* _ASM_EARLY_IOREMAP_H_ */
diff --git a/mm/Kconfig b/mm/Kconfig
index 2d9f150..bf846a2 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -577,3 +577,6 @@ config PGTABLE_MAPPING

You can check speed with zsmalloc benchmark[1].
[1] https://github.com/spartacus06/zsmalloc
+
+config GENERIC_EARLY_IOREMAP
+ bool
diff --git a/mm/Makefile b/mm/Makefile
index 310c90a..9d9c587 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -61,3 +61,4 @@ obj-$(CONFIG_CLEANCACHE) += cleancache.o
obj-$(CONFIG_MEMORY_ISOLATION) += page_isolation.o
obj-$(CONFIG_ZBUD) += zbud.o
obj-$(CONFIG_ZSMALLOC) += zsmalloc.o
+obj-$(CONFIG_GENERIC_EARLY_IOREMAP) += early_ioremap.o
diff --git a/mm/early_ioremap.c b/mm/early_ioremap.c
new file mode 100644
index 0000000..6591759
--- /dev/null
+++ b/mm/early_ioremap.c
@@ -0,0 +1,271 @@
+/*
+ * Provide common bits of early_ioremap() support for architectures needing
+ * temporary mappings during boot before ioremap() is available.
+ *
+ * This is mostly a direct copy of the x86 early_ioremap implementation.
+ *
+ * (C) Copyright 1995 1996, 2014 Linus Torvalds
+ *
+ */
+#include <linux/init.h>
+#include <linux/io.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/mm.h>
+#include <linux/vmalloc.h>
+#include <asm/fixmap.h>
+
+#ifdef CONFIG_MMU
+static int early_ioremap_debug __initdata;
+
+static int __init early_ioremap_debug_setup(char *str)
+{
+ early_ioremap_debug = 1;
+
+ return 0;
+}
+early_param("early_ioremap_debug", early_ioremap_debug_setup);
+
+static int after_paging_init __initdata;
+
+void __init __attribute__((weak)) early_ioremap_shutdown(void)
+{
+}
+
+void __init early_ioremap_reset(void)
+{
+ early_ioremap_shutdown();
+ after_paging_init = 1;
+}
+
+/*
+ * Generally, ioremap() is available after paging_init() has been called.
+ * Architectures wanting to allow early_ioremap after paging_init() can
+ * define __late_set_fixmap and __late_clear_fixmap to do the right thing.
+ */
+#ifndef __late_set_fixmap
+static inline void __init __late_set_fixmap(enum fixed_addresses idx,
+ phys_addr_t phys, pgprot_t prot)
+{
+ BUG();
+}
+#endif
+
+#ifndef __late_clear_fixmap
+static inline void __init __late_clear_fixmap(enum fixed_addresses idx)
+{
+ BUG();
+}
+#endif
+
+static void __iomem *prev_map[FIX_BTMAPS_SLOTS] __initdata;
+static unsigned long prev_size[FIX_BTMAPS_SLOTS] __initdata;
+static unsigned long slot_virt[FIX_BTMAPS_SLOTS] __initdata;
+
+void __init early_ioremap_setup(void)
+{
+ int i;
+
+ for (i = 0; i < FIX_BTMAPS_SLOTS; i++) {
+ if (prev_map[i]) {
+ WARN_ON(1);
+ break;
+ }
+ }
+
+ for (i = 0; i < FIX_BTMAPS_SLOTS; i++)
+ slot_virt[i] = __fix_to_virt(FIX_BTMAP_BEGIN - NR_FIX_BTMAPS*i);
+}
+
+static int __init check_early_ioremap_leak(void)
+{
+ int count = 0;
+ int i;
+
+ for (i = 0; i < FIX_BTMAPS_SLOTS; i++)
+ if (prev_map[i])
+ count++;
+
+ if (!count)
+ return 0;
+ WARN(1, KERN_WARNING
+ "Debug warning: early ioremap leak of %d areas detected.\n",
+ count);
+ pr_warn("please boot with early_ioremap_debug and report the dmesg.\n");
+
+ return 1;
+}
+late_initcall(check_early_ioremap_leak);
+
+static void __init __iomem *
+__early_ioremap(resource_size_t phys_addr, unsigned long size, pgprot_t prot)
+{
+ unsigned long offset;
+ resource_size_t last_addr;
+ unsigned int nrpages;
+ enum fixed_addresses idx;
+ int i, slot;
+
+ WARN_ON(system_state != SYSTEM_BOOTING);
+
+ slot = -1;
+ for (i = 0; i < FIX_BTMAPS_SLOTS; i++) {
+ if (!prev_map[i]) {
+ slot = i;
+ break;
+ }
+ }
+
+ if (slot < 0) {
+ pr_info("%s(%08llx, %08lx) not found slot\n",
+ __func__, (u64)phys_addr, size);
+ WARN_ON(1);
+ return NULL;
+ }
+
+ if (early_ioremap_debug) {
+ pr_info("%s(%08llx, %08lx) [%d] => ",
+ __func__, (u64)phys_addr, size, slot);
+ dump_stack();
+ }
+
+ /* Don't allow wraparound or zero size */
+ last_addr = phys_addr + size - 1;
+ if (!size || last_addr < phys_addr) {
+ WARN_ON(1);
+ return NULL;
+ }
+
+ prev_size[slot] = size;
+ /*
+ * Mappings have to be page-aligned
+ */
+ offset = phys_addr & ~PAGE_MASK;
+ phys_addr &= PAGE_MASK;
+ size = PAGE_ALIGN(last_addr + 1) - phys_addr;
+
+ /*
+ * Mappings have to fit in the FIX_BTMAP area.
+ */
+ nrpages = size >> PAGE_SHIFT;
+ if (nrpages > NR_FIX_BTMAPS) {
+ WARN_ON(1);
+ return NULL;
+ }
+
+ /*
+ * Ok, go for it..
+ */
+ idx = FIX_BTMAP_BEGIN - NR_FIX_BTMAPS*slot;
+ while (nrpages > 0) {
+ if (after_paging_init)
+ __late_set_fixmap(idx, phys_addr, prot);
+ else
+ __early_set_fixmap(idx, phys_addr, prot);
+ phys_addr += PAGE_SIZE;
+ --idx;
+ --nrpages;
+ }
+ if (early_ioremap_debug)
+ pr_cont("%08lx + %08lx\n", offset, slot_virt[slot]);
+
+ prev_map[slot] = (void __iomem *)(offset + slot_virt[slot]);
+ return prev_map[slot];
+}
+
+void __init early_iounmap(void __iomem *addr, unsigned long size)
+{
+ unsigned long virt_addr;
+ unsigned long offset;
+ unsigned int nrpages;
+ enum fixed_addresses idx;
+ int i, slot;
+
+ slot = -1;
+ for (i = 0; i < FIX_BTMAPS_SLOTS; i++) {
+ if (prev_map[i] == addr) {
+ slot = i;
+ break;
+ }
+ }
+
+ if (slot < 0) {
+ pr_info("early_iounmap(%p, %08lx) not found slot\n",
+ addr, size);
+ WARN_ON(1);
+ return;
+ }
+
+ if (prev_size[slot] != size) {
+ pr_info("early_iounmap(%p, %08lx) [%d] size not consistent %08lx\n",
+ addr, size, slot, prev_size[slot]);
+ WARN_ON(1);
+ return;
+ }
+
+ if (early_ioremap_debug) {
+ pr_info("early_iounmap(%p, %08lx) [%d]\n", addr,
+ size, slot);
+ dump_stack();
+ }
+
+ virt_addr = (unsigned long)addr;
+ if (virt_addr < fix_to_virt(FIX_BTMAP_BEGIN)) {
+ WARN_ON(1);
+ return;
+ }
+ offset = virt_addr & ~PAGE_MASK;
+ nrpages = PAGE_ALIGN(offset + size) >> PAGE_SHIFT;
+
+ idx = FIX_BTMAP_BEGIN - NR_FIX_BTMAPS*slot;
+ while (nrpages > 0) {
+ if (after_paging_init)
+ __late_clear_fixmap(idx);
+ else
+ __early_set_fixmap(idx, 0, FIXMAP_PAGE_CLEAR);
+ --idx;
+ --nrpages;
+ }
+ prev_map[slot] = NULL;
+}
+
+/* Remap an IO device */
+void __init __iomem *
+early_ioremap(resource_size_t phys_addr, unsigned long size)
+{
+ return __early_ioremap(phys_addr, size, FIXMAP_PAGE_IO);
+}
+
+/* Remap memory */
+void __init *
+early_memremap(resource_size_t phys_addr, unsigned long size)
+{
+ return (__force void *)__early_ioremap(phys_addr, size,
+ FIXMAP_PAGE_NORMAL);
+}
+#else /* CONFIG_MMU */
+
+void __init __iomem *
+early_ioremap(resource_size_t phys_addr, unsigned long size)
+{
+ return (__force void __iomem *)phys_addr;
+}
+
+/* Remap memory */
+void __init *
+early_memremap(resource_size_t phys_addr, unsigned long size)
+{
+ return (void *)phys_addr;
+}
+
+void __init early_iounmap(void __iomem *addr, unsigned long size)
+{
+}
+
+#endif /* CONFIG_MMU */
+
+
+void __init early_memunmap(void *addr, unsigned long size)
+{
+ early_iounmap((__force void __iomem *)addr, size);
+}
--
1.8.5.3

2014-02-12 20:57:45

by Mark Salter

Subject: [PATCH v4 5/6] arm64: initialize pgprot info earlier in boot

Presently, paging_init() calls init_mem_pgprot() to initialize pgprot
values used by macros such as PAGE_KERNEL, PAGE_KERNEL_EXEC, etc. The
new fixmap and early_ioremap support also needs to use these macros
before paging_init() is called. This patch moves the init_mem_pgprot()
call out of paging_init() and into setup_arch() so that pgprot_default
gets initialized in time for fixmap and early_ioremap.

Signed-off-by: Mark Salter <[email protected]>
---
arch/arm64/include/asm/mmu.h | 1 +
arch/arm64/kernel/setup.c | 2 ++
arch/arm64/mm/mmu.c | 3 +--
3 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 2494fc0..f600d40 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -27,5 +27,6 @@ typedef struct {
extern void paging_init(void);
extern void setup_mm_for_reboot(void);
extern void __iomem *early_io_map(phys_addr_t phys, unsigned long virt);
+extern void init_mem_pgprot(void);

#endif
diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
index c8e9eff..1c66cfb 100644
--- a/arch/arm64/kernel/setup.c
+++ b/arch/arm64/kernel/setup.c
@@ -327,6 +327,8 @@ void __init setup_arch(char **cmdline_p)

*cmdline_p = boot_command_line;

+ init_mem_pgprot();
+
parse_early_param();

arm64_memblock_init();
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index f8dc7e8..ba259a0 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -125,7 +125,7 @@ early_param("cachepolicy", early_cachepolicy);
/*
* Adjust the PMD section entries according to the CPU in use.
*/
-static void __init init_mem_pgprot(void)
+void __init init_mem_pgprot(void)
{
pteval_t default_pgprot;
int i;
@@ -357,7 +357,6 @@ void __init paging_init(void)
{
void *zero_page;

- init_mem_pgprot();
map_mem();

/*
--
1.8.5.3

2014-02-12 20:57:43

by Mark Salter

Subject: [PATCH v4 4/6] arm: add early_ioremap support

This patch uses the generic early_ioremap code to implement
early_ioremap for ARM. The ARM-specific bits come mostly from
an earlier patch from Leif Lindholm <[email protected]>
here:

https://lkml.org/lkml/2013/10/3/279

Signed-off-by: Mark Salter <[email protected]>
Tested-by: Leif Lindholm <[email protected]>
Acked-by: Catalin Marinas <[email protected]>
---
arch/arm/Kconfig | 10 +++++
arch/arm/include/asm/Kbuild | 1 +
arch/arm/include/asm/fixmap.h | 20 ++++++++++
arch/arm/include/asm/io.h | 1 +
arch/arm/kernel/setup.c | 2 +
arch/arm/mm/Makefile | 4 ++
arch/arm/mm/early_ioremap.c | 93 +++++++++++++++++++++++++++++++++++++++++++
arch/arm/mm/mmu.c | 2 +
8 files changed, 133 insertions(+)
create mode 100644 arch/arm/mm/early_ioremap.c

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index e254198..7a35ef6 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -1874,6 +1874,16 @@ config UACCESS_WITH_MEMCPY
However, if the CPU data cache is using a write-allocate mode,
this option is unlikely to provide any performance gain.

+config EARLY_IOREMAP
+ bool "Provide early_ioremap() support for kernel initialization"
+ select GENERIC_EARLY_IOREMAP
+ help
+ Provide a mechanism for kernel initialisation code to temporarily
+ map, in a highmem-agnostic way, memory pages in before ioremap()
+ and friends are available (before paging_init() has run). It uses
+ the same virtual memory range as kmap so all early mappings must
+ be unmapped before paging_init() is called.
+
config SECCOMP
bool
prompt "Enable seccomp to safely compute untrusted bytecode"
diff --git a/arch/arm/include/asm/Kbuild b/arch/arm/include/asm/Kbuild
index 3278afe..6fcfd7d 100644
--- a/arch/arm/include/asm/Kbuild
+++ b/arch/arm/include/asm/Kbuild
@@ -4,6 +4,7 @@ generic-y += auxvec.h
generic-y += bitsperlong.h
generic-y += cputime.h
generic-y += current.h
+generic-y += early_ioremap.h
generic-y += emergency-restart.h
generic-y += errno.h
generic-y += exec.h
diff --git a/arch/arm/include/asm/fixmap.h b/arch/arm/include/asm/fixmap.h
index 68ea615..ff8fa3e 100644
--- a/arch/arm/include/asm/fixmap.h
+++ b/arch/arm/include/asm/fixmap.h
@@ -21,8 +21,28 @@ enum fixed_addresses {
FIX_KMAP_BEGIN,
FIX_KMAP_END = (FIXADDR_TOP - FIXADDR_START) >> PAGE_SHIFT,
__end_of_fixed_addresses
+/*
+ * 224 temporary boot-time mappings, used by early_ioremap(),
+ * before ioremap() is functional.
+ *
+ * (P)re-using the FIXADDR region, which is used for highmem
+ * later on, and statically aligned to 1MB.
+ */
+#define NR_FIX_BTMAPS 32
+#define FIX_BTMAPS_SLOTS 7
+#define TOTAL_FIX_BTMAPS (NR_FIX_BTMAPS * FIX_BTMAPS_SLOTS)
+#define FIX_BTMAP_END FIX_KMAP_BEGIN
+#define FIX_BTMAP_BEGIN (FIX_BTMAP_END + TOTAL_FIX_BTMAPS - 1)
};

+#define FIXMAP_PAGE_COMMON (L_PTE_YOUNG | L_PTE_PRESENT | L_PTE_XN)
+
+#define FIXMAP_PAGE_NORMAL (FIXMAP_PAGE_COMMON | L_PTE_MT_WRITEBACK)
+#define FIXMAP_PAGE_IO (FIXMAP_PAGE_COMMON | L_PTE_MT_DEV_NONSHARED)
+
+extern void __early_set_fixmap(enum fixed_addresses idx,
+ phys_addr_t phys, pgprot_t flags);
+
#include <asm-generic/fixmap.h>

#endif
diff --git a/arch/arm/include/asm/io.h b/arch/arm/include/asm/io.h
index 8aa4cca..637e0cd 100644
--- a/arch/arm/include/asm/io.h
+++ b/arch/arm/include/asm/io.h
@@ -28,6 +28,7 @@
#include <asm/byteorder.h>
#include <asm/memory.h>
#include <asm-generic/pci_iomap.h>
+#include <asm/early_ioremap.h>
#include <xen/xen.h>

/*
diff --git a/arch/arm/kernel/setup.c b/arch/arm/kernel/setup.c
index b0df976..9c8b751 100644
--- a/arch/arm/kernel/setup.c
+++ b/arch/arm/kernel/setup.c
@@ -30,6 +30,7 @@
#include <linux/bug.h>
#include <linux/compiler.h>
#include <linux/sort.h>
+#include <linux/io.h>

#include <asm/unified.h>
#include <asm/cp15.h>
@@ -880,6 +881,7 @@ void __init setup_arch(char **cmdline_p)
const struct machine_desc *mdesc;

setup_processor();
+ early_ioremap_init();
mdesc = setup_machine_fdt(__atags_pointer);
if (!mdesc)
mdesc = setup_machine_tags(__atags_pointer, __machine_arch_type);
diff --git a/arch/arm/mm/Makefile b/arch/arm/mm/Makefile
index 7f39ce2..501af98 100644
--- a/arch/arm/mm/Makefile
+++ b/arch/arm/mm/Makefile
@@ -12,6 +12,10 @@ ifneq ($(CONFIG_MMU),y)
obj-y += nommu.o
endif

+ifeq ($(CONFIG_MMU),y)
+obj-$(CONFIG_EARLY_IOREMAP) += early_ioremap.o
+endif
+
obj-$(CONFIG_ARM_PTDUMP) += dump.o
obj-$(CONFIG_MODULES) += proc-syms.o

diff --git a/arch/arm/mm/early_ioremap.c b/arch/arm/mm/early_ioremap.c
new file mode 100644
index 0000000..c3e2bf2
--- /dev/null
+++ b/arch/arm/mm/early_ioremap.c
@@ -0,0 +1,93 @@
+/*
+ * early_ioremap() support for ARM
+ *
+ * Based on existing support in arch/x86/mm/ioremap.c
+ *
+ * Restrictions: currently only functional before paging_init()
+ */
+
+#include <linux/init.h>
+#include <linux/io.h>
+
+#include <asm/fixmap.h>
+#include <asm/pgalloc.h>
+#include <asm/pgtable.h>
+#include <asm/tlbflush.h>
+
+#include <asm/mach/map.h>
+
+static pte_t bm_pte[PTRS_PER_PTE] __aligned(PTE_HWTABLE_SIZE) __initdata;
+
+static inline pmd_t * __init early_ioremap_pmd(unsigned long addr)
+{
+ unsigned int index = pgd_index(addr);
+ pgd_t *pgd = cpu_get_pgd() + index;
+ pud_t *pud = pud_offset(pgd, addr);
+ pmd_t *pmd = pmd_offset(pud, addr);
+
+ return pmd;
+}
+
+static inline pte_t * __init early_ioremap_pte(unsigned long addr)
+{
+ return &bm_pte[pte_index(addr)];
+}
+
+void __init early_ioremap_init(void)
+{
+ pmd_t *pmd;
+
+ pmd = early_ioremap_pmd(fix_to_virt(FIX_BTMAP_BEGIN));
+
+ pmd_populate_kernel(NULL, pmd, bm_pte);
+
+ /*
+ * Make sure we don't span multiple pmds.
+ */
+ BUILD_BUG_ON((__fix_to_virt(FIX_BTMAP_BEGIN) >> PMD_SHIFT)
+ != (__fix_to_virt(FIX_BTMAP_END) >> PMD_SHIFT));
+
+ if (pmd != early_ioremap_pmd(fix_to_virt(FIX_BTMAP_END))) {
+ WARN_ON(1);
+ pr_warn("pmd %p != %p\n",
+ pmd, early_ioremap_pmd(fix_to_virt(FIX_BTMAP_END)));
+ pr_warn("fix_to_virt(FIX_BTMAP_BEGIN): %08lx\n",
+ fix_to_virt(FIX_BTMAP_BEGIN));
+ pr_warn("fix_to_virt(FIX_BTMAP_END): %08lx\n",
+ fix_to_virt(FIX_BTMAP_END));
+ pr_warn("FIX_BTMAP_END: %d\n", FIX_BTMAP_END);
+ pr_warn("FIX_BTMAP_BEGIN: %d\n", FIX_BTMAP_BEGIN);
+ }
+
+ early_ioremap_setup();
+}
+
+void __init __early_set_fixmap(enum fixed_addresses idx,
+ phys_addr_t phys, pgprot_t flags)
+{
+ unsigned long addr = __fix_to_virt(idx);
+ pte_t *pte;
+ u64 desc;
+
+ if (idx > FIX_KMAP_END) {
+ BUG();
+ return;
+ }
+ pte = early_ioremap_pte(addr);
+
+ if (pgprot_val(flags))
+ set_pte_at(NULL, 0xfff00000, pte,
+ pfn_pte(phys >> PAGE_SHIFT, flags));
+ else
+ pte_clear(NULL, addr, pte);
+ flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
+ desc = *pte;
+}
+
+void __init
+early_ioremap_shutdown(void)
+{
+ pmd_t *pmd;
+ pmd = early_ioremap_pmd(fix_to_virt(FIX_BTMAP_BEGIN));
+ pmd_clear(pmd);
+}
diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index 4f08c13..5067294 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -35,6 +35,7 @@
#include <asm/mach/arch.h>
#include <asm/mach/map.h>
#include <asm/mach/pci.h>
+#include <asm/early_ioremap.h>

#include "mm.h"
#include "tcm.h"
@@ -1489,6 +1490,7 @@ void __init paging_init(const struct machine_desc *mdesc)
{
void *zero_page;

+ early_ioremap_reset();
build_mem_type_table();
prepare_page_table();
map_lowmem();
--
1.8.5.3

2014-02-12 20:58:35

by Mark Salter

Subject: [PATCH v4 6/6] arm64: add early_ioremap support

Add support for early IO or memory mappings which are needed
before the normal ioremap() is usable. This also adds fixmap
support for permanent fixed mappings such as that used by the
earlyprintk device register region.

Signed-off-by: Mark Salter <[email protected]>
---
Documentation/arm64/memory.txt | 4 +-
arch/arm64/Kconfig | 1 +
arch/arm64/include/asm/Kbuild | 1 +
arch/arm64/include/asm/fixmap.h | 67 +++++++++++++++++++++++++++++++
arch/arm64/include/asm/io.h | 1 +
arch/arm64/include/asm/memory.h | 2 +-
arch/arm64/kernel/early_printk.c | 8 +++-
arch/arm64/kernel/head.S | 9 ++---
arch/arm64/kernel/setup.c | 2 +
arch/arm64/mm/ioremap.c | 85 ++++++++++++++++++++++++++++++++++++++++
arch/arm64/mm/mmu.c | 41 -------------------
11 files changed, 169 insertions(+), 52 deletions(-)
create mode 100644 arch/arm64/include/asm/fixmap.h

diff --git a/Documentation/arm64/memory.txt b/Documentation/arm64/memory.txt
index 5e054bf..953c81e 100644
--- a/Documentation/arm64/memory.txt
+++ b/Documentation/arm64/memory.txt
@@ -35,7 +35,7 @@ ffffffbc00000000 ffffffbdffffffff 8GB vmemmap

ffffffbe00000000 ffffffbffbbfffff ~8GB [guard, future vmmemap]

-ffffffbffbc00000 ffffffbffbdfffff 2MB earlyprintk device
+ffffffbffbc00000 ffffffbffbdfffff 2MB fixed mappings

ffffffbffbe00000 ffffffbffbe0ffff 64KB PCI I/O space

@@ -60,7 +60,7 @@ fffffdfc00000000 fffffdfdffffffff 8GB vmemmap

fffffdfe00000000 fffffdfffbbfffff ~8GB [guard, future vmmemap]

-fffffdfffbc00000 fffffdfffbdfffff 2MB earlyprintk device
+fffffdfffbc00000 fffffdfffbdfffff 2MB fixed mappings

fffffdfffbe00000 fffffdfffbe0ffff 64KB PCI I/O space

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 27bbcfc..da4304a 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -16,6 +16,7 @@ config ARM64
select DCACHE_WORD_ACCESS
select GENERIC_CLOCKEVENTS
select GENERIC_CLOCKEVENTS_BROADCAST if SMP
+ select GENERIC_EARLY_IOREMAP
select GENERIC_IOMAP
select GENERIC_IRQ_PROBE
select GENERIC_IRQ_SHOW
diff --git a/arch/arm64/include/asm/Kbuild b/arch/arm64/include/asm/Kbuild
index 71c53ec..27e3c6b 100644
--- a/arch/arm64/include/asm/Kbuild
+++ b/arch/arm64/include/asm/Kbuild
@@ -10,6 +10,7 @@ generic-y += delay.h
generic-y += div64.h
generic-y += dma.h
generic-y += emergency-restart.h
+generic-y += early_ioremap.h
generic-y += errno.h
generic-y += ftrace.h
generic-y += hw_irq.h
diff --git a/arch/arm64/include/asm/fixmap.h b/arch/arm64/include/asm/fixmap.h
new file mode 100644
index 0000000..9c1fb65
--- /dev/null
+++ b/arch/arm64/include/asm/fixmap.h
@@ -0,0 +1,67 @@
+/*
+ * fixmap.h: compile-time virtual memory allocation
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1998 Ingo Molnar
+ * Copyright (C) 2013 Mark Salter <[email protected]>
+ *
+ * Adapted from arch/x86_64 version.
+ *
+ */
+
+#ifndef _ASM_ARM64_FIXMAP_H
+#define _ASM_ARM64_FIXMAP_H
+
+#ifndef __ASSEMBLY__
+#include <linux/kernel.h>
+#include <asm/page.h>
+
+/*
+ * Here we define all the compile-time 'special' virtual
+ * addresses. The point is to have a constant address at
+ * compile time, but to set the physical address only
+ * in the boot process.
+ *
+ * These 'compile-time allocated' memory buffers are
+ * page-sized. Use set_fixmap(idx,phys) to associate
+ * physical memory with fixmap indices.
+ *
+ */
+enum fixed_addresses {
+ FIX_EARLYCON,
+ __end_of_permanent_fixed_addresses,
+
+ /*
+ * Temporary boot-time mappings, used by early_ioremap(),
+ * before ioremap() is functional.
+ */
+#ifdef CONFIG_ARM64_64K_PAGES
+#define NR_FIX_BTMAPS 4
+#else
+#define NR_FIX_BTMAPS 64
+#endif
+#define FIX_BTMAPS_SLOTS 7
+#define TOTAL_FIX_BTMAPS (NR_FIX_BTMAPS * FIX_BTMAPS_SLOTS)
+
+ FIX_BTMAP_END = __end_of_permanent_fixed_addresses,
+ FIX_BTMAP_BEGIN = FIX_BTMAP_END + TOTAL_FIX_BTMAPS - 1,
+ __end_of_fixed_addresses
+};
+
+#define FIXADDR_SIZE (__end_of_permanent_fixed_addresses << PAGE_SHIFT)
+#define FIXADDR_START (FIXADDR_TOP - FIXADDR_SIZE)
+
+#define FIXMAP_PAGE_IO __pgprot(PROT_DEVICE_nGnRE)
+
+extern void __early_set_fixmap(enum fixed_addresses idx,
+ phys_addr_t phys, pgprot_t flags);
+
+#define __set_fixmap __early_set_fixmap
+
+#include <asm-generic/fixmap.h>
+
+#endif /* !__ASSEMBLY__ */
+#endif /* _ASM_ARM64_FIXMAP_H */
diff --git a/arch/arm64/include/asm/io.h b/arch/arm64/include/asm/io.h
index 4cc813e..8fb2152 100644
--- a/arch/arm64/include/asm/io.h
+++ b/arch/arm64/include/asm/io.h
@@ -27,6 +27,7 @@
#include <asm/byteorder.h>
#include <asm/barrier.h>
#include <asm/pgtable.h>
+#include <asm/early_ioremap.h>

#include <xen/xen.h>

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 9dc5dc3..e94f945 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -49,7 +49,7 @@
#define PAGE_OFFSET (UL(0xffffffffffffffff) << (VA_BITS - 1))
#define MODULES_END (PAGE_OFFSET)
#define MODULES_VADDR (MODULES_END - SZ_64M)
-#define EARLYCON_IOBASE (MODULES_VADDR - SZ_4M)
+#define FIXADDR_TOP (MODULES_VADDR - SZ_2M - PAGE_SIZE)
#define TASK_SIZE_64 (UL(1) << VA_BITS)

#ifdef CONFIG_COMPAT
diff --git a/arch/arm64/kernel/early_printk.c b/arch/arm64/kernel/early_printk.c
index fbb6e18..850d9a4 100644
--- a/arch/arm64/kernel/early_printk.c
+++ b/arch/arm64/kernel/early_printk.c
@@ -26,6 +26,8 @@
#include <linux/amba/serial.h>
#include <linux/serial_reg.h>

+#include <asm/fixmap.h>
+
static void __iomem *early_base;
static void (*printch)(char ch);

@@ -141,8 +143,10 @@ static int __init setup_early_printk(char *buf)
}
/* no options parsing yet */

- if (paddr)
- early_base = early_io_map(paddr, EARLYCON_IOBASE);
+ if (paddr) {
+ set_fixmap_io(FIX_EARLYCON, paddr);
+ early_base = (void __iomem *)fix_to_virt(FIX_EARLYCON);
+ }

printch = match->printch;
early_console = &early_console_dev;
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 0b281ff..c86bfdf 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -412,7 +412,7 @@ ENDPROC(__calc_phys_offset)
* - identity mapping to enable the MMU (low address, TTBR0)
* - first few MB of the kernel linear mapping to jump to once the MMU has
* been enabled, including the FDT blob (TTBR1)
- * - UART mapping if CONFIG_EARLY_PRINTK is enabled (TTBR1)
+ * - pgd entry for fixed mappings (TTBR1)
*/
__create_page_tables:
pgtbl x25, x26, x24 // idmap_pg_dir and swapper_pg_dir addresses
@@ -465,15 +465,12 @@ __create_page_tables:
sub x6, x6, #1 // inclusive range
create_block_map x0, x7, x3, x5, x6
1:
-#ifdef CONFIG_EARLY_PRINTK
/*
- * Create the pgd entry for the UART mapping. The full mapping is done
- * later based earlyprintk kernel parameter.
+ * Create the pgd entry for the fixed mappings.
*/
- ldr x5, =EARLYCON_IOBASE // UART virtual address
+ ldr x5, =FIXADDR_TOP // Fixed mapping virtual address
add x0, x26, #2 * PAGE_SIZE // section table address
create_pgd_entry x26, x0, x5, x6, x7
-#endif
ret
ENDPROC(__create_page_tables)
.ltorg
diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
index 1c66cfb..4d2ac74 100644
--- a/arch/arm64/kernel/setup.c
+++ b/arch/arm64/kernel/setup.c
@@ -42,6 +42,7 @@
#include <linux/of_fdt.h>
#include <linux/of_platform.h>

+#include <asm/fixmap.h>
#include <asm/cputype.h>
#include <asm/elf.h>
#include <asm/cputable.h>
@@ -328,6 +329,7 @@ void __init setup_arch(char **cmdline_p)
*cmdline_p = boot_command_line;

init_mem_pgprot();
+ early_ioremap_init();

parse_early_param();

diff --git a/arch/arm64/mm/ioremap.c b/arch/arm64/mm/ioremap.c
index 2bb1d58..7ec3283 100644
--- a/arch/arm64/mm/ioremap.c
+++ b/arch/arm64/mm/ioremap.c
@@ -25,6 +25,10 @@
#include <linux/vmalloc.h>
#include <linux/io.h>

+#include <asm/fixmap.h>
+#include <asm/tlbflush.h>
+#include <asm/pgalloc.h>
+
static void __iomem *__ioremap_caller(phys_addr_t phys_addr, size_t size,
pgprot_t prot, void *caller)
{
@@ -98,3 +102,84 @@ void __iomem *ioremap_cache(phys_addr_t phys_addr, size_t size)
__builtin_return_address(0));
}
EXPORT_SYMBOL(ioremap_cache);
+
+#ifndef CONFIG_ARM64_64K_PAGES
+static pte_t bm_pte[PTRS_PER_PTE] __page_aligned_bss;
+#endif
+
+static inline pmd_t * __init early_ioremap_pmd(unsigned long addr)
+{
+ pgd_t *pgd;
+ pud_t *pud;
+
+ pgd = pgd_offset_k(addr);
+ BUG_ON(pgd_none(*pgd) || pgd_bad(*pgd));
+
+ pud = pud_offset(pgd, addr);
+ BUG_ON(pud_none(*pud) || pud_bad(*pud));
+
+ return pmd_offset(pud, addr);
+}
+
+static inline pte_t * __init early_ioremap_pte(unsigned long addr)
+{
+ pmd_t *pmd = early_ioremap_pmd(addr);
+
+ BUG_ON(pmd_none(*pmd) || pmd_bad(*pmd));
+
+ return pte_offset_kernel(pmd, addr);
+}
+
+void __init early_ioremap_init(void)
+{
+ pmd_t *pmd;
+
+ pmd = early_ioremap_pmd(fix_to_virt(FIX_BTMAP_BEGIN));
+#ifndef CONFIG_ARM64_64K_PAGES
+ /* need to populate pmd for 4k pagesize only */
+ pmd_populate_kernel(&init_mm, pmd, bm_pte);
+#endif
+ /*
+ * The boot-ioremap range spans multiple pmds, for which
+ * we are not prepared:
+ */
+ BUILD_BUG_ON((__fix_to_virt(FIX_BTMAP_BEGIN) >> PMD_SHIFT)
+ != (__fix_to_virt(FIX_BTMAP_END) >> PMD_SHIFT));
+
+ if (pmd != early_ioremap_pmd(fix_to_virt(FIX_BTMAP_END))) {
+ WARN_ON(1);
+ pr_warn("pmd %p != %p\n",
+ pmd, early_ioremap_pmd(fix_to_virt(FIX_BTMAP_END)));
+ pr_warn("fix_to_virt(FIX_BTMAP_BEGIN): %08lx\n",
+ fix_to_virt(FIX_BTMAP_BEGIN));
+ pr_warn("fix_to_virt(FIX_BTMAP_END): %08lx\n",
+ fix_to_virt(FIX_BTMAP_END));
+
+ pr_warn("FIX_BTMAP_END: %d\n", FIX_BTMAP_END);
+ pr_warn("FIX_BTMAP_BEGIN: %d\n",
+ FIX_BTMAP_BEGIN);
+ }
+
+ early_ioremap_setup();
+}
+
+void __init __early_set_fixmap(enum fixed_addresses idx,
+ phys_addr_t phys, pgprot_t flags)
+{
+ unsigned long addr = __fix_to_virt(idx);
+ pte_t *pte;
+
+ if (idx >= __end_of_fixed_addresses) {
+ BUG();
+ return;
+ }
+
+ pte = early_ioremap_pte(addr);
+
+ if (pgprot_val(flags))
+ set_pte(pte, pfn_pte(phys >> PAGE_SHIFT, flags));
+ else {
+ pte_clear(&init_mm, addr, pte);
+ flush_tlb_kernel_range(addr, addr+PAGE_SIZE);
+ }
+}
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index ba259a0..6b7e895 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -260,47 +260,6 @@ static void __init create_mapping(phys_addr_t phys, unsigned long virt,
} while (pgd++, addr = next, addr != end);
}

-#ifdef CONFIG_EARLY_PRINTK
-/*
- * Create an early I/O mapping using the pgd/pmd entries already populated
- * in head.S as this function is called too early to allocated any memory. The
- * mapping size is 2MB with 4KB pages or 64KB or 64KB pages.
- */
-void __iomem * __init early_io_map(phys_addr_t phys, unsigned long virt)
-{
- unsigned long size, mask;
- bool page64k = IS_ENABLED(CONFIG_ARM64_64K_PAGES);
- pgd_t *pgd;
- pud_t *pud;
- pmd_t *pmd;
- pte_t *pte;
-
- /*
- * No early pte entries with !ARM64_64K_PAGES configuration, so using
- * sections (pmd).
- */
- size = page64k ? PAGE_SIZE : SECTION_SIZE;
- mask = ~(size - 1);
-
- pgd = pgd_offset_k(virt);
- pud = pud_offset(pgd, virt);
- if (pud_none(*pud))
- return NULL;
- pmd = pmd_offset(pud, virt);
-
- if (page64k) {
- if (pmd_none(*pmd))
- return NULL;
- pte = pte_offset_kernel(pmd, virt);
- set_pte(pte, __pte((phys & mask) | PROT_DEVICE_nGnRE));
- } else {
- set_pmd(pmd, __pmd((phys & mask) | PROT_SECT_DEVICE_nGnRE));
- }
-
- return (void __iomem *)((virt & mask) + (phys & ~mask));
-}
-#endif
-
static void __init map_mem(void)
{
struct memblock_region *reg;
--
1.8.5.3

2014-02-25 14:40:26

by Mark Salter

[permalink] [raw]
Subject: Re: [PATCH v4 0/6] generic early_ioremap support

On Wed, 2014-02-12 at 15:56 -0500, Mark Salter wrote:
> This patch series takes the common bits from the x86 early ioremap
> implementation and creates a generic implementation which may be used
> by other architectures. The early ioremap interfaces are intended for
> situations where boot code needs to make temporary virtual mappings
> before the normal ioremap interfaces are available. Typically, this
> means before paging_init() has run.
>
> These patches are layered on top of generic fixmap patches which
> were pulled into 3.14-rc with the exception of the arm patch:
>
> https://lkml.org/lkml/2013/11/25/477
>
> The arm fixmap patch is currently in the akpm tree and has been
> part of linux-next for a while.
>
> This is version 4 of the patch series. These patches (and underlying
> fixmap patches) may be found at:
>
> git://github.com/mosalter/linux.git (early-ioremap-v4 branch)

There have been no comments on this patch series over the past
two weeks. I'd like to get it into linux-next for some wider
testing and eventually into 3.15. Is there something I can do
to help it along?

2014-02-25 18:31:40

by Will Deacon

Subject: Re: [PATCH v4 0/6] generic early_ioremap support

On Tue, Feb 25, 2014 at 02:10:04PM +0000, Mark Salter wrote:
> On Wed, 2014-02-12 at 15:56 -0500, Mark Salter wrote:
> > This patch series takes the common bits from the x86 early ioremap
> > implementation and creates a generic implementation which may be used
> > by other architectures. The early ioremap interfaces are intended for
> > situations where boot code needs to make temporary virtual mappings
> > before the normal ioremap interfaces are available. Typically, this
> > means before paging_init() has run.
> >
> > These patches are layered on top of generic fixmap patches which
> > were pulled into 3.14-rc with the exception of the arm patch:
> >
> > https://lkml.org/lkml/2013/11/25/477
> >
> > The arm fixmap patch is currently in the akpm tree and has been
> > part of linux-next for a while.
> >
> > This is version 4 of the patch series. These patches (and underlying
> > fixmap patches) may be found at:
> >
> > git://github.com/mosalter/linux.git (early-ioremap-v4 branch)
>
> There have been no comments on this patch series over the past
> two weeks. I'd like to get it into linux-next for some wider
> testing and eventually into 3.15. Is there something I can do
> to help it along?

I'd suggest splitting the core part out from the arch-specific parts. That
way, the core part can be merged independently and architectures can move over
as they see fit. It also signals (at least to me) that, "hey, I should
probably review this" whilst my current stance is "there's a whole load of
stuff under mm/ that needs to be acked first".

If you put the whole thing into next, you just run the risk of conflicts
with all the arch trees.

Will

2014-02-25 18:46:32

by Mark Salter

Subject: Re: [PATCH v4 0/6] generic early_ioremap support

On Tue, 2014-02-25 at 18:30 +0000, Will Deacon wrote:
> I'd suggest splitting the core part out from the arch-specific parts. That
> way, the core part can be merged independently and architectures can move over
> as they see fit. It also signals (at least to me) that, "hey, I should
> probably review this" whilst my current stance is "there's a whole load of
> stuff under mm/ that needs to be acked first".
>
> If you put the whole thing into next, you just run the risk of conflicts
> with all the arch trees.

I've been thinking of breaking out the common bits and x86 bits and just
going with that for now. There's no point in just doing the common bits
because it won't get tested without at least one architecture using it.

2014-02-25 19:45:47

by H. Peter Anvin

Subject: Re: [PATCH v4 0/6] generic early_ioremap support

On 02/25/2014 10:45 AM, Mark Salter wrote:
> On Tue, 2014-02-25 at 18:30 +0000, Will Deacon wrote:
>> I'd suggest splitting the core part out from the arch-specific parts. That
>> way, the core part can be merged independently and architectures can move over
>> as they see fit. It also signals (at least to me) that, "hey, I should
>> probably review this" whilst my current stance is "there's a whole load of
>> stuff under mm/ that needs to be acked first".
>>
>> If you put the whole thing into next, you just run the risk of conflicts
>> with all the arch trees.
>
> I've been thinking of breaking out the common bits and x86 bits and just
> going with that for now. There's no point in just doing the common bits
> because it won't get tested without at least one architecture using it.
>

If you think it makes sense we could take the common bits + x86 and put
them through the -tip tree. The other option would be to put the whole
thread in linux-next with Acks.

As far as x86 is concerned it looks like it is mostly just code
movement, so I'm happy giving my:

Acked-by: H. Peter Anvin <[email protected]>

2014-02-25 23:04:40

by Catalin Marinas

Subject: Re: [PATCH v4 0/6] generic early_ioremap support

On 25 Feb 2014, at 19:42, H. Peter Anvin <[email protected]> wrote:
> On 02/25/2014 10:45 AM, Mark Salter wrote:
>> On Tue, 2014-02-25 at 18:30 +0000, Will Deacon wrote:
>>> I'd suggest splitting the core part out from the arch-specific parts. That
>>> way, the core part can be merged independently and architectures can move over
>>> as they see fit. It also signals (at least to me) that, "hey, I should
>>> probably review this" whilst my current stance is "there's a whole load of
>>> stuff under mm/ that needs to be acked first".
>>>
>>> If you put the whole thing into next, you just run the risk of conflicts
>>> with all the arch trees.
>>
>> I've been thinking of breaking out the common bits and x86 bits and just
>> going with that for now. There's no point in just doing the common bits
>> because it won't get tested without at least one architecture using it.
>>
>
> If you think it makes sense we could take the common bits + x86 and put
> them through the -tip tree.

I'm ok with the arm64 patches going through -tip with my ack on all
patches:

Acked-by: Catalin Marinas <[email protected]>

> The other option would be to put the whole
> thread in linux-next with Acks.
>
> As far as x86 is concerned it looks like it is mostly just code
> movement, so I'm happy giving my:
>
> Acked-by: H. Peter Anvin <[email protected]>

Thanks. Either way works for me.

I think the series still need an ack from rmk at least on the arm patch
(4/6).

Catalin

2014-02-25 23:06:31

by H. Peter Anvin

Subject: Re: [PATCH v4 0/6] generic early_ioremap support

On 02/25/2014 03:04 PM, Catalin Marinas wrote:
>
> I think the series still need an ack from rmk at least on the arm patch
> (4/6).
>

OK. Russell?

-hpa

2014-02-26 05:48:22

by Rob Herring

Subject: Re: [PATCH v4 4/6] arm: add early_ioremap support

On Wed, Feb 12, 2014 at 2:56 PM, Mark Salter <[email protected]> wrote:
> This patch uses the generic early_ioremap code to implement
> early_ioremap for ARM. The ARM-specific bits come mostly from
> an earlier patch from Leif Lindholm <[email protected]>
> here:

This doesn't actually work for me. The PTE flags are not correct and
accesses to a device fault. See below.

>
> https://lkml.org/lkml/2013/10/3/279
>
> Signed-off-by: Mark Salter <[email protected]>
> Tested-by: Leif Lindholm <[email protected]>
> Acked-by: Catalin Marinas <[email protected]>
> ---
> arch/arm/Kconfig | 10 +++++
> arch/arm/include/asm/Kbuild | 1 +
> arch/arm/include/asm/fixmap.h | 20 ++++++++++
> arch/arm/include/asm/io.h | 1 +
> arch/arm/kernel/setup.c | 2 +
> arch/arm/mm/Makefile | 4 ++
> arch/arm/mm/early_ioremap.c | 93 +++++++++++++++++++++++++++++++++++++++++++
> arch/arm/mm/mmu.c | 2 +
> 8 files changed, 133 insertions(+)
> create mode 100644 arch/arm/mm/early_ioremap.c
>
> diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
> index e254198..7a35ef6 100644
> --- a/arch/arm/Kconfig
> +++ b/arch/arm/Kconfig
> @@ -1874,6 +1874,16 @@ config UACCESS_WITH_MEMCPY
> However, if the CPU data cache is using a write-allocate mode,
> this option is unlikely to provide any performance gain.
>
> +config EARLY_IOREMAP
> + bool "Provide early_ioremap() support for kernel initialization"
> + select GENERIC_EARLY_IOREMAP
> + help
> + Provide a mechanism for kernel initialisation code to temporarily
> + map, in a highmem-agnostic way, memory pages in before ioremap()
> + and friends are available (before paging_init() has run). It uses
> + the same virtual memory range as kmap so all early mappings must
> + be unmapped before paging_init() is called.
> +
> config SECCOMP
> bool
> prompt "Enable seccomp to safely compute untrusted bytecode"
> diff --git a/arch/arm/include/asm/Kbuild b/arch/arm/include/asm/Kbuild
> index 3278afe..6fcfd7d 100644
> --- a/arch/arm/include/asm/Kbuild
> +++ b/arch/arm/include/asm/Kbuild
> @@ -4,6 +4,7 @@ generic-y += auxvec.h
> generic-y += bitsperlong.h
> generic-y += cputime.h
> generic-y += current.h
> +generic-y += early_ioremap.h
> generic-y += emergency-restart.h
> generic-y += errno.h
> generic-y += exec.h
> diff --git a/arch/arm/include/asm/fixmap.h b/arch/arm/include/asm/fixmap.h
> index 68ea615..ff8fa3e 100644
> --- a/arch/arm/include/asm/fixmap.h
> +++ b/arch/arm/include/asm/fixmap.h
> @@ -21,8 +21,28 @@ enum fixed_addresses {
> FIX_KMAP_BEGIN,
> FIX_KMAP_END = (FIXADDR_TOP - FIXADDR_START) >> PAGE_SHIFT,
> __end_of_fixed_addresses
> +/*
> + * 224 temporary boot-time mappings, used by early_ioremap(),
> + * before ioremap() is functional.
> + *
> + * (P)re-using the FIXADDR region, which is used for highmem
> + * later on, and statically aligned to 1MB.
> + */
> +#define NR_FIX_BTMAPS 32
> +#define FIX_BTMAPS_SLOTS 7
> +#define TOTAL_FIX_BTMAPS (NR_FIX_BTMAPS * FIX_BTMAPS_SLOTS)
> +#define FIX_BTMAP_END FIX_KMAP_BEGIN
> +#define FIX_BTMAP_BEGIN (FIX_BTMAP_END + TOTAL_FIX_BTMAPS - 1)

Why the different logic from arm64? Specifically, it doesn't make
adding a permanent mapping simple.

> };
>
> +#define FIXMAP_PAGE_COMMON (L_PTE_YOUNG | L_PTE_PRESENT | L_PTE_XN)
> +
> +#define FIXMAP_PAGE_NORMAL (FIXMAP_PAGE_COMMON | L_PTE_MT_WRITEBACK)
> +#define FIXMAP_PAGE_IO (FIXMAP_PAGE_COMMON | L_PTE_MT_DEV_NONSHARED)

This should be L_PTE_MT_DEV_SHARED and also needs L_PTE_DIRTY and
L_PTE_SHARED as we want this to match MT_DEVICE.

These should also be wrapped with __pgprot().

> +
> +extern void __early_set_fixmap(enum fixed_addresses idx,
> + phys_addr_t phys, pgprot_t flags);
> +
> #include <asm-generic/fixmap.h>
>
> #endif
> diff --git a/arch/arm/include/asm/io.h b/arch/arm/include/asm/io.h
> index 8aa4cca..637e0cd 100644
> --- a/arch/arm/include/asm/io.h
> +++ b/arch/arm/include/asm/io.h
> @@ -28,6 +28,7 @@
> #include <asm/byteorder.h>
> #include <asm/memory.h>
> #include <asm-generic/pci_iomap.h>
> +#include <asm/early_ioremap.h>
> #include <xen/xen.h>
>
> /*
> diff --git a/arch/arm/kernel/setup.c b/arch/arm/kernel/setup.c
> index b0df976..9c8b751 100644
> --- a/arch/arm/kernel/setup.c
> +++ b/arch/arm/kernel/setup.c
> @@ -30,6 +30,7 @@
> #include <linux/bug.h>
> #include <linux/compiler.h>
> #include <linux/sort.h>
> +#include <linux/io.h>
>
> #include <asm/unified.h>
> #include <asm/cp15.h>
> @@ -880,6 +881,7 @@ void __init setup_arch(char **cmdline_p)
> const struct machine_desc *mdesc;
>
> setup_processor();
> + early_ioremap_init();
> mdesc = setup_machine_fdt(__atags_pointer);
> if (!mdesc)
> mdesc = setup_machine_tags(__atags_pointer, __machine_arch_type);
> diff --git a/arch/arm/mm/Makefile b/arch/arm/mm/Makefile
> index 7f39ce2..501af98 100644
> --- a/arch/arm/mm/Makefile
> +++ b/arch/arm/mm/Makefile
> @@ -12,6 +12,10 @@ ifneq ($(CONFIG_MMU),y)
> obj-y += nommu.o
> endif
>
> +ifeq ($(CONFIG_MMU),y)
> +obj-$(CONFIG_EARLY_IOREMAP) += early_ioremap.o
> +endif
> +
> obj-$(CONFIG_ARM_PTDUMP) += dump.o
> obj-$(CONFIG_MODULES) += proc-syms.o
>
> diff --git a/arch/arm/mm/early_ioremap.c b/arch/arm/mm/early_ioremap.c
> new file mode 100644
> index 0000000..c3e2bf2
> --- /dev/null
> +++ b/arch/arm/mm/early_ioremap.c
> @@ -0,0 +1,93 @@
> +/*
> + * early_ioremap() support for ARM
> + *
> + * Based on existing support in arch/x86/mm/ioremap.c
> + *
> + * Restrictions: currently only functional before paging_init()
> + */
> +
> +#include <linux/init.h>
> +#include <linux/io.h>

io.h doesn't appear to be needed.

> +
> +#include <asm/fixmap.h>
> +#include <asm/pgalloc.h>
> +#include <asm/pgtable.h>
> +#include <asm/tlbflush.h>
> +
> +#include <asm/mach/map.h>
> +
> +static pte_t bm_pte[PTRS_PER_PTE] __aligned(PTE_HWTABLE_SIZE) __initdata;
> +
> +static inline pmd_t * __init early_ioremap_pmd(unsigned long addr)
> +{
> + unsigned int index = pgd_index(addr);
> + pgd_t *pgd = cpu_get_pgd() + index;
> + pud_t *pud = pud_offset(pgd, addr);
> + pmd_t *pmd = pmd_offset(pud, addr);
> +
> + return pmd;
> +}
> +
> +static inline pte_t * __init early_ioremap_pte(unsigned long addr)
> +{
> + return &bm_pte[pte_index(addr)];
> +}
> +
> +void __init early_ioremap_init(void)
> +{
> + pmd_t *pmd;
> +
> + pmd = early_ioremap_pmd(fix_to_virt(FIX_BTMAP_BEGIN));
> +
> + pmd_populate_kernel(NULL, pmd, bm_pte);
> +
> + /*
> + * Make sure we don't span multiple pmds.
> + */
> + BUILD_BUG_ON((__fix_to_virt(FIX_BTMAP_BEGIN) >> PMD_SHIFT)
> + != (__fix_to_virt(FIX_BTMAP_END) >> PMD_SHIFT));
> +
> + if (pmd != early_ioremap_pmd(fix_to_virt(FIX_BTMAP_END))) {
> + WARN_ON(1);
> + pr_warn("pmd %p != %p\n",
> + pmd, early_ioremap_pmd(fix_to_virt(FIX_BTMAP_END)));
> + pr_warn("fix_to_virt(FIX_BTMAP_BEGIN): %08lx\n",
> + fix_to_virt(FIX_BTMAP_BEGIN));
> + pr_warn("fix_to_virt(FIX_BTMAP_END): %08lx\n",
> + fix_to_virt(FIX_BTMAP_END));
> + pr_warn("FIX_BTMAP_END: %d\n", FIX_BTMAP_END);
> + pr_warn("FIX_BTMAP_BEGIN: %d\n", FIX_BTMAP_BEGIN);
> + }
> +
> + early_ioremap_setup();
> +}
> +
> +void __init __early_set_fixmap(enum fixed_addresses idx,
> + phys_addr_t phys, pgprot_t flags)
> +{
> + unsigned long addr = __fix_to_virt(idx);
> + pte_t *pte;
> + u64 desc;
> +
> + if (idx > FIX_KMAP_END) {
> + BUG();
> + return;
> + }
> + pte = early_ioremap_pte(addr);
> +
> + if (pgprot_val(flags))
> + set_pte_at(NULL, 0xfff00000, pte,

Couldn't you use addr here instead of 0xfff00000? It's not really used
other than a check against TASK_SIZE.

> + pfn_pte(phys >> PAGE_SHIFT, flags));

phys_to_pfn(phys)

> + else
> + pte_clear(NULL, addr, pte);
> + flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
> + desc = *pte;
> +}
> +
> +void __init
> +early_ioremap_shutdown(void)
> +{
> + pmd_t *pmd;
> + pmd = early_ioremap_pmd(fix_to_virt(FIX_BTMAP_BEGIN));
> + pmd_clear(pmd);

This is redundant with the clearing done in devicemaps_init. Not a big
deal but this is probably something we don't want with permanent
mappings. I'm still trying to figure out how to do those. Leaving this
page table in place doesn't seem to work, so I think we'll have to
copy mappings to the new page tables.

Rob

2014-02-26 09:37:17

by Leif Lindholm

Subject: Re: [PATCH v4 4/6] arm: add early_ioremap support

Hi Rob,

Thanks for having a look.
Since I'm at least partially responsible for the below, I'll respond
before Mark wakes up.

On Tue, Feb 25, 2014 at 11:48:19PM -0600, Rob Herring wrote:
> On Wed, Feb 12, 2014 at 2:56 PM, Mark Salter <[email protected]> wrote:
> > This patch uses the generic early_ioremap code to implement
> > early_ioremap for ARM. The ARM-specific bits come mostly from
> > an earlier patch from Leif Lindholm <[email protected]>
> > here:
>
> This doesn't actually work for me. The PTE flags are not correct and
> accesses to a device fault. See below.

Do they fault before paging_init()?
If not, see below.

> >
> > https://lkml.org/lkml/2013/10/3/279
> >
> > Signed-off-by: Mark Salter <[email protected]>
> > Tested-by: Leif Lindholm <[email protected]>
> > Acked-by: Catalin Marinas <[email protected]>
> > ---
> > arch/arm/Kconfig | 10 +++++
> > arch/arm/include/asm/Kbuild | 1 +
> > arch/arm/include/asm/fixmap.h | 20 ++++++++++
> > arch/arm/include/asm/io.h | 1 +
> > arch/arm/kernel/setup.c | 2 +
> > arch/arm/mm/Makefile | 4 ++
> > arch/arm/mm/early_ioremap.c | 93 +++++++++++++++++++++++++++++++++++++++++++
> > arch/arm/mm/mmu.c | 2 +
> > 8 files changed, 133 insertions(+)
> > create mode 100644 arch/arm/mm/early_ioremap.c
> >
> > diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
> > index e254198..7a35ef6 100644
> > --- a/arch/arm/Kconfig
> > +++ b/arch/arm/Kconfig
> > @@ -1874,6 +1874,16 @@ config UACCESS_WITH_MEMCPY
> > However, if the CPU data cache is using a write-allocate mode,
> > this option is unlikely to provide any performance gain.
> >
> > +config EARLY_IOREMAP
> > + bool "Provide early_ioremap() support for kernel initialization"
> > + select GENERIC_EARLY_IOREMAP
> > + help
> > + Provide a mechanism for kernel initialisation code to temporarily
> > + map, in a highmem-agnostic way, memory pages in before ioremap()
> > + and friends are available (before paging_init() has run). It uses
> > + the same virtual memory range as kmap so all early mappings must
> > + be unmapped before paging_init() is called.
> > +

^^

> > config SECCOMP
> > bool
> > prompt "Enable seccomp to safely compute untrusted bytecode"
> > diff --git a/arch/arm/include/asm/Kbuild b/arch/arm/include/asm/Kbuild
> > index 3278afe..6fcfd7d 100644
> > --- a/arch/arm/include/asm/Kbuild
> > +++ b/arch/arm/include/asm/Kbuild
> > @@ -4,6 +4,7 @@ generic-y += auxvec.h
> > generic-y += bitsperlong.h
> > generic-y += cputime.h
> > generic-y += current.h
> > +generic-y += early_ioremap.h
> > generic-y += emergency-restart.h
> > generic-y += errno.h
> > generic-y += exec.h
> > diff --git a/arch/arm/include/asm/fixmap.h b/arch/arm/include/asm/fixmap.h
> > index 68ea615..ff8fa3e 100644
> > --- a/arch/arm/include/asm/fixmap.h
> > +++ b/arch/arm/include/asm/fixmap.h
> > @@ -21,8 +21,28 @@ enum fixed_addresses {
> > FIX_KMAP_BEGIN,
> > FIX_KMAP_END = (FIXADDR_TOP - FIXADDR_START) >> PAGE_SHIFT,
> > __end_of_fixed_addresses
> > +/*
> > + * 224 temporary boot-time mappings, used by early_ioremap(),
> > + * before ioremap() is functional.
> > + *
> > + * (P)re-using the FIXADDR region, which is used for highmem
> > + * later on, and statically aligned to 1MB.
> > + */
> > +#define NR_FIX_BTMAPS 32
> > +#define FIX_BTMAPS_SLOTS 7
> > +#define TOTAL_FIX_BTMAPS (NR_FIX_BTMAPS * FIX_BTMAPS_SLOTS)
> > +#define FIX_BTMAP_END FIX_KMAP_BEGIN
> > +#define FIX_BTMAP_BEGIN (FIX_BTMAP_END + TOTAL_FIX_BTMAPS - 1)
>
> Why the different logic from arm64? Specifically, it doesn't make
> adding a permanent mapping simple.

Making a permanent mapping using this would require either:
- Not using the fixmap region.
- Rewriting arm kmap.

> > };
> >
> > +#define FIXMAP_PAGE_COMMON (L_PTE_YOUNG | L_PTE_PRESENT | L_PTE_XN)
> > +
> > +#define FIXMAP_PAGE_NORMAL (FIXMAP_PAGE_COMMON | L_PTE_MT_WRITEBACK)
> > +#define FIXMAP_PAGE_IO (FIXMAP_PAGE_COMMON | L_PTE_MT_DEV_NONSHARED)
>
> This should be L_PTE_MT_DEV_SHARED and also needs L_PTE_DIRTY and
> L_PTE_SHARED as we want this to match MT_DEVICE.
>
> These should also be wrapped with __pgprot().

Ok.

> > +
> > +extern void __early_set_fixmap(enum fixed_addresses idx,
> > + phys_addr_t phys, pgprot_t flags);
> > +
> > #include <asm-generic/fixmap.h>
> >
> > #endif
> > diff --git a/arch/arm/include/asm/io.h b/arch/arm/include/asm/io.h
> > index 8aa4cca..637e0cd 100644
> > --- a/arch/arm/include/asm/io.h
> > +++ b/arch/arm/include/asm/io.h
> > @@ -28,6 +28,7 @@
> > #include <asm/byteorder.h>
> > #include <asm/memory.h>
> > #include <asm-generic/pci_iomap.h>
> > +#include <asm/early_ioremap.h>
> > #include <xen/xen.h>
> >
> > /*
> > diff --git a/arch/arm/kernel/setup.c b/arch/arm/kernel/setup.c
> > index b0df976..9c8b751 100644
> > --- a/arch/arm/kernel/setup.c
> > +++ b/arch/arm/kernel/setup.c
> > @@ -30,6 +30,7 @@
> > #include <linux/bug.h>
> > #include <linux/compiler.h>
> > #include <linux/sort.h>
> > +#include <linux/io.h>
> >
> > #include <asm/unified.h>
> > #include <asm/cp15.h>
> > @@ -880,6 +881,7 @@ void __init setup_arch(char **cmdline_p)
> > const struct machine_desc *mdesc;
> >
> > setup_processor();
> > + early_ioremap_init();
> > mdesc = setup_machine_fdt(__atags_pointer);
> > if (!mdesc)
> > mdesc = setup_machine_tags(__atags_pointer, __machine_arch_type);
> > diff --git a/arch/arm/mm/Makefile b/arch/arm/mm/Makefile
> > index 7f39ce2..501af98 100644
> > --- a/arch/arm/mm/Makefile
> > +++ b/arch/arm/mm/Makefile
> > @@ -12,6 +12,10 @@ ifneq ($(CONFIG_MMU),y)
> > obj-y += nommu.o
> > endif
> >
> > +ifeq ($(CONFIG_MMU),y)
> > +obj-$(CONFIG_EARLY_IOREMAP) += early_ioremap.o
> > +endif
> > +
> > obj-$(CONFIG_ARM_PTDUMP) += dump.o
> > obj-$(CONFIG_MODULES) += proc-syms.o
> >
> > diff --git a/arch/arm/mm/early_ioremap.c b/arch/arm/mm/early_ioremap.c
> > new file mode 100644
> > index 0000000..c3e2bf2
> > --- /dev/null
> > +++ b/arch/arm/mm/early_ioremap.c
> > @@ -0,0 +1,93 @@
> > +/*
> > + * early_ioremap() support for ARM
> > + *
> > + * Based on existing support in arch/x86/mm/ioremap.c
> > + *
> > + * Restrictions: currently only functional before paging_init()
> > + */
> > +
> > +#include <linux/init.h>
> > +#include <linux/io.h>
>
> io.h doesn't appear to be needed.

No, not in this version.

> > +
> > +#include <asm/fixmap.h>
> > +#include <asm/pgalloc.h>
> > +#include <asm/pgtable.h>
> > +#include <asm/tlbflush.h>
> > +
> > +#include <asm/mach/map.h>
> > +
> > +static pte_t bm_pte[PTRS_PER_PTE] __aligned(PTE_HWTABLE_SIZE) __initdata;
> > +
> > +static inline pmd_t * __init early_ioremap_pmd(unsigned long addr)
> > +{
> > + unsigned int index = pgd_index(addr);
> > + pgd_t *pgd = cpu_get_pgd() + index;
> > + pud_t *pud = pud_offset(pgd, addr);
> > + pmd_t *pmd = pmd_offset(pud, addr);
> > +
> > + return pmd;
> > +}
> > +
> > +static inline pte_t * __init early_ioremap_pte(unsigned long addr)
> > +{
> > + return &bm_pte[pte_index(addr)];
> > +}
> > +
> > +void __init early_ioremap_init(void)
> > +{
> > + pmd_t *pmd;
> > +
> > + pmd = early_ioremap_pmd(fix_to_virt(FIX_BTMAP_BEGIN));
> > +
> > + pmd_populate_kernel(NULL, pmd, bm_pte);
> > +
> > + /*
> > + * Make sure we don't span multiple pmds.
> > + */
> > + BUILD_BUG_ON((__fix_to_virt(FIX_BTMAP_BEGIN) >> PMD_SHIFT)
> > + != (__fix_to_virt(FIX_BTMAP_END) >> PMD_SHIFT));
> > +
> > + if (pmd != early_ioremap_pmd(fix_to_virt(FIX_BTMAP_END))) {
> > + WARN_ON(1);
> > + pr_warn("pmd %p != %p\n",
> > + pmd, early_ioremap_pmd(fix_to_virt(FIX_BTMAP_END)));
> > + pr_warn("fix_to_virt(FIX_BTMAP_BEGIN): %08lx\n",
> > + fix_to_virt(FIX_BTMAP_BEGIN));
> > + pr_warn("fix_to_virt(FIX_BTMAP_END): %08lx\n",
> > + fix_to_virt(FIX_BTMAP_END));
> > + pr_warn("FIX_BTMAP_END: %d\n", FIX_BTMAP_END);
> > + pr_warn("FIX_BTMAP_BEGIN: %d\n", FIX_BTMAP_BEGIN);
> > + }
> > +
> > + early_ioremap_setup();
> > +}
> > +
> > +void __init __early_set_fixmap(enum fixed_addresses idx,
> > + phys_addr_t phys, pgprot_t flags)
> > +{
> > + unsigned long addr = __fix_to_virt(idx);
> > + pte_t *pte;
> > + u64 desc;
> > +
> > + if (idx > FIX_KMAP_END) {
> > + BUG();
> > + return;
> > + }
> > + pte = early_ioremap_pte(addr);
> > +
> > + if (pgprot_val(flags))
> > + set_pte_at(NULL, 0xfff00000, pte,
>
> Couldn't you use addr here instead of 0xfff00000? It's not really used
> other than a check against TASK_SIZE.

Sure.

> > + pfn_pte(phys >> PAGE_SHIFT, flags));
>
> phys_to_pfn(phys)

Stolen like that from x86 :)
Sure.

> > + else
> > + pte_clear(NULL, addr, pte);
> > + flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
> > + desc = *pte;
> > +}
> > +
> > +void __init
> > +early_ioremap_shutdown(void)
> > +{
> > + pmd_t *pmd;
> > + pmd = early_ioremap_pmd(fix_to_virt(FIX_BTMAP_BEGIN));
> > + pmd_clear(pmd);
>
> This is redundant with the clearing done in devicemaps_init. Not a big
> deal but this is probably something we don't want with permanent
> mappings. I'm still trying to figure out how to do those. Leaving this
> page table in place doesn't seem to work, so I think we'll have to
> copy mappings to the new page tables.

As described in the Kconfig option, and more explicitly in the
documentation included with my last submission (last summer), these
mappings don't stick around.

/
Leif

2014-02-26 15:00:01

by Mark Salter

[permalink] [raw]
Subject: Re: [PATCH v4 4/6] arm: add early_ioremap support

On Tue, 2014-02-25 at 23:48 -0600, Rob Herring wrote:
> > +#define NR_FIX_BTMAPS 32
> > +#define FIX_BTMAPS_SLOTS 7
> > +#define TOTAL_FIX_BTMAPS (NR_FIX_BTMAPS * FIX_BTMAPS_SLOTS)
> > +#define FIX_BTMAP_END FIX_KMAP_BEGIN
> > +#define FIX_BTMAP_BEGIN (FIX_BTMAP_END + TOTAL_FIX_BTMAPS - 1)
>
> Why the different logic from arm64? Specifically, it doesn't make
> adding a permanent mapping simple.

I looked at adding support for permanent mappings, but working that out
was going to take more time than I had; on ARM we also have to deal with
kmap's needs. I think getting the patch in now to support early_ioremap
is the way to go. Support for permanent mappings can be added later,
along with the early console support that needs it.

2014-02-26 15:56:26

by Rob Herring

[permalink] [raw]
Subject: Re: [PATCH v4 4/6] arm: add early_ioremap support

On Wed, Feb 26, 2014 at 8:59 AM, Mark Salter <[email protected]> wrote:
> On Tue, 2014-02-25 at 23:48 -0600, Rob Herring wrote:
>> > +#define NR_FIX_BTMAPS 32
>> > +#define FIX_BTMAPS_SLOTS 7
>> > +#define TOTAL_FIX_BTMAPS (NR_FIX_BTMAPS * FIX_BTMAPS_SLOTS)
>> > +#define FIX_BTMAP_END FIX_KMAP_BEGIN
>> > +#define FIX_BTMAP_BEGIN (FIX_BTMAP_END + TOTAL_FIX_BTMAPS - 1)
>>
>> Why the different logic from arm64? Specifically, it doesn't make
>> adding a permanent mapping simple.
>
> I looked at adding support for permanent mappings but it was going to
> take more time than I had. Also on ARM, we have to deal with kmap's
> needs as well. Working that out was going to take more time than I
> had. I think getting the patch in now to support early_ioremap is
> the way to go. Adding support for permanent mappings can be done
> later with the early console support for which it is needed.

I'm not saying that you should add permanent mappings, but just align
the definitions across arches more. So make arm look something like
this:

enum fixed_addresses {
	__end_of_permanent_fixed_addresses,

	/*
	 * Temporary boot-time mappings, used by early_ioremap(),
	 * before ioremap() is functional.
	 */
#define NR_FIX_BTMAPS		32
#define FIX_BTMAPS_SLOTS	7
#define TOTAL_FIX_BTMAPS	(NR_FIX_BTMAPS * FIX_BTMAPS_SLOTS)

	FIX_BTMAP_END = __end_of_permanent_fixed_addresses,
	FIX_BTMAP_BEGIN = FIX_BTMAP_END + TOTAL_FIX_BTMAPS - 1,
	__end_of_fixed_addresses,
	FIX_KMAP_BEGIN = FIX_BTMAP_END,
	FIX_KMAP_END = FIX_BTMAP_BEGIN,
};

These could then go into the generic header:

#define FIXADDR_SIZE (__end_of_permanent_fixed_addresses << PAGE_SHIFT)
#define FIXADDR_START (FIXADDR_TOP - FIXADDR_SIZE)

Rob