2020-01-08 15:29:25

by Sergey Dyasli

Subject: [PATCH v1 0/4] basic KASAN support for Xen PV domains

This series makes it possible to boot and run Xen PV kernels (Dom0 and DomU)
with CONFIG_KASAN=y. It has been used internally for some time now with good
results, finding memory corruption issues in the Dom0 kernel.

Only outline instrumentation is supported at the moment.

Sergey Dyasli (2):
kasan: introduce set_pmd_early_shadow()
x86/xen: add basic KASAN support for PV kernel

Ross Lagerwall (2):
xen: teach KASAN about grant tables
xen/netback: Fix grant copy across page boundary with KASAN

arch/x86/mm/kasan_init_64.c | 12 +++++++
arch/x86/xen/Makefile | 7 ++++
arch/x86/xen/enlighten_pv.c | 3 ++
arch/x86/xen/mmu_pv.c | 39 ++++++++++++++++++++
drivers/net/xen-netback/common.h | 2 +-
drivers/net/xen-netback/netback.c | 59 +++++++++++++++++++++++++------
drivers/xen/Makefile | 2 ++
drivers/xen/grant-table.c | 5 ++-
include/xen/xen-ops.h | 4 +++
kernel/Makefile | 2 ++
lib/Kconfig.kasan | 3 +-
mm/kasan/init.c | 25 ++++++++-----
12 files changed, 141 insertions(+), 22 deletions(-)

--
2.17.1


2020-01-08 15:29:32

by Sergey Dyasli

Subject: [PATCH v1 4/4] xen/netback: Fix grant copy across page boundary with KASAN

From: Ross Lagerwall <[email protected]>

When KASAN (or SLUB_DEBUG) is turned on, the normal expectation that
allocations are aligned to the next power of 2 of the size does not
hold. Therefore, handle grant copies that cross page boundaries.
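The splitting arithmetic the patch adds to xenvif_tx_build_gops() can be sketched in isolation (a user-space model, not the kernel code; XEN_PAGE_SIZE and offset_in_page() are reimplemented here under the usual 4 KiB page assumption):

```c
#include <assert.h>
#include <stdint.h>

#define XEN_PAGE_SIZE 4096u
#define offset_in_page(addr) ((unsigned int)((uintptr_t)(addr) & (XEN_PAGE_SIZE - 1)))

/* Split one copy of data_len bytes starting at 'data' into at most two
 * page-local lengths, mirroring the check in the hunk below.  Returns
 * the number of gnttab_copy operations needed. */
static unsigned int split_copy(uintptr_t data, unsigned int data_len,
			       unsigned int *first_len, unsigned int *extra_len)
{
	if (offset_in_page(data) + data_len > XEN_PAGE_SIZE) {
		*extra_len = offset_in_page(data) + data_len - XEN_PAGE_SIZE;
		*first_len = data_len - *extra_len;
		return 2;
	}
	*first_len = data_len;
	*extra_len = 0;
	return 1;
}
```

This is also why tx_copy_ops doubles in size: each pending request may now need two copy operations instead of one.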

Signed-off-by: Ross Lagerwall <[email protected]>
Signed-off-by: Sergey Dyasli <[email protected]>
---
RFC --> v1:
- Added BUILD_BUG_ON to the netback patch
- xenvif_idx_release() now located outside the loop

CC: Wei Liu <[email protected]>
CC: Paul Durrant <[email protected]>
---
drivers/net/xen-netback/common.h | 2 +-
drivers/net/xen-netback/netback.c | 59 +++++++++++++++++++++++++------
2 files changed, 49 insertions(+), 12 deletions(-)

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index 05847eb91a1b..e57684415edd 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -155,7 +155,7 @@ struct xenvif_queue { /* Per-queue data for xenvif */
struct pending_tx_info pending_tx_info[MAX_PENDING_REQS];
grant_handle_t grant_tx_handle[MAX_PENDING_REQS];

- struct gnttab_copy tx_copy_ops[MAX_PENDING_REQS];
+ struct gnttab_copy tx_copy_ops[MAX_PENDING_REQS * 2];
struct gnttab_map_grant_ref tx_map_ops[MAX_PENDING_REQS];
struct gnttab_unmap_grant_ref tx_unmap_ops[MAX_PENDING_REQS];
/* passed to gnttab_[un]map_refs with pages under (un)mapping */
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 0020b2e8c279..33b8f8d043e6 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -320,6 +320,7 @@ static int xenvif_count_requests(struct xenvif_queue *queue,

struct xenvif_tx_cb {
u16 pending_idx;
+ u8 copies;
};

#define XENVIF_TX_CB(skb) ((struct xenvif_tx_cb *)(skb)->cb)
@@ -439,6 +440,7 @@ static int xenvif_tx_check_gop(struct xenvif_queue *queue,
{
struct gnttab_map_grant_ref *gop_map = *gopp_map;
u16 pending_idx = XENVIF_TX_CB(skb)->pending_idx;
+ u8 copies = XENVIF_TX_CB(skb)->copies;
/* This always points to the shinfo of the skb being checked, which
* could be either the first or the one on the frag_list
*/
@@ -450,23 +452,26 @@ static int xenvif_tx_check_gop(struct xenvif_queue *queue,
int nr_frags = shinfo->nr_frags;
const bool sharedslot = nr_frags &&
frag_get_pending_idx(&shinfo->frags[0]) == pending_idx;
- int i, err;
+ int i, err = 0;

- /* Check status of header. */
- err = (*gopp_copy)->status;
- if (unlikely(err)) {
- if (net_ratelimit())
- netdev_dbg(queue->vif->dev,
+ while (copies) {
+ /* Check status of header. */
+ int newerr = (*gopp_copy)->status;
+ if (unlikely(newerr)) {
+ if (net_ratelimit())
+ netdev_dbg(queue->vif->dev,
"Grant copy of header failed! status: %d pending_idx: %u ref: %u\n",
(*gopp_copy)->status,
pending_idx,
(*gopp_copy)->source.u.ref);
- /* The first frag might still have this slot mapped */
- if (!sharedslot)
- xenvif_idx_release(queue, pending_idx,
- XEN_NETIF_RSP_ERROR);
+ err = newerr;
+ }
+ (*gopp_copy)++;
+ copies--;
}
- (*gopp_copy)++;
+ /* The first frag might still have this slot mapped */
+ if (unlikely(err) && !sharedslot)
+ xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_ERROR);

check_frags:
for (i = 0; i < nr_frags; i++, gop_map++) {
@@ -910,6 +915,7 @@ static void xenvif_tx_build_gops(struct xenvif_queue *queue,
xenvif_tx_err(queue, &txreq, extra_count, idx);
break;
}
+ XENVIF_TX_CB(skb)->copies = 0;

skb_shinfo(skb)->nr_frags = ret;
if (data_len < txreq.size)
@@ -933,6 +939,7 @@ static void xenvif_tx_build_gops(struct xenvif_queue *queue,
"Can't allocate the frag_list skb.\n");
break;
}
+ XENVIF_TX_CB(nskb)->copies = 0;
}

if (extras[XEN_NETIF_EXTRA_TYPE_GSO - 1].type) {
@@ -990,6 +997,31 @@ static void xenvif_tx_build_gops(struct xenvif_queue *queue,

queue->tx_copy_ops[*copy_ops].len = data_len;
queue->tx_copy_ops[*copy_ops].flags = GNTCOPY_source_gref;
+ XENVIF_TX_CB(skb)->copies++;
+
+ if (offset_in_page(skb->data) + data_len > XEN_PAGE_SIZE) {
+ unsigned int extra_len = offset_in_page(skb->data) +
+ data_len - XEN_PAGE_SIZE;
+
+ queue->tx_copy_ops[*copy_ops].len -= extra_len;
+ (*copy_ops)++;
+
+ queue->tx_copy_ops[*copy_ops].source.u.ref = txreq.gref;
+ queue->tx_copy_ops[*copy_ops].source.domid =
+ queue->vif->domid;
+ queue->tx_copy_ops[*copy_ops].source.offset =
+ txreq.offset + data_len - extra_len;
+
+ queue->tx_copy_ops[*copy_ops].dest.u.gmfn =
+ virt_to_gfn(skb->data + data_len - extra_len);
+ queue->tx_copy_ops[*copy_ops].dest.domid = DOMID_SELF;
+ queue->tx_copy_ops[*copy_ops].dest.offset = 0;
+
+ queue->tx_copy_ops[*copy_ops].len = extra_len;
+ queue->tx_copy_ops[*copy_ops].flags = GNTCOPY_source_gref;
+
+ XENVIF_TX_CB(skb)->copies++;
+ }

(*copy_ops)++;

@@ -1674,5 +1706,10 @@ static void __exit netback_fini(void)
}
module_exit(netback_fini);

+static void __init __maybe_unused build_assertions(void)
+{
+ BUILD_BUG_ON(sizeof(struct xenvif_tx_cb) > 48);
+}
+
MODULE_LICENSE("Dual BSD/GPL");
MODULE_ALIAS("xen-backend:vif");
--
2.17.1

2020-01-08 15:30:52

by Sergey Dyasli

Subject: [PATCH v1 3/4] xen: teach KASAN about grant tables

From: Ross Lagerwall <[email protected]>

Otherwise KASAN produces lots of false positives when a guest starts using
PV I/O devices.

Signed-off-by: Ross Lagerwall <[email protected]>
Signed-off-by: Sergey Dyasli <[email protected]>
---
RFC --> v1:
- Slightly clarified the commit message
---
drivers/xen/grant-table.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index 7b36b51cdb9f..ce95f7232de6 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -1048,6 +1048,7 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
foreign = xen_page_foreign(pages[i]);
foreign->domid = map_ops[i].dom;
foreign->gref = map_ops[i].ref;
+ kasan_alloc_pages(pages[i], 0);
break;
}

@@ -1084,8 +1085,10 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
if (ret)
return ret;

- for (i = 0; i < count; i++)
+ for (i = 0; i < count; i++) {
ClearPageForeign(pages[i]);
+ kasan_free_pages(pages[i], 0);
+ }

return clear_foreign_p2m_mapping(unmap_ops, kunmap_ops, pages, count);
}
--
2.17.1

2020-01-08 18:03:44

by Sergey Dyasli

Subject: [PATCH v1 1/4] kasan: introduce set_pmd_early_shadow()

It is incorrect to call pmd_populate_kernel() multiple times for the
same page table. Xen notices it during kasan_populate_early_shadow():

(XEN) mm.c:3222:d155v0 mfn 3704b already pinned

This happens for kasan_early_shadow_pte when USE_SPLIT_PTE_PTLOCKS is
enabled. Fix this by introducing set_pmd_early_shadow() which calls
pmd_populate_kernel() only once and uses set_pmd() afterwards.
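The populate-once pattern can be modelled in user space (a hedged sketch: the counters stand in for pmd_populate_kernel(), which must pin the page table exactly once, and set_pmd(), which is a plain write that is safe to repeat):

```c
#include <assert.h>
#include <stdbool.h>

static int populate_calls;	/* models pmd_populate_kernel() */
static int set_calls;		/* models set_pmd() */

static void set_pmd_early_shadow_model(void)
{
	static bool pmd_populated = false;

	if (pmd_populated) {
		set_calls++;		/* fast path, repeatable */
	} else {
		populate_calls++;	/* must happen only once */
		pmd_populated = true;
	}
}
```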

Signed-off-by: Sergey Dyasli <[email protected]>
---
RFC --> v1:
- New patch
---
mm/kasan/init.c | 25 +++++++++++++++++--------
1 file changed, 17 insertions(+), 8 deletions(-)

diff --git a/mm/kasan/init.c b/mm/kasan/init.c
index ce45c491ebcd..a4077320777f 100644
--- a/mm/kasan/init.c
+++ b/mm/kasan/init.c
@@ -81,6 +81,19 @@ static inline bool kasan_early_shadow_page_entry(pte_t pte)
return pte_page(pte) == virt_to_page(lm_alias(kasan_early_shadow_page));
}

+static inline void set_pmd_early_shadow(pmd_t *pmd)
+{
+ static bool pmd_populated = false;
+ pte_t *early_shadow = lm_alias(kasan_early_shadow_pte);
+
+ if (likely(pmd_populated)) {
+ set_pmd(pmd, __pmd(__pa(early_shadow) | _PAGE_TABLE));
+ } else {
+ pmd_populate_kernel(&init_mm, pmd, early_shadow);
+ pmd_populated = true;
+ }
+}
+
static __init void *early_alloc(size_t size, int node)
{
void *ptr = memblock_alloc_try_nid(size, size, __pa(MAX_DMA_ADDRESS),
@@ -120,8 +133,7 @@ static int __ref zero_pmd_populate(pud_t *pud, unsigned long addr,
next = pmd_addr_end(addr, end);

if (IS_ALIGNED(addr, PMD_SIZE) && end - addr >= PMD_SIZE) {
- pmd_populate_kernel(&init_mm, pmd,
- lm_alias(kasan_early_shadow_pte));
+ set_pmd_early_shadow(pmd);
continue;
}

@@ -157,8 +169,7 @@ static int __ref zero_pud_populate(p4d_t *p4d, unsigned long addr,
pud_populate(&init_mm, pud,
lm_alias(kasan_early_shadow_pmd));
pmd = pmd_offset(pud, addr);
- pmd_populate_kernel(&init_mm, pmd,
- lm_alias(kasan_early_shadow_pte));
+ set_pmd_early_shadow(pmd);
continue;
}

@@ -198,8 +209,7 @@ static int __ref zero_p4d_populate(pgd_t *pgd, unsigned long addr,
pud_populate(&init_mm, pud,
lm_alias(kasan_early_shadow_pmd));
pmd = pmd_offset(pud, addr);
- pmd_populate_kernel(&init_mm, pmd,
- lm_alias(kasan_early_shadow_pte));
+ set_pmd_early_shadow(pmd);
continue;
}

@@ -271,8 +281,7 @@ int __ref kasan_populate_early_shadow(const void *shadow_start,
pud_populate(&init_mm, pud,
lm_alias(kasan_early_shadow_pmd));
pmd = pmd_offset(pud, addr);
- pmd_populate_kernel(&init_mm, pmd,
- lm_alias(kasan_early_shadow_pte));
+ set_pmd_early_shadow(pmd);
continue;
}

--
2.17.1

2020-01-08 18:03:44

by Sergey Dyasli

Subject: [PATCH v1 2/4] x86/xen: add basic KASAN support for PV kernel

This enables the use of outline instrumentation for Xen PV kernels.

The KASAN_INLINE and KASAN_VMALLOC options currently lead to boot crashes
and are hence disabled.
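For reference, the 16 TB shadow region this patch maps follows from the usual x86-64 KASAN address translation, shadow = (addr >> 3) + KASAN_SHADOW_OFFSET. A small sketch (the offset value is the x86-64 default and an assumption here, not something this patch defines):

```c
#include <assert.h>
#include <stdint.h>

/* Default x86-64 CONFIG_KASAN_SHADOW_OFFSET value. */
#define KASAN_SHADOW_OFFSET 0xdffffc0000000000ULL

/* One shadow byte covers eight bytes of kernel address space. */
static uint64_t kasan_mem_to_shadow(uint64_t addr)
{
	return (addr >> 3) + KASAN_SHADOW_OFFSET;
}
```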

Signed-off-by: Sergey Dyasli <[email protected]>
---
RFC --> v1:
- New functions with declarations in xen/xen-ops.h
- Fixed the issue with free_kernel_image_pages() with the help of
xen_pv_kasan_unpin_pgd()
---
arch/x86/mm/kasan_init_64.c | 12 ++++++++++++
arch/x86/xen/Makefile | 7 +++++++
arch/x86/xen/enlighten_pv.c | 3 +++
arch/x86/xen/mmu_pv.c | 39 +++++++++++++++++++++++++++++++++++++
drivers/xen/Makefile | 2 ++
include/xen/xen-ops.h | 4 ++++
kernel/Makefile | 2 ++
lib/Kconfig.kasan | 3 ++-
8 files changed, 71 insertions(+), 1 deletion(-)

diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
index cf5bc37c90ac..902a6a152d33 100644
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -13,6 +13,9 @@
#include <linux/sched/task.h>
#include <linux/vmalloc.h>

+#include <xen/xen.h>
+#include <xen/xen-ops.h>
+
#include <asm/e820/types.h>
#include <asm/pgalloc.h>
#include <asm/tlbflush.h>
@@ -332,6 +335,11 @@ void __init kasan_early_init(void)
for (i = 0; pgtable_l5_enabled() && i < PTRS_PER_P4D; i++)
kasan_early_shadow_p4d[i] = __p4d(p4d_val);

+ if (xen_pv_domain()) {
+ pgd_t *pv_top_pgt = xen_pv_kasan_early_init();
+ kasan_map_early_shadow(pv_top_pgt);
+ }
+
kasan_map_early_shadow(early_top_pgt);
kasan_map_early_shadow(init_top_pgt);
}
@@ -369,6 +377,8 @@ void __init kasan_init(void)
__pgd(__pa(tmp_p4d_table) | _KERNPG_TABLE));
}

+ xen_pv_kasan_pin_pgd(early_top_pgt);
+
load_cr3(early_top_pgt);
__flush_tlb_all();

@@ -433,6 +443,8 @@ void __init kasan_init(void)
load_cr3(init_top_pgt);
__flush_tlb_all();

+ xen_pv_kasan_unpin_pgd(early_top_pgt);
+
/*
* kasan_early_shadow_page has been used as early shadow memory, thus
* it may contain some garbage. Now we can clear and write protect it,
diff --git a/arch/x86/xen/Makefile b/arch/x86/xen/Makefile
index 084de77a109e..102fad0b0bca 100644
--- a/arch/x86/xen/Makefile
+++ b/arch/x86/xen/Makefile
@@ -1,3 +1,10 @@
+KASAN_SANITIZE_enlighten_pv.o := n
+KASAN_SANITIZE_enlighten.o := n
+KASAN_SANITIZE_irq.o := n
+KASAN_SANITIZE_mmu_pv.o := n
+KASAN_SANITIZE_p2m.o := n
+KASAN_SANITIZE_multicalls.o := n
+
# SPDX-License-Identifier: GPL-2.0
OBJECT_FILES_NON_STANDARD_xen-asm_$(BITS).o := y

diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
index ae4a41ca19f6..27de55699f24 100644
--- a/arch/x86/xen/enlighten_pv.c
+++ b/arch/x86/xen/enlighten_pv.c
@@ -72,6 +72,7 @@
#include <asm/mwait.h>
#include <asm/pci_x86.h>
#include <asm/cpu.h>
+#include <asm/kasan.h>

#ifdef CONFIG_ACPI
#include <linux/acpi.h>
@@ -1231,6 +1232,8 @@ asmlinkage __visible void __init xen_start_kernel(void)
/* Get mfn list */
xen_build_dynamic_phys_to_machine();

+ kasan_early_init();
+
/*
* Set up kernel GDT and segment registers, mainly so that
* -fstack-protector code can be executed.
diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
index c8dbee62ec2a..cf6ff214d9ea 100644
--- a/arch/x86/xen/mmu_pv.c
+++ b/arch/x86/xen/mmu_pv.c
@@ -1771,6 +1771,36 @@ static void __init set_page_prot(void *addr, pgprot_t prot)
{
return set_page_prot_flags(addr, prot, UVMF_NONE);
}
+
+pgd_t * __init xen_pv_kasan_early_init(void)
+{
+ /* PV page tables must be read-only */
+ set_page_prot(kasan_early_shadow_pud, PAGE_KERNEL_RO);
+ set_page_prot(kasan_early_shadow_pmd, PAGE_KERNEL_RO);
+ set_page_prot(kasan_early_shadow_pte, PAGE_KERNEL_RO);
+
+ /* Return a pointer to the initial PV page tables */
+ return (pgd_t *)xen_start_info->pt_base;
+}
+
+void __init xen_pv_kasan_pin_pgd(pgd_t *pgd)
+{
+ if (!xen_pv_domain())
+ return;
+
+ set_page_prot(pgd, PAGE_KERNEL_RO);
+ pin_pagetable_pfn(MMUEXT_PIN_L4_TABLE, PFN_DOWN(__pa_symbol(pgd)));
+}
+
+void __init xen_pv_kasan_unpin_pgd(pgd_t *pgd)
+{
+ if (!xen_pv_domain())
+ return;
+
+ pin_pagetable_pfn(MMUEXT_UNPIN_TABLE, PFN_DOWN(__pa_symbol(pgd)));
+ set_page_prot(pgd, PAGE_KERNEL);
+}
+
#ifdef CONFIG_X86_32
static void __init xen_map_identity_early(pmd_t *pmd, unsigned long max_pfn)
{
@@ -1943,6 +1973,15 @@ void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
if (i && i < pgd_index(__START_KERNEL_map))
init_top_pgt[i] = ((pgd_t *)xen_start_info->pt_base)[i];

+#ifdef CONFIG_KASAN
+ /*
+ * Copy KASAN mappings
+ * ffffec0000000000 - fffffbffffffffff (=44 bits) kasan shadow memory (16TB)
+ */
+ for (i = 0xec0 >> 3; i < 0xfc0 >> 3; i++)
+ init_top_pgt[i] = ((pgd_t *)xen_start_info->pt_base)[i];
+#endif
+
/* Make pagetable pieces RO */
set_page_prot(init_top_pgt, PAGE_KERNEL_RO);
set_page_prot(level3_ident_pgt, PAGE_KERNEL_RO);
diff --git a/drivers/xen/Makefile b/drivers/xen/Makefile
index 0c4efa6fe450..1e9e1e41c0a8 100644
--- a/drivers/xen/Makefile
+++ b/drivers/xen/Makefile
@@ -1,4 +1,6 @@
# SPDX-License-Identifier: GPL-2.0
+KASAN_SANITIZE_features.o := n
+
obj-$(CONFIG_HOTPLUG_CPU) += cpu_hotplug.o
obj-y += grant-table.o features.o balloon.o manage.o preempt.o time.o
obj-y += mem-reservation.o
diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
index d89969aa9942..91d66520f0a3 100644
--- a/include/xen/xen-ops.h
+++ b/include/xen/xen-ops.h
@@ -241,4 +241,8 @@ static inline void xen_preemptible_hcall_end(void)

#endif /* CONFIG_PREEMPT */

+pgd_t *xen_pv_kasan_early_init(void);
+void xen_pv_kasan_pin_pgd(pgd_t *pgd);
+void xen_pv_kasan_unpin_pgd(pgd_t *pgd);
+
#endif /* INCLUDE_XEN_OPS_H */
diff --git a/kernel/Makefile b/kernel/Makefile
index f2cc0d118a0b..1da6fd93c00c 100644
--- a/kernel/Makefile
+++ b/kernel/Makefile
@@ -12,6 +12,8 @@ obj-y = fork.o exec_domain.o panic.o \
notifier.o ksysfs.o cred.o reboot.o \
async.o range.o smpboot.o ucount.o

+KASAN_SANITIZE_cpu.o := n
+
obj-$(CONFIG_MODULES) += kmod.o
obj-$(CONFIG_MULTIUSER) += groups.o

diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index 81f5464ea9e1..429a638625ea 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -98,6 +98,7 @@ config KASAN_OUTLINE

config KASAN_INLINE
bool "Inline instrumentation"
+ depends on !XEN_PV
help
Compiler directly inserts code checking shadow memory before
memory accesses. This is faster than outline (in some workloads
@@ -147,7 +148,7 @@ config KASAN_SW_TAGS_IDENTIFY

config KASAN_VMALLOC
bool "Back mappings in vmalloc space with real shadow memory"
- depends on KASAN && HAVE_ARCH_KASAN_VMALLOC
+ depends on KASAN && HAVE_ARCH_KASAN_VMALLOC && !XEN_PV
help
By default, the shadow region for vmalloc space is the read-only
zero page. This means that KASAN cannot detect errors involving
--
2.17.1

2020-01-09 10:47:23

by Jürgen Groß

Subject: Re: [PATCH v1 2/4] x86/xen: add basic KASAN support for PV kernel

On 08.01.20 16:20, Sergey Dyasli wrote:
> This enables to use Outline instrumentation for Xen PV kernels.
>
> KASAN_INLINE and KASAN_VMALLOC options currently lead to boot crashes
> and hence disabled.
>
> Signed-off-by: Sergey Dyasli <[email protected]>
> ---
> RFC --> v1:
> - New functions with declarations in xen/xen-ops.h
> - Fixed the issue with free_kernel_image_pages() with the help of
> xen_pv_kasan_unpin_pgd()
> ---
> arch/x86/mm/kasan_init_64.c | 12 ++++++++++++
> arch/x86/xen/Makefile | 7 +++++++
> arch/x86/xen/enlighten_pv.c | 3 +++
> arch/x86/xen/mmu_pv.c | 39 +++++++++++++++++++++++++++++++++++++
> drivers/xen/Makefile | 2 ++
> include/xen/xen-ops.h | 4 ++++
> kernel/Makefile | 2 ++
> lib/Kconfig.kasan | 3 ++-
> 8 files changed, 71 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
> index cf5bc37c90ac..902a6a152d33 100644
> --- a/arch/x86/mm/kasan_init_64.c
> +++ b/arch/x86/mm/kasan_init_64.c
> @@ -13,6 +13,9 @@
> #include <linux/sched/task.h>
> #include <linux/vmalloc.h>
>
> +#include <xen/xen.h>
> +#include <xen/xen-ops.h>
> +
> #include <asm/e820/types.h>
> #include <asm/pgalloc.h>
> #include <asm/tlbflush.h>
> @@ -332,6 +335,11 @@ void __init kasan_early_init(void)
> for (i = 0; pgtable_l5_enabled() && i < PTRS_PER_P4D; i++)
> kasan_early_shadow_p4d[i] = __p4d(p4d_val);
>
> + if (xen_pv_domain()) {
> + pgd_t *pv_top_pgt = xen_pv_kasan_early_init();

You are breaking the build with CONFIG_XEN_PV undefined here.

> + kasan_map_early_shadow(pv_top_pgt);
> + }
> +
> kasan_map_early_shadow(early_top_pgt);
> kasan_map_early_shadow(init_top_pgt);
> }
> @@ -369,6 +377,8 @@ void __init kasan_init(void)
> __pgd(__pa(tmp_p4d_table) | _KERNPG_TABLE));
> }
>
> + xen_pv_kasan_pin_pgd(early_top_pgt);

Same here (and below). For the pin/unpin variants I'd rather have
an inline wrapper containing the "if (xen_pv_domain())" in xen-ops.h
which can easily contain the needed #ifdef CONFIG_XEN_PV.


Juergen

2020-01-09 12:09:34

by Vlastimil Babka

Subject: Re: [PATCH v1 4/4] xen/netback: Fix grant copy across page boundary with KASAN

On 1/8/20 4:21 PM, Sergey Dyasli wrote:
> From: Ross Lagerwall <[email protected]>
>
> When KASAN (or SLUB_DEBUG) is turned on, the normal expectation that
> allocations are aligned to the next power of 2 of the size does not
> hold.

Hmm, really? They should hold after commit 59bb47985c1d ("mm, sl[aou]b:
guarantee natural alignment for kmalloc(power-of-two)"), i.e. since 5.4.

But actually the guarantee is only for precise power of two sizes given
to kmalloc(). Allocations of sizes that end up using the 96 or 192 bytes
kmalloc cache have no such guarantee. But those might then cross page
boundary also without SLUB_DEBUG.
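The 96/192-byte exception can be made concrete with a small model of kmalloc size-class selection (a sketch of the standard generic size classes, not the kernel's actual kmalloc_index() implementation):

```c
#include <assert.h>

/* Round a request up to its kmalloc cache size: next power of two,
 * except for the dedicated 96- and 192-byte caches, whose objects
 * are not power-of-two sized or aligned. */
static unsigned int kmalloc_cache_size(unsigned int size)
{
	unsigned int n = 8;

	if (size > 64 && size <= 96)
		return 96;
	if (size > 128 && size <= 192)
		return 192;
	while (n < size)
		n <<= 1;
	return n;
}
```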

> Therefore, handle grant copies that cross page boundaries.
>
> Signed-off-by: Ross Lagerwall <[email protected]>
> Signed-off-by: Sergey Dyasli <[email protected]>
> ---
> RFC --> v1:
> - Added BUILD_BUG_ON to the netback patch
> - xenvif_idx_release() now located outside the loop
>
> CC: Wei Liu <[email protected]>
> CC: Paul Durrant <[email protected]>
> ---
> drivers/net/xen-netback/common.h | 2 +-
> drivers/net/xen-netback/netback.c | 59 +++++++++++++++++++++++++------
> 2 files changed, 49 insertions(+), 12 deletions(-)
>
> diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
> index 05847eb91a1b..e57684415edd 100644
> --- a/drivers/net/xen-netback/common.h
> +++ b/drivers/net/xen-netback/common.h
> @@ -155,7 +155,7 @@ struct xenvif_queue { /* Per-queue data for xenvif */
> struct pending_tx_info pending_tx_info[MAX_PENDING_REQS];
> grant_handle_t grant_tx_handle[MAX_PENDING_REQS];
>
> - struct gnttab_copy tx_copy_ops[MAX_PENDING_REQS];
> + struct gnttab_copy tx_copy_ops[MAX_PENDING_REQS * 2];
> struct gnttab_map_grant_ref tx_map_ops[MAX_PENDING_REQS];
> struct gnttab_unmap_grant_ref tx_unmap_ops[MAX_PENDING_REQS];
> /* passed to gnttab_[un]map_refs with pages under (un)mapping */
> diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
> index 0020b2e8c279..33b8f8d043e6 100644
> --- a/drivers/net/xen-netback/netback.c
> +++ b/drivers/net/xen-netback/netback.c
> @@ -320,6 +320,7 @@ static int xenvif_count_requests(struct xenvif_queue *queue,
>
> struct xenvif_tx_cb {
> u16 pending_idx;
> + u8 copies;
> };
>
> #define XENVIF_TX_CB(skb) ((struct xenvif_tx_cb *)(skb)->cb)
> @@ -439,6 +440,7 @@ static int xenvif_tx_check_gop(struct xenvif_queue *queue,
> {
> struct gnttab_map_grant_ref *gop_map = *gopp_map;
> u16 pending_idx = XENVIF_TX_CB(skb)->pending_idx;
> + u8 copies = XENVIF_TX_CB(skb)->copies;
> /* This always points to the shinfo of the skb being checked, which
> * could be either the first or the one on the frag_list
> */
> @@ -450,23 +452,26 @@ static int xenvif_tx_check_gop(struct xenvif_queue *queue,
> int nr_frags = shinfo->nr_frags;
> const bool sharedslot = nr_frags &&
> frag_get_pending_idx(&shinfo->frags[0]) == pending_idx;
> - int i, err;
> + int i, err = 0;
>
> - /* Check status of header. */
> - err = (*gopp_copy)->status;
> - if (unlikely(err)) {
> - if (net_ratelimit())
> - netdev_dbg(queue->vif->dev,
> + while (copies) {
> + /* Check status of header. */
> + int newerr = (*gopp_copy)->status;
> + if (unlikely(newerr)) {
> + if (net_ratelimit())
> + netdev_dbg(queue->vif->dev,
> "Grant copy of header failed! status: %d pending_idx: %u ref: %u\n",
> (*gopp_copy)->status,
> pending_idx,
> (*gopp_copy)->source.u.ref);
> - /* The first frag might still have this slot mapped */
> - if (!sharedslot)
> - xenvif_idx_release(queue, pending_idx,
> - XEN_NETIF_RSP_ERROR);
> + err = newerr;
> + }
> + (*gopp_copy)++;
> + copies--;
> }
> - (*gopp_copy)++;
> + /* The first frag might still have this slot mapped */
> + if (unlikely(err) && !sharedslot)
> + xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_ERROR);
>
> check_frags:
> for (i = 0; i < nr_frags; i++, gop_map++) {
> @@ -910,6 +915,7 @@ static void xenvif_tx_build_gops(struct xenvif_queue *queue,
> xenvif_tx_err(queue, &txreq, extra_count, idx);
> break;
> }
> + XENVIF_TX_CB(skb)->copies = 0;
>
> skb_shinfo(skb)->nr_frags = ret;
> if (data_len < txreq.size)
> @@ -933,6 +939,7 @@ static void xenvif_tx_build_gops(struct xenvif_queue *queue,
> "Can't allocate the frag_list skb.\n");
> break;
> }
> + XENVIF_TX_CB(nskb)->copies = 0;
> }
>
> if (extras[XEN_NETIF_EXTRA_TYPE_GSO - 1].type) {
> @@ -990,6 +997,31 @@ static void xenvif_tx_build_gops(struct xenvif_queue *queue,
>
> queue->tx_copy_ops[*copy_ops].len = data_len;
> queue->tx_copy_ops[*copy_ops].flags = GNTCOPY_source_gref;
> + XENVIF_TX_CB(skb)->copies++;
> +
> + if (offset_in_page(skb->data) + data_len > XEN_PAGE_SIZE) {
> + unsigned int extra_len = offset_in_page(skb->data) +
> + data_len - XEN_PAGE_SIZE;
> +
> + queue->tx_copy_ops[*copy_ops].len -= extra_len;
> + (*copy_ops)++;
> +
> + queue->tx_copy_ops[*copy_ops].source.u.ref = txreq.gref;
> + queue->tx_copy_ops[*copy_ops].source.domid =
> + queue->vif->domid;
> + queue->tx_copy_ops[*copy_ops].source.offset =
> + txreq.offset + data_len - extra_len;
> +
> + queue->tx_copy_ops[*copy_ops].dest.u.gmfn =
> + virt_to_gfn(skb->data + data_len - extra_len);
> + queue->tx_copy_ops[*copy_ops].dest.domid = DOMID_SELF;
> + queue->tx_copy_ops[*copy_ops].dest.offset = 0;
> +
> + queue->tx_copy_ops[*copy_ops].len = extra_len;
> + queue->tx_copy_ops[*copy_ops].flags = GNTCOPY_source_gref;
> +
> + XENVIF_TX_CB(skb)->copies++;
> + }
>
> (*copy_ops)++;
>
> @@ -1674,5 +1706,10 @@ static void __exit netback_fini(void)
> }
> module_exit(netback_fini);
>
> +static void __init __maybe_unused build_assertions(void)
> +{
> + BUILD_BUG_ON(sizeof(struct xenvif_tx_cb) > 48);
> +}
> +
> MODULE_LICENSE("Dual BSD/GPL");
> MODULE_ALIAS("xen-backend:vif");
>

2020-01-09 13:43:25

by Paul Durrant

Subject: Re: [PATCH v1 4/4] xen/netback: Fix grant copy across page boundary with KASAN

On Wed, 8 Jan 2020 at 15:21, Sergey Dyasli <[email protected]> wrote:
>
> From: Ross Lagerwall <[email protected]>
>
> When KASAN (or SLUB_DEBUG) is turned on, the normal expectation that
> allocations are aligned to the next power of 2 of the size does not
> hold. Therefore, handle grant copies that cross page boundaries.
>
> Signed-off-by: Ross Lagerwall <[email protected]>
> Signed-off-by: Sergey Dyasli <[email protected]>
> ---
> RFC --> v1:
> - Added BUILD_BUG_ON to the netback patch
> - xenvif_idx_release() now located outside the loop
>
> CC: Wei Liu <[email protected]>
> CC: Paul Durrant <[email protected]>
[snip]
>
> +static void __init __maybe_unused build_assertions(void)
> +{
> + BUILD_BUG_ON(sizeof(struct xenvif_tx_cb) > 48);

FIELD_SIZEOF(struct sk_buff, cb) rather than a magic '48' I think.

Paul

> +}
> +
> MODULE_LICENSE("Dual BSD/GPL");
> MODULE_ALIAS("xen-backend:vif");
> --
> 2.17.1
>

2020-01-09 23:32:11

by Boris Ostrovsky

Subject: Re: [PATCH v1 2/4] x86/xen: add basic KASAN support for PV kernel



On 1/8/20 10:20 AM, Sergey Dyasli wrote:
> @@ -1943,6 +1973,15 @@ void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
> if (i && i < pgd_index(__START_KERNEL_map))
> init_top_pgt[i] = ((pgd_t *)xen_start_info->pt_base)[i];
>
> +#ifdef CONFIG_KASAN
> + /*
> + * Copy KASAN mappings
> + * ffffec0000000000 - fffffbffffffffff (=44 bits) kasan shadow memory (16TB)
> + */
> + for (i = 0xec0 >> 3; i < 0xfc0 >> 3; i++)

Are you referring here to  KASAN_SHADOW_START and KASAN_SHADOW_END? If
so, can you use them instead?

-boris

> + init_top_pgt[i] = ((pgd_t *)xen_start_info->pt_base)[i];
> +#endif
> +
>

2020-01-10 11:08:53

by Sergey Dyasli

Subject: Re: [PATCH v1 2/4] x86/xen: add basic KASAN support for PV kernel

On 09/01/2020 09:15, Jürgen Groß wrote:
> On 08.01.20 16:20, Sergey Dyasli wrote:
>> This enables to use Outline instrumentation for Xen PV kernels.
>>
>> KASAN_INLINE and KASAN_VMALLOC options currently lead to boot crashes
>> and hence disabled.
>>
>> Signed-off-by: Sergey Dyasli <[email protected]>
>> ---
>> RFC --> v1:
>> - New functions with declarations in xen/xen-ops.h
>> - Fixed the issue with free_kernel_image_pages() with the help of
>> xen_pv_kasan_unpin_pgd()
>> ---
>> arch/x86/mm/kasan_init_64.c | 12 ++++++++++++
>> arch/x86/xen/Makefile | 7 +++++++
>> arch/x86/xen/enlighten_pv.c | 3 +++
>> arch/x86/xen/mmu_pv.c | 39 +++++++++++++++++++++++++++++++++++++
>> drivers/xen/Makefile | 2 ++
>> include/xen/xen-ops.h | 4 ++++
>> kernel/Makefile | 2 ++
>> lib/Kconfig.kasan | 3 ++-
>> 8 files changed, 71 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
>> index cf5bc37c90ac..902a6a152d33 100644
>> --- a/arch/x86/mm/kasan_init_64.c
>> +++ b/arch/x86/mm/kasan_init_64.c
>> @@ -13,6 +13,9 @@
>> #include <linux/sched/task.h>
>> #include <linux/vmalloc.h>
>> +#include <xen/xen.h>
>> +#include <xen/xen-ops.h>
>> +
>> #include <asm/e820/types.h>
>> #include <asm/pgalloc.h>
>> #include <asm/tlbflush.h>
>> @@ -332,6 +335,11 @@ void __init kasan_early_init(void)
>> for (i = 0; pgtable_l5_enabled() && i < PTRS_PER_P4D; i++)
>> kasan_early_shadow_p4d[i] = __p4d(p4d_val);
>> + if (xen_pv_domain()) {
>> + pgd_t *pv_top_pgt = xen_pv_kasan_early_init();
>
> You are breaking the build with CONFIG_XEN_PV undefined here.

Right, the following is needed:

diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
index 91d66520f0a3..3d20f000af12 100644
--- a/include/xen/xen-ops.h
+++ b/include/xen/xen-ops.h
@@ -241,8 +241,14 @@ static inline void xen_preemptible_hcall_end(void)

#endif /* CONFIG_PREEMPT */

+#if defined(CONFIG_XEN_PV)
pgd_t *xen_pv_kasan_early_init(void);
void xen_pv_kasan_pin_pgd(pgd_t *pgd);
void xen_pv_kasan_unpin_pgd(pgd_t *pgd);
+#else
+static inline pgd_t *xen_pv_kasan_early_init(void) { return NULL; }
+static inline void xen_pv_kasan_pin_pgd(pgd_t *pgd) { }
+static inline void xen_pv_kasan_unpin_pgd(pgd_t *pgd) { }
+#endif /* defined(CONFIG_XEN_PV) */

#endif /* INCLUDE_XEN_OPS_H */

--
Thanks,
Sergey

2020-01-10 11:49:07

by Sergey Dyasli

Subject: Re: [PATCH v1 2/4] x86/xen: add basic KASAN support for PV kernel

On 09/01/2020 23:27, Boris Ostrovsky wrote:
>
>
> On 1/8/20 10:20 AM, Sergey Dyasli wrote:
>> @@ -1943,6 +1973,15 @@ void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
>>       if (i && i < pgd_index(__START_KERNEL_map))
>>           init_top_pgt[i] = ((pgd_t *)xen_start_info->pt_base)[i];
>>   +#ifdef CONFIG_KASAN
>> +    /*
>> +     * Copy KASAN mappings
>> +     * ffffec0000000000 - fffffbffffffffff (=44 bits) kasan shadow memory (16TB)
>> +     */
>> +    for (i = 0xec0 >> 3; i < 0xfc0 >> 3; i++)
>
> Are you referring here to  KASAN_SHADOW_START and KASAN_SHADOW_END? If so, can you use them instead?

Indeed, the following macros make the code neater:

#ifdef CONFIG_KASAN
	/* Copy KASAN mappings */
	for (i = pgd_index(KASAN_SHADOW_START);
	     i < pgd_index(KASAN_SHADOW_END);
	     i++)
		init_top_pgt[i] = ((pgd_t *)xen_start_info->pt_base)[i];
#endif /* ifdef CONFIG_KASAN */
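One can check that the symbolic version selects the same PGD slots as the magic numbers (pgd_index() is reimplemented here for the 4-level x86-64 layout; the shadow bounds are from the comment in the original patch, with the end taken as exclusive):

```c
#include <assert.h>
#include <stdint.h>

#define PGDIR_SHIFT	39	/* 4-level paging */
#define PTRS_PER_PGD	512

#define KASAN_SHADOW_START	0xffffec0000000000ULL
#define KASAN_SHADOW_END	0xfffffc0000000000ULL	/* exclusive */

static unsigned int pgd_index(uint64_t addr)
{
	return (addr >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1);
}
```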

--
Thanks,
Sergey

2020-01-10 13:08:38

by kernel test robot

Subject: Re: [PATCH v1 2/4] x86/xen: add basic KASAN support for PV kernel

Hi Sergey,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on net-next/master]
[also build test ERROR on net/master linus/master v5.5-rc5 next-20200109]
[cannot apply to xen-tip/linux-next]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system. BTW, we also suggest to use '--base' option to specify the
base tree in git format-patch, please see https://stackoverflow.com/a/37406982]

url: https://github.com/0day-ci/linux/commits/Sergey-Dyasli/basic-KASAN-support-for-Xen-PV-domains/20200110-042623
base: https://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git 4a4a52d49d11f5c4a0df8b9806c8c5563801f753
config: x86_64-randconfig-b003-20200109 (attached as .config)
compiler: gcc-7 (Debian 7.5.0-3) 7.5.0
reproduce:
# save the attached .config to linux build tree
make ARCH=x86_64

If you fix the issue, kindly add following tag
Reported-by: kbuild test robot <[email protected]>

All errors (new ones prefixed by >>):

ld: arch/x86/mm/kasan_init_64.o: in function `kasan_init':
>> kasan_init_64.c:(.init.text+0x83c): undefined reference to `xen_pv_kasan_pin_pgd'
>> ld: kasan_init_64.c:(.init.text+0xc38): undefined reference to `xen_pv_kasan_unpin_pgd'

---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/hyperkitty/list/[email protected] Intel Corporation



2020-01-10 14:28:54

by Sergey Dyasli

Subject: Re: [PATCH v1 4/4] xen/netback: Fix grant copy across page boundary with KASAN

On 09/01/2020 13:36, Paul Durrant wrote:
> On Wed, 8 Jan 2020 at 15:21, Sergey Dyasli <[email protected]> wrote:
>>
>> From: Ross Lagerwall <[email protected]>
>>
>> When KASAN (or SLUB_DEBUG) is turned on, the normal expectation that
>> allocations are aligned to the next power of 2 of the size does not
>> hold. Therefore, handle grant copies that cross page boundaries.
>>
>> Signed-off-by: Ross Lagerwall <[email protected]>
>> Signed-off-by: Sergey Dyasli <[email protected]>
>> ---
>> RFC --> v1:
>> - Added BUILD_BUG_ON to the netback patch
>> - xenvif_idx_release() now located outside the loop
>>
>> CC: Wei Liu <[email protected]>
>> CC: Paul Durrant <[email protected]>
> [snip]
>>
>> +static void __init __maybe_unused build_assertions(void)
>> +{
>> + BUILD_BUG_ON(sizeof(struct xenvif_tx_cb) > 48);
>
> FIELD_SIZEOF(struct sk_buff, cb) rather than a magic '48' I think.

The macro got renamed recently, so now it should be:

sizeof_field(struct sk_buff, cb)

Thanks for the suggestion.

--
Sergey

2020-01-10 14:43:13

by kernel test robot

Subject: Re: [PATCH v1 1/4] kasan: introduce set_pmd_early_shadow()

Hi Sergey,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on net-next/master]
[also build test ERROR on net/master linus/master v5.5-rc5 next-20200109]
[cannot apply to xen-tip/linux-next]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system. BTW, we also suggest to use '--base' option to specify the
base tree in git format-patch, please see https://stackoverflow.com/a/37406982]

url: https://github.com/0day-ci/linux/commits/Sergey-Dyasli/basic-KASAN-support-for-Xen-PV-domains/20200110-042623
base: https://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git 4a4a52d49d11f5c4a0df8b9806c8c5563801f753
config: arm64-randconfig-a001-20200109 (attached as .config)
compiler: aarch64-linux-gcc (GCC) 7.5.0
reproduce:
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# save the attached .config to linux build tree
GCC_VERSION=7.5.0 make.cross ARCH=arm64

If you fix the issue, kindly add following tag
Reported-by: kbuild test robot <[email protected]>

All error/warnings (new ones prefixed by >>):

In file included from arch/arm64/include/asm/kasan.h:9:0,
from arch/arm64/include/asm/processor.h:34,
from include/asm-generic/qrwlock.h:14,
from ./arch/arm64/include/generated/asm/qrwlock.h:1,
from arch/arm64/include/asm/spinlock.h:8,
from include/linux/spinlock.h:89,
from include/linux/mmzone.h:8,
from include/linux/gfp.h:6,
from include/linux/mm.h:10,
from include/linux/memblock.h:13,
from mm/kasan/init.c:14:
mm/kasan/init.c: In function 'set_pmd_early_shadow':
>> mm/kasan/init.c:90:43: error: '_PAGE_TABLE' undeclared (first use in this function); did you mean 'NR_PAGETABLE'?
set_pmd(pmd, __pmd(__pa(early_shadow) | _PAGE_TABLE));
^
arch/arm64/include/asm/pgtable-types.h:40:30: note: in definition of macro '__pgd'
#define __pgd(x) ((pgd_t) { (x) } )
^
>> include/asm-generic/pgtable-nopmd.h:50:32: note: in expansion of macro '__pud'
#define __pmd(x) ((pmd_t) { __pud(x) } )
^~~~~
>> mm/kasan/init.c:90:16: note: in expansion of macro '__pmd'
set_pmd(pmd, __pmd(__pa(early_shadow) | _PAGE_TABLE));
^~~~~
mm/kasan/init.c:90:43: note: each undeclared identifier is reported only once for each function it appears in
set_pmd(pmd, __pmd(__pa(early_shadow) | _PAGE_TABLE));
^
arch/arm64/include/asm/pgtable-types.h:40:30: note: in definition of macro '__pgd'
#define __pgd(x) ((pgd_t) { (x) } )
^
>> include/asm-generic/pgtable-nopmd.h:50:32: note: in expansion of macro '__pud'
#define __pmd(x) ((pmd_t) { __pud(x) } )
^~~~~
>> mm/kasan/init.c:90:16: note: in expansion of macro '__pmd'
set_pmd(pmd, __pmd(__pa(early_shadow) | _PAGE_TABLE));
^~~~~
--
In file included from arch/arm64/include/asm/kasan.h:9:0,
from arch/arm64/include/asm/processor.h:34,
from include/asm-generic/qrwlock.h:14,
from ./arch/arm64/include/generated/asm/qrwlock.h:1,
from arch/arm64/include/asm/spinlock.h:8,
from include/linux/spinlock.h:89,
from include/linux/mmzone.h:8,
from include/linux/gfp.h:6,
from include/linux/mm.h:10,
from include/linux/memblock.h:13,
from mm//kasan/init.c:14:
mm//kasan/init.c: In function 'set_pmd_early_shadow':
mm//kasan/init.c:90:43: error: '_PAGE_TABLE' undeclared (first use in this function); did you mean 'NR_PAGETABLE'?
set_pmd(pmd, __pmd(__pa(early_shadow) | _PAGE_TABLE));
^
arch/arm64/include/asm/pgtable-types.h:40:30: note: in definition of macro '__pgd'
#define __pgd(x) ((pgd_t) { (x) } )
^
>> include/asm-generic/pgtable-nopmd.h:50:32: note: in expansion of macro '__pud'
#define __pmd(x) ((pmd_t) { __pud(x) } )
^~~~~
mm//kasan/init.c:90:16: note: in expansion of macro '__pmd'
set_pmd(pmd, __pmd(__pa(early_shadow) | _PAGE_TABLE));
^~~~~
mm//kasan/init.c:90:43: note: each undeclared identifier is reported only once for each function it appears in
set_pmd(pmd, __pmd(__pa(early_shadow) | _PAGE_TABLE));
^
arch/arm64/include/asm/pgtable-types.h:40:30: note: in definition of macro '__pgd'
#define __pgd(x) ((pgd_t) { (x) } )
^
>> include/asm-generic/pgtable-nopmd.h:50:32: note: in expansion of macro '__pud'
#define __pmd(x) ((pmd_t) { __pud(x) } )
^~~~~
mm//kasan/init.c:90:16: note: in expansion of macro '__pmd'
set_pmd(pmd, __pmd(__pa(early_shadow) | _PAGE_TABLE));
^~~~~

vim +90 mm/kasan/init.c

> 14 #include <linux/memblock.h>
15 #include <linux/init.h>
16 #include <linux/kasan.h>
17 #include <linux/kernel.h>
18 #include <linux/mm.h>
19 #include <linux/pfn.h>
20 #include <linux/slab.h>
21
22 #include <asm/page.h>
23 #include <asm/pgalloc.h>
24
25 #include "kasan.h"
26
27 /*
28 * This page serves two purposes:
29 * - It used as early shadow memory. The entire shadow region populated
30 * with this page, before we will be able to setup normal shadow memory.
31 * - Latter it reused it as zero shadow to cover large ranges of memory
32 * that allowed to access, but not handled by kasan (vmalloc/vmemmap ...).
33 */
34 unsigned char kasan_early_shadow_page[PAGE_SIZE] __page_aligned_bss;
35
36 #if CONFIG_PGTABLE_LEVELS > 4
37 p4d_t kasan_early_shadow_p4d[MAX_PTRS_PER_P4D] __page_aligned_bss;
38 static inline bool kasan_p4d_table(pgd_t pgd)
39 {
40 return pgd_page(pgd) == virt_to_page(lm_alias(kasan_early_shadow_p4d));
41 }
42 #else
43 static inline bool kasan_p4d_table(pgd_t pgd)
44 {
45 return false;
46 }
47 #endif
48 #if CONFIG_PGTABLE_LEVELS > 3
49 pud_t kasan_early_shadow_pud[PTRS_PER_PUD] __page_aligned_bss;
50 static inline bool kasan_pud_table(p4d_t p4d)
51 {
52 return p4d_page(p4d) == virt_to_page(lm_alias(kasan_early_shadow_pud));
53 }
54 #else
55 static inline bool kasan_pud_table(p4d_t p4d)
56 {
57 return false;
58 }
59 #endif
60 #if CONFIG_PGTABLE_LEVELS > 2
61 pmd_t kasan_early_shadow_pmd[PTRS_PER_PMD] __page_aligned_bss;
62 static inline bool kasan_pmd_table(pud_t pud)
63 {
64 return pud_page(pud) == virt_to_page(lm_alias(kasan_early_shadow_pmd));
65 }
66 #else
67 static inline bool kasan_pmd_table(pud_t pud)
68 {
69 return false;
70 }
71 #endif
72 pte_t kasan_early_shadow_pte[PTRS_PER_PTE] __page_aligned_bss;
73
74 static inline bool kasan_pte_table(pmd_t pmd)
75 {
76 return pmd_page(pmd) == virt_to_page(lm_alias(kasan_early_shadow_pte));
77 }
78
79 static inline bool kasan_early_shadow_page_entry(pte_t pte)
80 {
81 return pte_page(pte) == virt_to_page(lm_alias(kasan_early_shadow_page));
82 }
83
84 static inline void set_pmd_early_shadow(pmd_t *pmd)
85 {
86 static bool pmd_populated = false;
87 pte_t *early_shadow = lm_alias(kasan_early_shadow_pte);
88
89 if (likely(pmd_populated)) {
> 90 set_pmd(pmd, __pmd(__pa(early_shadow) | _PAGE_TABLE));
91 } else {
92 pmd_populate_kernel(&init_mm, pmd, early_shadow);
93 pmd_populated = true;
94 }
95 }
96

---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/hyperkitty/list/[email protected] Intel Corporation



2020-01-10 17:22:13

by kernel test robot

Subject: Re: [PATCH v1 2/4] x86/xen: add basic KASAN support for PV kernel

Hi Sergey,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on net-next/master]
[also build test ERROR on net/master linus/master v5.5-rc5 next-20200109]
[cannot apply to xen-tip/linux-next]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system. BTW, we also suggest to use '--base' option to specify the
base tree in git format-patch, please see https://stackoverflow.com/a/37406982]

url: https://github.com/0day-ci/linux/commits/Sergey-Dyasli/basic-KASAN-support-for-Xen-PV-domains/20200110-042623
base: https://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git 4a4a52d49d11f5c4a0df8b9806c8c5563801f753
config: x86_64-rhel (attached as .config)
compiler: gcc-7 (Debian 7.5.0-3) 7.5.0
reproduce:
# save the attached .config to linux build tree
make ARCH=x86_64

If you fix the issue, kindly add following tag
Reported-by: kbuild test robot <[email protected]>

All errors (new ones prefixed by >>):

arch/x86/xen/mmu_pv.c: In function 'xen_pv_kasan_early_init':
>> arch/x86/xen/mmu_pv.c:1778:16: error: 'kasan_early_shadow_pud' undeclared (first use in this function); did you mean 'kasan_free_shadow'?
set_page_prot(kasan_early_shadow_pud, PAGE_KERNEL_RO);
^~~~~~~~~~~~~~~~~~~~~~
kasan_free_shadow
arch/x86/xen/mmu_pv.c:1778:16: note: each undeclared identifier is reported only once for each function it appears in
>> arch/x86/xen/mmu_pv.c:1779:16: error: 'kasan_early_shadow_pmd' undeclared (first use in this function); did you mean 'kasan_early_shadow_pud'?
set_page_prot(kasan_early_shadow_pmd, PAGE_KERNEL_RO);
^~~~~~~~~~~~~~~~~~~~~~
kasan_early_shadow_pud
>> arch/x86/xen/mmu_pv.c:1780:16: error: 'kasan_early_shadow_pte' undeclared (first use in this function); did you mean 'kasan_early_shadow_pmd'?
set_page_prot(kasan_early_shadow_pte, PAGE_KERNEL_RO);
^~~~~~~~~~~~~~~~~~~~~~
kasan_early_shadow_pmd

vim +1778 arch/x86/xen/mmu_pv.c

1774
1775 pgd_t * __init xen_pv_kasan_early_init(void)
1776 {
1777 /* PV page tables must be read-only */
> 1778 set_page_prot(kasan_early_shadow_pud, PAGE_KERNEL_RO);
> 1779 set_page_prot(kasan_early_shadow_pmd, PAGE_KERNEL_RO);
> 1780 set_page_prot(kasan_early_shadow_pte, PAGE_KERNEL_RO);
1781
1782 /* Return a pointer to the initial PV page tables */
1783 return (pgd_t *)xen_start_info->pt_base;
1784 }
1785

---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/hyperkitty/list/[email protected] Intel Corporation



2020-01-11 05:22:42

by kernel test robot

Subject: Re: [PATCH v1 1/4] kasan: introduce set_pmd_early_shadow()

Hi Sergey,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on net-next/master]
[also build test ERROR on net/master linus/master v5.5-rc5 next-20200109]
[cannot apply to xen-tip/linux-next]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system. BTW, we also suggest to use '--base' option to specify the
base tree in git format-patch, please see https://stackoverflow.com/a/37406982]

url: https://github.com/0day-ci/linux/commits/Sergey-Dyasli/basic-KASAN-support-for-Xen-PV-domains/20200110-042623
base: https://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git 4a4a52d49d11f5c4a0df8b9806c8c5563801f753
config: s390-allmodconfig (attached as .config)
compiler: s390-linux-gcc (GCC) 7.5.0
reproduce:
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# save the attached .config to linux build tree
GCC_VERSION=7.5.0 make.cross ARCH=s390

If you fix the issue, kindly add following tag
Reported-by: kbuild test robot <[email protected]>

All errors (new ones prefixed by >>):

mm//kasan/init.c: In function 'set_pmd_early_shadow':
>> mm//kasan/init.c:90:3: error: implicit declaration of function 'set_pmd'; did you mean 'get_pid'? [-Werror=implicit-function-declaration]
set_pmd(pmd, __pmd(__pa(early_shadow) | _PAGE_TABLE));
^~~~~~~
get_pid
In file included from arch/s390/include/asm/thread_info.h:26:0,
from include/linux/thread_info.h:38,
from arch/s390/include/asm/preempt.h:6,
from include/linux/preempt.h:78,
from include/linux/spinlock.h:51,
from include/linux/mmzone.h:8,
from include/linux/gfp.h:6,
from include/linux/mm.h:10,
from include/linux/memblock.h:13,
from mm//kasan/init.c:14:
mm//kasan/init.c:90:43: error: '_PAGE_TABLE' undeclared (first use in this function); did you mean 'NR_PAGETABLE'?
set_pmd(pmd, __pmd(__pa(early_shadow) | _PAGE_TABLE));
^
arch/s390/include/asm/page.h:96:37: note: in definition of macro '__pmd'
#define __pmd(x) ((pmd_t) { (x) } )
^
mm//kasan/init.c:90:43: note: each undeclared identifier is reported only once for each function it appears in
set_pmd(pmd, __pmd(__pa(early_shadow) | _PAGE_TABLE));
^
arch/s390/include/asm/page.h:96:37: note: in definition of macro '__pmd'
#define __pmd(x) ((pmd_t) { (x) } )
^
cc1: some warnings being treated as errors

vim +90 mm//kasan/init.c

83
84 static inline void set_pmd_early_shadow(pmd_t *pmd)
85 {
86 static bool pmd_populated = false;
87 pte_t *early_shadow = lm_alias(kasan_early_shadow_pte);
88
89 if (likely(pmd_populated)) {
> 90 set_pmd(pmd, __pmd(__pa(early_shadow) | _PAGE_TABLE));
91 } else {
92 pmd_populate_kernel(&init_mm, pmd, early_shadow);
93 pmd_populated = true;
94 }
95 }
96

---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/hyperkitty/list/[email protected] Intel Corporation



2020-01-15 10:56:01

by Sergey Dyasli

Subject: Re: [PATCH v1 1/4] kasan: introduce set_pmd_early_shadow()

Hi Juergen,

On 08/01/2020 15:20, Sergey Dyasli wrote:
> It is incorrect to call pmd_populate_kernel() multiple times for the
> same page table. Xen notices it during kasan_populate_early_shadow():
>
> (XEN) mm.c:3222:d155v0 mfn 3704b already pinned
>
> This happens for kasan_early_shadow_pte when USE_SPLIT_PTE_PTLOCKS is
> enabled. Fix this by introducing set_pmd_early_shadow() which calls
> pmd_populate_kernel() only once and uses set_pmd() afterwards.
>
> Signed-off-by: Sergey Dyasli <[email protected]>

Looks like the plan to use set_pmd() directly has failed: it's an
arch-specific function and can't be used in arch-independent code
(as kbuild test robot has proven).

Do you see any way out of this other than disabling SPLIT_PTE_PTLOCKS
for PV KASAN?

--
Thanks,
Sergey

2020-01-15 11:04:29

by Sergey Dyasli

Subject: Re: [PATCH v1 4/4] xen/netback: Fix grant copy across page boundary with KASAN

On 09/01/2020 10:33, Vlastimil Babka wrote:
> On 1/8/20 4:21 PM, Sergey Dyasli wrote:
>> From: Ross Lagerwall <[email protected]>
>>
>> When KASAN (or SLUB_DEBUG) is turned on, the normal expectation that
>> allocations are aligned to the next power of 2 of the size does not
>> hold.
>
> Hmm, really? They should after 59bb47985c1d ("mm, sl[aou]b: guarantee
> natural alignment for kmalloc(power-of-two)"), i.e. since 5.4.
>
> But actually the guarantee is only for precise power of two sizes given
> to kmalloc(). Allocations of sizes that end up using the 96 or 192 bytes
> kmalloc cache have no such guarantee. But those might then cross page
> boundary also without SLUB_DEBUG.

That's interesting to know. It's certainly not the case for 4.19 kernel
for which PV KASAN was initially developed. But I guess this means that
only patch description needs updating.

>
>> Therefore, handle grant copies that cross page boundaries.
>>
>> Signed-off-by: Ross Lagerwall <[email protected]>
>> Signed-off-by: Sergey Dyasli <[email protected]>

--
Thanks,
Sergey

2020-01-15 11:11:17

by Jürgen Groß

Subject: Re: [PATCH v1 1/4] kasan: introduce set_pmd_early_shadow()

On 15.01.20 11:54, Sergey Dyasli wrote:
> Hi Juergen,
>
> On 08/01/2020 15:20, Sergey Dyasli wrote:
>> It is incorrect to call pmd_populate_kernel() multiple times for the
>> same page table. Xen notices it during kasan_populate_early_shadow():
>>
>> (XEN) mm.c:3222:d155v0 mfn 3704b already pinned
>>
>> This happens for kasan_early_shadow_pte when USE_SPLIT_PTE_PTLOCKS is
>> enabled. Fix this by introducing set_pmd_early_shadow() which calls
>> pmd_populate_kernel() only once and uses set_pmd() afterwards.
>>
>> Signed-off-by: Sergey Dyasli <[email protected]>
>
> Looks like the plan to use set_pmd() directly has failed: it's an
> arch-specific function and can't be used in arch-independent code
> (as kbuild test robot has proven).
>
> Do you see any way out of this other than disabling SPLIT_PTE_PTLOCKS
> for PV KASAN?

Change set_pmd_early_shadow() like the following:

#ifdef CONFIG_XEN_PV
static inline void set_pmd_early_shadow(pmd_t *pmd, pte_t *early_shadow)
{
static bool pmd_populated = false;

if (likely(pmd_populated)) {
set_pmd(pmd, __pmd(__pa(early_shadow) | _PAGE_TABLE));
} else {
pmd_populate_kernel(&init_mm, pmd, early_shadow);
pmd_populated = true;
}
}
#else
static inline void set_pmd_early_shadow(pmd_t *pmd, pte_t *early_shadow)
{
pmd_populate_kernel(&init_mm, pmd, early_shadow);
}
#endif

... and move it to include/xen/xen-ops.h and call it with
lm_alias(kasan_early_shadow_pte) as the second parameter.


Juergen

2020-01-15 16:34:35

by Sergey Dyasli

Subject: Re: [PATCH v1 1/4] kasan: introduce set_pmd_early_shadow()

On 15/01/2020 11:09, Jürgen Groß wrote:
> On 15.01.20 11:54, Sergey Dyasli wrote:
>> Hi Juergen,
>>
>> On 08/01/2020 15:20, Sergey Dyasli wrote:
>>> It is incorrect to call pmd_populate_kernel() multiple times for the
>>> same page table. Xen notices it during kasan_populate_early_shadow():
>>>
>>> (XEN) mm.c:3222:d155v0 mfn 3704b already pinned
>>>
>>> This happens for kasan_early_shadow_pte when USE_SPLIT_PTE_PTLOCKS is
>>> enabled. Fix this by introducing set_pmd_early_shadow() which calls
>>> pmd_populate_kernel() only once and uses set_pmd() afterwards.
>>>
>>> Signed-off-by: Sergey Dyasli <[email protected]>
>>
>> Looks like the plan to use set_pmd() directly has failed: it's an
>> arch-specific function and can't be used in arch-independent code
>> (as kbuild test robot has proven).
>>
>> Do you see any way out of this other than disabling SPLIT_PTE_PTLOCKS
>> for PV KASAN?
>
> Change set_pmd_early_shadow() like the following:
>
> #ifdef CONFIG_XEN_PV
> static inline void set_pmd_early_shadow(pmd_t *pmd, pte_t *early_shadow)
> {
> static bool pmd_populated = false;
>
> if (likely(pmd_populated)) {
> set_pmd(pmd, __pmd(__pa(early_shadow) | _PAGE_TABLE));
> } else {
> pmd_populate_kernel(&init_mm, pmd, early_shadow);
> pmd_populated = true;
> }
> }
> #else
> static inline void set_pmd_early_shadow(pmd_t *pmd, pte_t *early_shadow)
> {
> pmd_populate_kernel(&init_mm, pmd, early_shadow);
> }
> #endif
>
> ... and move it to include/xen/xen-ops.h and call it with
> lm_alias(kasan_early_shadow_pte) as the second parameter.

Your suggestion to use an ifdef is really good, especially now that I've
figured out that CONFIG_XEN_PV implies X86. But I don't like the idea
of kasan code calling a non-empty function from xen-ops.h when
CONFIG_XEN_PV is not defined. I'd prefer to keep set_pmd_early_shadow()
in mm/kasan/init.c with the suggested ifdef.

--
Thanks,
Sergey

2020-01-16 09:29:55

by Jürgen Groß

Subject: Re: [PATCH v1 1/4] kasan: introduce set_pmd_early_shadow()

On 15.01.20 17:32, Sergey Dyasli wrote:
> On 15/01/2020 11:09, Jürgen Groß wrote:
>> On 15.01.20 11:54, Sergey Dyasli wrote:
>>> Hi Juergen,
>>>
>>> On 08/01/2020 15:20, Sergey Dyasli wrote:
>>>> It is incorrect to call pmd_populate_kernel() multiple times for the
>>>> same page table. Xen notices it during kasan_populate_early_shadow():
>>>>
>>>> (XEN) mm.c:3222:d155v0 mfn 3704b already pinned
>>>>
>>>> This happens for kasan_early_shadow_pte when USE_SPLIT_PTE_PTLOCKS is
>>>> enabled. Fix this by introducing set_pmd_early_shadow() which calls
>>>> pmd_populate_kernel() only once and uses set_pmd() afterwards.
>>>>
>>>> Signed-off-by: Sergey Dyasli <[email protected]>
>>>
>>> Looks like the plan to use set_pmd() directly has failed: it's an
>>> arch-specific function and can't be used in arch-independent code
>>> (as kbuild test robot has proven).
>>>
>>> Do you see any way out of this other than disabling SPLIT_PTE_PTLOCKS
>>> for PV KASAN?
>>
>> Change set_pmd_early_shadow() like the following:
>>
>> #ifdef CONFIG_XEN_PV
>> static inline void set_pmd_early_shadow(pmd_t *pmd, pte_t *early_shadow)
>> {
>> static bool pmd_populated = false;
>>
>> if (likely(pmd_populated)) {
>> set_pmd(pmd, __pmd(__pa(early_shadow) | _PAGE_TABLE));
>> } else {
>> pmd_populate_kernel(&init_mm, pmd, early_shadow);
>> pmd_populated = true;
>> }
>> }
>> #else
>> static inline void set_pmd_early_shadow(pmd_t *pmd, pte_t *early_shadow)
>> {
>> pmd_populate_kernel(&init_mm, pmd, early_shadow);
>> }
>> #endif
>>
>> ... and move it to include/xen/xen-ops.h and call it with
>> lm_alias(kasan_early_shadow_pte) as the second parameter.
>
> Your suggestion to use ifdef is really good, especially now when I
> figured out that CONFIG_XEN_PV implies X86. But I don't like the idea
> of kasan code calling a non-empty function from xen-ops.h when
> CONFIG_XEN_PV is not defined. I'd prefer to keep set_pmd_early_shadow()
> in mm/kasan/init.c with the suggested ifdef.

Fine with me.


Juergen