2021-08-25 09:18:24

by Kefeng Wang

Subject: [PATCH 0/4] ARM: Support KFENCE feature

Patches 1-3 add support for the KFENCE feature on ARM.

NOTE:
The context of the patch 2/3 changes in arch/arm/mm/fault.c is based on [1],
which refactors and cleans up the page fault code.

kfence_test is not useful when KFENCE is not enabled, so patch 4 skips the
KFENCE test when KFENCE is disabled.

I ran kfence_test on ARM QEMU with and without ARM_LPAE, and all tests passed.

[1] https://lore.kernel.org/linux-arm-kernel/[email protected]/

Kefeng Wang (4):
ARM: mm: Provide set_memory_valid()
ARM: mm: Provide is_write_fault()
ARM: Support KFENCE for ARM
mm: kfence: Only load kfence_test when kfence is enabled

arch/arm/Kconfig | 1 +
arch/arm/include/asm/kfence.h | 52 +++++++++++++++++++++++++++++++
arch/arm/include/asm/set_memory.h | 5 +++
arch/arm/mm/fault.c | 16 ++++++++--
arch/arm/mm/pageattr.c | 41 ++++++++++++++++++------
include/linux/kfence.h | 2 ++
mm/kfence/core.c | 8 +++++
mm/kfence/kfence_test.c | 2 ++
8 files changed, 114 insertions(+), 13 deletions(-)
create mode 100644 arch/arm/include/asm/kfence.h

--
2.26.2


2021-08-25 09:18:28

by Kefeng Wang

Subject: [PATCH 4/4] mm: kfence: Only load kfence_test when kfence is enabled

Provide a kfence_is_enabled() helper and only load the kfence_test module
when KFENCE is enabled.

Signed-off-by: Kefeng Wang <[email protected]>
---
include/linux/kfence.h | 2 ++
mm/kfence/core.c | 8 ++++++++
mm/kfence/kfence_test.c | 2 ++
3 files changed, 12 insertions(+)

diff --git a/include/linux/kfence.h b/include/linux/kfence.h
index 3fe6dd8a18c1..f08f24e8a726 100644
--- a/include/linux/kfence.h
+++ b/include/linux/kfence.h
@@ -22,6 +22,8 @@
#define KFENCE_POOL_SIZE ((CONFIG_KFENCE_NUM_OBJECTS + 1) * 2 * PAGE_SIZE)
extern char *__kfence_pool;

+bool kfence_is_enabled(void);
+
#ifdef CONFIG_KFENCE_STATIC_KEYS
#include <linux/static_key.h>
DECLARE_STATIC_KEY_FALSE(kfence_allocation_key);
diff --git a/mm/kfence/core.c b/mm/kfence/core.c
index 7a97db8bc8e7..f1aaa7ebdcad 100644
--- a/mm/kfence/core.c
+++ b/mm/kfence/core.c
@@ -51,6 +51,14 @@ static unsigned long kfence_sample_interval __read_mostly = CONFIG_KFENCE_SAMPLE
#endif
#define MODULE_PARAM_PREFIX "kfence."

+bool kfence_is_enabled(void)
+{
+ if (!kfence_sample_interval || !READ_ONCE(kfence_enabled))
+ return false;
+ return true;
+}
+EXPORT_SYMBOL_GPL(kfence_is_enabled);
+
static int param_set_sample_interval(const char *val, const struct kernel_param *kp)
{
unsigned long num;
diff --git a/mm/kfence/kfence_test.c b/mm/kfence/kfence_test.c
index eb6307c199ea..4087f9f1497e 100644
--- a/mm/kfence/kfence_test.c
+++ b/mm/kfence/kfence_test.c
@@ -847,6 +847,8 @@ static void unregister_tracepoints(struct tracepoint *tp, void *ignore)
*/
static int __init kfence_test_init(void)
{
+ if (!kfence_is_enabled())
+ return 0;
/*
* Because we want to be able to build the test as a module, we need to
* iterate through all known tracepoints, since the static registration
--
2.26.2
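
For illustration, here is a minimal sketch of how another module could gate
itself on the new helper. Only the kfence_is_enabled() call comes from the
patch above; the module and function names are hypothetical:

#include <linux/init.h>
#include <linux/kfence.h>
#include <linux/module.h>
#include <linux/printk.h>

static int __init kfence_example_init(void)
{
        /* Bail out early, exactly like kfence_test_init() does above. */
        if (!kfence_is_enabled()) {
                pr_info("kfence_example: KFENCE is disabled, skipping\n");
                return 0;
        }

        pr_info("kfence_example: KFENCE is enabled\n");
        return 0;
}
module_init(kfence_example_init);

MODULE_LICENSE("GPL");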

2021-08-25 09:18:54

by Kefeng Wang

Subject: [PATCH 2/4] ARM: mm: Provide is_write_fault()

Add is_write_fault() to check whether a fault was caused by a write access.

Signed-off-by: Kefeng Wang <[email protected]>
---
arch/arm/mm/fault.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
index bc8779d54a64..f7ab6dabe89f 100644
--- a/arch/arm/mm/fault.c
+++ b/arch/arm/mm/fault.c
@@ -207,6 +207,11 @@ static inline bool is_permission_fault(unsigned int fsr)
return false;
}

+static inline bool is_write_fault(unsigned int fsr)
+{
+ return (fsr & FSR_WRITE) && !(fsr & FSR_CM);
+}
+
static vm_fault_t __kprobes
__do_page_fault(struct mm_struct *mm, unsigned long addr, unsigned int flags,
unsigned long vma_flags, struct pt_regs *regs)
@@ -261,7 +266,7 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
if (user_mode(regs))
flags |= FAULT_FLAG_USER;

- if ((fsr & FSR_WRITE) && !(fsr & FSR_CM)) {
+ if (is_write_fault(fsr)) {
flags |= FAULT_FLAG_WRITE;
vm_flags = VM_WRITE;
}
--
2.26.2

2021-08-25 09:19:29

by Kefeng Wang

Subject: [PATCH 1/4] ARM: mm: Provide set_memory_valid()

Add set_memory_valid(), which makes the PTE entries of a range of pages valid
or invalid; it is used by the KFENCE support added in a later patch.

Signed-off-by: Kefeng Wang <[email protected]>
---
arch/arm/include/asm/set_memory.h | 5 ++++
arch/arm/mm/pageattr.c | 41 +++++++++++++++++++++++--------
2 files changed, 36 insertions(+), 10 deletions(-)

diff --git a/arch/arm/include/asm/set_memory.h b/arch/arm/include/asm/set_memory.h
index ec17fc0fda7a..bf1728e1af1d 100644
--- a/arch/arm/include/asm/set_memory.h
+++ b/arch/arm/include/asm/set_memory.h
@@ -11,11 +11,16 @@ int set_memory_ro(unsigned long addr, int numpages);
int set_memory_rw(unsigned long addr, int numpages);
int set_memory_x(unsigned long addr, int numpages);
int set_memory_nx(unsigned long addr, int numpages);
+int set_memory_valid(unsigned long addr, int numpages, int enable);
#else
static inline int set_memory_ro(unsigned long addr, int numpages) { return 0; }
static inline int set_memory_rw(unsigned long addr, int numpages) { return 0; }
static inline int set_memory_x(unsigned long addr, int numpages) { return 0; }
static inline int set_memory_nx(unsigned long addr, int numpages) { return 0; }
+static inline int set_memory_valid(unsigned long addr, int numpages, int enable)
+{
+ return 0;
+}
#endif

#endif
diff --git a/arch/arm/mm/pageattr.c b/arch/arm/mm/pageattr.c
index 9790ae3a8c68..7612a1c6b614 100644
--- a/arch/arm/mm/pageattr.c
+++ b/arch/arm/mm/pageattr.c
@@ -31,6 +31,24 @@ static bool in_range(unsigned long start, unsigned long size,
return start >= range_start && start < range_end &&
size <= range_end - start;
}
+/*
+ * This function assumes that the range is mapped with PAGE_SIZE pages.
+ */
+static int __change_memory_common(unsigned long start, unsigned long size,
+ pgprot_t set_mask, pgprot_t clear_mask)
+{
+ struct page_change_data data;
+ int ret;
+
+ data.set_mask = set_mask;
+ data.clear_mask = clear_mask;
+
+ ret = apply_to_page_range(&init_mm, start, size, change_page_range,
+ &data);
+
+ flush_tlb_kernel_range(start, start + size);
+ return ret;
+}

static int change_memory_common(unsigned long addr, int numpages,
pgprot_t set_mask, pgprot_t clear_mask)
@@ -38,8 +56,6 @@ static int change_memory_common(unsigned long addr, int numpages,
unsigned long start = addr & PAGE_MASK;
unsigned long end = PAGE_ALIGN(addr) + numpages * PAGE_SIZE;
unsigned long size = end - start;
- int ret;
- struct page_change_data data;

WARN_ON_ONCE(start != addr);

@@ -50,14 +66,7 @@ static int change_memory_common(unsigned long addr, int numpages,
!in_range(start, size, VMALLOC_START, VMALLOC_END))
return -EINVAL;

- data.set_mask = set_mask;
- data.clear_mask = clear_mask;
-
- ret = apply_to_page_range(&init_mm, start, size, change_page_range,
- &data);
-
- flush_tlb_kernel_range(start, end);
- return ret;
+ return __change_memory_common(start, size, set_mask, clear_mask);
}

int set_memory_ro(unsigned long addr, int numpages)
@@ -87,3 +96,15 @@ int set_memory_x(unsigned long addr, int numpages)
__pgprot(0),
__pgprot(L_PTE_XN));
}
+
+int set_memory_valid(unsigned long addr, int numpages, int enable)
+{
+ if (enable)
+ return __change_memory_common(addr, PAGE_SIZE * numpages,
+ __pgprot(L_PTE_VALID),
+ __pgprot(0));
+ else
+ return __change_memory_common(addr, PAGE_SIZE * numpages,
+ __pgprot(0),
+ __pgprot(L_PTE_VALID));
+}
--
2.26.2
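
As a usage illustration (not part of the patch), a caller that owns a
page-granularity kernel mapping could fault-protect a page and later restore
it as below; the function name and the assumption that 'addr' is page aligned
and PTE-mapped are hypothetical:

#include <asm/set_memory.h>

static void example_toggle_page(unsigned long addr)
{
        /* Clear L_PTE_VALID so any access to the page faults. */
        set_memory_valid(addr, 1, 0);

        /* ... the page is inaccessible here ... */

        /* Set L_PTE_VALID again so the page is usable as normal. */
        set_memory_valid(addr, 1, 1);
}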

2021-08-25 09:19:46

by Kefeng Wang

Subject: [PATCH 3/4] ARM: Support KFENCE for ARM

Add architecture specific implementation details for KFENCE and enable
KFENCE on ARM. In particular, this implements the required interface in
<asm/kfence.h>.

KFENCE requires that attributes for pages from its memory pool can
individually be set. Therefore, force the kfence pool to be mapped
at page granularity.

This patch was tested using the test cases in kfence_test.c, and all of them
passed with and without ARM_LPAE.

Signed-off-by: Kefeng Wang <[email protected]>
---
arch/arm/Kconfig | 1 +
arch/arm/include/asm/kfence.h | 52 +++++++++++++++++++++++++++++++++++
arch/arm/mm/fault.c | 9 ++++--
3 files changed, 60 insertions(+), 2 deletions(-)
create mode 100644 arch/arm/include/asm/kfence.h

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 7a8059ff6bb0..3798f82a0c0d 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -73,6 +73,7 @@ config ARM
select HAVE_ARCH_AUDITSYSCALL if AEABI && !OABI_COMPAT
select HAVE_ARCH_BITREVERSE if (CPU_32v7M || CPU_32v7) && !CPU_32v6
select HAVE_ARCH_JUMP_LABEL if !XIP_KERNEL && !CPU_ENDIAN_BE32 && MMU
+ select HAVE_ARCH_KFENCE if MMU
select HAVE_ARCH_KGDB if !CPU_ENDIAN_BE32 && MMU
select HAVE_ARCH_KASAN if MMU && !XIP_KERNEL
select HAVE_ARCH_MMAP_RND_BITS if MMU
diff --git a/arch/arm/include/asm/kfence.h b/arch/arm/include/asm/kfence.h
new file mode 100644
index 000000000000..eae7a12ab2a9
--- /dev/null
+++ b/arch/arm/include/asm/kfence.h
@@ -0,0 +1,52 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef __ASM_ARM_KFENCE_H
+#define __ASM_ARM_KFENCE_H
+
+#include <linux/kfence.h>
+#include <asm/set_memory.h>
+#include <asm/pgalloc.h>
+
+static inline int split_pmd_page(pmd_t *pmd, unsigned long addr)
+{
+ int i;
+ unsigned long pfn = PFN_DOWN(__pa((addr & PMD_MASK)));
+ pte_t *pte = pte_alloc_one_kernel(&init_mm);
+
+ if (!pte)
+ return -ENOMEM;
+
+ for (i = 0; i < PTRS_PER_PTE; i++)
+ set_pte_ext(pte + i, pfn_pte(pfn + i, PAGE_KERNEL), 0);
+ pmd_populate_kernel(&init_mm, pmd, pte);
+
+ flush_tlb_kernel_range(addr, addr + PMD_SIZE);
+ return 0;
+}
+
+static inline bool arch_kfence_init_pool(void)
+{
+ unsigned long addr;
+ pmd_t *pmd;
+
+ for (addr = (unsigned long)__kfence_pool; is_kfence_address((void *)addr);
+ addr += PAGE_SIZE) {
+ pmd = pmd_off_k(addr);
+
+ if (pmd_leaf(*pmd)) {
+ if (split_pmd_page(pmd, addr))
+ return false;
+ }
+ }
+
+ return true;
+}
+
+static inline bool kfence_protect_page(unsigned long addr, bool protect)
+{
+ set_memory_valid(addr, 1, !protect);
+
+ return true;
+}
+
+#endif /* __ASM_ARM_KFENCE_H */
diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
index f7ab6dabe89f..9fa221ffa1b9 100644
--- a/arch/arm/mm/fault.c
+++ b/arch/arm/mm/fault.c
@@ -17,6 +17,7 @@
#include <linux/sched/debug.h>
#include <linux/highmem.h>
#include <linux/perf_event.h>
+#include <linux/kfence.h>

#include <asm/system_misc.h>
#include <asm/system_info.h>
@@ -131,10 +132,14 @@ __do_kernel_fault(struct mm_struct *mm, unsigned long addr, unsigned int fsr,
/*
* No handler, we'll have to terminate things with extreme prejudice.
*/
- if (addr < PAGE_SIZE)
+ if (addr < PAGE_SIZE) {
msg = "NULL pointer dereference";
- else
+ } else {
+ if (kfence_handle_page_fault(addr, is_write_fault(fsr), regs))
+ return;
+
msg = "paging request";
+ }

die_kernel_fault(msg, mm, addr, fsr, regs);
}
--
2.26.2
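
For context, a rough sketch of how the generic KFENCE code drives the arch
hook added here (paraphrased, not the exact mm/kfence/core.c code): guard
pages around each KFENCE object are protected by clearing their valid bit, so
a stray access faults and is routed to kfence_handle_page_fault() by the
arch/arm/mm/fault.c hunk above.

#include <linux/kernel.h>
#include <asm/kfence.h>

/* Paraphrased wrappers; on ARM both end up in set_memory_valid(). */
static bool example_protect(unsigned long addr)
{
        return kfence_protect_page(ALIGN_DOWN(addr, PAGE_SIZE), true);
}

static bool example_unprotect(unsigned long addr)
{
        return kfence_protect_page(ALIGN_DOWN(addr, PAGE_SIZE), false);
}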

2021-08-25 09:33:13

by Alexander Potapenko

Subject: Re: [PATCH 4/4] mm: kfence: Only load kfence_test when kfence is enabled

On Wed, Aug 25, 2021 at 11:17 AM Kefeng Wang <[email protected]> wrote:
>
> Provide a kfence_is_enabled() helper and only load the kfence_test
> module when KFENCE is enabled.

What's wrong with the current behavior?
I think we need at least some way to tell the developer that KFENCE
does not work, and a failing test seems to be the perfect one.

> Signed-off-by: Kefeng Wang <[email protected]>
> ---
> include/linux/kfence.h | 2 ++
> mm/kfence/core.c | 8 ++++++++
> mm/kfence/kfence_test.c | 2 ++
> 3 files changed, 12 insertions(+)
>
> diff --git a/include/linux/kfence.h b/include/linux/kfence.h
> index 3fe6dd8a18c1..f08f24e8a726 100644
> --- a/include/linux/kfence.h
> +++ b/include/linux/kfence.h
> @@ -22,6 +22,8 @@
> #define KFENCE_POOL_SIZE ((CONFIG_KFENCE_NUM_OBJECTS + 1) * 2 * PAGE_SIZE)
> extern char *__kfence_pool;
>
> +bool kfence_is_enabled(void);
> +
> #ifdef CONFIG_KFENCE_STATIC_KEYS
> #include <linux/static_key.h>
> DECLARE_STATIC_KEY_FALSE(kfence_allocation_key);
> diff --git a/mm/kfence/core.c b/mm/kfence/core.c
> index 7a97db8bc8e7..f1aaa7ebdcad 100644
> --- a/mm/kfence/core.c
> +++ b/mm/kfence/core.c
> @@ -51,6 +51,14 @@ static unsigned long kfence_sample_interval __read_mostly = CONFIG_KFENCE_SAMPLE
> #endif
> #define MODULE_PARAM_PREFIX "kfence."
>
> +bool kfence_is_enabled(void)
> +{
> + if (!kfence_sample_interval || !READ_ONCE(kfence_enabled))
> + return false;
> + return true;
> +}
> +EXPORT_SYMBOL_GPL(kfence_is_enabled);
> +
> static int param_set_sample_interval(const char *val, const struct kernel_param *kp)
> {
> unsigned long num;
> diff --git a/mm/kfence/kfence_test.c b/mm/kfence/kfence_test.c
> index eb6307c199ea..4087f9f1497e 100644
> --- a/mm/kfence/kfence_test.c
> +++ b/mm/kfence/kfence_test.c
> @@ -847,6 +847,8 @@ static void unregister_tracepoints(struct tracepoint *tp, void *ignore)
> */
> static int __init kfence_test_init(void)
> {
> + if (!kfence_is_enabled())
> + return 0;
> /*
> * Because we want to be able to build the test as a module, we need to
> * iterate through all known tracepoints, since the static registration
> --
> 2.26.2
>



--
Alexander Potapenko
Software Engineer

Google Germany GmbH
Erika-Mann-Straße, 33
80636 München

Geschäftsführer: Paul Manicle, Halimah DeLaine Prado
Registergericht und -nummer: Hamburg, HRB 86891
Sitz der Gesellschaft: Hamburg

2021-08-25 09:57:17

by Marco Elver

Subject: Re: [PATCH 4/4] mm: kfence: Only load kfence_test when kfence is enabled

On Wed, 25 Aug 2021 at 11:31, Alexander Potapenko <[email protected]> wrote:
> On Wed, Aug 25, 2021 at 11:17 AM Kefeng Wang <[email protected]> wrote:
> >
> > Provide a kfence_is_enabled() helper and only load the kfence_test
> > module when KFENCE is enabled.
>
> What's wrong with the current behavior?
> I think we need at least some way to tell the developer that KFENCE
> does not work, and a failing test seems to be the perfect one.

Like Alex said, this is working as intended. The KFENCE test verifies
that everything is working as expected, *including* that KFENCE was
enabled, and has already helped us identify issues where we expected
it to be enabled! Trying to run the test when KFENCE was intentionally
disabled is therefore not a useful use case.

Please drop this patch.

2021-08-25 09:59:18

by Kefeng Wang

Subject: Re: [PATCH 4/4] mm: kfence: Only load kfence_test when kfence is enabled


On 2021/8/25 17:31, Alexander Potapenko wrote:
> On Wed, Aug 25, 2021 at 11:17 AM Kefeng Wang <[email protected]> wrote:
>> Provide a kfence_is_enabled() helper and only load the kfence_test
>> module when KFENCE is enabled.
> What's wrong with the current behavior?
> I think we need at least some way to tell the developer that KFENCE
> does not work, and a failing test seems to be the perfect one.

If KFENCE is not enabled, e.g. kfence.sample_interval=0, kfence_test spends
too much time and all tests fail. That is meaningless, so it is better to
just skip it ;)

>> diff --git a/mm/kfence/kfence_test.c b/mm/kfence/kfence_test.c
>> index eb6307c199ea..4087f9f1497e 100644
>> --- a/mm/kfence/kfence_test.c
>> +++ b/mm/kfence/kfence_test.c
>> @@ -847,6 +847,8 @@ static void unregister_tracepoints(struct tracepoint *tp, void *ignore)
>> */
>> static int __init kfence_test_init(void)
>> {
>> + if (!kfence_is_enabled())

Should we add an info message (e.g. a pr_info) here?

>> + return 0;
>> /*
>> * Because we want to be able to build the test as a module, we need to
>> * iterate through all known tracepoints, since the static registration
>> --
>> 2.26.2
>>
>
>

2021-08-25 10:03:38

by Marco Elver

Subject: Re: [PATCH 4/4] mm: kfence: Only load kfence_test when kfence is enabled

On Wed, 25 Aug 2021 at 11:55, Kefeng Wang <[email protected]> wrote:
> On 2021/8/25 17:31, Alexander Potapenko wrote:
> > On Wed, Aug 25, 2021 at 11:17 AM Kefeng Wang <[email protected]> wrote:
> >> Provide a kfence_is_enabled() helper and only load the kfence_test
> >> module when KFENCE is enabled.
> > What's wrong with the current behavior?
> > I think we need at least some way to tell the developer that KFENCE
> > does not work, and a failing test seems to be the perfect one.
>
> If KFENCE is not enabled, e.g. kfence.sample_interval=0, kfence_test spends
> too much time and all tests fail. That is meaningless, so it is better to
> just skip it ;)

But what is your use case?

I'd like to avoid the export of a new function that is pretty much unused.

2021-08-25 10:19:19

by Marco Elver

Subject: Re: [PATCH 0/4] ARM: Support KFENCE feature

On Wed, 25 Aug 2021 at 11:17, Kefeng Wang <[email protected]> wrote:
> Patches 1-3 add support for the KFENCE feature on ARM.
>
> NOTE:
> The context of the patch 2/3 changes in arch/arm/mm/fault.c is based on [1],
> which refactors and cleans up the page fault code.
>
> kfence_test is not useful when KFENCE is not enabled, so patch 4 skips the
> KFENCE test when KFENCE is disabled.
>
> I ran kfence_test on ARM QEMU with and without ARM_LPAE, and all tests passed.

Thank you for enabling KFENCE on ARM -- I'll leave arch-code review to
an ARM maintainer.

However, as said on the patch, please drop the change to the
kfence_test and associated changes. This is working as intended; while
you claim that it takes a long time to run when disabled, when running
manually you just should not run it when disabled. There are CI
systems that rely on the KUnit test output and the fact that the
various test cases say "not ok" etc. Changing that would mean such CI
systems would no longer fail if KFENCE was accidentally disabled (once
KFENCE is enabled on various CI, which we'd like to do at some point).
There are ways to fail the test faster, but they all complicate the
test for no good reason. (And the addition of a new exported function
that is essentially useless.)

Thanks,
-- Marco

2021-08-25 11:27:25

by Marco Elver

Subject: Re: [PATCH 0/4] ARM: Support KFENCE feature

On Wed, Aug 25, 2021 at 12:14PM +0200, Marco Elver wrote:
> On Wed, 25 Aug 2021 at 11:17, Kefeng Wang <[email protected]> wrote:
> > Patches 1-3 add support for the KFENCE feature on ARM.
> >
> > NOTE:
> > The context of the patch 2/3 changes in arch/arm/mm/fault.c is based on [1],
> > which refactors and cleans up the page fault code.
> >
> > kfence_test is not useful when KFENCE is not enabled, so patch 4 skips the
> > KFENCE test when KFENCE is disabled.
> >
> > I ran kfence_test on ARM QEMU with and without ARM_LPAE, and all tests passed.
>
> Thank you for enabling KFENCE on ARM -- I'll leave arch-code review to
> an ARM maintainer.
>
> However, as said on the patch, please drop the change to the
> kfence_test and associated changes. This is working as intended; while
> you claim that it takes a long time to run when disabled, when running
> manually you just should not run it when disabled. There are CI
> systems that rely on the KUnit test output and the fact that the
> various test cases say "not ok" etc. Changing that would mean such CI
> systems would no longer fail if KFENCE was accidentally disabled (once
> KFENCE is enabled on various CI, which we'd like to do at some point).
> There are ways to fail the test faster, but they all complicate the
> test for no good reason. (And the addition of a new exported function
> that is essentially useless.)

I spoke too soon -- we export __kfence_pool, and that's good enough to
fail the test fast if KFENCE was disabled at boot:

https://lkml.kernel.org/r/[email protected]

will do the trick. So please drop your patch 4/4 here.

Thanks,
-- Marco
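
The linked patch is not reproduced in this thread; as a sketch (not the exact
change), the fail-fast idea is roughly the following: when KFENCE is disabled
at boot the pool is never allocated, so __kfence_pool stays NULL and the test
module can bail out immediately instead of timing out in every test case.

#include <linux/errno.h>
#include <linux/init.h>
#include <linux/kfence.h>

static int __init kfence_test_init(void)
{
        /* Assumption: __kfence_pool is NULL when KFENCE was disabled at boot. */
        if (!__kfence_pool)
                return -EINVAL;

        /* ... register tracepoints and run the KUnit suite as before ... */
        return 0;
}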

2021-08-25 14:17:49

by Kefeng Wang

Subject: Re: [PATCH 0/4] ARM: Support KFENCE feature


On 2021/8/25 18:57, Marco Elver wrote:
> On Wed, Aug 25, 2021 at 12:14PM +0200, Marco Elver wrote:
>> On Wed, 25 Aug 2021 at 11:17, Kefeng Wang <[email protected]> wrote:
>>> Patches 1-3 add support for the KFENCE feature on ARM.
>>>
>>> NOTE:
>>> The context of the patch 2/3 changes in arch/arm/mm/fault.c is based on [1],
>>> which refactors and cleans up the page fault code.
>>>
>>> kfence_test is not useful when KFENCE is not enabled, so patch 4 skips the
>>> KFENCE test when KFENCE is disabled.
>>>
>>> I ran kfence_test on ARM QEMU with and without ARM_LPAE, and all tests passed.
>> Thank you for enabling KFENCE on ARM -- I'll leave arch-code review to
>> an ARM maintainer.
>>
>> However, as said on the patch, please drop the change to the
>> kfence_test and associated changes. This is working as intended; while
>> you claim that it takes a long time to run when disabled, when running
>> manually you just should not run it when disabled. There are CI
>> systems that rely on the KUnit test output and the fact that the
>> various test cases say "not ok" etc. Changing that would mean such CI
>> systems would no longer fail if KFENCE was accidentally disabled (once
>> KFENCE is enabled on various CI, which we'd like to do at some point).
>> There are ways to fail the test faster, but they all complicate the
>> test for no good reason. (And the addition of a new exported function
>> that is essentially useless.)
> I spoke too soon -- we export __kfence_pool, and that's good enough to
> fail the test fast if KFENCE was disabled at boot:
>
> https://lkml.kernel.org/r/[email protected]
>
> will do the trick. So please drop your patch 4/4 here.
Sure, please ignore it.
>
> Thanks,
> -- Marco
> .
>

2021-08-25 15:49:33

by Kefeng Wang

Subject: Re: [PATCH 0/4] ARM: Support KFENCE feature


On 2021/8/25 18:57, Marco Elver wrote:
> I spoke too soon -- we export __kfence_pool, and that's good enough to
> fail the test fast if KFENCE was disabled at boot:
>
> https://lkml.kernel.org/r/[email protected]

I haven't received the mail, I don't know why.

Anyway, I tested it; the patch is good and it saves a lot of time, so feel
free to add my Tested-by, thanks.


>
> will do the trick. So please drop your patch 4/4 here.
>
> Thanks,
> -- Marco
> .
>

2021-08-25 15:50:33

by Kefeng Wang

Subject: Re: [PATCH 3/4] ARM: Support KFENCE for ARM


On 2021/8/25 21:18, ownia wrote:
> On 2021/8/25 17:21, Kefeng Wang wrote:
>> Add architecture specific implementation details for KFENCE and enable
>> KFENCE on ARM. In particular, this implements the required interface in
>> <asm/kfence.h>.
>>
>> KFENCE requires that attributes for pages from its memory pool can
>> individually be set. Therefore, force the kfence pool to be mapped
>> at page granularity.
>>
>> This patch was tested using the test cases in kfence_test.c, and all of
>> them passed with and without ARM_LPAE.
>>
>> Signed-off-by: Kefeng Wang <[email protected]>
...
>> +#endif /* __ASM_ARM_KFENCE_H */
>> diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
>> index f7ab6dabe89f..9fa221ffa1b9 100644
>> --- a/arch/arm/mm/fault.c
>> +++ b/arch/arm/mm/fault.c
>> @@ -17,6 +17,7 @@
>> #include <linux/sched/debug.h>
>> #include <linux/highmem.h>
>> #include <linux/perf_event.h>
>> +#include <linux/kfence.h>
>>
>> #include <asm/system_misc.h>
>> #include <asm/system_info.h>
>> @@ -131,10 +132,14 @@ __do_kernel_fault(struct mm_struct *mm, unsigned long addr, unsigned int fsr,
>> /*
>> * No handler, we'll have to terminate things with extreme prejudice.
>> */
>> - if (addr < PAGE_SIZE)
>> + if (addr < PAGE_SIZE) {
>> msg = "NULL pointer dereference";
>> - else
>> + } else {
>> + if (kfence_handle_page_fault(addr, is_write_fault(fsr), regs))
>> + return;
>> +
>> msg = "paging request";
>> + }
>
> I think a fixup is needed here to follow the upstream mainline code.

Yes, the fixup is still there; as the cover letter said:

NOTE:
The context of the patch 2/3 changes in arch/arm/mm/fault.c is based on [1],
which refactors and cleans up the page fault code.

...

[1]https://lore.kernel.org/linux-arm-kernel/[email protected]/

>
>>
>> die_kernel_fault(msg, mm, addr, fsr, regs);
>> }
> .
>