2021-03-08 16:16:21

by Vincenzo Frascino

Subject: [PATCH v14 0/8] arm64: ARMv8.5-A: MTE: Add async mode support

This patchset implements asynchronous mode support for the ARMv8.5-A
Memory Tagging Extension (MTE), a debugging feature that uses the
architecture to detect C and C++ programmatic memory errors such as
buffer overflows, use-after-free, use-after-return, etc.

MTE is built on top of the AArch64 v8.0 virtual address tagging TBI
(Top Byte Ignore) feature and allows a task to set a 4-bit tag on any
subset of its address space that is a multiple of the 16-byte granule.
MTE is based on a lock-key mechanism where the lock is the tag
associated with the physical memory and the key is the tag associated
with the virtual address.
When MTE is enabled and tags are set for ranges of a task's address space,
the PE compares the tag related to the physical memory with the tag
related to the virtual address (tag check operation). Access to the memory
is granted only if the two tags match; in case of a mismatch the PE raises
an exception.
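
As a purely illustrative sketch (not part of this series; the helper
names below are made up, while MTE_TAG_SHIFT and MTE_GRANULE_SIZE mirror
the arm64 definitions), the logical tag lives in bits 59:56 of the
pointer and each allocation tag covers a 16-byte granule:

#define MTE_GRANULE_SIZE	16UL
#define MTE_TAG_SHIFT		56

/* 4-bit logical tag carried in the top byte of the pointer (TBI). */
static inline unsigned char ptr_logical_tag(const void *ptr)
{
	return ((unsigned long)ptr >> MTE_TAG_SHIFT) & 0xf;
}

/* Base of the 16-byte granule whose allocation tag is compared. */
static inline unsigned long ptr_granule_base(const void *ptr)
{
	return (unsigned long)ptr & ~(MTE_GRANULE_SIZE - 1);
}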

The exception can be handled synchronously or asynchronously. When the
asynchronous mode is enabled:
- Upon fault the PE updates the TFSR_EL1 register.
- The kernel detects the change during one of the following:
  - Context switching
  - Return to user/EL0
  - Kernel entry from EL1
  - Kernel exit to EL1
- If the register has been updated by the PE, the kernel clears it and
  reports the error.
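
In pseudo-C, the check performed at each of the points above amounts to
the following (a simplified sketch of what patch 6/8 implements as
mte_check_tfsr_el1()):

	u64 tfsr_el1 = read_sysreg_s(SYS_TFSR_EL1);

	if (unlikely(tfsr_el1 & SYS_TFSR_EL1_TF1)) {
		/* Clear the accumulated fault indication... */
		write_sysreg_s(0, SYS_TFSR_EL1);
		/* ...and report it through KASAN. */
		kasan_report_async();
	}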

The series is based on linux-next/akpm.

To simplify the testing a tree with the new patches on top has been made
available at [1].

[1] https://git.gitlab.arm.com/linux-arm/linux-vf.git mte/v12.async.akpm

Changes:
--------
v14:
- Rebase on the latest linux-next/akpm.
- Address review comments.
- Drop a patch that prevented running the KUNIT tests
in async mode.
- Add kselftest to verify that TCO is enabled in
load_unaligned_zeropad().
v13:
- Rebase on the latest linux-next/akpm.
- Address review comments.
v12:
- Fixed a bug affecting kernel functions allowed to read
beyond buffer boundaries.
- Added support for save/restore of TFSR_EL1 register
during suspend/resume operations.
- Rebased on latest linux-next/akpm.
v11:
- Added patch that disables KUNIT tests in async mode
v10:
- Rebase on the latest linux-next/akpm
- Address review comments.
v9:
- Rebase on the latest linux-next/akpm
- Address review comments.
v8:
- Address review comments.
v7:
- Fix a warning reported by kernel test robot. This
time for real.
v6:
- Drop patches that forbid KASAN KUNIT tests when async
mode is enabled.
- Fix a warning reported by kernel test robot.
- Address review comments.
v5:
- Rebase the series on linux-next/akpm.
- Forbid execution for KASAN KUNIT tests when async
mode is enabled.
- Dropped patch to inline mte_assign_mem_tag_range().
- Address review comments.
v4:
- Added support for kasan.mode (sync/async) kernel
command line parameter.
- Addressed review comments.
v3:
- Exposed kasan_hw_tags_mode to convert the internal
KASAN representation.
- Added dsb() for kernel exit paths in arm64.
- Addressed review comments.
v2:
- Fixed a compilation issue reported by krobot.
- General cleanup.

Cc: Andrew Morton <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Dmitry Vyukov <[email protected]>
Cc: Andrey Ryabinin <[email protected]>
Cc: Alexander Potapenko <[email protected]>
Cc: Marco Elver <[email protected]>
Cc: Evgenii Stepanov <[email protected]>
Cc: Branislav Rankov <[email protected]>
Cc: Andrey Konovalov <[email protected]>
Cc: Lorenzo Pieralisi <[email protected]>
Signed-off-by: Vincenzo Frascino <[email protected]>

Vincenzo Frascino (8):
arm64: mte: Add asynchronous mode support
kasan: Add KASAN mode kernel parameter
arm64: mte: Drop arch_enable_tagging()
kasan: Add report for async mode
arm64: mte: Enable TCO in functions that can read beyond buffer limits
arm64: mte: Enable async tag check fault
arm64: mte: Report async tag faults before suspend
kselftest/arm64: Verify that TCO is enabled in
load_unaligned_zeropad()

Documentation/dev-tools/kasan.rst | 9 ++
arch/arm64/include/asm/memory.h | 3 +-
arch/arm64/include/asm/mte-kasan.h | 9 +-
arch/arm64/include/asm/mte.h | 36 ++++++++
arch/arm64/include/asm/uaccess.h | 24 +++++
arch/arm64/include/asm/word-at-a-time.h | 4 +
arch/arm64/kernel/entry-common.c | 6 ++
arch/arm64/kernel/mte.c | 90 ++++++++++++++++++-
arch/arm64/kernel/suspend.c | 3 +
include/linux/kasan.h | 6 ++
lib/test_kasan.c | 2 +-
mm/kasan/hw_tags.c | 66 +++++++++++++-
mm/kasan/kasan.h | 29 +++++-
mm/kasan/report.c | 17 +++-
.../arm64/mte/check_read_beyond_buffer.c | 78 ++++++++++++++++
15 files changed, 367 insertions(+), 15 deletions(-)
create mode 100644 tools/testing/selftests/arm64/mte/check_read_beyond_buffer.c

--
2.30.0


2021-03-08 16:16:33

by Vincenzo Frascino

Subject: [PATCH v14 2/8] kasan: Add KASAN mode kernel parameter

Architectures supported by KASAN_HW_TAGS can provide a sync or async mode
of execution. On MTE-enabled arm64 hardware, for example, these correspond
to the synchronous and asynchronous tag checking modes of execution.
In synchronous mode, an exception is triggered if a tag check fault occurs.
In asynchronous mode, if a tag check fault occurs, the TFSR_EL1 register is
updated asynchronously. The kernel checks the corresponding bits
periodically.

KASAN requires a specific kernel command line parameter to make use of
this hw feature.

Add a KASAN HW execution mode kernel command line parameter.

Note: This patch adds the kasan.mode kernel parameter and the
sync/async kernel command line options to enable the described features.
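
As an illustrative boot line (example only, not mandated by this patch),
the asynchronous mode would be selected with:

	kasan=on kasan.mode=async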

Cc: Dmitry Vyukov <[email protected]>
Cc: Andrey Ryabinin <[email protected]>
Cc: Alexander Potapenko <[email protected]>
Cc: Andrey Konovalov <[email protected]>
Reviewed-by: Andrey Konovalov <[email protected]>
Signed-off-by: Vincenzo Frascino <[email protected]>
[ Add a new var instead of exposing kasan_arg_mode to be consistent with
flags for other command line arguments. ]
Signed-off-by: Andrey Konovalov <[email protected]>
---
Documentation/dev-tools/kasan.rst | 9 +++++
lib/test_kasan.c | 2 +-
mm/kasan/hw_tags.c | 66 +++++++++++++++++++++++++++++--
mm/kasan/kasan.h | 13 ++++--
4 files changed, 81 insertions(+), 9 deletions(-)

diff --git a/Documentation/dev-tools/kasan.rst b/Documentation/dev-tools/kasan.rst
index ddf4239a5890..6f6ab3ed7b79 100644
--- a/Documentation/dev-tools/kasan.rst
+++ b/Documentation/dev-tools/kasan.rst
@@ -161,6 +161,15 @@ particular KASAN features.

- ``kasan=off`` or ``=on`` controls whether KASAN is enabled (default: ``on``).

+- ``kasan.mode=sync`` or ``=async`` controls whether KASAN is configured in
+ synchronous or asynchronous mode of execution (default: ``sync``).
+ Synchronous mode: a bad access is detected immediately when a tag
+ check fault occurs.
+ Asynchronous mode: a bad access detection is delayed. When a tag check
+ fault occurs, the information is stored in hardware (in the TFSR_EL1
+ register for arm64). The kernel periodically checks the hardware and
+ only reports tag faults during these checks.
+
- ``kasan.stacktrace=off`` or ``=on`` disables or enables alloc and free stack
traces collection (default: ``on``).

diff --git a/lib/test_kasan.c b/lib/test_kasan.c
index e5647d147b35..479c31a5dc21 100644
--- a/lib/test_kasan.c
+++ b/lib/test_kasan.c
@@ -97,7 +97,7 @@ static void kasan_test_exit(struct kunit *test)
READ_ONCE(fail_data.report_found)); \
if (IS_ENABLED(CONFIG_KASAN_HW_TAGS)) { \
if (READ_ONCE(fail_data.report_found)) \
- kasan_enable_tagging(); \
+ kasan_enable_tagging_sync(); \
migrate_enable(); \
} \
} while (0)
diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
index 2aad21fda156..6d3eca5bb784 100644
--- a/mm/kasan/hw_tags.c
+++ b/mm/kasan/hw_tags.c
@@ -25,6 +25,12 @@ enum kasan_arg {
KASAN_ARG_ON,
};

+enum kasan_arg_mode {
+ KASAN_ARG_MODE_DEFAULT,
+ KASAN_ARG_MODE_SYNC,
+ KASAN_ARG_MODE_ASYNC,
+};
+
enum kasan_arg_stacktrace {
KASAN_ARG_STACKTRACE_DEFAULT,
KASAN_ARG_STACKTRACE_OFF,
@@ -38,6 +44,7 @@ enum kasan_arg_fault {
};

static enum kasan_arg kasan_arg __ro_after_init;
+static enum kasan_arg_mode kasan_arg_mode __ro_after_init;
static enum kasan_arg_stacktrace kasan_arg_stacktrace __ro_after_init;
static enum kasan_arg_fault kasan_arg_fault __ro_after_init;

@@ -45,6 +52,10 @@ static enum kasan_arg_fault kasan_arg_fault __ro_after_init;
DEFINE_STATIC_KEY_FALSE(kasan_flag_enabled);
EXPORT_SYMBOL(kasan_flag_enabled);

+/* Whether the asynchronous mode is enabled. */
+bool kasan_flag_async __ro_after_init;
+EXPORT_SYMBOL_GPL(kasan_flag_async);
+
/* Whether to collect alloc/free stack traces. */
DEFINE_STATIC_KEY_FALSE(kasan_flag_stacktrace);

@@ -68,6 +79,23 @@ static int __init early_kasan_flag(char *arg)
}
early_param("kasan", early_kasan_flag);

+/* kasan.mode=sync/async */
+static int __init early_kasan_mode(char *arg)
+{
+ if (!arg)
+ return -EINVAL;
+
+ if (!strcmp(arg, "sync"))
+ kasan_arg_mode = KASAN_ARG_MODE_SYNC;
+ else if (!strcmp(arg, "async"))
+ kasan_arg_mode = KASAN_ARG_MODE_ASYNC;
+ else
+ return -EINVAL;
+
+ return 0;
+}
+early_param("kasan.mode", early_kasan_mode);
+
/* kasan.stacktrace=off/on */
static int __init early_kasan_flag_stacktrace(char *arg)
{
@@ -115,7 +143,15 @@ void kasan_init_hw_tags_cpu(void)
return;

hw_init_tags(KASAN_TAG_MAX);
- hw_enable_tagging();
+
+ /*
+ * Enable async mode only when explicitly requested through
+ * the command line.
+ */
+ if (kasan_arg_mode == KASAN_ARG_MODE_ASYNC)
+ hw_enable_tagging_async();
+ else
+ hw_enable_tagging_sync();
}

/* kasan_init_hw_tags() is called once on boot CPU. */
@@ -132,6 +168,22 @@ void __init kasan_init_hw_tags(void)
/* Enable KASAN. */
static_branch_enable(&kasan_flag_enabled);

+ switch (kasan_arg_mode) {
+ case KASAN_ARG_MODE_DEFAULT:
+ /*
+ * Default to sync mode.
+ * Do nothing, kasan_flag_async keeps its default value.
+ */
+ break;
+ case KASAN_ARG_MODE_SYNC:
+ /* Do nothing, kasan_flag_async keeps its default value. */
+ break;
+ case KASAN_ARG_MODE_ASYNC:
+ /* Async mode enabled. */
+ kasan_flag_async = true;
+ break;
+ }
+
switch (kasan_arg_stacktrace) {
case KASAN_ARG_STACKTRACE_DEFAULT:
/* Default to enabling stack trace collection. */
@@ -194,10 +246,16 @@ void kasan_set_tagging_report_once(bool state)
}
EXPORT_SYMBOL_GPL(kasan_set_tagging_report_once);

-void kasan_enable_tagging(void)
+void kasan_enable_tagging_sync(void)
+{
+ hw_enable_tagging_sync();
+}
+EXPORT_SYMBOL_GPL(kasan_enable_tagging_sync);
+
+void kasan_enable_tagging_async(void)
{
- hw_enable_tagging();
+ hw_enable_tagging_async();
}
-EXPORT_SYMBOL_GPL(kasan_enable_tagging);
+EXPORT_SYMBOL_GPL(kasan_enable_tagging_async);

#endif
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 3436c6bf7c0c..2118c2ac9c37 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -21,6 +21,7 @@ static inline bool kasan_stack_collection_enabled(void)
#endif

extern bool kasan_flag_panic __ro_after_init;
+extern bool kasan_flag_async __ro_after_init;

#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
#define KASAN_GRANULE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
@@ -294,7 +295,8 @@ static inline const void *arch_kasan_set_tag(const void *addr, u8 tag)
#define arch_set_mem_tag_range(addr, size, tag) ((void *)(addr))
#endif

-#define hw_enable_tagging() arch_enable_tagging()
+#define hw_enable_tagging_sync() arch_enable_tagging_sync()
+#define hw_enable_tagging_async() arch_enable_tagging_async()
#define hw_init_tags(max_tag) arch_init_tags(max_tag)
#define hw_set_tagging_report_once(state) arch_set_tagging_report_once(state)
#define hw_get_random_tag() arch_get_random_tag()
@@ -303,7 +305,8 @@ static inline const void *arch_kasan_set_tag(const void *addr, u8 tag)

#else /* CONFIG_KASAN_HW_TAGS */

-#define hw_enable_tagging()
+#define hw_enable_tagging_sync()
+#define hw_enable_tagging_async()
#define hw_set_tagging_report_once(state)

#endif /* CONFIG_KASAN_HW_TAGS */
@@ -311,12 +314,14 @@ static inline const void *arch_kasan_set_tag(const void *addr, u8 tag)
#if defined(CONFIG_KASAN_HW_TAGS) && IS_ENABLED(CONFIG_KASAN_KUNIT_TEST)

void kasan_set_tagging_report_once(bool state);
-void kasan_enable_tagging(void);
+void kasan_enable_tagging_sync(void);
+void kasan_enable_tagging_async(void);

#else /* CONFIG_KASAN_HW_TAGS || CONFIG_KASAN_KUNIT_TEST */

static inline void kasan_set_tagging_report_once(bool state) { }
-static inline void kasan_enable_tagging(void) { }
+static inline void kasan_enable_tagging_sync(void) { }
+static inline void kasan_enable_tagging_async(void) { }

#endif /* CONFIG_KASAN_HW_TAGS || CONFIG_KASAN_KUNIT_TEST */

--
2.30.0

2021-03-08 16:16:55

by Vincenzo Frascino

Subject: [PATCH v14 3/8] arm64: mte: Drop arch_enable_tagging()

arch_enable_tagging() was left in memory.h after the introduction of
async mode to avoid breaking the bisectability of the KASAN KUNIT tests.

Remove the function now that KASAN has been fully converted.

Cc: Catalin Marinas <[email protected]>
Cc: Will Deacon <[email protected]>
Signed-off-by: Vincenzo Frascino <[email protected]>
---
arch/arm64/include/asm/memory.h | 1 -
1 file changed, 1 deletion(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 076b913caa65..91515383d763 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -245,7 +245,6 @@ static inline const void *__tag_set(const void *addr, u8 tag)
#ifdef CONFIG_KASAN_HW_TAGS
#define arch_enable_tagging_sync() mte_enable_kernel_sync()
#define arch_enable_tagging_async() mte_enable_kernel_async()
-#define arch_enable_tagging() arch_enable_tagging_sync()
#define arch_set_tagging_report_once(state) mte_set_report_once(state)
#define arch_init_tags(max_tag) mte_init_tags(max_tag)
#define arch_get_random_tag() mte_get_random_tag()
--
2.30.0

2021-03-08 16:16:58

by Vincenzo Frascino

Subject: [PATCH v14 4/8] kasan: Add report for async mode

KASAN provides an asynchronous mode of execution.

Add reporting functionality for this mode.

Cc: Dmitry Vyukov <[email protected]>
Cc: Andrey Ryabinin <[email protected]>
Cc: Alexander Potapenko <[email protected]>
Cc: Andrey Konovalov <[email protected]>
Reviewed-by: Andrey Konovalov <[email protected]>
Signed-off-by: Vincenzo Frascino <[email protected]>
Signed-off-by: Andrey Konovalov <[email protected]>
---
include/linux/kasan.h | 6 ++++++
mm/kasan/kasan.h | 16 ++++++++++++++++
mm/kasan/report.c | 17 ++++++++++++++++-
3 files changed, 38 insertions(+), 1 deletion(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 14f72ec96492..d53ea3c047bc 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -376,6 +376,12 @@ static inline void *kasan_reset_tag(const void *addr)

#endif /* CONFIG_KASAN_SW_TAGS || CONFIG_KASAN_HW_TAGS*/

+#ifdef CONFIG_KASAN_HW_TAGS
+
+void kasan_report_async(void);
+
+#endif /* CONFIG_KASAN_HW_TAGS */
+
#ifdef CONFIG_KASAN_SW_TAGS
void __init kasan_init_sw_tags(void);
#else
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 2118c2ac9c37..91a3d4ec309d 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -7,17 +7,33 @@
#include <linux/stackdepot.h>

#ifdef CONFIG_KASAN_HW_TAGS
+
#include <linux/static_key.h>
+
DECLARE_STATIC_KEY_FALSE(kasan_flag_stacktrace);
+extern bool kasan_flag_async __ro_after_init;
+
static inline bool kasan_stack_collection_enabled(void)
{
return static_branch_unlikely(&kasan_flag_stacktrace);
}
+
+static inline bool kasan_async_mode_enabled(void)
+{
+ return kasan_flag_async;
+}
#else
+
static inline bool kasan_stack_collection_enabled(void)
{
return true;
}
+
+static inline bool kasan_async_mode_enabled(void)
+{
+ return false;
+}
+
#endif

extern bool kasan_flag_panic __ro_after_init;
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 87b271206163..8b0843a2cdd7 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -87,7 +87,8 @@ static void start_report(unsigned long *flags)

static void end_report(unsigned long *flags, unsigned long addr)
{
- trace_error_report_end(ERROR_DETECTOR_KASAN, addr);
+ if (!kasan_async_mode_enabled())
+ trace_error_report_end(ERROR_DETECTOR_KASAN, addr);
pr_err("==================================================================\n");
add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE);
spin_unlock_irqrestore(&report_lock, *flags);
@@ -360,6 +361,20 @@ void kasan_report_invalid_free(void *object, unsigned long ip)
end_report(&flags, (unsigned long)object);
}

+#ifdef CONFIG_KASAN_HW_TAGS
+void kasan_report_async(void)
+{
+ unsigned long flags;
+
+ start_report(&flags);
+ pr_err("BUG: KASAN: invalid-access\n");
+ pr_err("Asynchronous mode enabled: no access details available\n");
+ pr_err("\n");
+ dump_stack();
+ end_report(&flags, 0);
+}
+#endif /* CONFIG_KASAN_HW_TAGS */
+
static void __kasan_report(unsigned long addr, size_t size, bool is_write,
unsigned long ip)
{
--
2.30.0

2021-03-08 16:16:58

by Vincenzo Frascino

Subject: [PATCH v14 5/8] arm64: mte: Enable TCO in functions that can read beyond buffer limits

The load_unaligned_zeropad() and __get/put_kernel_nofault() functions
can read past some buffer limits, which may include an MTE granule with
a different tag.

When MTE async mode is enabled, if the load operation crosses the
boundaries and the next granule has a different tag, the PE sets the
TFSR_EL1.TF1 bit as if an asynchronous tag fault had happened.

Enable Tag Check Override (TCO) in these functions before the load and
disable it afterwards to prevent this from happening.

Note: The same condition can be hit in MTE sync mode, but we deal with
it through the exception handling.
In the current implementation the mte_async_mode flag is set only at
boot time, but in future kasan might acquire runtime features that
change the mode dynamically; hence we disable it when sync mode is
selected, to be future proof.

Cc: Catalin Marinas <[email protected]>
Cc: Will Deacon <[email protected]>
Reported-by: Branislav Rankov <[email protected]>
Tested-by: Branislav Rankov <[email protected]>
Signed-off-by: Vincenzo Frascino <[email protected]>
---
arch/arm64/include/asm/uaccess.h | 24 ++++++++++++++++++++++++
arch/arm64/include/asm/word-at-a-time.h | 4 ++++
arch/arm64/kernel/mte.c | 22 ++++++++++++++++++++++
3 files changed, 50 insertions(+)

diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index 0deb88467111..a857f8f82aeb 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -188,6 +188,26 @@ static inline void __uaccess_enable_tco(void)
ARM64_MTE, CONFIG_KASAN_HW_TAGS));
}

+/* Whether the MTE asynchronous mode is enabled. */
+DECLARE_STATIC_KEY_FALSE(mte_async_mode);
+
+/*
+ * These functions disable tag checking only if in MTE async mode
+ * since the sync mode generates exceptions synchronously and the
+ * nofault or load_unaligned_zeropad can handle them.
+ */
+static inline void __uaccess_disable_tco_async(void)
+{
+ if (static_branch_unlikely(&mte_async_mode))
+ __uaccess_disable_tco();
+}
+
+static inline void __uaccess_enable_tco_async(void)
+{
+ if (static_branch_unlikely(&mte_async_mode))
+ __uaccess_enable_tco();
+}
+
static inline void uaccess_disable_privileged(void)
{
__uaccess_disable_tco();
@@ -307,8 +327,10 @@ do { \
do { \
int __gkn_err = 0; \
\
+ __uaccess_enable_tco_async(); \
__raw_get_mem("ldr", *((type *)(dst)), \
(__force type *)(src), __gkn_err); \
+ __uaccess_disable_tco_async(); \
if (unlikely(__gkn_err)) \
goto err_label; \
} while (0)
@@ -380,8 +402,10 @@ do { \
do { \
int __pkn_err = 0; \
\
+ __uaccess_enable_tco_async(); \
__raw_put_mem("str", *((type *)(src)), \
(__force type *)(dst), __pkn_err); \
+ __uaccess_disable_tco_async(); \
if (unlikely(__pkn_err)) \
goto err_label; \
} while(0)
diff --git a/arch/arm64/include/asm/word-at-a-time.h b/arch/arm64/include/asm/word-at-a-time.h
index 3333950b5909..c62d9fa791aa 100644
--- a/arch/arm64/include/asm/word-at-a-time.h
+++ b/arch/arm64/include/asm/word-at-a-time.h
@@ -55,6 +55,8 @@ static inline unsigned long load_unaligned_zeropad(const void *addr)
{
unsigned long ret, offset;

+ __uaccess_enable_tco_async();
+
/* Load word from unaligned pointer addr */
asm(
"1: ldr %0, %3\n"
@@ -76,6 +78,8 @@ static inline unsigned long load_unaligned_zeropad(const void *addr)
: "=&r" (ret), "=&r" (offset)
: "r" (addr), "Q" (*(unsigned long *)addr));

+ __uaccess_disable_tco_async();
+
return ret;
}

diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
index fa755cf94e01..1ad9be4c8376 100644
--- a/arch/arm64/kernel/mte.c
+++ b/arch/arm64/kernel/mte.c
@@ -26,6 +26,10 @@ u64 gcr_kernel_excl __ro_after_init;

static bool report_fault_once = true;

+/* Whether the MTE asynchronous mode is enabled. */
+DEFINE_STATIC_KEY_FALSE(mte_async_mode);
+EXPORT_SYMBOL_GPL(mte_async_mode);
+
static void mte_sync_page_tags(struct page *page, pte_t *ptep, bool check_swap)
{
pte_t old_pte = READ_ONCE(*ptep);
@@ -118,12 +122,30 @@ static inline void __mte_enable_kernel(const char *mode, unsigned long tcf)

void mte_enable_kernel_sync(void)
{
+ /*
+ * Make sure we enter this function when no PE has set
+ * async mode previously.
+ */
+ WARN_ONCE(static_key_enabled(&mte_async_mode),
+ "MTE async mode enabled system wide!");
+
__mte_enable_kernel("synchronous", SCTLR_ELx_TCF_SYNC);
}

void mte_enable_kernel_async(void)
{
__mte_enable_kernel("asynchronous", SCTLR_ELx_TCF_ASYNC);
+
+ /*
+ * MTE async mode is set system wide by the first PE that
+ * executes this function.
+ *
+ * Note: If in future KASAN acquires a runtime switching
+ * mode in between sync and async, this strategy needs
+ * to be reviewed.
+ */
+ if (!static_branch_unlikely(&mte_async_mode))
+ static_branch_enable(&mte_async_mode);
}

void mte_set_report_once(bool state)
--
2.30.0

2021-03-08 16:17:15

by Vincenzo Frascino

Subject: [PATCH v14 6/8] arm64: mte: Enable async tag check fault

MTE provides a mode that asynchronously updates the TFSR_EL1 register
when a tag check exception is detected.

To take advantage of this mode the kernel has to verify the status of
the register at:
1. Context switching
2. Return to user/EL0 (Not required in entry from EL0 since the kernel
did not run)
3. Kernel entry from EL1
4. Kernel exit to EL1

If the register is non-zero a trace is reported.

Add the required features for EL1 detection and reporting.

Note: The ITFSB bit is set in the SCTLR_EL1 register, hence it guarantees
that the indirect writes to TFSR_EL1 are synchronized at exception entry
to EL1. On the context switch path the synchronization is guaranteed by
the dsb() in __switch_to().
The dsb(nsh) in mte_check_tfsr_exit() is provisional pending
confirmation by the architects.

Cc: Catalin Marinas <[email protected]>
Cc: Will Deacon <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
Acked-by: Andrey Konovalov <[email protected]>
Signed-off-by: Vincenzo Frascino <[email protected]>
---
arch/arm64/include/asm/mte.h | 32 ++++++++++++++++++++++++++++
arch/arm64/kernel/entry-common.c | 6 ++++++
arch/arm64/kernel/mte.c | 36 ++++++++++++++++++++++++++++++++
3 files changed, 74 insertions(+)

diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h
index 9b557a457f24..43169b978cd3 100644
--- a/arch/arm64/include/asm/mte.h
+++ b/arch/arm64/include/asm/mte.h
@@ -90,5 +90,37 @@ static inline void mte_assign_mem_tag_range(void *addr, size_t size)

#endif /* CONFIG_ARM64_MTE */

+#ifdef CONFIG_KASAN_HW_TAGS
+void mte_check_tfsr_el1(void);
+
+static inline void mte_check_tfsr_entry(void)
+{
+ mte_check_tfsr_el1();
+}
+
+static inline void mte_check_tfsr_exit(void)
+{
+ /*
+ * The asynchronous faults are sync'ed automatically with
+ * TFSR_EL1 on kernel entry but for exit an explicit dsb()
+ * is required.
+ */
+ dsb(nsh);
+ isb();
+
+ mte_check_tfsr_el1();
+}
+#else
+static inline void mte_check_tfsr_el1(void)
+{
+}
+static inline void mte_check_tfsr_entry(void)
+{
+}
+static inline void mte_check_tfsr_exit(void)
+{
+}
+#endif /* CONFIG_KASAN_HW_TAGS */
+
#endif /* __ASSEMBLY__ */
#endif /* __ASM_MTE_H */
diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
index 9d3588450473..a1ec351c36bd 100644
--- a/arch/arm64/kernel/entry-common.c
+++ b/arch/arm64/kernel/entry-common.c
@@ -37,6 +37,8 @@ static void noinstr enter_from_kernel_mode(struct pt_regs *regs)
lockdep_hardirqs_off(CALLER_ADDR0);
rcu_irq_enter_check_tick();
trace_hardirqs_off_finish();
+
+ mte_check_tfsr_entry();
}

/*
@@ -47,6 +49,8 @@ static void noinstr exit_to_kernel_mode(struct pt_regs *regs)
{
lockdep_assert_irqs_disabled();

+ mte_check_tfsr_exit();
+
if (interrupts_enabled(regs)) {
if (regs->exit_rcu) {
trace_hardirqs_on_prepare();
@@ -293,6 +297,8 @@ asmlinkage void noinstr enter_from_user_mode(void)

asmlinkage void noinstr exit_to_user_mode(void)
{
+ mte_check_tfsr_exit();
+
trace_hardirqs_on_prepare();
lockdep_hardirqs_on_prepare(CALLER_ADDR0);
user_enter_irqoff();
diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
index 1ad9be4c8376..d6456f2d2306 100644
--- a/arch/arm64/kernel/mte.c
+++ b/arch/arm64/kernel/mte.c
@@ -158,6 +158,29 @@ bool mte_report_once(void)
return READ_ONCE(report_fault_once);
}

+#ifdef CONFIG_KASAN_HW_TAGS
+void mte_check_tfsr_el1(void)
+{
+ u64 tfsr_el1;
+
+ if (!system_supports_mte())
+ return;
+
+ tfsr_el1 = read_sysreg_s(SYS_TFSR_EL1);
+
+ if (unlikely(tfsr_el1 & SYS_TFSR_EL1_TF1)) {
+ /*
+ * Note: isb() is not required after this direct write
+ * because there is no indirect read subsequent to it
+ * (per ARM DDI 0487F.c table D13-1).
+ */
+ write_sysreg_s(0, SYS_TFSR_EL1);
+
+ kasan_report_async();
+ }
+}
+#endif
+
static void update_sctlr_el1_tcf0(u64 tcf0)
{
/* ISB required for the kernel uaccess routines */
@@ -223,6 +246,19 @@ void mte_thread_switch(struct task_struct *next)
/* avoid expensive SCTLR_EL1 accesses if no change */
if (current->thread.sctlr_tcf0 != next->thread.sctlr_tcf0)
update_sctlr_el1_tcf0(next->thread.sctlr_tcf0);
+ else
+ isb();
+
+ /*
+ * Check if an async tag exception occurred at EL1.
+ *
+ * Note: On the context switch path we rely on the dsb() present
+ * in __switch_to() to guarantee that the indirect writes to TFSR_EL1
+ * are synchronized before this point.
+ * isb() above is required for the same reason.
+ *
+ */
+ mte_check_tfsr_el1();
}

void mte_suspend_exit(void)
--
2.30.0

2021-03-08 16:17:26

by Vincenzo Frascino

Subject: [PATCH v14 7/8] arm64: mte: Report async tag faults before suspend

When MTE async mode is enabled, TFSR_EL1 contains the accumulated
asynchronous tag check faults for EL1 and EL0.

During suspend/resume operations the firmware might perform operations
that change the state of the register, resulting in a spurious tag check
fault report.

Report asynchronous tag faults before suspend and clear the TFSR_EL1
register after resume to prevent this from happening.

Cc: Catalin Marinas <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Lorenzo Pieralisi <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
Reviewed-by: Lorenzo Pieralisi <[email protected]>
Signed-off-by: Vincenzo Frascino <[email protected]>
---
arch/arm64/include/asm/mte.h | 4 ++++
arch/arm64/kernel/mte.c | 16 ++++++++++++++++
arch/arm64/kernel/suspend.c | 3 +++
3 files changed, 23 insertions(+)

diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h
index 43169b978cd3..33e88a470357 100644
--- a/arch/arm64/include/asm/mte.h
+++ b/arch/arm64/include/asm/mte.h
@@ -41,6 +41,7 @@ void mte_sync_tags(pte_t *ptep, pte_t pte);
void mte_copy_page_tags(void *kto, const void *kfrom);
void flush_mte_state(void);
void mte_thread_switch(struct task_struct *next);
+void mte_suspend_enter(void);
void mte_suspend_exit(void);
long set_mte_ctrl(struct task_struct *task, unsigned long arg);
long get_mte_ctrl(struct task_struct *task);
@@ -66,6 +67,9 @@ static inline void flush_mte_state(void)
static inline void mte_thread_switch(struct task_struct *next)
{
}
+static inline void mte_suspend_enter(void)
+{
+}
static inline void mte_suspend_exit(void)
{
}
diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
index d6456f2d2306..1979bd9ad09b 100644
--- a/arch/arm64/kernel/mte.c
+++ b/arch/arm64/kernel/mte.c
@@ -261,6 +261,22 @@ void mte_thread_switch(struct task_struct *next)
mte_check_tfsr_el1();
}

+void mte_suspend_enter(void)
+{
+ if (!system_supports_mte())
+ return;
+
+ /*
+ * The barriers are required to guarantee that the indirect writes
+ * to TFSR_EL1 are synchronized before we report the state.
+ */
+ dsb(nsh);
+ isb();
+
+ /* Report SYS_TFSR_EL1 before suspend entry */
+ mte_check_tfsr_el1();
+}
+
void mte_suspend_exit(void)
{
if (!system_supports_mte())
diff --git a/arch/arm64/kernel/suspend.c b/arch/arm64/kernel/suspend.c
index d7564891ffe1..6fdc8292b4f5 100644
--- a/arch/arm64/kernel/suspend.c
+++ b/arch/arm64/kernel/suspend.c
@@ -91,6 +91,9 @@ int cpu_suspend(unsigned long arg, int (*fn)(unsigned long))
unsigned long flags;
struct sleep_stack_data state;

+ /* Report any MTE async fault before going to suspend */
+ mte_suspend_enter();
+
/*
* From this point debug exceptions are disabled to prevent
* updates to mdscr register (saved and restored along with
--
2.30.0

2021-03-08 16:19:06

by Vincenzo Frascino

Subject: [PATCH v14 8/8] kselftest/arm64: Verify that TCO is enabled in load_unaligned_zeropad()

The load_unaligned_zeropad() and __get/put_kernel_nofault() functions
can read past some buffer limits, which may include an MTE granule with
a different tag.

When MTE async mode is enabled, if the load operation crosses the
boundaries and the next granule has a different tag, the PE sets the
TFSR_EL1.TF1 bit as if an asynchronous tag fault had happened:

==================================================================
BUG: KASAN: invalid-access
Asynchronous mode enabled: no access details available

CPU: 0 PID: 1 Comm: init Not tainted 5.12.0-rc1-ge1045c86620d-dirty #8
Hardware name: FVP Base RevC (DT)
Call trace:
dump_backtrace+0x0/0x1c0
show_stack+0x18/0x24
dump_stack+0xcc/0x14c
kasan_report_async+0x54/0x70
mte_check_tfsr_el1+0x48/0x4c
exit_to_user_mode+0x18/0x38
finish_ret_to_user+0x4/0x15c
==================================================================

Verify that Tag Check Override (TCO) is enabled in these functions before
the load and disabled afterwards to prevent this from happening.

Note: The issue has been observed only with an MTE enabled userspace.

Cc: Catalin Marinas <[email protected]>
Cc: Will Deacon <[email protected]>
Reported-by: Branislav Rankov <[email protected]>
Signed-off-by: Vincenzo Frascino <[email protected]>
---
.../arm64/mte/check_read_beyond_buffer.c | 78 +++++++++++++++++++
1 file changed, 78 insertions(+)
create mode 100644 tools/testing/selftests/arm64/mte/check_read_beyond_buffer.c

diff --git a/tools/testing/selftests/arm64/mte/check_read_beyond_buffer.c b/tools/testing/selftests/arm64/mte/check_read_beyond_buffer.c
new file mode 100644
index 000000000000..eb03cd52a58e
--- /dev/null
+++ b/tools/testing/selftests/arm64/mte/check_read_beyond_buffer.c
@@ -0,0 +1,78 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (C) 2020 ARM Limited
+
+#define _GNU_SOURCE
+
+#include <errno.h>
+#include <fcntl.h>
+#include <pthread.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <time.h>
+#include <unistd.h>
+#include <sys/auxv.h>
+#include <sys/mman.h>
+#include <sys/prctl.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+
+#include "kselftest.h"
+#include "mte_common_util.h"
+#include "mte_def.h"
+
+#define NUM_DEVICES 8
+
+static char *dev[NUM_DEVICES] = {
+ "/proc/cmdline",
+ "/fstab.fvp",
+ "/dev/null",
+ "/proc/mounts",
+ "/proc/filesystems",
+ "/proc/cmdline",
+ "/proc/device-tre", /* incorrect path */
+ "",
+};
+
+#define FAKE_PERMISSION 0x88000
+#define MAX_DESCRIPTOR 0xffffffff
+
+int mte_read_beyond_buffer_test(void)
+{
+ int fd[NUM_DEVICES];
+ unsigned int _desc, _dev;
+
+ for (_desc = 0; _desc <= MAX_DESCRIPTOR; _desc++) {
+ for (_dev = 0; _dev < NUM_DEVICES; _dev++) {
+#ifdef _TEST_DEBUG
+ printf("[TEST]: openat(0x%x, %s, 0x%x)\n", _desc, dev[_dev], FAKE_PERMISSION);
+#endif
+
+ fd[_dev] = openat(_desc, dev[_dev], FAKE_PERMISSION);
+ }
+
+ for (_dev = 0; _dev < NUM_DEVICES; _dev++)
+ close(fd[_dev]);
+ }
+
+ return KSFT_PASS;
+}
+
+int main(int argc, char *argv[])
+{
+ int err;
+
+ err = mte_default_setup();
+ if (err)
+ return err;
+
+ ksft_set_plan(1);
+
+ evaluate_test(mte_read_beyond_buffer_test(),
+ "Verify that TCO is enabled correctly if a read beyond buffer occurs\n");
+
+ mte_restore_setup();
+ ksft_print_cnts();
+
+ return ksft_get_fail_cnt() == 0 ? KSFT_PASS : KSFT_FAIL;
+}
--
2.30.0

2021-03-08 18:10:47

by Mark Rutland

Subject: Re: [PATCH v14 5/8] arm64: mte: Enable TCO in functions that can read beyond buffer limits

On Mon, Mar 08, 2021 at 04:14:31PM +0000, Vincenzo Frascino wrote:
> load_unaligned_zeropad() and __get/put_kernel_nofault() functions can
> read passed some buffer limits which may include some MTE granule with a
> different tag.

s/passed/past/

> When MTE async mode is enable, the load operation crosses the boundaries

s/enable/enabled/

> and the next granule has a different tag the PE sets the TFSR_EL1.TF1 bit
> as if an asynchronous tag fault is happened.
>
> Enable Tag Check Override (TCO) in these functions before the load and
> disable it afterwards to prevent this to happen.
>
> Note: The same condition can be hit in MTE sync mode but we deal with it
> through the exception handling.
> In the current implementation, mte_async_mode flag is set only at boot
> time but in future kasan might acquire some runtime features that
> that change the mode dynamically, hence we disable it when sync mode is
> selected for future proof.
>
> Cc: Catalin Marinas <[email protected]>
> Cc: Will Deacon <[email protected]>
> Reported-by: Branislav Rankov <[email protected]>
> Tested-by: Branislav Rankov <[email protected]>
> Signed-off-by: Vincenzo Frascino <[email protected]>
> ---
> arch/arm64/include/asm/uaccess.h | 24 ++++++++++++++++++++++++
> arch/arm64/include/asm/word-at-a-time.h | 4 ++++
> arch/arm64/kernel/mte.c | 22 ++++++++++++++++++++++
> 3 files changed, 50 insertions(+)
>
> diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
> index 0deb88467111..a857f8f82aeb 100644
> --- a/arch/arm64/include/asm/uaccess.h
> +++ b/arch/arm64/include/asm/uaccess.h
> @@ -188,6 +188,26 @@ static inline void __uaccess_enable_tco(void)
> ARM64_MTE, CONFIG_KASAN_HW_TAGS));
> }
>
> +/* Whether the MTE asynchronous mode is enabled. */
> +DECLARE_STATIC_KEY_FALSE(mte_async_mode);

Can we please hide this behind something like:

static inline bool system_uses_mte_async_mode(void)
{
	return IS_ENABLED(CONFIG_KASAN_HW_TAGS) &&
	       static_branch_unlikely(&mte_async_mode);
}

... like we do for system_uses_ttbr0_pan()?

That way the callers are easier to read, and kernels built without
CONFIG_KASAN_HW_TAGS don't have the static branch at all. I reckon you
can put that in one of the MTE headers and include it where needed.
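
For illustration, the helpers in this patch could then read something
like (a sketch only, the final naming/placement is up to you):

static inline void __uaccess_enable_tco_async(void)
{
	if (system_uses_mte_async_mode())
		__uaccess_enable_tco();
}

static inline void __uaccess_disable_tco_async(void)
{
	if (system_uses_mte_async_mode())
		__uaccess_disable_tco();
}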

Thanks,
Mark.

> +
> +/*
> + * These functions disable tag checking only if in MTE async mode
> + * since the sync mode generates exceptions synchronously and the
> + * nofault or load_unaligned_zeropad can handle them.
> + */
> +static inline void __uaccess_disable_tco_async(void)
> +{
> + if (static_branch_unlikely(&mte_async_mode))
> + __uaccess_disable_tco();
> +}
> +
> +static inline void __uaccess_enable_tco_async(void)
> +{
> + if (static_branch_unlikely(&mte_async_mode))
> + __uaccess_enable_tco();
> +}
> +
> static inline void uaccess_disable_privileged(void)
> {
> __uaccess_disable_tco();
> @@ -307,8 +327,10 @@ do { \
> do { \
> int __gkn_err = 0; \
> \
> + __uaccess_enable_tco_async(); \
> __raw_get_mem("ldr", *((type *)(dst)), \
> (__force type *)(src), __gkn_err); \
> + __uaccess_disable_tco_async(); \
> if (unlikely(__gkn_err)) \
> goto err_label; \
> } while (0)
> @@ -380,8 +402,10 @@ do { \
> do { \
> int __pkn_err = 0; \
> \
> + __uaccess_enable_tco_async(); \
> __raw_put_mem("str", *((type *)(src)), \
> (__force type *)(dst), __pkn_err); \
> + __uaccess_disable_tco_async(); \
> if (unlikely(__pkn_err)) \
> goto err_label; \
> } while(0)
> diff --git a/arch/arm64/include/asm/word-at-a-time.h b/arch/arm64/include/asm/word-at-a-time.h
> index 3333950b5909..c62d9fa791aa 100644
> --- a/arch/arm64/include/asm/word-at-a-time.h
> +++ b/arch/arm64/include/asm/word-at-a-time.h
> @@ -55,6 +55,8 @@ static inline unsigned long load_unaligned_zeropad(const void *addr)
> {
> unsigned long ret, offset;
>
> + __uaccess_enable_tco_async();
> +
> /* Load word from unaligned pointer addr */
> asm(
> "1: ldr %0, %3\n"
> @@ -76,6 +78,8 @@ static inline unsigned long load_unaligned_zeropad(const void *addr)
> : "=&r" (ret), "=&r" (offset)
> : "r" (addr), "Q" (*(unsigned long *)addr));
>
> + __uaccess_disable_tco_async();
> +
> return ret;
> }
>
> diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
> index fa755cf94e01..1ad9be4c8376 100644
> --- a/arch/arm64/kernel/mte.c
> +++ b/arch/arm64/kernel/mte.c
> @@ -26,6 +26,10 @@ u64 gcr_kernel_excl __ro_after_init;
>
> static bool report_fault_once = true;
>
> +/* Whether the MTE asynchronous mode is enabled. */
> +DEFINE_STATIC_KEY_FALSE(mte_async_mode);
> +EXPORT_SYMBOL_GPL(mte_async_mode);
> +
> static void mte_sync_page_tags(struct page *page, pte_t *ptep, bool check_swap)
> {
> pte_t old_pte = READ_ONCE(*ptep);
> @@ -118,12 +122,30 @@ static inline void __mte_enable_kernel(const char *mode, unsigned long tcf)
>
> void mte_enable_kernel_sync(void)
> {
> + /*
> + * Make sure we enter this function when no PE has set
> + * async mode previously.
> + */
> + WARN_ONCE(static_key_enabled(&mte_async_mode),
> + "MTE async mode enabled system wide!");
> +
> __mte_enable_kernel("synchronous", SCTLR_ELx_TCF_SYNC);
> }
>
> void mte_enable_kernel_async(void)
> {
> __mte_enable_kernel("asynchronous", SCTLR_ELx_TCF_ASYNC);
> +
> + /*
> + * MTE async mode is set system wide by the first PE that
> + * executes this function.
> + *
> + * Note: If in future KASAN acquires a runtime switching
> + * mode in between sync and async, this strategy needs
> + * to be reviewed.
> + */
> + if (!static_branch_unlikely(&mte_async_mode))
> + static_branch_enable(&mte_async_mode);
> }
>
> void mte_set_report_once(bool state)
> --
> 2.30.0
>

2021-03-08 21:04:38

by Andrey Konovalov

Subject: Re: [PATCH v14 0/8] arm64: ARMv8.5-A: MTE: Add async mode support

On Mon, Mar 8, 2021 at 5:14 PM Vincenzo Frascino
<[email protected]> wrote:
>
> This patchset implements the asynchronous mode support for ARMv8.5-A
> Memory Tagging Extension (MTE), which is a debugging feature that allows
> to detect with the help of the architecture the C and C++ programmatic
> memory errors like buffer overflow, use-after-free, use-after-return, etc.
>
> MTE is built on top of the AArch64 v8.0 virtual address tagging TBI
> (Top Byte Ignore) feature and allows a task to set a 4 bit tag on any
> subset of its address space that is multiple of a 16 bytes granule. MTE
> is based on a lock-key mechanism where the lock is the tag associated to
> the physical memory and the key is the tag associated to the virtual
> address.
> When MTE is enabled and tags are set for ranges of address space of a task,
> the PE will compare the tag related to the physical memory with the tag
> related to the virtual address (tag check operation). Access to the memory
> is granted only if the two tags match. In case of mismatch the PE will raise
> an exception.
>
> The exception can be handled synchronously or asynchronously. When the
> asynchronous mode is enabled:
> - Upon fault the PE updates the TFSR_EL1 register.
> - The kernel detects the change during one of the following:
> - Context switching
> - Return to user/EL0
> - Kernel entry from EL1
> - Kernel exit to EL1
> - If the register has been updated by the PE the kernel clears it and
> reports the error.
>
> The series is based on linux-next/akpm.
>
> To simplify the testing a tree with the new patches on top has been made
> available at [1].
>
> [1] https://git.gitlab.arm.com/linux-arm/linux-vf.git mte/v12.async.akpm

Hi Vincenzo,

As previously discussed, here's the tree with tests support added to
this series:

https://github.com/xairy/linux/tree/vf-v12.async.akpm-tests

Please take a look at the last two patches. Feel free to include them
into v15 if they look good.

Thanks!

2021-03-08 22:03:43

by kernel test robot

Subject: Re: [PATCH v14 5/8] arm64: mte: Enable TCO in functions that can read beyond buffer limits

Hi Vincenzo,

I love your patch! Yet something to improve:

[auto build test ERROR on kvmarm/next]
[also build test ERROR on linus/master v5.12-rc2]
[cannot apply to arm64/for-next/core xlnx/master arm/for-next soc/for-next hnaz-linux-mm/master]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url: https://github.com/0day-ci/linux/commits/Vincenzo-Frascino/arm64-ARMv8-5-A-MTE-Add-async-mode-support/20210309-001716
base: https://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm.git next
config: arm64-randconfig-r006-20210308 (attached as .config)
compiler: aarch64-linux-gcc (GCC) 9.3.0
reproduce (this is a W=1 build):
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# https://github.com/0day-ci/linux/commit/660df126323fe5533a1be7834e1754a1adc69f13
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review Vincenzo-Frascino/arm64-ARMv8-5-A-MTE-Add-async-mode-support/20210309-001716
git checkout 660df126323fe5533a1be7834e1754a1adc69f13
# save the attached .config to linux build tree
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross ARCH=arm64

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <[email protected]>

All errors (new ones prefixed by >>):

>> aarch64-linux-ld: mm/maccess.o:(__jump_table+0x8): undefined reference to `mte_async_mode'
aarch64-linux-ld: mm/maccess.o:(__jump_table+0x18): undefined reference to `mte_async_mode'
aarch64-linux-ld: mm/maccess.o:(__jump_table+0x28): undefined reference to `mte_async_mode'
aarch64-linux-ld: mm/maccess.o:(__jump_table+0x38): undefined reference to `mte_async_mode'
aarch64-linux-ld: mm/maccess.o:(__jump_table+0x48): undefined reference to `mte_async_mode'
aarch64-linux-ld: mm/maccess.o:(__jump_table+0x58): more undefined references to `mte_async_mode' follow

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/[email protected]


Attachments:
(No filename) (2.22 kB)
.config.gz (33.57 kB)

2021-03-09 00:30:16

by kernel test robot

Subject: Re: [PATCH v14 5/8] arm64: mte: Enable TCO in functions that can read beyond buffer limits

Hi Vincenzo,

I love your patch! Yet something to improve:

[auto build test ERROR on kvmarm/next]
[also build test ERROR on linus/master v5.12-rc2 next-20210305]
[cannot apply to arm64/for-next/core xlnx/master arm/for-next soc/for-next hnaz-linux-mm/master]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url: https://github.com/0day-ci/linux/commits/Vincenzo-Frascino/arm64-ARMv8-5-A-MTE-Add-async-mode-support/20210309-001716
base: https://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm.git next
config: arm64-randconfig-r021-20210308 (attached as .config)
compiler: clang version 13.0.0 (https://github.com/llvm/llvm-project 3a11a41795bec548e91621caaa4cc00fc31b2212)
reproduce (this is a W=1 build):
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# install arm64 cross compiling tool for clang build
# apt-get install binutils-aarch64-linux-gnu
# https://github.com/0day-ci/linux/commit/660df126323fe5533a1be7834e1754a1adc69f13
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review Vincenzo-Frascino/arm64-ARMv8-5-A-MTE-Add-async-mode-support/20210309-001716
git checkout 660df126323fe5533a1be7834e1754a1adc69f13
# save the attached .config to linux build tree
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross ARCH=arm64

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <[email protected]>

All errors (new ones prefixed by >>):

>> ld.lld: error: undefined symbol: mte_async_mode
>>> referenced by maccess.c
>>> maccess.o:(copy_from_kernel_nofault) in archive mm/built-in.a
>>> referenced by maccess.c
>>> maccess.o:(copy_from_kernel_nofault) in archive mm/built-in.a
>>> referenced by maccess.c
>>> maccess.o:(copy_from_kernel_nofault) in archive mm/built-in.a
>>> referenced 62 more times

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/[email protected]


Attachments:
(No filename) (2.26 kB)
.config.gz (36.72 kB)

2021-03-09 10:08:54

by Vincenzo Frascino

Subject: Re: [PATCH v14 0/8] arm64: ARMv8.5-A: MTE: Add async mode support

Hi Andrey,

On 3/8/21 9:02 PM, Andrey Konovalov wrote:
> On Mon, Mar 8, 2021 at 5:14 PM Vincenzo Frascino
> <[email protected]> wrote:
>>
>> This patchset implements the asynchronous mode support for ARMv8.5-A
>> Memory Tagging Extension (MTE), which is a debugging feature that allows
>> to detect with the help of the architecture the C and C++ programmatic
>> memory errors like buffer overflow, use-after-free, use-after-return, etc.
>>
>> MTE is built on top of the AArch64 v8.0 virtual address tagging TBI
>> (Top Byte Ignore) feature and allows a task to set a 4 bit tag on any
>> subset of its address space that is multiple of a 16 bytes granule. MTE
>> is based on a lock-key mechanism where the lock is the tag associated to
>> the physical memory and the key is the tag associated to the virtual
>> address.
>> When MTE is enabled and tags are set for ranges of address space of a task,
>> the PE will compare the tag related to the physical memory with the tag
>> related to the virtual address (tag check operation). Access to the memory
>> is granted only if the two tags match. In case of mismatch the PE will raise
>> an exception.
>>
>> The exception can be handled synchronously or asynchronously. When the
>> asynchronous mode is enabled:
>> - Upon fault the PE updates the TFSR_EL1 register.
>> - The kernel detects the change during one of the following:
>> - Context switching
>> - Return to user/EL0
>> - Kernel entry from EL1
>> - Kernel exit to EL1
>> - If the register has been updated by the PE the kernel clears it and
>> reports the error.
>>
>> The series is based on linux-next/akpm.
>>
>> To simplify the testing a tree with the new patches on top has been made
>> available at [1].
>>
>> [1] https://git.gitlab.arm.com/linux-arm/linux-vf.git mte/v12.async.akpm
>
> Hi Vincenzo,
>
> As previously discussed, here's the tree with tests support added to
> this series:
>
> https://github.com/xairy/linux/tree/vf-v12.async.akpm-tests
>
> Please take a look at the last two patches. Feel free to include them
> into v15 if they look good.
>
> Thanks!
>

Thank you for this. I will definitely have a look and include them.
Based on the review process, I am planning to have another version early next week.

--
Regards,
Vincenzo

2021-03-09 10:24:18

by Vincenzo Frascino

Subject: Re: [PATCH v14 5/8] arm64: mte: Enable TCO in functions that can read beyond buffer limits

On 3/8/21 6:09 PM, Mark Rutland wrote:
>> +DECLARE_STATIC_KEY_FALSE(mte_async_mode);
> Can we please hide this behind something like:
>
> static inline bool system_uses_mte_async_mode(void)
> {
> return IS_ENABLED(CONFIG_KASAN_HW_TAGS) &&
> static_branch_unlikely(&mte_async_mode);
> }
>
> ... like we do for system_uses_ttbr0_pan()?
>

I agree, it is a cleaner solution. I will add it to v15.

> That way the callers are easier to read, and kernels built without
> CONFIG_KASAN_HW_TAGS don't have the static branch at all. I reckon you
> can put that in one of hte mte headers and include it where needed.

--
Regards,
Vincenzo

2021-03-09 22:43:48

by kernel test robot

Subject: Re: [PATCH v14 5/8] arm64: mte: Enable TCO in functions that can read beyond buffer limits

Hi Vincenzo,

I love your patch! Yet something to improve:

[auto build test ERROR on kvmarm/next]
[also build test ERROR on linus/master v5.12-rc2 next-20210309]
[cannot apply to arm64/for-next/core xlnx/master arm/for-next soc/for-next hnaz-linux-mm/master]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url: https://github.com/0day-ci/linux/commits/Vincenzo-Frascino/arm64-ARMv8-5-A-MTE-Add-async-mode-support/20210309-001716
base: https://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm.git next
config: arm64-randconfig-s032-20210309 (attached as .config)
compiler: aarch64-linux-gcc (GCC) 9.3.0
reproduce:
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# apt-get install sparse
# sparse version: v0.6.3-262-g5e674421-dirty
# https://github.com/0day-ci/linux/commit/660df126323fe5533a1be7834e1754a1adc69f13
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review Vincenzo-Frascino/arm64-ARMv8-5-A-MTE-Add-async-mode-support/20210309-001716
git checkout 660df126323fe5533a1be7834e1754a1adc69f13
# save the attached .config to linux build tree
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__' ARCH=arm64

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <[email protected]>

All errors (new ones prefixed by >>):

aarch64-linux-ld: mm/maccess.o: in function `copy_from_kernel_nofault':
>> maccess.c:(.text+0x340): undefined reference to `mte_async_mode'
maccess.c:(.text+0x340): relocation truncated to fit: R_AARCH64_ADR_PREL_PG_HI21 against undefined symbol `mte_async_mode'
>> aarch64-linux-ld: maccess.c:(.text+0x344): undefined reference to `mte_async_mode'
aarch64-linux-ld: maccess.c:(.text+0x44c): undefined reference to `mte_async_mode'
maccess.c:(.text+0x44c): relocation truncated to fit: R_AARCH64_ADR_PREL_PG_HI21 against undefined symbol `mte_async_mode'
aarch64-linux-ld: maccess.c:(.text+0x450): undefined reference to `mte_async_mode'
aarch64-linux-ld: maccess.c:(.text+0x474): undefined reference to `mte_async_mode'
aarch64-linux-ld: mm/maccess.o:maccess.c:(.text+0x4d0): more undefined references to `mte_async_mode' follow
mm/maccess.o: in function `copy_from_kernel_nofault':
maccess.c:(.text+0x4d0): relocation truncated to fit: R_AARCH64_ADR_PREL_PG_HI21 against undefined symbol `mte_async_mode'
maccess.c:(.text+0x550): relocation truncated to fit: R_AARCH64_ADR_PREL_PG_HI21 against undefined symbol `mte_async_mode'
mm/maccess.o: in function `copy_to_kernel_nofault':
maccess.c:(.text+0x6cc): relocation truncated to fit: R_AARCH64_ADR_PREL_PG_HI21 against undefined symbol `mte_async_mode'
maccess.c:(.text+0x7d8): relocation truncated to fit: R_AARCH64_ADR_PREL_PG_HI21 against undefined symbol `mte_async_mode'
maccess.c:(.text+0x864): relocation truncated to fit: R_AARCH64_ADR_PREL_PG_HI21 against undefined symbol `mte_async_mode'
maccess.c:(.text+0x8ec): relocation truncated to fit: R_AARCH64_ADR_PREL_PG_HI21 against undefined symbol `mte_async_mode'
mm/maccess.o: in function `strncpy_from_kernel_nofault':
maccess.c:(.text+0xaac): relocation truncated to fit: R_AARCH64_ADR_PREL_PG_HI21 against undefined symbol `mte_async_mode'
fs/namei.o: in function `full_name_hash':
namei.c:(.text+0x28): relocation truncated to fit: R_AARCH64_ADR_PREL_PG_HI21 against undefined symbol `mte_async_mode'
fs/namei.o: in function `hashlen_string':
namei.c:(.text+0x2a28): additional relocation overflows omitted from the output

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/[email protected]


Attachments:
(No filename) (3.95 kB)
.config.gz (30.39 kB)

2021-03-11 10:41:34

by Catalin Marinas

Subject: Re: [PATCH v14 3/8] arm64: mte: Drop arch_enable_tagging()

On Mon, Mar 08, 2021 at 04:14:29PM +0000, Vincenzo Frascino wrote:
> arch_enable_tagging() was left in memory.h after the introduction of
> async mode to not break the bysectability of the KASAN KUNIT tests.
>
> Remove the function now that KASAN has been fully converted.
>
> Cc: Catalin Marinas <[email protected]>
> Cc: Will Deacon <[email protected]>
> Signed-off-by: Vincenzo Frascino <[email protected]>

Acked-by: Catalin Marinas <[email protected]>

2021-03-11 13:28:47

by Catalin Marinas

Subject: Re: [PATCH v14 8/8] kselftest/arm64: Verify that TCO is enabled in load_unaligned_zeropad()

On Mon, Mar 08, 2021 at 04:14:34PM +0000, Vincenzo Frascino wrote:
> load_unaligned_zeropad() and __get/put_kernel_nofault() functions can
> read passed some buffer limits which may include some MTE granule with a
> different tag.
>
> When MTE async mode is enable, the load operation crosses the boundaries
> and the next granule has a different tag the PE sets the TFSR_EL1.TF1
> bit as if an asynchronous tag fault is happened:
>
> ==================================================================
> BUG: KASAN: invalid-access
> Asynchronous mode enabled: no access details available
>
> CPU: 0 PID: 1 Comm: init Not tainted 5.12.0-rc1-ge1045c86620d-dirty #8
> Hardware name: FVP Base RevC (DT)
> Call trace:
> dump_backtrace+0x0/0x1c0
> show_stack+0x18/0x24
> dump_stack+0xcc/0x14c
> kasan_report_async+0x54/0x70
> mte_check_tfsr_el1+0x48/0x4c
> exit_to_user_mode+0x18/0x38
> finish_ret_to_user+0x4/0x15c
> ==================================================================
>
> Verify that Tag Check Override (TCO) is enabled in these functions before
> the load and disable it afterwards to prevent this to happen.
>
> Note: The issue has been observed only with an MTE enabled userspace.

The above bug is all about kernel buffers. While userspace can trigger
the relevant code paths, it should not matter whether the user has MTE
enabled or not. Can you please confirm that you can still trigger the
fault with kernel-mode MTE but non-MTE user-space? If not, we may have a
bug somewhere as the two are unrelated: load_unaligned_zeropad() only
acts on kernel buffers and is subject to the kernel MTE tag check fault
mode.

I don't think we should have a user-space selftest for this. The bug is
not about a user-kernel interface, so an in-kernel test is more
appropriate. Could we instead add this to the kasan tests and call
load_unaligned_zeropad() and other functions directly?

--
Catalin

2021-03-11 15:02:15

by Vincenzo Frascino

Subject: Re: [PATCH v14 8/8] kselftest/arm64: Verify that TCO is enabled in load_unaligned_zeropad()

On 3/11/21 1:25 PM, Catalin Marinas wrote:
> On Mon, Mar 08, 2021 at 04:14:34PM +0000, Vincenzo Frascino wrote:
>> load_unaligned_zeropad() and __get/put_kernel_nofault() functions can
>> read passed some buffer limits which may include some MTE granule with a
>> different tag.
>>
>> When MTE async mode is enable, the load operation crosses the boundaries
>> and the next granule has a different tag the PE sets the TFSR_EL1.TF1
>> bit as if an asynchronous tag fault is happened:
>>
>> ==================================================================
>> BUG: KASAN: invalid-access
>> Asynchronous mode enabled: no access details available
>>
>> CPU: 0 PID: 1 Comm: init Not tainted 5.12.0-rc1-ge1045c86620d-dirty #8
>> Hardware name: FVP Base RevC (DT)
>> Call trace:
>> dump_backtrace+0x0/0x1c0
>> show_stack+0x18/0x24
>> dump_stack+0xcc/0x14c
>> kasan_report_async+0x54/0x70
>> mte_check_tfsr_el1+0x48/0x4c
>> exit_to_user_mode+0x18/0x38
>> finish_ret_to_user+0x4/0x15c
>> ==================================================================
>>
>> Verify that Tag Check Override (TCO) is enabled in these functions before
>> the load and disable it afterwards to prevent this to happen.
>>
>> Note: The issue has been observed only with an MTE enabled userspace.
>
> The above bug is all about kernel buffers. While userspace can trigger
> the relevant code paths, it should not matter whether the user has MTE
> enabled or not. Can you please confirm that you can still trigger the
> fault with kernel-mode MTE but non-MTE user-space? If not, we may have a
> bug somewhere, as the two are unrelated: load_unaligned_zeropad() only
> acts on kernel buffers and is subject to the kernel MTE tag check fault
> mode.
>

I retried and you are right, it does not matter whether it is an MTE or
non-MTE user-space. The issue seems to be that this test does not trigger
the problem every time, which probably led me to the wrong conclusions.

> I don't think we should have a user-space selftest for this. The bug is
> not about a user-kernel interface, so an in-kernel test is more
> appropriate. Could we instead add this to the kasan tests and call
> load_unaligned_zeropad() and other functions directly?
>

I agree with you that we should abandon this strategy of triggering the
issue, given my comment above. I will investigate the option of having a
kasan test and try to come up with one that calls the relevant functions
directly. Since the rest of the series is almost ready, though, I would
prefer to post it in a future series. What do you think?

--
Regards,
Vincenzo

2021-03-11 16:30:46

by Catalin Marinas

[permalink] [raw]
Subject: Re: [PATCH v14 8/8] kselftest/arm64: Verify that TCO is enabled in load_unaligned_zeropad()

On Thu, Mar 11, 2021 at 03:00:26PM +0000, Vincenzo Frascino wrote:
> On 3/11/21 1:25 PM, Catalin Marinas wrote:
> > On Mon, Mar 08, 2021 at 04:14:34PM +0000, Vincenzo Frascino wrote:
> >> load_unaligned_zeropad() and __get/put_kernel_nofault() functions can
> >> read past some buffer limits which may include some MTE granule with a
> >> different tag.
> >>
> >> When MTE async mode is enabled, if the load operation crosses the
> >> boundaries and the next granule has a different tag, the PE sets the
> >> TFSR_EL1.TF1 bit as if an asynchronous tag fault had happened:
> >>
> >> ==================================================================
> >> BUG: KASAN: invalid-access
> >> Asynchronous mode enabled: no access details available
> >>
> >> CPU: 0 PID: 1 Comm: init Not tainted 5.12.0-rc1-ge1045c86620d-dirty #8
> >> Hardware name: FVP Base RevC (DT)
> >> Call trace:
> >> dump_backtrace+0x0/0x1c0
> >> show_stack+0x18/0x24
> >> dump_stack+0xcc/0x14c
> >> kasan_report_async+0x54/0x70
> >> mte_check_tfsr_el1+0x48/0x4c
> >> exit_to_user_mode+0x18/0x38
> >> finish_ret_to_user+0x4/0x15c
> >> ==================================================================
> >>
> >> Verify that Tag Check Override (TCO) is enabled in these functions before
> >> the load and disabled afterwards, to prevent this from happening.
> >>
> >> Note: The issue has been observed only with an MTE enabled userspace.
> >
> > The above bug is all about kernel buffers. While userspace can trigger
> > the relevant code paths, it should not matter whether the user has MTE
> > enabled or not. Can you please confirm that you can still trigger the
> > fault with kernel-mode MTE but non-MTE user-space? If not, we may have a
> > bug somewhere, as the two are unrelated: load_unaligned_zeropad() only
> > acts on kernel buffers and is subject to the kernel MTE tag check fault
> > mode.
>
> I retried and you are right, it does not matter whether it is an MTE or
> non-MTE user-space. The issue seems to be that this test does not trigger
> the problem every time, which probably led me to the wrong conclusions.

Keep the test around for some quick checks until you get the kasan
test support.

> > I don't think we should have a user-space selftest for this. The bug is
> > not about a user-kernel interface, so an in-kernel test is more
> > appropriate. Could we instead add this to the kasan tests and call
> > load_unaligned_zeropad() and other functions directly?
>
> I agree with you that we should abandon this strategy of triggering the
> issue, given my comment above. I will investigate the option of having a
> kasan test and try to come up with one that calls the relevant functions
> directly. Since the rest of the series is almost ready, though, I would
> prefer to post it in a future series. What do you think?

That's fine by me.

--
Catalin

2021-03-11 16:36:14

by Vincenzo Frascino

[permalink] [raw]
Subject: Re: [PATCH v14 8/8] kselftest/arm64: Verify that TCO is enabled in load_unaligned_zeropad()



On 3/11/21 4:28 PM, Catalin Marinas wrote:
> On Thu, Mar 11, 2021 at 03:00:26PM +0000, Vincenzo Frascino wrote:
>> On 3/11/21 1:25 PM, Catalin Marinas wrote:
>>> On Mon, Mar 08, 2021 at 04:14:34PM +0000, Vincenzo Frascino wrote:
>>>> load_unaligned_zeropad() and __get/put_kernel_nofault() functions can
>>>> read past some buffer limits which may include some MTE granule with a
>>>> different tag.
>>>>
>>>> When MTE async mode is enabled, if the load operation crosses the
>>>> boundaries and the next granule has a different tag, the PE sets the
>>>> TFSR_EL1.TF1 bit as if an asynchronous tag fault had happened:
>>>>
>>>> ==================================================================
>>>> BUG: KASAN: invalid-access
>>>> Asynchronous mode enabled: no access details available
>>>>
>>>> CPU: 0 PID: 1 Comm: init Not tainted 5.12.0-rc1-ge1045c86620d-dirty #8
>>>> Hardware name: FVP Base RevC (DT)
>>>> Call trace:
>>>> dump_backtrace+0x0/0x1c0
>>>> show_stack+0x18/0x24
>>>> dump_stack+0xcc/0x14c
>>>> kasan_report_async+0x54/0x70
>>>> mte_check_tfsr_el1+0x48/0x4c
>>>> exit_to_user_mode+0x18/0x38
>>>> finish_ret_to_user+0x4/0x15c
>>>> ==================================================================
>>>>
>>>> Verify that Tag Check Override (TCO) is enabled in these functions before
>>>> the load and disabled afterwards, to prevent this from happening.
>>>>
>>>> Note: The issue has been observed only with an MTE enabled userspace.
>>>
>>> The above bug is all about kernel buffers. While userspace can trigger
>>> the relevant code paths, it should not matter whether the user has MTE
>>> enabled or not. Can you please confirm that you can still trigger the
>>> fault with kernel-mode MTE but non-MTE user-space? If not, we may have a
>>> bug somewhere, as the two are unrelated: load_unaligned_zeropad() only
>>> acts on kernel buffers and is subject to the kernel MTE tag check fault
>>> mode.
>>
>> I retried and you are right, it does not matter whether it is an MTE or
>> non-MTE user-space. The issue seems to be that this test does not trigger
>> the problem every time, which probably led me to the wrong conclusions.
>
> Keep the test around for some quick checks until you get the kasan
> test support.
>

Of course, I never throw away my code.

>>> I don't think we should have a user-space selftest for this. The bug is
>>> not about a user-kernel interface, so an in-kernel test is more
>>> appropriate. Could we instead add this to the kasan tests and call
>>> load_unaligned_zeropad() and other functions directly?
>>
>> I agree with you that we should abandon this strategy of triggering the
>> issue, given my comment above. I will investigate the option of having a
>> kasan test and try to come up with one that calls the relevant functions
>> directly. Since the rest of the series is almost ready, though, I would
>> prefer to post it in a future series. What do you think?
>
> That's fine by me.
>

--
Regards,
Vincenzo