2021-04-19 08:52:22

by Marco Elver

Subject: [PATCH 1/3] kfence: await for allocation using wait_event

On mostly-idle systems, we have observed that toggle_allocation_gate()
is a source of frequent wake-ups, preventing an otherwise idle CPU from
going into a lower power state.

A late change in KFENCE's development, due to a potential deadlock [1],
required replacing the scheduling-friendly wait_event_timeout() and
wake_up() with an open-coded wait loop using schedule_timeout().
[1] https://lkml.kernel.org/r/[email protected]

To avoid unnecessary wake-ups, switch to using wait_event_timeout().

Unfortunately, we still cannot call wake_up() directly from
__kfence_alloc() due to the same potential for deadlock as in [1].
Instead, add a level of indirection via an irq_work that is queued only
if we determine that the kfence_timer requires a wake_up().

Fixes: 0ce20dd84089 ("mm: add Kernel Electric-Fence infrastructure")
Signed-off-by: Marco Elver <[email protected]>
---
lib/Kconfig.kfence |  1 +
mm/kfence/core.c   | 58 +++++++++++++++++++++++++++++++++-------------
2 files changed, 43 insertions(+), 16 deletions(-)

diff --git a/lib/Kconfig.kfence b/lib/Kconfig.kfence
index 78f50ccb3b45..e641add33947 100644
--- a/lib/Kconfig.kfence
+++ b/lib/Kconfig.kfence
@@ -7,6 +7,7 @@ menuconfig KFENCE
        bool "KFENCE: low-overhead sampling-based memory safety error detector"
        depends on HAVE_ARCH_KFENCE && (SLAB || SLUB)
        select STACKTRACE
+       select IRQ_WORK
        help
          KFENCE is a low-overhead sampling-based detector of heap out-of-bounds
          access, use-after-free, and invalid-free errors. KFENCE is designed
diff --git a/mm/kfence/core.c b/mm/kfence/core.c
index 768dbd58170d..5f0a56041549 100644
--- a/mm/kfence/core.c
+++ b/mm/kfence/core.c
@@ -10,6 +10,7 @@
#include <linux/atomic.h>
#include <linux/bug.h>
#include <linux/debugfs.h>
+#include <linux/irq_work.h>
#include <linux/kcsan-checks.h>
#include <linux/kfence.h>
#include <linux/kmemleak.h>
@@ -587,6 +588,20 @@ late_initcall(kfence_debugfs_init);

/* === Allocation Gate Timer ================================================ */

+#ifdef CONFIG_KFENCE_STATIC_KEYS
+/* Wait queue to wake up allocation-gate timer task. */
+static DECLARE_WAIT_QUEUE_HEAD(allocation_wait);
+
+static void wake_up_kfence_timer(struct irq_work *work)
+{
+       wake_up(&allocation_wait);
+}
+static DEFINE_IRQ_WORK(wake_up_kfence_timer_work, wake_up_kfence_timer);
+
+/* Indicate if timer task is waiting, to avoid unnecessary irq_work. */
+static bool kfence_timer_waiting;
+#endif
+
/*
* Set up delayed work, which will enable and disable the static key. We need to
* use a work queue (rather than a simple timer), since enabling and disabling a
@@ -604,25 +619,16 @@ static void toggle_allocation_gate(struct work_struct *work)
        if (!READ_ONCE(kfence_enabled))
                return;

-       /* Enable static key, and await allocation to happen. */
        atomic_set(&kfence_allocation_gate, 0);
#ifdef CONFIG_KFENCE_STATIC_KEYS
+       /* Enable static key, and await allocation to happen. */
        static_branch_enable(&kfence_allocation_key);
-       /*
-        * Await an allocation. Timeout after 1 second, in case the kernel stops
-        * doing allocations, to avoid stalling this worker task for too long.
-        */
-       {
-               unsigned long end_wait = jiffies + HZ;
-
-               do {
-                       set_current_state(TASK_UNINTERRUPTIBLE);
-                       if (atomic_read(&kfence_allocation_gate) != 0)
-                               break;
-                       schedule_timeout(1);
-               } while (time_before(jiffies, end_wait));
-               __set_current_state(TASK_RUNNING);
-       }
+
+       WRITE_ONCE(kfence_timer_waiting, true);
+       smp_mb(); /* See comment in __kfence_alloc(). */
+       wait_event_timeout(allocation_wait, atomic_read(&kfence_allocation_gate), HZ);
+       smp_store_release(&kfence_timer_waiting, false); /* Order after wait_event(). */
+
        /* Disable static key and reset timer. */
        static_branch_disable(&kfence_allocation_key);
#endif
@@ -729,6 +735,26 @@ void *__kfence_alloc(struct kmem_cache *s, size_t size, gfp_t flags)
         */
        if (atomic_read(&kfence_allocation_gate) || atomic_inc_return(&kfence_allocation_gate) > 1)
                return NULL;
+#ifdef CONFIG_KFENCE_STATIC_KEYS
+       /*
+        * Read of kfence_timer_waiting must be ordered after write to
+        * kfence_allocation_gate (fully ordered per atomic_inc_return()).
+        *
+        * Conversely, the write to kfence_timer_waiting must be ordered before
+        * the check of kfence_allocation_gate in toggle_allocation_gate().
+        *
+        * This ensures that toggle_allocation_gate() always sees the updated
+        * kfence_allocation_gate, or we see that the timer is waiting and will
+        * queue the work to wake it up.
+        */
+       if (READ_ONCE(kfence_timer_waiting)) {
+               /*
+                * Calling wake_up() here may deadlock when allocations happen
+                * from within timer code. Use an irq_work to defer it.
+                */
+               irq_work_queue(&wake_up_kfence_timer_work);
+       }
+#endif

        if (!READ_ONCE(kfence_enabled))
                return NULL;
--
2.31.1.368.gbe11c130af-goog


2021-04-19 09:45:09

by Marco Elver

Subject: Re: [PATCH 1/3] kfence: await for allocation using wait_event

On Mon, 19 Apr 2021 at 11:41, Hillf Danton <[email protected]> wrote:
>
> On Mon, 19 Apr 2021 10:50:25 Marco Elver wrote:
> > +
> > + WRITE_ONCE(kfence_timer_waiting, true);
> > + smp_mb(); /* See comment in __kfence_alloc(). */
>
> This is not needed given task state change in wait_event().

Yes it is. We want to avoid the unconditional irq_work in
__kfence_alloc(). When the system is under load doing frequent
allocations, at least in my tests this avoids the irq_work almost
always. Without the irq_work you'd be correct of course.

> > + wait_event_timeout(allocation_wait, atomic_read(&kfence_allocation_gate), HZ);
> > + smp_store_release(&kfence_timer_waiting, false); /* Order after wait_event(). */
> > +

2021-04-19 13:01:51

by Marco Elver

Subject: Re: [PATCH 1/3] kfence: await for allocation using wait_event

On Mon, 19 Apr 2021 at 11:44, Marco Elver <[email protected]> wrote:
>
> On Mon, 19 Apr 2021 at 11:41, Hillf Danton <[email protected]> wrote:
> >
> > On Mon, 19 Apr 2021 10:50:25 Marco Elver wrote:
> > > +
> > > + WRITE_ONCE(kfence_timer_waiting, true);
> > > + smp_mb(); /* See comment in __kfence_alloc(). */
> >
> > This is not needed given task state change in wait_event().
>
> Yes it is. We want to avoid the unconditional irq_work in
> __kfence_alloc(). When the system is under load doing frequent
> allocations, at least in my tests this avoids the irq_work almost
> always. Without the irq_work you'd be correct of course.

And in case this is about the smp_mb() here, yes it definitely is
required. We *must* order the write of kfence_timer_waiting *before*
the check of kfence_allocation_gate, which wait_event() does before
anything else (including changing the state). Otherwise the write may
be reordered after the read, and we could potentially never wake up,
because __kfence_alloc() would never wake us.

This is documented in __kfence_alloc().
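
To make the race concrete, here is a rough sketch of the two paths
(simplified pseudocode, not the exact kernel code; sleep_until_woken()
is a hypothetical stand-in for the rest of wait_event_timeout()):

        /* timer task, toggle_allocation_gate(), simplified: */
        WRITE_ONCE(kfence_timer_waiting, true);
        smp_mb();                                   /* order the store above before the load below */
        if (!atomic_read(&kfence_allocation_gate))  /* first check inside wait_event_timeout() */
                sleep_until_woken();                /* stand-in for the actual wait */

        /* allocating task, __kfence_alloc(), simplified: */
        atomic_inc_return(&kfence_allocation_gate); /* fully ordered RMW */
        if (READ_ONCE(kfence_timer_waiting))
                irq_work_queue(&wake_up_kfence_timer_work);

The worry is that, without the smp_mb(), the timer task's store to
kfence_timer_waiting could be reordered after its first load of
kfence_allocation_gate, so the allocating task may still observe
kfence_timer_waiting == false and skip queueing the wake-up (see the
follow-up below for why the re-check inside wait_event() ends up
covering this case anyway).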

> > > + wait_event_timeout(allocation_wait, atomic_read(&kfence_allocation_gate), HZ);
> > > + smp_store_release(&kfence_timer_waiting, false); /* Order after wait_event(). */
> > > +

2021-04-21 10:36:20

by Marco Elver

Subject: Re: [PATCH 1/3] kfence: await for allocation using wait_event

On Wed, Apr 21, 2021 at 05:11PM +0800, Hillf Danton wrote:
> On Mon, 19 Apr 2021 11:49:04 Marco Elver wrote:
> >On Mon, 19 Apr 2021 at 11:44, Marco Elver <[email protected]> wrote:
> >> On Mon, 19 Apr 2021 at 11:41, Hillf Danton <[email protected]> wrote:
> >> > On Mon, 19 Apr 2021 10:50:25 Marco Elver wrote:
> >> > > +
> >> > > + WRITE_ONCE(kfence_timer_waiting, true);
> >> > > + smp_mb(); /* See comment in __kfence_alloc(). */
> >> >
> >> > This is not needed given task state change in wait_event().
> >>
> >> Yes it is. We want to avoid the unconditional irq_work in
> >> __kfence_alloc(). When the system is under load doing frequent
> >> allocations, at least in my tests this avoids the irq_work almost
> >> always. Without the irq_work you'd be correct of course.
> >
> >And in case this is about the smp_mb() here, yes it definitely is
> >required. We *must* order the write of kfence_timer_waiting *before*
> >the check of kfence_allocation_gate, which wait_event() does before
> >anything else (including changing the state).
>
> One of the reasons why wait_event() checks the wait condition before anything
> else is no waker can help waiter before waiter gets themselves on the
> wait queue head list. Nor can waker without scheduling on the waiter
> side, even if the waiter is sitting on the list. So the mb cannot make sense
> without scheduling, let alone the mb in wait_event().

You are right of course. I just went and expanded wait_event():

do {
        if (atomic_read(&kfence_allocation_gate))
                break;
        init_wait_entry(...);
        for (;;) {
                long __int = prepare_to_wait_event(...);
                if (atomic_read(&kfence_allocation_gate))
                        break;
                ...
                schedule();
        }
        finish_wait(...);
} while (0);

I had just kept looking at the first check, the one before the
wait-entry setup, and missed the second re-check that follows the mb()
in prepare_to_wait_event(). So removing the smp_mb() is indeed fine,
given that the second re-check is ordered after the write by the
state-change mb().
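
(For reference, the state-change mb() is the one implied by
set_current_state() in prepare_to_wait_event(), which is roughly

        #define set_current_state(state_value) smp_store_mb(current->state, (state_value))

i.e. a store followed by a full barrier, so the earlier write to
kfence_timer_waiting is ordered before that re-check of
kfence_allocation_gate.)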

And then I realized we should just use waitqueue_active() anyway; its
documentation spells out this ordering requirement, too.
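
In __kfence_alloc() that would look something like the below (untested
sketch; kfence_timer_waiting and the explicit smp_mb() in
toggle_allocation_gate() then go away):

#ifdef CONFIG_KFENCE_STATIC_KEYS
        /*
         * waitqueue_active() is fully ordered after the update of
         * kfence_allocation_gate per atomic_inc_return(); the waiter side
         * is ordered by prepare_to_wait_event() before it re-checks the
         * gate.
         */
        if (waitqueue_active(&allocation_wait)) {
                /*
                 * Calling wake_up() here may deadlock when allocations
                 * happen from within timer code. Use an irq_work to
                 * defer it.
                 */
                irq_work_queue(&wake_up_kfence_timer_work);
        }
#endif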

I'll send a v2.

Thank you!

-- Marco