2022-10-24 10:57:17

by Daniel Wagner

Subject: [PATCH RT 0/9] Linux v4.19.255-rt114-rc1

Dear RT Folks,

This is the RT stable review cycle of patch 4.19.255-rt114-rc1.

Please scream at me if I messed something up. Please test the patches
too.

The -rc release will be uploaded to kernel.org and will be deleted
when the final release is out. This is just a review release (or
release candidate).

The pre-releases will not be pushed to the git repository; only the
final release will be.

If all goes well, this release candidate will be converted to the next
main release on 2022-10-31.

To build 4.19.255-rt114-rc1 directly, the following patches should be applied:

https://www.kernel.org/pub/linux/kernel/v4.x/linux-4.19.tar.xz

https://www.kernel.org/pub/linux/kernel/v4.x/patch-4.19.255.xz

https://www.kernel.org/pub/linux/kernel/projects/rt/4.19/older/patch-4.19.255-rt114-rc1.patch.xz

Signing key fingerprint:

5BF6 7BC5 0826 72CA BB45 ACAE 587C 5ECA 5D0A 306C

All keys used for the above files and repositories can be found on the
following git repository:

git://git.kernel.org/pub/scm/docs/kernel/pgpkeys.git

Enjoy!
Daniel

Changes from v4.19.255-rt113:


Daniel Wagner (3):
Revert "random: Use local locks for crng context access"
rcu: Update rcuwait
Linux 4.19.255-rt114-rc1

Sebastian Andrzej Siewior (6):
random: Bring back the local_locks
local_lock: Provide INIT_LOCAL_LOCK().
Revert "workqueue: Use local irq lock instead of irq disable regions"
timers: Keep interrupts disabled for TIMER_IRQSAFE timer.
timers: Don't block on ->expiry_lock for TIMER_IRQSAFE timers
workqueue: Use rcuwait for wq_manager_wait

drivers/char/random.c | 16 +++++++------
include/linux/locallock.h | 5 +++++
include/linux/rcuwait.h | 42 +++++++++++++++++++++++++++--------
kernel/exit.c | 7 ++++--
kernel/locking/percpu-rwsem.c | 2 +-
kernel/rcu/update.c | 8 +++++++
kernel/time/timer.c | 12 ++++++++--
kernel/workqueue.c | 28 +++++++++++++++++------
localversion-rt | 2 +-
9 files changed, 93 insertions(+), 29 deletions(-)

--
2.38.0


2022-10-24 10:57:36

by Daniel Wagner

Subject: [PATCH RT 4/9] Revert "workqueue: Use local irq lock instead of irq disable regions"

From: Sebastian Andrzej Siewior <[email protected]>

v4.19.255-rt114-rc1 stable review patch.
If anyone has any objections, please let me know.

-----------


This reverts the PREEMPT_RT-related changes to workqueue. It reverts
the usage of local_locks() and cpu_chill().

This is a preparation for pulling in the PREEMPT_RT-related changes
which were merged upstream.
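
The irq save/restore pair removed below was redundant because delayed
work timers are flagged TIMER_IRQSAFE when the delayed work is
initialized, so the timer core already calls delayed_work_timer_fn()
with interrupts disabled. Roughly, as a sketch quoted from memory (not
part of this patch), the v4.19 initializer looks like:

#define __INIT_DELAYED_WORK(_work, _func, _tflags)                     \
        do {                                                           \
                INIT_WORK(&(_work)->work, (_func));                    \
                __init_timer(&(_work)->timer,                          \
                             delayed_work_timer_fn,                    \
                             (_tflags) | TIMER_IRQSAFE);               \
        } while (0)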

Signed-off-by: Sebastian Andrzej Siewior <[email protected]>
[wagi: 827b6f6962da ("workqueue: rework") already reverted
most of the changes, except the missing update in
put_pwq_unlocked.]
Signed-off-by: Daniel Wagner <[email protected]>
---
kernel/workqueue.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 4ed22776b2ee..d97c2ad8dc08 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1496,11 +1496,9 @@ EXPORT_SYMBOL(queue_work_on);
void delayed_work_timer_fn(struct timer_list *t)
{
struct delayed_work *dwork = from_timer(dwork, t, timer);
- unsigned long flags;

- local_irq_save(flags);
+ /* should have been called from irqsafe timer with irq already off */
__queue_work(dwork->cpu, dwork->wq, &dwork->work);
- local_irq_restore(flags);
}
EXPORT_SYMBOL(delayed_work_timer_fn);

--
2.38.0

2022-10-24 10:57:38

by Daniel Wagner

Subject: [PATCH RT 3/9] local_lock: Provide INIT_LOCAL_LOCK().

From: Sebastian Andrzej Siewior <[email protected]>

v4.19.255-rt114-rc1 stable review patch.
If anyone has any objections, please let me know.

-----------


The original code was using INIT_LOCAL_LOCK() and I tried to sneak
around it, forgetting that this code also needs to compile on !RT
platforms.

Provide INIT_LOCAL_LOCK() to initialize properly on RT and do nothing
on !RT. Let random.c, the only user so far, use it; otherwise it does
not compile on !RT.
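
As an illustration, a minimal sketch (with a hypothetical per-CPU
structure, not part of this patch) of how INIT_LOCAL_LOCK() is meant
to be used so the same initializer builds on RT and !RT:

struct my_stats {
        unsigned long count;
        struct local_irq_lock lock;     /* empty struct on !RT */
};

static DEFINE_PER_CPU(struct my_stats, my_stats) = {
        .lock = INIT_LOCAL_LOCK(my_stats.lock),
};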

Signed-off-by: Sebastian Andrzej Siewior <[email protected]>
Link: https://lore.kernel.org/all/[email protected]/
Signed-off-by: Daniel Wagner <[email protected]>
---
drivers/char/random.c | 4 ++--
include/linux/locallock.h | 5 +++++
2 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 0fd0462054bd..a7b345c47d1f 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -235,7 +235,7 @@ struct crng {

static DEFINE_PER_CPU(struct crng, crngs) = {
.generation = ULONG_MAX,
- .lock.lock = __SPIN_LOCK_UNLOCKED(crngs.lock.lock),
+ .lock = INIT_LOCAL_LOCK(crngs.lock),
};

/* Used by crng_reseed() and crng_make_state() to extract a new seed from the input pool. */
@@ -514,7 +514,7 @@ struct batch_ ##type { \
}; \
\
static DEFINE_PER_CPU(struct batch_ ##type, batched_entropy_ ##type) = { \
- .lock.lock = __SPIN_LOCK_UNLOCKED(batched_entropy_ ##type.lock.lock), \
+ .lock = INIT_LOCAL_LOCK(batched_entropy_ ##type.lock), \
.position = UINT_MAX \
}; \
\
diff --git a/include/linux/locallock.h b/include/linux/locallock.h
index 81c89d87723b..7964ee6b998c 100644
--- a/include/linux/locallock.h
+++ b/include/linux/locallock.h
@@ -23,6 +23,8 @@ struct local_irq_lock {
unsigned long flags;
};

+#define INIT_LOCAL_LOCK(lvar) { .lock = __SPIN_LOCK_UNLOCKED((lvar).lock.lock) }
+
#define DEFINE_LOCAL_IRQ_LOCK(lvar) \
DEFINE_PER_CPU(struct local_irq_lock, lvar) = { \
.lock = __SPIN_LOCK_UNLOCKED((lvar).lock) }
@@ -241,6 +243,9 @@ static inline int __local_unlock_irqrestore(struct local_irq_lock *lv,

#else /* PREEMPT_RT_BASE */

+struct local_irq_lock { };
+#define INIT_LOCAL_LOCK(lvar) { }
+
#define DEFINE_LOCAL_IRQ_LOCK(lvar) __typeof__(const int) lvar
#define DECLARE_LOCAL_IRQ_LOCK(lvar) extern __typeof__(const int) lvar

--
2.38.0

2022-10-24 10:58:08

by Daniel Wagner

Subject: [PATCH RT 5/9] timers: Keep interrupts disabled for TIMER_IRQSAFE timer.

From: Sebastian Andrzej Siewior <[email protected]>

v4.19.255-rt114-rc1 stable review patch.
If anyone has any objections, please let me know.

-----------


Keep interrupts disabled across the callback invocation for
TIMER_IRQSAFE timers, as expected.
This is required for the timer used by the workqueue code after the
upcoming rework.
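
As a minimal sketch of what this guarantees (hypothetical names, not
part of this patch): a TIMER_IRQSAFE callback now runs with interrupts
disabled on RT too, so it must neither sleep nor take sleeping locks:

static struct timer_list my_timer;

static void my_fn(struct timer_list *t)
{
        /* interrupts are disabled here; keep it short, do not sleep */
}

/* in some init path: */
timer_setup(&my_timer, my_fn, TIMER_IRQSAFE);
mod_timer(&my_timer, jiffies + HZ);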

Signed-off-by: Sebastian Andrzej Siewior <[email protected]>
Signed-off-by: Daniel Wagner <[email protected]>
---
kernel/time/timer.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/kernel/time/timer.c b/kernel/time/timer.c
index a2be2277506d..3e2c0bd03004 100644
--- a/kernel/time/timer.c
+++ b/kernel/time/timer.c
@@ -1397,8 +1397,7 @@ static void expire_timers(struct timer_base *base, struct hlist_head *head)

fn = timer->function;

- if (!IS_ENABLED(CONFIG_PREEMPT_RT_FULL) &&
- timer->flags & TIMER_IRQSAFE) {
+ if (timer->flags & TIMER_IRQSAFE) {
raw_spin_unlock(&base->lock);
call_timer_fn(timer, fn);
base->running_timer = NULL;
--
2.38.0

2022-10-24 10:58:48

by Daniel Wagner

Subject: [PATCH RT 8/9] workqueue: Use rcuwait for wq_manager_wait

From: Sebastian Andrzej Siewior <[email protected]>

v4.19.255-rt114-rc1 stable review patch.
If anyone has any objections, please let me know.

-----------


Upstream commit d8bb65ab70f702531aaaa11d9710f9450078e295

The workqueue code has its internal spinlock (pool::lock) and also
implicit spinlock usage in the wq_manager waitqueue. These spinlocks
are converted to 'sleeping' spinlocks on an RT kernel.

Workqueue functions can be invoked from contexts which are truly atomic
even on a PREEMPT_RT enabled kernel. Taking sleeping locks from such
contexts is forbidden.

pool::lock can be converted to a raw spinlock as the lock hold times
are short. But the workqueue manager waitqueue is handled inside of
pool::lock held regions, which again violates the lock nesting rules
of raw and regular spinlocks.

The manager waitqueue has no special requirements like custom wakeup
callbacks or mass wakeups. While it does not use exclusive wait mode
explicitly, there is no strict requirement to queue the waiters in a
particular order as there is only one waiter at a time.

This allows replacing the waitqueue with rcuwait, which solves the
locking problem because rcuwait relies on existing locking.
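
As a minimal sketch of the rcuwait pattern (illustrative names, not
part of this patch); the wait condition is evaluated under the
caller's own locking, so rcuwait itself needs no internal lock:

static struct rcuwait my_wait = __RCUWAIT_INITIALIZER(my_wait);
static bool my_done;

/* waiter */
rcuwait_wait_event(&my_wait, READ_ONCE(my_done), TASK_UNINTERRUPTIBLE);

/* waker */
WRITE_ONCE(my_done, true);
rcuwait_wake_up(&my_wait);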

Signed-off-by: Sebastian Andrzej Siewior <[email protected]>
Signed-off-by: Tejun Heo <[email protected]>
Signed-off-by: Sebastian Andrzej Siewior <[email protected]>
[wagi: Updated context as v4.19-rt was using swait]
Signed-off-by: Daniel Wagner <[email protected]>
---
kernel/workqueue.c | 24 ++++++++++++++++++++----
1 file changed, 20 insertions(+), 4 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index d97c2ad8dc08..a3777fe1e224 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -50,6 +50,7 @@
#include <linux/sched/isolation.h>
#include <linux/nmi.h>
#include <linux/kvm_para.h>
+#include <linux/rcuwait.h>

#include "workqueue_internal.h"

@@ -299,7 +300,8 @@ static struct workqueue_attrs *wq_update_unbound_numa_attrs_buf;
static DEFINE_MUTEX(wq_pool_mutex); /* protects pools and workqueues list */
static DEFINE_MUTEX(wq_pool_attach_mutex); /* protects worker attach/detach */
static DEFINE_RAW_SPINLOCK(wq_mayday_lock); /* protects wq->maydays list */
-static DECLARE_SWAIT_QUEUE_HEAD(wq_manager_wait); /* wait for manager to go away */
+/* wait for manager to go away */
+static struct rcuwait manager_wait = __RCUWAIT_INITIALIZER(manager_wait);

static LIST_HEAD(workqueues); /* PR: list of all workqueues */
static bool workqueue_freezing; /* PL: have wqs started freezing? */
@@ -2023,7 +2025,7 @@ static bool manage_workers(struct worker *worker)

pool->manager = NULL;
pool->flags &= ~POOL_MANAGER_ACTIVE;
- swake_up_one(&wq_manager_wait);
+ rcuwait_wake_up(&manager_wait);
return true;
}

@@ -3344,6 +3346,18 @@ static void rcu_free_pool(struct rcu_head *rcu)
kfree(pool);
}

+/* This returns with the lock held on success (pool manager is inactive). */
+static bool wq_manager_inactive(struct worker_pool *pool)
+{
+ raw_spin_lock_irq(&pool->lock);
+
+ if (pool->flags & POOL_MANAGER_ACTIVE) {
+ raw_spin_unlock_irq(&pool->lock);
+ return false;
+ }
+ return true;
+}
+
/**
* put_unbound_pool - put a worker_pool
* @pool: worker_pool to put
@@ -3379,10 +3393,12 @@ static void put_unbound_pool(struct worker_pool *pool)
* Become the manager and destroy all workers. This prevents
* @pool's workers from blocking on attach_mutex. We're the last
* manager and @pool gets freed with the flag set.
+ * Because of how wq_manager_inactive() works, we will hold the
+ * spinlock after a successful wait.
*/
raw_spin_lock_irq(&pool->lock);
- swait_event_lock_irq(wq_manager_wait,
- !(pool->flags & POOL_MANAGER_ACTIVE), pool->lock);
+ rcuwait_wait_event(&manager_wait, wq_manager_inactive(pool),
+ TASK_UNINTERRUPTIBLE);
pool->flags |= POOL_MANAGER_ACTIVE;

while ((worker = first_idle_worker(pool)))
--
2.38.0

2022-10-24 10:59:11

by Daniel Wagner

Subject: [PATCH RT 1/9] Revert "random: Use local locks for crng context access"

v4.19.255-rt114-rc1 stable review patch.
If anyone has any objections, please let me know.

-----------


This reverts commit af5469c6f4f85f60f3ecc9bd541adfb6bdbeaff2.
---
drivers/char/random.c | 11 ++++-------
1 file changed, 4 insertions(+), 7 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index c06705a32246..2be38780a7f7 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -53,7 +53,6 @@
#include <linux/uaccess.h>
#include <linux/siphash.h>
#include <linux/uio.h>
-#include <linux/locallock.h>
#include <crypto/chacha20.h>
#include <crypto/blake2s.h>
#include <asm/processor.h>
@@ -235,7 +234,6 @@ struct crng {
static DEFINE_PER_CPU(struct crng, crngs) = {
.generation = ULONG_MAX
};
-DEFINE_LOCAL_IRQ_LOCK(crngs_lock);

/* Used by crng_reseed() and crng_make_state() to extract a new seed from the input pool. */
static void extract_entropy(void *buf, size_t len);
@@ -364,7 +362,7 @@ static void crng_make_state(u32 chacha_state[CHACHA20_BLOCK_SIZE / sizeof(u32)],
if (unlikely(crng_has_old_seed()))
crng_reseed();

- local_lock_irqsave(crngs_lock, flags);
+ local_irq_save(flags);
crng = raw_cpu_ptr(&crngs);

/*
@@ -389,7 +387,7 @@ static void crng_make_state(u32 chacha_state[CHACHA20_BLOCK_SIZE / sizeof(u32)],
* should wind up here immediately.
*/
crng_fast_key_erasure(crng->key, chacha_state, random_data, random_data_len);
- local_unlock_irqrestore(crngs_lock, flags);
+ local_irq_restore(flags);
}

static void _get_random_bytes(void *buf, size_t len)
@@ -514,7 +512,6 @@ struct batch_ ##type { \
static DEFINE_PER_CPU(struct batch_ ##type, batched_entropy_ ##type) = { \
.position = UINT_MAX \
}; \
-static DEFINE_LOCAL_IRQ_LOCK(batched_entropy_lock_ ##type); \
\
type get_random_ ##type(void) \
{ \
@@ -530,7 +527,7 @@ type get_random_ ##type(void) \
return ret; \
} \
\
- local_lock_irqsave(batched_entropy_lock_ ##type, flags); \
+ local_irq_save(flags); \
batch = raw_cpu_ptr(&batched_entropy_##type); \
\
next_gen = READ_ONCE(base_crng.generation); \
@@ -544,7 +541,7 @@ type get_random_ ##type(void) \
ret = batch->entropy[batch->position]; \
batch->entropy[batch->position] = 0; \
++batch->position; \
- local_unlock_irqrestore(batched_entropy_lock_ ##type, flags); \
+ local_irq_restore(flags); \
return ret; \
} \
EXPORT_SYMBOL(get_random_ ##type);
--
2.38.0

2022-10-24 11:00:16

by Daniel Wagner

Subject: [PATCH RT 7/9] rcu: Update rcuwait

v4.19.255-rt114-rc1 stable review patch.
If anyone has any objections, please let me know.

-----------


This is an all-in-one commit backporting the following updates for
rcuwait (a short usage sketch follows the list):
- 03f4b48edae7 ("rcuwait: Annotate task_struct with __rcu")
- 191a43be61d6 ("rcuwait: Introduce rcuwait_active()")
- 5c21f7b322cb ("rcuwait: Introduce prepare_to and finish_rcuwait")
- 80fbaf1c3f29 ("rcuwait: Add @state argument to rcuwait_wait_event()")
- 9d9a6ebfea32 ("rcuwait: Let rcuwait_wake_up() return whether or not a task was awoken")
- 58d4292bd037 ("rcu: Uninline multi-use function: finish_rcuwait()")
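
As a minimal usage sketch (hypothetical "w" and "my_cond", not part of
this patch): after this update, rcuwait_wait_event() takes the task
state as a third argument and returns -EINTR when an interruptible
sleep is cut short by a signal:

int err;

err = rcuwait_wait_event(&w, READ_ONCE(my_cond), TASK_INTERRUPTIBLE);
if (err == -EINTR)
        pr_debug("wait was interrupted by a signal\n");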

Signed-off-by: Daniel Wagner <[email protected]>
---
include/linux/rcuwait.h | 42 +++++++++++++++++++++++++++--------
kernel/exit.c | 7 ++++--
kernel/locking/percpu-rwsem.c | 2 +-
kernel/rcu/update.c | 8 +++++++
4 files changed, 47 insertions(+), 12 deletions(-)

diff --git a/include/linux/rcuwait.h b/include/linux/rcuwait.h
index 90bfa3279a01..4fe9ecd56aac 100644
--- a/include/linux/rcuwait.h
+++ b/include/linux/rcuwait.h
@@ -3,6 +3,7 @@
#define _LINUX_RCUWAIT_H_

#include <linux/rcupdate.h>
+#include <linux/sched/signal.h>

/*
* rcuwait provides a way of blocking and waking up a single
@@ -18,7 +19,7 @@
* awoken.
*/
struct rcuwait {
- struct task_struct *task;
+ struct task_struct __rcu *task;
};

#define __RCUWAIT_INITIALIZER(name) \
@@ -29,14 +30,33 @@ static inline void rcuwait_init(struct rcuwait *w)
w->task = NULL;
}

-extern void rcuwait_wake_up(struct rcuwait *w);
+extern int rcuwait_wake_up(struct rcuwait *w);
+
+/*
+ * Note: this provides no serialization and, just as with waitqueues,
+ * requires care to estimate as to whether or not the wait is active.
+ */
+static inline int rcuwait_active(struct rcuwait *w)
+{
+ return !!rcu_access_pointer(w->task);
+}

/*
* The caller is responsible for locking around rcuwait_wait_event(),
- * such that writes to @task are properly serialized.
+ * and [prepare_to/finish]_rcuwait() such that writes to @task are
+ * properly serialized.
*/
-#define rcuwait_wait_event(w, condition) \
+
+static inline void prepare_to_rcuwait(struct rcuwait *w)
+{
+ rcu_assign_pointer(w->task, current);
+}
+
+extern void finish_rcuwait(struct rcuwait *w);
+
+#define rcuwait_wait_event(w, condition, state) \
({ \
+ int __ret = 0; \
/* \
* Complain if we are called after do_exit()/exit_notify(), \
* as we cannot rely on the rcu critical region for the \
@@ -44,21 +64,25 @@ extern void rcuwait_wake_up(struct rcuwait *w);
*/ \
WARN_ON(current->exit_state); \
\
- rcu_assign_pointer((w)->task, current); \
+ prepare_to_rcuwait(w); \
for (;;) { \
/* \
* Implicit barrier (A) pairs with (B) in \
* rcuwait_wake_up(). \
*/ \
- set_current_state(TASK_UNINTERRUPTIBLE); \
+ set_current_state(state); \
if (condition) \
break; \
\
+ if (signal_pending_state(state, current)) { \
+ __ret = -EINTR; \
+ break; \
+ } \
+ \
schedule(); \
} \
- \
- WRITE_ONCE((w)->task, NULL); \
- __set_current_state(TASK_RUNNING); \
+ finish_rcuwait(w); \
+ __ret; \
})

#endif /* _LINUX_RCUWAIT_H_ */
diff --git a/kernel/exit.c b/kernel/exit.c
index 2a414fc71b87..cf68896a94fa 100644
--- a/kernel/exit.c
+++ b/kernel/exit.c
@@ -291,8 +291,9 @@ struct task_struct *task_rcu_dereference(struct task_struct **ptask)
return task;
}

-void rcuwait_wake_up(struct rcuwait *w)
+int rcuwait_wake_up(struct rcuwait *w)
{
+ int ret = 0;
struct task_struct *task;

rcu_read_lock();
@@ -316,8 +317,10 @@ void rcuwait_wake_up(struct rcuwait *w)
*/
task = rcu_dereference(w->task);
if (task)
- wake_up_process(task);
+ ret = wake_up_process(task);
rcu_read_unlock();
+
+ return ret;
}

/*
diff --git a/kernel/locking/percpu-rwsem.c b/kernel/locking/percpu-rwsem.c
index 883cf1b92d90..41787e80dbde 100644
--- a/kernel/locking/percpu-rwsem.c
+++ b/kernel/locking/percpu-rwsem.c
@@ -159,7 +159,7 @@ void percpu_down_write(struct percpu_rw_semaphore *sem)
*/

/* Wait for all now active readers to complete. */
- rcuwait_wait_event(&sem->writer, readers_active_check(sem));
+ rcuwait_wait_event(&sem->writer, readers_active_check(sem), TASK_UNINTERRUPTIBLE);
}
EXPORT_SYMBOL_GPL(percpu_down_write);

diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c
index ed75addd3ccd..4b2ce6bb94a4 100644
--- a/kernel/rcu/update.c
+++ b/kernel/rcu/update.c
@@ -53,6 +53,7 @@
#include <linux/rcupdate_wait.h>
#include <linux/sched/isolation.h>
#include <linux/kprobes.h>
+#include <linux/rcuwait.h>

#define CREATE_TRACE_POINTS

@@ -375,6 +376,13 @@ void __wait_rcu_gp(bool checktiny, int n, call_rcu_func_t *crcu_array,
}
EXPORT_SYMBOL_GPL(__wait_rcu_gp);

+void finish_rcuwait(struct rcuwait *w)
+{
+ rcu_assign_pointer(w->task, NULL);
+ __set_current_state(TASK_RUNNING);
+}
+EXPORT_SYMBOL_GPL(finish_rcuwait);
+
#ifdef CONFIG_DEBUG_OBJECTS_RCU_HEAD
void init_rcu_head(struct rcu_head *head)
{
--
2.38.0

2022-10-24 11:00:20

by Daniel Wagner

Subject: [PATCH RT 2/9] random: Bring back the local_locks

From: Sebastian Andrzej Siewior <[email protected]>

v4.19.255-rt114-rc1 stable review patch.
If anyone has any objections, please let me know.

-----------


As part of the backports, the random code lost its local_lock_t type
and the whole operation became a local_irq_{disable|enable}() simply
because the older kernel did not provide those primitives.

RT as of v4.9 has a slightly different variant of local_locks.
Replace the local_irq_*() operations with matching local_lock_irq*()
operations, which were part of commit
77760fd7f7ae3 ("random: remove batched entropy locking").
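
As a minimal sketch of the semantics (pattern as used in this patch):
on RT, local_lock_irqsave() takes a per-CPU sleeping spinlock instead
of hard-disabling interrupts, while on !RT it falls back to plain
local_irq_save():

unsigned long flags;

local_lock_irqsave(crngs.lock, flags);
/* ... access this CPU's crng state ... */
local_unlock_irqrestore(crngs.lock, flags);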

Signed-off-by: Sebastian Andrzej Siewior <[email protected]>
Link: https://lore.kernel.org/all/[email protected]/
Signed-off-by: Daniel Wagner <[email protected]>
---
drivers/char/random.c | 15 ++++++++++-----
1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 2be38780a7f7..0fd0462054bd 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -53,6 +53,7 @@
#include <linux/uaccess.h>
#include <linux/siphash.h>
#include <linux/uio.h>
+#include <linux/locallock.h>
#include <crypto/chacha20.h>
#include <crypto/blake2s.h>
#include <asm/processor.h>
@@ -229,10 +230,12 @@ static struct {
struct crng {
u8 key[CHACHA20_KEY_SIZE];
unsigned long generation;
+ struct local_irq_lock lock;
};

static DEFINE_PER_CPU(struct crng, crngs) = {
- .generation = ULONG_MAX
+ .generation = ULONG_MAX,
+ .lock.lock = __SPIN_LOCK_UNLOCKED(crngs.lock.lock),
};

/* Used by crng_reseed() and crng_make_state() to extract a new seed from the input pool. */
@@ -362,7 +365,7 @@ static void crng_make_state(u32 chacha_state[CHACHA20_BLOCK_SIZE / sizeof(u32)],
if (unlikely(crng_has_old_seed()))
crng_reseed();

- local_irq_save(flags);
+ local_lock_irqsave(crngs.lock, flags);
crng = raw_cpu_ptr(&crngs);

/*
@@ -387,7 +390,7 @@ static void crng_make_state(u32 chacha_state[CHACHA20_BLOCK_SIZE / sizeof(u32)],
* should wind up here immediately.
*/
crng_fast_key_erasure(crng->key, chacha_state, random_data, random_data_len);
- local_irq_restore(flags);
+ local_unlock_irqrestore(crngs.lock, flags);
}

static void _get_random_bytes(void *buf, size_t len)
@@ -505,11 +508,13 @@ struct batch_ ##type { \
* formula of (integer_blocks + 0.5) * CHACHA20_BLOCK_SIZE. \
*/ \
type entropy[CHACHA20_BLOCK_SIZE * 3 / (2 * sizeof(type))]; \
+ struct local_irq_lock lock; \
unsigned long generation; \
unsigned int position; \
}; \
\
static DEFINE_PER_CPU(struct batch_ ##type, batched_entropy_ ##type) = { \
+ .lock.lock = __SPIN_LOCK_UNLOCKED(batched_entropy_ ##type.lock.lock), \
.position = UINT_MAX \
}; \
\
@@ -527,7 +532,7 @@ type get_random_ ##type(void) \
return ret; \
} \
\
- local_irq_save(flags); \
+ local_lock_irqsave(batched_entropy_ ##type.lock, flags); \
batch = raw_cpu_ptr(&batched_entropy_##type); \
\
next_gen = READ_ONCE(base_crng.generation); \
@@ -541,7 +546,7 @@ type get_random_ ##type(void) \
ret = batch->entropy[batch->position]; \
batch->entropy[batch->position] = 0; \
++batch->position; \
- local_irq_restore(flags); \
+ local_unlock_irqrestore(batched_entropy_ ##type.lock, flags); \
return ret; \
} \
EXPORT_SYMBOL(get_random_ ##type);
--
2.38.0

2022-10-24 11:14:19

by Daniel Wagner

Subject: [PATCH RT 6/9] timers: Don't block on ->expiry_lock for TIMER_IRQSAFE timers

From: Sebastian Andrzej Siewior <[email protected]>

v4.19.255-rt114-rc1 stable review patch.
If anyone has any objections, please let me know.

-----------


Upstream commit c725dafc95f1b37027840aaeaa8b7e4e9cd20516

PREEMPT_RT does not spin and wait until a running timer completes its
callback but instead it blocks on a sleeping lock to prevent a livelock in
the case that the task waiting for the callback completion preempted the
callback.

This cannot be done for timers flagged with TIMER_IRQSAFE. These timers can
be canceled from an interrupt disabled context even on RT kernels.

The expiry callback of such timers is invoked with interrupts disabled so
there is no need to use the expiry lock mechanism because obviously the
callback cannot be preempted even on RT kernels.

Do not use the timer_base::expiry_lock mechanism when waiting for a running
callback to complete if the timer is flagged with TIMER_IRQSAFE.

Also add a lockdep assertion for RT kernels to validate that the expiry
lock mechanism is always invoked in preemptible context.
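
As a minimal sketch of the resulting rules (hypothetical timers and
variables, not part of this patch): an IRQSAFE timer may be canceled
with interrupts disabled even on RT, while canceling a regular timer
may now sleep on the expiry lock:

timer_setup(&my_irqsafe_timer, my_fn, TIMER_IRQSAFE);

local_irq_save(flags);
del_timer_sync(&my_irqsafe_timer);      /* spins, never blocks */
local_irq_restore(flags);

del_timer_sync(&my_plain_timer);        /* may sleep on an RT kernel */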

Reported-by: Mike Galbraith <[email protected]>
Signed-off-by: Sebastian Andrzej Siewior <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
[bigeasy: The logic in v4.19 is slightly different but the outcome is the
same, as we must not sleep while waiting for the irqsafe timer to
complete. The IRQSAFE timer cannot be preempted.
The "lockdep annotation" is not available and has been replaced with
might_sleep()]
Signed-off-by: Sebastian Andrzej Siewior <[email protected]>
Signed-off-by: Daniel Wagner <[email protected]>
---
kernel/time/timer.c | 9 +++++++++
1 file changed, 9 insertions(+)

diff --git a/kernel/time/timer.c b/kernel/time/timer.c
index 3e2c0bd03004..0a6d60b3e67c 100644
--- a/kernel/time/timer.c
+++ b/kernel/time/timer.c
@@ -1272,6 +1272,9 @@ static int __del_timer_sync(struct timer_list *timer)
if (ret >= 0)
return ret;

+ if (READ_ONCE(timer->flags) & TIMER_IRQSAFE)
+ continue;
+
/*
* When accessing the lock, timers of base are no longer expired
* and so timer is no longer running.
@@ -1336,6 +1339,12 @@ int del_timer_sync(struct timer_list *timer)
* could lead to deadlock.
*/
WARN_ON(in_irq() && !(timer->flags & TIMER_IRQSAFE));
+ /*
+ * Must be able to sleep on PREEMPT_RT because of the slowpath in
+ * __del_timer_sync().
+ */
+ if (IS_ENABLED(CONFIG_PREEMPT_RT) && !(timer->flags & TIMER_IRQSAFE))
+ might_sleep();

return __del_timer_sync(timer);
}
--
2.38.0

2022-10-24 11:15:41

by Daniel Wagner

Subject: [PATCH RT 9/9] Linux 4.19.255-rt114-rc1

v4.19.255-rt114-rc1 stable review patch.
If anyone has any objections, please let me know.

-----------


Signed-off-by: Daniel Wagner <[email protected]>
---
localversion-rt | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/localversion-rt b/localversion-rt
index fdcd9167ca0b..2ae39973e9f8 100644
--- a/localversion-rt
+++ b/localversion-rt
@@ -1 +1 @@
--rt113
+-rt114-rc1
--
2.38.0

2022-10-24 11:33:54

by Daniel Wagner

Subject: Re: [PATCH RT 0/9] Linux v4.19.255-rt114-rc1

On Mon, Oct 24, 2022 at 12:44:16PM +0200, Daniel Wagner wrote:
> Dear RT Folks,
>
> This is the RT stable review cycle of patch 4.19.255-rt114-rc1.
>
> Please scream at me if I messed something up. Please test the patches
> too.
>
> The -rc release will be uploaded to kernel.org and will be deleted
> when the final release is out. This is just a review release (or
> release candidate).
>
> The pre-releases will not be pushed to the git repository, only the
> final release is.
>
> If all goes well, this patch will be converted to the next main
> release on 2022-10-31.

The timer changes seem not to be correct, though:

[ 24.674424] BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:974
[ 24.674426] in_atomic(): 0, irqs_disabled(): 1, pid: 23, name: ktimersoftd/1
[ 25.730421] BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:974
[ 25.730424] in_atomic(): 0, irqs_disabled(): 1, pid: 11, name: ktimersoftd/0

I get those when running any of the rttests. I suppose I am missing
an additional fix:

- if (!IS_ENABLED(CONFIG_PREEMPT_RT_FULL) &&
- timer->flags & TIMER_IRQSAFE) {
+ if (timer->flags & TIMER_IRQSAFE) {
raw_spin_unlock(&base->lock);
call_timer_fn(timer, fn);
base->running_timer = NULL;


now runs fn callbacks for TIMER_IRQSAFE timers with interrupts
disabled, which then triggers:

+ if (IS_ENABLED(CONFIG_PREEMPT_RT) && !(timer->flags & TIMER_IRQSAFE))
+ might_sleep();

in del_timer_sync(). But this is just a guess.

Daniel