2018-08-02 22:27:04

by Tom Zanussi

Subject: [PATCH 00/15][ANNOUNCE] 3.18.117-rt105-rc1

Hello RT Folks!

This is the RT stable review cycle of patch 3.18.117-rt105-rc1.

In addition to the applicable stable-rt patches not yet in the 3.18-rt
kernel, this set includes some patches needed to fix cross-compilation
build failures.

Please scream at me if I messed something up. Please test the patches
too.

The -rc release will be uploaded to kernel.org and will be deleted
when the final release is out. This is just a review release (or
release candidate).

The pre-releases will not be pushed to the git repository; only the
final release will be.

If all goes well, this patch will be converted to the next main
release on 8/9/2018.

To build 3.18.117-rt105-rc1 directly, the following patches should be applied:

http://www.kernel.org/pub/linux/kernel/v3.x/linux-3.18.tar.xz

http://www.kernel.org/pub/linux/kernel/v3.x/patch-3.18.117.xz

http://www.kernel.org/pub/linux/kernel/projects/rt/3.18/patch-3.18.117-rt105-rc1.patch.xz
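
For example, assuming the three files above have been downloaded into
the current directory, the tree can be put together roughly like this
(just a sketch, adjust paths as needed):

$ tar xf linux-3.18.tar.xz
$ cd linux-3.18
$ xzcat ../patch-3.18.117.xz | patch -p1                  # stable update
$ xzcat ../patch-3.18.117-rt105-rc1.patch.xz | patch -p1  # RT patch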


You can also build from 3.18.117-rt104 by applying the incremental patch:

http://www.kernel.org/pub/linux/kernel/projects/rt/3.18/incr/patch-3.18.117-rt104-rt105-rc1.patch.xz
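
For example, from a tree that is already at 3.18.117-rt104 (again a
sketch, assuming the incremental patch sits one directory up):

$ xzcat ../patch-3.18.117-rt104-rt105-rc1.patch.xz | patch -p1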

Enjoy!

Tom


Anton Blanchard (1):
powerpc/vdso64: Use double word compare on pointers

Julia Cartwright (2):
seqlock: provide the same ordering semantics as mainline
squashfs: make use of local lock in multi_cpu decompressor

Krzysztof Mazur (1):
um: Use POSIX ucontext_t instead of struct ucontext

Paul Gortmaker (1):
powerpc: ps3/device-init.c - adapt to completions using swait vs wait

Peter Zijlstra (1):
sched: Remove TASK_ALL

Philipp Schrader (1):
tracing: Fix rcu splat from idle CPU on boot

Sebastian Andrzej Siewior (5):
arm*: disable NEON in kernel mode
posix-timers: move the rcu head out of the union
alarmtimer: Prevent live lock in alarm_cancel()
locking: add types.h
net: use task_struct instead of CPU number as the queue owner on -RT

Tom Zanussi (3):
Revert "fs, jbd: pull your plug when waiting for space"
s390/mm: Fix missed tsk->pagefault_disabled conversion to
pagefault_disable()
Linux 3.18.117-rt105-rc1

arch/arm/Kconfig | 2 +-
arch/arm64/crypto/Kconfig | 14 ++++----
arch/powerpc/kernel/vdso64/datapage.S | 2 +-
arch/powerpc/kernel/vdso64/gettimeofday.S | 2 +-
arch/powerpc/platforms/ps3/device-init.c | 2 +-
arch/s390/mm/fault.c | 2 +-
arch/um/os-Linux/signal.c | 2 +-
arch/x86/um/stub_segv.c | 2 +-
fs/jbd/checkpoint.c | 2 --
fs/squashfs/decompressor_multi_percpu.c | 16 ++++++---
include/linux/netdevice.h | 54 +++++++++++++++++++++++++++----
include/linux/posix-timers.h | 2 +-
include/linux/sched.h | 1 -
include/linux/seqlock.h | 1 +
include/linux/spinlock_types_raw.h | 2 ++
kernel/time/alarmtimer.c | 2 +-
kernel/time/posix-timers.c | 4 +--
kernel/trace/trace_irqsoff.c | 4 +--
localversion-rt | 2 +-
net/core/dev.c | 6 +++-
20 files changed, 89 insertions(+), 35 deletions(-)

--
2.14.1



2018-08-02 22:27:07

by Tom Zanussi

Subject: [PATCH 01/15] sched: Remove TASK_ALL

From: Peter Zijlstra <[email protected]>

It's unused:

$ git grep "\<TASK_ALL\>" | wc -l
1

And dangerous, kill the bugger.

Cc: [email protected]
Acked-by: Thomas Gleixner <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Signed-off-by: Sebastian Andrzej Siewior <[email protected]>
(cherry picked from commit ffb940123ed607f1cba0d1f7c281ca92feac9733)
Signed-off-by: Tom Zanussi <[email protected]>

Conflicts:
include/linux/sched.h
---
include/linux/sched.h | 1 -
1 file changed, 1 deletion(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index cc7349a2c0cf..fbe198d733c2 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -228,7 +228,6 @@ extern char ___assert_task_state[1 - 2*!!(

/* Convenience macros for the sake of wake_up */
#define TASK_NORMAL (TASK_INTERRUPTIBLE | TASK_UNINTERRUPTIBLE)
-#define TASK_ALL (TASK_NORMAL | __TASK_STOPPED | __TASK_TRACED)

/* get_task_state() */
#define TASK_REPORT (TASK_RUNNING | TASK_INTERRUPTIBLE | \
--
2.14.1


2018-08-02 22:27:42

by Tom Zanussi

Subject: [PATCH 02/15] arm*: disable NEON in kernel mode

From: Sebastian Andrzej Siewior <[email protected]>

NEON in kernel mode is used by the crypto algorithms and raid6 code.
While the raid6 code looks okay, the crypto algorithms do not: NEON
is enabled on first invocation and may allocate/free/map memory before
the NEON mode is disabled again.
This needs to be fixed before NEON in kernel mode can be enabled on RT.
On ARM, NEON in kernel mode can simply be disabled. On ARM64 it needs to
stay enabled due to possible EFI callbacks, so each algorithm is
disabled individually instead.

Cc: [email protected]
Signed-off-by: Sebastian Andrzej Siewior <[email protected]>
Signed-off-by: Daniel Wagner <[email protected]>
(cherry picked from commit 728b41d8e7b5307b52bdbffcb492bc8345a4e38a)
Signed-off-by: Tom Zanussi <[email protected]>
---
arch/arm/Kconfig | 2 +-
arch/arm64/crypto/Kconfig | 14 +++++++-------
2 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index d635abf51063..12b1e7a5f103 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -2122,7 +2122,7 @@ config NEON

config KERNEL_MODE_NEON
bool "Support for NEON in kernel mode"
- depends on NEON && AEABI
+ depends on NEON && AEABI && !PREEMPT_RT_BASE
help
Say Y to include support for NEON in kernel mode.

diff --git a/arch/arm64/crypto/Kconfig b/arch/arm64/crypto/Kconfig
index 5562652c5316..003fe0718117 100644
--- a/arch/arm64/crypto/Kconfig
+++ b/arch/arm64/crypto/Kconfig
@@ -10,42 +10,42 @@ if ARM64_CRYPTO

config CRYPTO_SHA1_ARM64_CE
tristate "SHA-1 digest algorithm (ARMv8 Crypto Extensions)"
- depends on ARM64 && KERNEL_MODE_NEON
+ depends on ARM64 && KERNEL_MODE_NEON && !PREEMPT_RT_BASE
select CRYPTO_HASH

config CRYPTO_SHA2_ARM64_CE
tristate "SHA-224/SHA-256 digest algorithm (ARMv8 Crypto Extensions)"
- depends on ARM64 && KERNEL_MODE_NEON
+ depends on ARM64 && KERNEL_MODE_NEON && !PREEMPT_RT_BASE
select CRYPTO_HASH

config CRYPTO_GHASH_ARM64_CE
tristate "GHASH (for GCM chaining mode) using ARMv8 Crypto Extensions"
- depends on ARM64 && KERNEL_MODE_NEON
+ depends on ARM64 && KERNEL_MODE_NEON && !PREEMPT_RT_BASE
select CRYPTO_HASH

config CRYPTO_AES_ARM64_CE
tristate "AES core cipher using ARMv8 Crypto Extensions"
- depends on ARM64 && KERNEL_MODE_NEON
+ depends on ARM64 && KERNEL_MODE_NEON && !PREEMPT_RT_BASE
select CRYPTO_ALGAPI
select CRYPTO_AES

config CRYPTO_AES_ARM64_CE_CCM
tristate "AES in CCM mode using ARMv8 Crypto Extensions"
- depends on ARM64 && KERNEL_MODE_NEON
+ depends on ARM64 && KERNEL_MODE_NEON && !PREEMPT_RT_BASE
select CRYPTO_ALGAPI
select CRYPTO_AES
select CRYPTO_AEAD

config CRYPTO_AES_ARM64_CE_BLK
tristate "AES in ECB/CBC/CTR/XTS modes using ARMv8 Crypto Extensions"
- depends on ARM64 && KERNEL_MODE_NEON
+ depends on ARM64 && KERNEL_MODE_NEON && !PREEMPT_RT_BASE
select CRYPTO_BLKCIPHER
select CRYPTO_AES
select CRYPTO_ABLK_HELPER

config CRYPTO_AES_ARM64_NEON_BLK
tristate "AES in ECB/CBC/CTR/XTS modes using NEON instructions"
- depends on ARM64 && KERNEL_MODE_NEON
+ depends on ARM64 && KERNEL_MODE_NEON && !PREEMPT_RT_BASE
select CRYPTO_BLKCIPHER
select CRYPTO_AES
select CRYPTO_ABLK_HELPER
--
2.14.1


2018-08-02 22:27:44

by Tom Zanussi

Subject: [PATCH 03/15] posix-timers: move the rcu head out of the union

From: Sebastian Andrzej Siewior <[email protected]>

On RT the timer can be preempted while running, so we wait for it to
complete with timer_wait_for_callback() (instead of busy looping). The
RCU read lock is held to ensure that the posix timer is not removed
while we wait on it.
If the timer is removed, the removal path invokes call_rcu() with a
pointer that is shared with the hrtimer because it is part of the same
union.
To avoid any possible side effects, move the rcu_head out of the union.

Cc: [email protected]
Signed-off-by: Sebastian Andrzej Siewior <[email protected]>
Signed-off-by: Daniel Wagner <[email protected]>
(cherry picked from commit 57f93c5f597fa32af860321c5bca34bc5ffe08e1)
Signed-off-by: Tom Zanussi <[email protected]>
---
include/linux/posix-timers.h | 2 +-
kernel/time/posix-timers.c | 4 ++--
2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/include/linux/posix-timers.h b/include/linux/posix-timers.h
index 907f3fd191ac..e2e43c61f6a1 100644
--- a/include/linux/posix-timers.h
+++ b/include/linux/posix-timers.h
@@ -92,8 +92,8 @@ struct k_itimer {
struct alarm alarmtimer;
ktime_t interval;
} alarm;
- struct rcu_head rcu;
} it;
+ struct rcu_head rcu;
};

struct k_clock {
diff --git a/kernel/time/posix-timers.c b/kernel/time/posix-timers.c
index b5e0ff549422..bdf91054e6c3 100644
--- a/kernel/time/posix-timers.c
+++ b/kernel/time/posix-timers.c
@@ -560,7 +560,7 @@ static struct k_itimer * alloc_posix_timer(void)

static void k_itimer_rcu_free(struct rcu_head *head)
{
- struct k_itimer *tmr = container_of(head, struct k_itimer, it.rcu);
+ struct k_itimer *tmr = container_of(head, struct k_itimer, rcu);

kmem_cache_free(posix_timers_cache, tmr);
}
@@ -577,7 +577,7 @@ static void release_posix_timer(struct k_itimer *tmr, int it_id_set)
}
put_pid(tmr->it_pid);
sigqueue_free(tmr->sigq);
- call_rcu(&tmr->it.rcu, k_itimer_rcu_free);
+ call_rcu(&tmr->rcu, k_itimer_rcu_free);
}

static struct k_clock *clockid_to_kclock(const clockid_t id)
--
2.14.1


2018-08-02 22:27:47

by Tom Zanussi

Subject: [PATCH 04/15] tracing: Fix rcu splat from idle CPU on boot

From: Philipp Schrader <[email protected]>

With PREEMPT_RT and most of the lockdep-related options enabled I
encountered this splat when booting our DRA7 evaluation module:

[ 0.055073]
[ 0.055076] ===============================
[ 0.055079] [ INFO: suspicious RCU usage. ]
[ 0.055084] 4.1.6+ #2 Not tainted
[ 0.055086] -------------------------------
[ 0.055090] include/trace/events/hist.h:31 suspicious
rcu_dereference_check() usage!
[ 0.055093]
[ 0.055093] other info that might help us debug this:
[ 0.055093]
[ 0.055097]
[ 0.055097] RCU used illegally from idle CPU!
[ 0.055097] rcu_scheduler_active = 1, debug_locks = 1
[ 0.055100] RCU used illegally from extended quiescent state!
[ 0.055104] no locks held by swapper/0/0.
[ 0.055106]
[ 0.055106] stack backtrace:
[ 0.055112] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.1.6+ #2
[ 0.055116] Hardware name: Generic DRA74X (Flattened Device Tree)
[ 0.055130] [<c00196b8>] (unwind_backtrace) from [<c001515c>]
(show_stack+0x20/0x24)
[ 0.055146] [<c001515c>] (show_stack) from [<c07bc408>]
(dump_stack+0x84/0xa0)
[ 0.055160] [<c07bc408>] (dump_stack) from [<c009bc38>]
(lockdep_rcu_suspicious+0xb0/0x110)
[ 0.055172] [<c009bc38>] (lockdep_rcu_suspicious) from [<c01246c4>]
(time_hardirqs_off+0x2b8/0x3c8)
[ 0.055184] [<c01246c4>] (time_hardirqs_off) from [<c009a218>]
(trace_hardirqs_off_caller+0x2c/0xf4)
[ 0.055194] [<c009a218>] (trace_hardirqs_off_caller) from
[<c009a2f4>] (trace_hardirqs_off+0x14/0x18)
[ 0.055204] [<c009a2f4>] (trace_hardirqs_off) from [<c00c7ecc>]
(rcu_idle_enter+0x78/0xcc)
[ 0.055213] [<c00c7ecc>] (rcu_idle_enter) from [<c0093eb0>]
(cpu_startup_entry+0x190/0x518)
[ 0.055222] [<c0093eb0>] (cpu_startup_entry) from [<c07b95b4>]
(rest_init+0x13c/0x17c)
[ 0.055231] [<c07b95b4>] (rest_init) from [<c0b32c74>]
(start_kernel+0x320/0x380)
[ 0.055238] [<c0b32c74>] (start_kernel) from [<8000807c>] (0x8000807c)

As per Steven Rostedt's suggestion I changed the trace_* calls to
trace_*_rcuidle calls. He pointed out that the tracepoints were being
triggered when RCU wasn't watching.

Signed-off-by: Thomas Gleixner <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: [email protected]
(cherry picked from commit ac56e4167d84ada099f2af0d1d53f4742d577ce9)
Signed-off-by: Tom Zanussi <[email protected]>
---
kernel/trace/trace_irqsoff.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/trace/trace_irqsoff.c b/kernel/trace/trace_irqsoff.c
index d1940b095d85..9cbc38722905 100644
--- a/kernel/trace/trace_irqsoff.c
+++ b/kernel/trace/trace_irqsoff.c
@@ -452,7 +452,7 @@ EXPORT_SYMBOL_GPL(stop_critical_timings);
#ifdef CONFIG_PROVE_LOCKING
void time_hardirqs_on(unsigned long a0, unsigned long a1)
{
- trace_preemptirqsoff_hist(IRQS_ON, 0);
+ trace_preemptirqsoff_hist_rcuidle(IRQS_ON, 0);
if (!preempt_trace() && irq_trace())
stop_critical_timing(a0, a1);
}
@@ -461,7 +461,7 @@ void time_hardirqs_off(unsigned long a0, unsigned long a1)
{
if (!preempt_trace() && irq_trace())
start_critical_timing(a0, a1);
- trace_preemptirqsoff_hist(IRQS_OFF, 1);
+ trace_preemptirqsoff_hist_rcuidle(IRQS_OFF, 1);
}

#else /* !CONFIG_PROVE_LOCKING */
--
2.14.1


2018-08-02 22:27:50

by Tom Zanussi

Subject: [PATCH 10/15] Revert "fs, jbd: pull your plug when waiting for space"

This reverts commit 3b5cf23e6b87a938522eb074baeb034e66dc9cb3.

Similar to commit d5bc2c7b2cc0 (Revert "fs: jbd2: pull your plug when
waiting for space"), according to Sebastian Siewior, this "is the same
thing but for ext3/jbd. The code was removed at some point so I did
not revert in my tree."

From the original commit message: "This was a duct-tape fix which
shouldn't be needed since commit 'locking/rt-mutex: fix deadlock in
device mapper / block-IO'."

Cc: [email protected]
Signed-off-by: Tom Zanussi <[email protected]>
---
fs/jbd/checkpoint.c | 2 --
1 file changed, 2 deletions(-)

diff --git a/fs/jbd/checkpoint.c b/fs/jbd/checkpoint.c
index 95debd71e5fa..08c03044abdd 100644
--- a/fs/jbd/checkpoint.c
+++ b/fs/jbd/checkpoint.c
@@ -129,8 +129,6 @@ void __log_wait_for_space(journal_t *journal)
if (journal->j_flags & JFS_ABORT)
return;
spin_unlock(&journal->j_state_lock);
- if (current->plug)
- io_schedule();
mutex_lock(&journal->j_checkpoint_mutex);

/*
--
2.14.1


2018-08-02 22:27:58

by Tom Zanussi

Subject: [PATCH 09/15] squashfs: make use of local lock in multi_cpu decompressor

From: Julia Cartwright <[email protected]>

Currently, the squashfs multi_cpu decompressor makes use of
get_cpu_ptr()/put_cpu_ptr(), which unconditionally disable preemption
during decompression.

Because the workload is distributed across CPUs, all CPUs can observe a
very high wakeup latency, which has been seen to be as much as 8000us.

Convert this decompressor to make use of a local lock, which allows the
decompressor to run with preemption enabled while still ensuring that
concurrent accesses to the percpu compressor data on the local CPU are
serialized.

Cc: [email protected]
Reported-by: Alexander Stein <[email protected]>
Tested-by: Alexander Stein <[email protected]>
Signed-off-by: Julia Cartwright <[email protected]>
Signed-off-by: Sebastian Andrzej Siewior <[email protected]>
(cherry picked from commit c160736542d7b3d67da32848d2f028b8e35730e5)
Signed-off-by: Tom Zanussi <[email protected]>
---
fs/squashfs/decompressor_multi_percpu.c | 16 ++++++++++++----
1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/fs/squashfs/decompressor_multi_percpu.c b/fs/squashfs/decompressor_multi_percpu.c
index 23a9c28ad8ea..6a73c4fa88e7 100644
--- a/fs/squashfs/decompressor_multi_percpu.c
+++ b/fs/squashfs/decompressor_multi_percpu.c
@@ -10,6 +10,7 @@
#include <linux/slab.h>
#include <linux/percpu.h>
#include <linux/buffer_head.h>
+#include <linux/locallock.h>

#include "squashfs_fs.h"
#include "squashfs_fs_sb.h"
@@ -25,6 +26,8 @@ struct squashfs_stream {
void *stream;
};

+static DEFINE_LOCAL_IRQ_LOCK(stream_lock);
+
void *squashfs_decompressor_create(struct squashfs_sb_info *msblk,
void *comp_opts)
{
@@ -79,10 +82,15 @@ int squashfs_decompress(struct squashfs_sb_info *msblk, struct buffer_head **bh,
{
struct squashfs_stream __percpu *percpu =
(struct squashfs_stream __percpu *) msblk->stream;
- struct squashfs_stream *stream = get_cpu_ptr(percpu);
- int res = msblk->decompressor->decompress(msblk, stream->stream, bh, b,
- offset, length, output);
- put_cpu_ptr(stream);
+ struct squashfs_stream *stream;
+ int res;
+
+ stream = get_locked_ptr(stream_lock, percpu);
+
+ res = msblk->decompressor->decompress(msblk, stream->stream, bh, b,
+ offset, length, output);
+
+ put_locked_ptr(stream_lock, stream);

if (res < 0)
ERROR("%s decompression failed, data probably corrupt\n",
--
2.14.1


2018-08-02 22:28:06

by Tom Zanussi

Subject: [PATCH 08/15] seqlock: provide the same ordering semantics as mainline

From: Julia Cartwright <[email protected]>

The mainline implementation of read_seqbegin() orders prior loads w.r.t.
the read-side critical section. Fix up the RT writer-boosting
implementation to provide the same guarantee.

Also, while we're here, update the usage of ACCESS_ONCE() to use
READ_ONCE().

Fixes: e69f15cf77c23 ("seqlock: Prevent rt starvation")
Cc: [email protected]
Signed-off-by: Julia Cartwright <[email protected]>
Signed-off-by: Sebastian Andrzej Siewior <[email protected]>
(cherry picked from commit afa4c06b89a3c0fb7784ff900ccd707bef519cb7)
Signed-off-by: Tom Zanussi <[email protected]>
---
include/linux/seqlock.h | 1 +
1 file changed, 1 insertion(+)

diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
index 4acd0e2fb5cb..efa234031230 100644
--- a/include/linux/seqlock.h
+++ b/include/linux/seqlock.h
@@ -338,6 +338,7 @@ repeat:
spin_unlock_wait(&sl->lock);
goto repeat;
}
+ smp_rmb();
return ret;
}
#endif
--
2.14.1


2018-08-02 22:28:21

by Tom Zanussi

Subject: [PATCH 05/15] alarmtimer: Prevent live lock in alarm_cancel()

From: Sebastian Andrzej Siewior <[email protected]>

If alarm_try_to_cancel() requires a retry, then depending on the
priority setting the retry loop might prevent the timer callback from
completing on RT. Prevent that by waiting for the callback to complete
on RT; there is no change for a non-RT kernel.

Cc: [email protected]
Signed-off-by: Sebastian Andrzej Siewior <[email protected]>
(cherry picked from commit 51e376c469bf05f32cb1ceb9e39d31bb92f1f6c8)
Signed-off-by: Tom Zanussi <[email protected]>
---
kernel/time/alarmtimer.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/time/alarmtimer.c b/kernel/time/alarmtimer.c
index 119847b93ba6..5a5e05fc92a3 100644
--- a/kernel/time/alarmtimer.c
+++ b/kernel/time/alarmtimer.c
@@ -395,7 +395,7 @@ int alarm_cancel(struct alarm *alarm)
int ret = alarm_try_to_cancel(alarm);
if (ret >= 0)
return ret;
- cpu_relax();
+ hrtimer_wait_for_timer(&alarm->timer);
}
}
EXPORT_SYMBOL_GPL(alarm_cancel);
--
2.14.1


2018-08-02 22:28:48

by Tom Zanussi

Subject: [PATCH 07/15] net: use task_struct instead of CPU number as the queue owner on -RT

From: Sebastian Andrzej Siewior <[email protected]>

In commit ("net: move xmit_recursion to per-task variable on -RT") the
recursion level was changed to be per-task since we can get preempted in
BH on -RT. The lock owner should consequently be recorded as the task
that holds the lock and not the CPU. Otherwise we trigger the "Dead loop
on virtual device" warning on SMP systems.

Cc: [email protected]
Reported-by: Kurt Kanzenbach <[email protected]>
Tested-by: Kurt Kanzenbach <[email protected]>
Signed-off-by: Sebastian Andrzej Siewior <[email protected]>
(cherry picked from commit 910142bad86ec1031c63b0b37575b2537ef5c27d)
Signed-off-by: Tom Zanussi <[email protected]>

Conflicts:
net/core/dev.c
---
include/linux/netdevice.h | 54 +++++++++++++++++++++++++++++++++++++++++------
net/core/dev.c | 6 +++++-
2 files changed, 53 insertions(+), 7 deletions(-)

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 2bb8ddaf641e..9cc578ba2037 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -556,7 +556,11 @@ struct netdev_queue {
* write mostly part
*/
spinlock_t _xmit_lock ____cacheline_aligned_in_smp;
+#ifdef CONFIG_PREEMPT_RT_FULL
+ struct task_struct *xmit_lock_owner;
+#else
int xmit_lock_owner;
+#endif
/*
* please use this field instead of dev->trans_start
*/
@@ -3080,41 +3084,79 @@ static inline u32 netif_msg_init(int debug_value, int default_msg_enable_bits)
return (1 << debug_value) - 1;
}

+#ifdef CONFIG_PREEMPT_RT_FULL
+static inline void netdev_queue_set_owner(struct netdev_queue *txq, int cpu)
+{
+ txq->xmit_lock_owner = current;
+}
+
+static inline void netdev_queue_clear_owner(struct netdev_queue *txq)
+{
+ txq->xmit_lock_owner = NULL;
+}
+
+static inline bool netdev_queue_has_owner(struct netdev_queue *txq)
+{
+ if (txq->xmit_lock_owner != NULL)
+ return true;
+ return false;
+}
+
+#else
+
+static inline void netdev_queue_set_owner(struct netdev_queue *txq, int cpu)
+{
+ txq->xmit_lock_owner = cpu;
+}
+
+static inline void netdev_queue_clear_owner(struct netdev_queue *txq)
+{
+ txq->xmit_lock_owner = -1;
+}
+
+static inline bool netdev_queue_has_owner(struct netdev_queue *txq)
+{
+ if (txq->xmit_lock_owner != -1)
+ return true;
+ return false;
+}
+#endif
+
static inline void __netif_tx_lock(struct netdev_queue *txq, int cpu)
{
spin_lock(&txq->_xmit_lock);
- txq->xmit_lock_owner = cpu;
+ netdev_queue_set_owner(txq, cpu);
}

static inline void __netif_tx_lock_bh(struct netdev_queue *txq)
{
spin_lock_bh(&txq->_xmit_lock);
- txq->xmit_lock_owner = smp_processor_id();
+ netdev_queue_set_owner(txq, smp_processor_id());
}

static inline bool __netif_tx_trylock(struct netdev_queue *txq)
{
bool ok = spin_trylock(&txq->_xmit_lock);
if (likely(ok))
- txq->xmit_lock_owner = smp_processor_id();
+ netdev_queue_set_owner(txq, smp_processor_id());
return ok;
}

static inline void __netif_tx_unlock(struct netdev_queue *txq)
{
- txq->xmit_lock_owner = -1;
+ netdev_queue_clear_owner(txq);
spin_unlock(&txq->_xmit_lock);
}

static inline void __netif_tx_unlock_bh(struct netdev_queue *txq)
{
- txq->xmit_lock_owner = -1;
+ netdev_queue_clear_owner(txq);
spin_unlock_bh(&txq->_xmit_lock);
}

static inline void txq_trans_update(struct netdev_queue *txq)
{
- if (txq->xmit_lock_owner != -1)
+ if (netdev_queue_has_owner(txq))
txq->trans_start = jiffies;
}

diff --git a/net/core/dev.c b/net/core/dev.c
index eb39270ac306..bbdec43abe1c 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -3028,7 +3028,11 @@ static int __dev_queue_xmit(struct sk_buff *skb, void *accel_priv)
if (dev->flags & IFF_UP) {
int cpu = smp_processor_id(); /* ok because BHs are off */

+#ifdef CONFIG_PREEMPT_RT_FULL
+ if (txq->xmit_lock_owner != current) {
+#else
if (txq->xmit_lock_owner != cpu) {
+#endif

if (xmit_rec_read() > RECURSION_LIMIT)
goto recursion_alert;
@@ -6233,7 +6237,7 @@ static void netdev_init_one_queue(struct net_device *dev,
/* Initialize queue lock */
spin_lock_init(&queue->_xmit_lock);
netdev_set_xmit_lockdep_class(&queue->_xmit_lock, dev->type);
- queue->xmit_lock_owner = -1;
+ netdev_queue_clear_owner(queue);
netdev_queue_numa_node_write(queue, NUMA_NO_NODE);
queue->dev = dev;
#ifdef CONFIG_BQL
--
2.14.1


2018-08-02 22:29:01

by Tom Zanussi

Subject: [PATCH 06/15] locking: add types.h

From: Sebastian Andrzej Siewior <[email protected]>

During the stable update the ARM architecture no longer compiled due to
a missing definition of u16/u32.

Cc: [email protected]
Signed-off-by: Sebastian Andrzej Siewior <[email protected]>
(cherry picked from commit 1289b06974d64f244a26455fab699c6a1332f4bc)
Signed-off-by: Tom Zanussi <[email protected]>
---
include/linux/spinlock_types_raw.h | 2 ++
1 file changed, 2 insertions(+)

diff --git a/include/linux/spinlock_types_raw.h b/include/linux/spinlock_types_raw.h
index edffc4d53fc9..03235b475b77 100644
--- a/include/linux/spinlock_types_raw.h
+++ b/include/linux/spinlock_types_raw.h
@@ -1,6 +1,8 @@
#ifndef __LINUX_SPINLOCK_TYPES_RAW_H
#define __LINUX_SPINLOCK_TYPES_RAW_H

+#include <linux/types.h>
+
#if defined(CONFIG_SMP)
# include <asm/spinlock_types.h>
#else
--
2.14.1


by Sebastian Andrzej Siewior

Subject: Re: [PATCH 00/15][ANNOUNCE] 3.18.117-rt105-rc1

On 2018-08-02 17:25:15 [-0500], Tom Zanussi wrote:
> Please scream at me if I messed something up. Please test the patches
> too.

The series claims to have 15 patches and I see only the first 10 (in my
inbox, rt-users and linux-kernel).

> Enjoy!
>
> Tom

Sebastian

2018-08-03 13:35:16

by Tom Zanussi

Subject: Re: [PATCH 00/15][ANNOUNCE] 3.18.117-rt105-rc1

Hi Sebastian,

On 8/3/2018 2:32 AM, Sebastian Andrzej Siewior wrote:
> On 2018-08-02 17:25:15 [-0500], Tom Zanussi wrote:
>> Please scream at me if I messed something up. Please test the patches
>> too.
>
> The series claims to have 15 patches and I see only the first 10 (in my
> inbox, rt-users and linux-kernel).
>

Yeah, my script was broken and didn't send the last third - resent.
Thanks for letting me know,

Tom

>> Enjoy!
>>
>> Tom
>
> Sebastian
>