2018-11-11 19:43:05

by Paul E. McKenney

Subject: [PATCH tip/core/rcu 0/41] More RCU flavor consolidation cleanup for v4.21/v5.0

Hello!

This series does additional cleanup for the RCU flavor consolidation,
focusing primarily on uses of old API members, for example, so that
call_rcu_bh() becomes call_rcu(). There are also a few straggling
internal-to-RCU cleanups.
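
To make the recurring pattern concrete, here is a minimal sketch of the
transformation these patches apply; struct foo and its helpers are
illustrative only, not taken from any patch in the series:

#include <linux/rcupdate.h>
#include <linux/slab.h>

struct foo {
        int data;
        struct rcu_head rcu;
};

static void foo_free_rcu(struct rcu_head *head)
{
        kfree(container_of(head, struct foo, rcu));
}

static void foo_retire(struct foo *fp)
{
        /*
         * Before the consolidation, the updater had to match its
         * readers' flavor: call_rcu_bh() for rcu_read_lock_bh()
         * readers, call_rcu_sched() for preempt-disable readers.
         * A single call_rcu() now waits for all of them.
         */
        call_rcu(&fp->rcu, foo_free_rcu);
}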

1. Remove unused rcu_state externs, courtesy of Joel Fernandes.

2. Fix rcu_{node,data} comments about gp_seq_needed, courtesy of
Joel Fernandes.

3. Eliminate synchronize_rcu_mult() and its sole caller.

4. Consolidate the RCU update functions invoked by sync.c.

5-41. Replace old flavorful RCU API calls with the corresponding
vanilla calls.

Thanx, Paul

------------------------------------------------------------------------

arch/powerpc/mm/hugetlbpage.c | 2
arch/s390/mm/pgalloc.c | 2
arch/sparc/oprofile/init.c | 2
crypto/pcrypt.c | 2
drivers/char/ipmi/ipmi_si_intf.c | 2
drivers/cpufreq/cpufreq_governor.c | 2
drivers/cpufreq/intel_pstate.c | 2
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c | 6 +-
drivers/net/ethernet/realtek/8139too.c | 2
drivers/net/ethernet/realtek/r8169.c | 4 -
drivers/net/ethernet/sis/sis190.c | 2
drivers/vhost/net.c | 2
fs/file.c | 2
include/linux/percpu-rwsem.h | 2
include/linux/rcupdate_wait.h | 17 -------
include/linux/tracepoint.h | 2
include/linux/types.h | 4 -
init/main.c | 6 +-
kernel/cgroup/cgroup.c | 2
kernel/events/core.c | 2
kernel/kprobes.c | 10 ++--
kernel/livepatch/patch.c | 4 -
kernel/livepatch/transition.c | 4 -
kernel/locking/lockdep.c | 2
kernel/module.c | 14 ++---
kernel/rcu/sync.c | 12 ++---
kernel/rcu/tree.h | 15 ------
kernel/rcu/update.c | 6 --
kernel/sched/core.c | 2
kernel/sched/cpufreq.c | 4 -
kernel/sched/cpufreq_schedutil.c | 2
kernel/sched/membarrier.c | 6 +-
kernel/trace/ftrace.c | 24 +++++-----
kernel/trace/ring_buffer.c | 12 ++---
kernel/trace/trace.c | 10 ++--
kernel/trace/trace_events_filter.c | 4 -
kernel/trace/trace_kprobe.c | 2
kernel/tracepoint.c | 4 -
kernel/workqueue.c | 8 +--
lib/percpu-refcount.c | 2
mm/mmu_gather.c | 2
mm/slab.c | 4 -
mm/slab_common.c | 6 +-
net/bridge/br_mdb.c | 2
net/bridge/br_multicast.c | 14 ++---
net/core/netpoll.c | 4 -
net/core/skmsg.c | 2
net/decnet/af_decnet.c | 2
net/ipv4/netfilter/ipt_CLUSTERIP.c | 6 +-
net/netfilter/ipset/ip_set_hash_gen.h | 4 -
net/netfilter/nfnetlink_log.c | 2
net/netfilter/xt_hashlimit.c | 4 -
net/sched/sch_api.c | 2
net/sched/sch_generic.c | 8 +--
tools/include/linux/kernel.h | 2
tools/testing/selftests/rcutorture/formal/srcu-cbmc/include/linux/types.h | 4 -
56 files changed, 126 insertions(+), 156 deletions(-)



2018-11-11 19:45:01

by Paul E. McKenney

Subject: [PATCH tip/core/rcu 36/41] cgroups: Replace synchronize_sched() with synchronize_rcu()

Now that synchronize_rcu() waits for preempt-disable regions of code
as well as RCU read-side critical sections, synchronize_sched() can be
replaced by synchronize_rcu(). This commit therefore makes this change,
even though it is but a comment.

Signed-off-by: Paul E. McKenney <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: Jens Axboe <[email protected]>
Cc: Dennis Zhou <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: "Dennis Zhou (Facebook)" <[email protected]>
---
kernel/cgroup/cgroup.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
index 6aaf5dd5383b..7a8429f8e280 100644
--- a/kernel/cgroup/cgroup.c
+++ b/kernel/cgroup/cgroup.c
@@ -5343,7 +5343,7 @@ int __init cgroup_init(void)
cgroup_rstat_boot();

/*
- * The latency of the synchronize_sched() is too high for cgroups,
+ * The latency of the synchronize_rcu() is too high for cgroups,
* avoid it at the cost of forcing all readers into the slow path.
*/
rcu_sync_enter_start(&cgroup_threadgroup_rwsem.rss);
--
2.17.1


2018-11-11 19:45:05

by Paul E. McKenney

Subject: [PATCH tip/core/rcu 26/41] events: Replace synchronize_sched() with synchronize_rcu()

Now that synchronize_rcu() waits for preempt-disable regions of code
as well as RCU read-side critical sections, synchronize_sched() can be
replaced by synchronize_rcu(). This commit therefore makes this change.

Signed-off-by: Paul E. McKenney <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Arnaldo Carvalho de Melo <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Namhyung Kim <[email protected]>
---
kernel/events/core.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 84530ab358c3..c4b90cf7734a 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -9918,7 +9918,7 @@ static void account_event(struct perf_event *event)
* call the perf scheduling hooks before proceeding to
* install events that need them.
*/
- synchronize_sched();
+ synchronize_rcu();
}
/*
* Now that we have waited for the sync_sched(), allow further
--
2.17.1


2018-11-11 19:45:17

by Paul E. McKenney

Subject: [PATCH tip/core/rcu 40/41] tools/kernel.h: Replace synchronize_sched() with synchronize_rcu()

Now that synchronize_rcu() waits for preempt-disable regions of code
as well as RCU read-side critical sections, synchronize_sched() can be
replaced by synchronize_rcu(). This commit therefore makes this change
to the corresponding no-op stub definition.

Signed-off-by: Paul E. McKenney <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: <[email protected]>
---
tools/include/linux/kernel.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/include/linux/kernel.h b/tools/include/linux/kernel.h
index 6935ef94e77a..857d9e22826e 100644
--- a/tools/include/linux/kernel.h
+++ b/tools/include/linux/kernel.h
@@ -116,6 +116,6 @@ int scnprintf(char * buf, size_t size, const char * fmt, ...);
#define round_down(x, y) ((x) & ~__round_mask(x, y))

#define current_gfp_context(k) 0
-#define synchronize_sched()
+#define synchronize_rcu()

#endif
--
2.17.1


2018-11-11 19:45:24

by Paul E. McKenney

Subject: [PATCH tip/core/rcu 30/41] netfilter: Replace call_rcu_bh(), rcu_barrier_bh(), and synchronize_rcu_bh()

Now that call_rcu()'s callback is not invoked until after bh-disable
regions of code have completed (in addition to explicitly marked
RCU read-side critical sections), call_rcu() can be used in place
of call_rcu_bh(). Similarly, rcu_barrier() can be used in place of
rcu_barrier_bh() and synchronize_rcu() in place of synchronize_rcu_bh().
This commit therefore makes these changes.
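
The clusterip and nfnetlink_log hunks below share the same
reference-drop shape; condensed into a sketch, with my_config as an
illustrative stand-in:

#include <linux/rcupdate.h>
#include <linux/refcount.h>
#include <linux/slab.h>

struct my_config {
        refcount_t refcount;
        struct rcu_head rcu;
};

static void my_config_free_rcu(struct rcu_head *head)
{
        kfree(container_of(head, struct my_config, rcu));
}

static void my_config_put(struct my_config *c)
{
        /*
         * Readers run from netfilter hooks in softirq context with BH
         * disabled, without an explicit rcu_read_lock_bh(). The
         * consolidated call_rcu() waits for those regions too, so the
         * _bh variant is no longer needed.
         */
        if (refcount_dec_and_test(&c->refcount))
                call_rcu(&c->rcu, my_config_free_rcu);
}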

Signed-off-by: Paul E. McKenney <[email protected]>
Cc: Pablo Neira Ayuso <[email protected]>
Cc: Jozsef Kadlecsik <[email protected]>
Cc: Florian Westphal <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: Alexey Kuznetsov <[email protected]>
Cc: Hideaki YOSHIFUJI <[email protected]>
Cc: <[email protected]>
Cc: <[email protected]>
Cc: <[email protected]>
---
net/ipv4/netfilter/ipt_CLUSTERIP.c | 6 +++---
net/netfilter/ipset/ip_set_hash_gen.h | 4 ++--
net/netfilter/nfnetlink_log.c | 2 +-
net/netfilter/xt_hashlimit.c | 4 ++--
4 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/net/ipv4/netfilter/ipt_CLUSTERIP.c b/net/ipv4/netfilter/ipt_CLUSTERIP.c
index 2c8d313ae216..5b0c1ee6ae26 100644
--- a/net/ipv4/netfilter/ipt_CLUSTERIP.c
+++ b/net/ipv4/netfilter/ipt_CLUSTERIP.c
@@ -94,7 +94,7 @@ static inline void
clusterip_config_put(struct clusterip_config *c)
{
if (refcount_dec_and_test(&c->refcount))
- call_rcu_bh(&c->rcu, clusterip_config_rcu_free);
+ call_rcu(&c->rcu, clusterip_config_rcu_free);
}

/* decrease the count of entries using/referencing this config. If last
@@ -876,8 +876,8 @@ static void __exit clusterip_tg_exit(void)
xt_unregister_target(&clusterip_tg_reg);
unregister_pernet_subsys(&clusterip_net_ops);

- /* Wait for completion of call_rcu_bh()'s (clusterip_config_rcu_free) */
- rcu_barrier_bh();
+ /* Wait for completion of call_rcu()'s (clusterip_config_rcu_free) */
+ rcu_barrier();
}

module_init(clusterip_tg_init);
diff --git a/net/netfilter/ipset/ip_set_hash_gen.h b/net/netfilter/ipset/ip_set_hash_gen.h
index e287da68d5fa..2c9609929c71 100644
--- a/net/netfilter/ipset/ip_set_hash_gen.h
+++ b/net/netfilter/ipset/ip_set_hash_gen.h
@@ -67,7 +67,7 @@ tune_ahash_max(u8 curr, u32 multi)

/* A hash bucket */
struct hbucket {
- struct rcu_head rcu; /* for call_rcu_bh */
+ struct rcu_head rcu; /* for call_rcu */
/* Which positions are used in the array */
DECLARE_BITMAP(used, AHASH_MAX_TUNED);
u8 size; /* size of the array */
@@ -664,7 +664,7 @@ mtype_resize(struct ip_set *set, bool retried)
spin_unlock_bh(&set->lock);

/* Give time to other readers of the set */
- synchronize_rcu_bh();
+ synchronize_rcu();

pr_debug("set %s resized from %u (%p) to %u (%p)\n", set->name,
orig->htable_bits, orig, t->htable_bits, t);
diff --git a/net/netfilter/nfnetlink_log.c b/net/netfilter/nfnetlink_log.c
index 332c69d27b47..b1f9c5303f02 100644
--- a/net/netfilter/nfnetlink_log.c
+++ b/net/netfilter/nfnetlink_log.c
@@ -148,7 +148,7 @@ static void
instance_put(struct nfulnl_instance *inst)
{
if (inst && refcount_dec_and_test(&inst->use))
- call_rcu_bh(&inst->rcu, nfulnl_instance_free_rcu);
+ call_rcu(&inst->rcu, nfulnl_instance_free_rcu);
}

static void nfulnl_timer(struct timer_list *t);
diff --git a/net/netfilter/xt_hashlimit.c b/net/netfilter/xt_hashlimit.c
index 3e7d259e5d8d..8138b68a9a44 100644
--- a/net/netfilter/xt_hashlimit.c
+++ b/net/netfilter/xt_hashlimit.c
@@ -260,7 +260,7 @@ static inline void
dsthash_free(struct xt_hashlimit_htable *ht, struct dsthash_ent *ent)
{
hlist_del_rcu(&ent->node);
- call_rcu_bh(&ent->rcu, dsthash_free_rcu);
+ call_rcu(&ent->rcu, dsthash_free_rcu);
ht->count--;
}
static void htable_gc(struct work_struct *work);
@@ -1329,7 +1329,7 @@ static void __exit hashlimit_mt_exit(void)
xt_unregister_matches(hashlimit_mt_reg, ARRAY_SIZE(hashlimit_mt_reg));
unregister_pernet_subsys(&hashlimit_net_ops);

- rcu_barrier_bh();
+ rcu_barrier();
kmem_cache_destroy(hashlimit_cachep);
}

--
2.17.1


2018-11-11 19:45:29

by Paul E. McKenney

Subject: [PATCH tip/core/rcu 04/41] rcu: Consolidate the RCU update functions invoked by sync.c

From: "Paul E. McKenney" <[email protected]>

This commit retains all the various gp_ops[] entries, but makes their
update functions all be synchronize_rcu(), call_rcu() and rcu_barrier().
The read-side checks remain consistent with the various RCU flavors,
which still exist on the read side.
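
For context, a sketch of how a gp_ops[] entry is consumed, assuming the
rcu_sync interfaces of this era (simplified, not a verbatim excerpt;
my_rss is illustrative):

#include <linux/rcu_sync.h>

static struct rcu_sync my_rss;

static void my_init(void)
{
        /* The gp_type still selects the read-side lockdep check... */
        rcu_sync_init(&my_rss, RCU_SCHED_SYNC);
}

static void my_write_side(void)
{
        /*
         * ...but after this patch, every gp_type reaches the same
         * update-side primitives: rcu_sync_enter() ends up calling
         * synchronize_rcu(), and rcu_sync_exit() queues call_rcu().
         */
        rcu_sync_enter(&my_rss);
        /* readers see !rcu_sync_is_idle() and take the slow path */
        rcu_sync_exit(&my_rss);
}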

Signed-off-by: Paul E. McKenney <[email protected]>
Cc: Oleg Nesterov <[email protected]>
Cc: Peter Zijlstra <[email protected]>
---
kernel/rcu/sync.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/kernel/rcu/sync.c b/kernel/rcu/sync.c
index 3f943efcf61c..9d570b1892b0 100644
--- a/kernel/rcu/sync.c
+++ b/kernel/rcu/sync.c
@@ -44,15 +44,15 @@ static const struct {
__INIT_HELD(rcu_read_lock_held)
},
[RCU_SCHED_SYNC] = {
- .sync = synchronize_sched,
- .call = call_rcu_sched,
- .wait = rcu_barrier_sched,
+ .sync = synchronize_rcu,
+ .call = call_rcu,
+ .wait = rcu_barrier,
__INIT_HELD(rcu_read_lock_sched_held)
},
[RCU_BH_SYNC] = {
- .sync = synchronize_rcu_bh,
- .call = call_rcu_bh,
- .wait = rcu_barrier_bh,
+ .sync = synchronize_rcu,
+ .call = call_rcu,
+ .wait = rcu_barrier,
__INIT_HELD(rcu_read_lock_bh_held)
},
};
--
2.17.1


2018-11-11 19:45:35

by Paul E. McKenney

Subject: [PATCH tip/core/rcu 25/41] workqueue: Replace call_rcu_sched() with call_rcu()

Now that call_rcu()'s callback is not invoked until after all
preempt-disable regions of code have completed (in addition to explicitly
marked RCU read-side critical sections), call_rcu() can be used in place
of call_rcu_sched(). This commit therefore makes that change.
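
The conversion is safe because the read side here is a preempt-disable
region rather than an rcu_read_lock() section. Schematically, with
lookup_pool() and inspect() as hypothetical stand-ins for the
get_work_pool() path:

#include <linux/preempt.h>
#include <linux/rcupdate.h>
#include <linux/workqueue.h>

struct worker_pool;                             /* internal to workqueue.c */
struct worker_pool *lookup_pool(struct work_struct *work); /* hypothetical */
void inspect(struct worker_pool *pool);                    /* hypothetical */

static void reader_side(struct work_struct *work)
{
        struct worker_pool *pool;

        preempt_disable();              /* now a first-class RCU reader */
        pool = lookup_pool(work);
        if (pool)
                inspect(pool);          /* pool cannot be freed here */
        preempt_enable();
}

/*
 * Update side, as in the hunks below:
 *
 *      call_rcu(&pool->rcu, rcu_free_pool);
 *
 * The callback runs only after all such regions have completed.
 */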

Signed-off-by: Paul E. McKenney <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: Lai Jiangshan <[email protected]>
---
kernel/workqueue.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 0280deac392e..392be4b252f6 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -3396,7 +3396,7 @@ static void put_unbound_pool(struct worker_pool *pool)
del_timer_sync(&pool->mayday_timer);

/* sched-RCU protected to allow dereferences from get_work_pool() */
- call_rcu_sched(&pool->rcu, rcu_free_pool);
+ call_rcu(&pool->rcu, rcu_free_pool);
}

/**
@@ -3503,14 +3503,14 @@ static void pwq_unbound_release_workfn(struct work_struct *work)
put_unbound_pool(pool);
mutex_unlock(&wq_pool_mutex);

- call_rcu_sched(&pwq->rcu, rcu_free_pwq);
+ call_rcu(&pwq->rcu, rcu_free_pwq);

/*
* If we're the last pwq going away, @wq is already dead and no one
* is gonna access it anymore. Schedule RCU free.
*/
if (is_last)
- call_rcu_sched(&wq->rcu, rcu_free_wq);
+ call_rcu(&wq->rcu, rcu_free_wq);
}

/**
@@ -4195,7 +4195,7 @@ void destroy_workqueue(struct workqueue_struct *wq)
* The base ref is never dropped on per-cpu pwqs. Directly
* schedule RCU free.
*/
- call_rcu_sched(&wq->rcu, rcu_free_wq);
+ call_rcu(&wq->rcu, rcu_free_wq);
} else {
/*
* We're the sole accessor of @wq at this point. Directly
--
2.17.1


2018-11-11 19:45:45

by Paul E. McKenney

Subject: [PATCH tip/core/rcu 41/41] rcutorture/formal: Replace synchronize_sched() with synchronize_rcu()

Now that synchronize_rcu() waits for preempt-disable regions of code
as well as RCU read-side critical sections, synchronize_sched() can be
replaced by synchronize_rcu(). This commit therefore makes this change,
even though it is but a comment.

Signed-off-by: Paul E. McKenney <[email protected]>
---
.../rcutorture/formal/srcu-cbmc/include/linux/types.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/rcutorture/formal/srcu-cbmc/include/linux/types.h b/tools/testing/selftests/rcutorture/formal/srcu-cbmc/include/linux/types.h
index 891ad13e95b2..d27285f8ee82 100644
--- a/tools/testing/selftests/rcutorture/formal/srcu-cbmc/include/linux/types.h
+++ b/tools/testing/selftests/rcutorture/formal/srcu-cbmc/include/linux/types.h
@@ -131,8 +131,8 @@ struct hlist_node {
* weird ABI and we need to ask it explicitly.
*
* The alignment is required to guarantee that bits 0 and 1 of @next will be
- * clear under normal conditions -- as long as we use call_rcu(),
- * call_rcu_bh(), call_rcu_sched(), or call_srcu() to queue callback.
+ * clear under normal conditions -- as long as we use call_rcu() or
+ * call_srcu() to queue callback.
*
* This guarantee is important for few reasons:
* - future call_rcu_lazy() will make use of lower bits in the pointer;
--
2.17.1


2018-11-11 19:46:02

by Paul E. McKenney

Subject: [PATCH tip/core/rcu 32/41] net/core: Replace call_rcu_bh() and synchronize_rcu_bh()

Now that call_rcu()'s callback is not invoked until after all bh-disable
regions of code have completed (in addition to explicitly marked
RCU read-side critical sections), call_rcu() can be used in place of
call_rcu_bh(). Similarly, synchronize_rcu() can be used in place of
synchronize_rcu_bh(). This commit therefore makes these changes.

Signed-off-by: Paul E. McKenney <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: Eric Dumazet <[email protected]>
Cc: <[email protected]>
---
net/core/netpoll.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/net/core/netpoll.c b/net/core/netpoll.c
index 5da9552b186b..677d3f332172 100644
--- a/net/core/netpoll.c
+++ b/net/core/netpoll.c
@@ -800,7 +800,7 @@ void __netpoll_cleanup(struct netpoll *np)
ops->ndo_netpoll_cleanup(np->dev);

RCU_INIT_POINTER(np->dev->npinfo, NULL);
- call_rcu_bh(&npinfo->rcu, rcu_cleanup_netpoll_info);
+ call_rcu(&npinfo->rcu, rcu_cleanup_netpoll_info);
} else
RCU_INIT_POINTER(np->dev->npinfo, NULL);
}
@@ -811,7 +811,7 @@ void __netpoll_free(struct netpoll *np)
ASSERT_RTNL();

/* Wait for transmitting packets to finish before freeing. */
- synchronize_rcu_bh();
+ synchronize_rcu();
__netpoll_cleanup(np);
kfree(np);
}
--
2.17.1


2018-11-11 19:46:17

by Paul E. McKenney

Subject: [PATCH tip/core/rcu 19/41] main: Replace rcu_barrier_sched() with rcu_barrier()

Now that all RCU flavors have been consolidated, rcu_barrier_sched()
is but a synonym for rcu_barrier(). This commit therefore replaces
the former with the latter.
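
Note the primitive chosen: synchronize_rcu() would only wait for
readers, whereas mark_readonly() must wait until the do_free_init()
callbacks that load_module() queued via call_rcu() have actually run.
Simplified fragments showing the ordering (see also patch 24/41):

        /* In do_init_module(): */
        call_rcu(&freeinit->rcu, do_free_init); /* tears down W+X init mappings */

        /* Later, in mark_readonly(): */
        rcu_barrier();          /* all previously queued callbacks have run */
        mark_rodata_ro();       /* no stale W+X pages left to trip over */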

Signed-off-by: Paul E. McKenney <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: "Steven Rostedt (VMware)" <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: <[email protected]>
---
init/main.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/init/main.c b/init/main.c
index ee147103ba1b..a45486330243 100644
--- a/init/main.c
+++ b/init/main.c
@@ -1046,12 +1046,12 @@ static void mark_readonly(void)
{
if (rodata_enabled) {
/*
- * load_module() results in W+X mappings, which are cleaned up
- * with call_rcu_sched(). Let's make sure that queued work is
+ * load_module() results in W+X mappings, which are cleaned
+ * up with call_rcu(). Let's make sure that queued work is
* flushed so that we don't hit false positives looking for
* insecure pages which are W+X.
*/
- rcu_barrier_sched();
+ rcu_barrier();
mark_rodata_ro();
rodata_test();
} else
--
2.17.1


2018-11-11 19:46:26

by Paul E. McKenney

Subject: [PATCH tip/core/rcu 23/41] sched: Replace synchronize_sched() with synchronize_rcu()

Now that synchronize_rcu() waits for preempt-disable regions of code
as well as RCU read-side critical sections, synchronize_sched() can be
replaced by synchronize_rcu(). This commit therefore makes this change.
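
The kernel-doc in the first hunk states the rule callers must follow; a
hypothetical caller obeying it (my_gov_stop() and the freed pointer are
illustrative):

#include <linux/cpufreq.h>
#include <linux/rcupdate.h>
#include <linux/sched/cpufreq.h>
#include <linux/slab.h>

static void my_gov_stop(struct cpufreq_policy *policy)
{
        unsigned int cpu;

        for_each_cpu(cpu, policy->cpus)
                cpufreq_remove_update_util_hook(cpu);

        /* No CPU can still be running the old hook or dereferencing
         * the old update_util_data after this returns. */
        synchronize_rcu();

        kfree(policy->governor_data);   /* illustrative: now safe to free */
}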

Signed-off-by: Paul E. McKenney <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
---
kernel/sched/cpufreq.c | 4 ++--
kernel/sched/cpufreq_schedutil.c | 2 +-
2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/kernel/sched/cpufreq.c b/kernel/sched/cpufreq.c
index 5e54cbcae673..90fee8e01280 100644
--- a/kernel/sched/cpufreq.c
+++ b/kernel/sched/cpufreq.c
@@ -51,8 +51,8 @@ EXPORT_SYMBOL_GPL(cpufreq_add_update_util_hook);
*
* Clear the update_util_data pointer for the given CPU.
*
- * Callers must use RCU-sched callbacks to free any memory that might be
- * accessed via the old update_util_data pointer or invoke synchronize_sched()
+ * Callers must use RCU callbacks to free any memory that might be
+ * accessed via the old update_util_data pointer or invoke synchronize_rcu()
* right after this function to avoid use-after-free.
*/
void cpufreq_remove_update_util_hook(int cpu)
diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index 3fffad3bc8a8..6a1bb76afbd1 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -839,7 +839,7 @@ static void sugov_stop(struct cpufreq_policy *policy)
for_each_cpu(cpu, policy->cpus)
cpufreq_remove_update_util_hook(cpu);

- synchronize_sched();
+ synchronize_rcu();

if (!policy->fast_switch_enabled) {
irq_work_sync(&sg_policy->irq_work);
--
2.17.1


2018-11-11 19:46:38

by Paul E. McKenney

Subject: [PATCH tip/core/rcu 22/41] sched/membarrier: Replace synchronize_sched() with synchronize_rcu()

Now that synchronize_rcu() waits for preempt-disable regions of code
as well as RCU read-side critical sections, synchronize_sched() can be
replaced by synchronize_rcu(). This commit therefore makes this change.

Signed-off-by: Paul E. McKenney <[email protected]>
Cc: Mathieu Desnoyers <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
---
kernel/sched/membarrier.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/membarrier.c b/kernel/sched/membarrier.c
index 388a7a6c1aa2..3cd8a3a795d2 100644
--- a/kernel/sched/membarrier.c
+++ b/kernel/sched/membarrier.c
@@ -210,7 +210,7 @@ static int membarrier_register_global_expedited(void)
* future scheduler executions will observe the new
* thread flag state for this mm.
*/
- synchronize_sched();
+ synchronize_rcu();
}
atomic_or(MEMBARRIER_STATE_GLOBAL_EXPEDITED_READY,
&mm->membarrier_state);
@@ -246,7 +246,7 @@ static int membarrier_register_private_expedited(int flags)
* Ensure all future scheduler executions will observe the
* new thread flag state for this process.
*/
- synchronize_sched();
+ synchronize_rcu();
}
atomic_or(state, &mm->membarrier_state);

--
2.17.1


2018-11-11 19:46:41

by Paul E. McKenney

Subject: [PATCH tip/core/rcu 08/41] powerpc: Convert hugepd_free() to use call_rcu()

Now that call_rcu()'s callback is not invoked until after all
preempt-disable regions of code have completed (in addition to explicitly
marked RCU read-side critical sections), call_rcu() can be used in place
of call_rcu_sched(). This commit therefore makes that change.

Signed-off-by: Paul E. McKenney <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: <[email protected]>
---
arch/powerpc/mm/hugetlbpage.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index 8cf035e68378..4c01e9a01a74 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -289,7 +289,7 @@ static void hugepd_free(struct mmu_gather *tlb, void *hugepte)

(*batchp)->ptes[(*batchp)->index++] = hugepte;
if ((*batchp)->index == HUGEPD_FREELIST_SIZE) {
- call_rcu_sched(&(*batchp)->rcu, hugepd_free_rcu_callback);
+ call_rcu(&(*batchp)->rcu, hugepd_free_rcu_callback);
*batchp = NULL;
}
put_cpu_var(hugepd_freelist_cur);
--
2.17.1


2018-11-11 19:47:14

by Paul E. McKenney

Subject: [PATCH tip/core/rcu 38/41] net/core/skmsg: Replace call_rcu_sched() with call_rcu()

Now that call_rcu()'s callback is not invoked until after all
preempt-disable regions of code have completed (in addition to explicitly
marked RCU read-side critical sections), call_rcu() can be used in place
of call_rcu_sched(). This commit therefore makes that change.

Signed-off-by: Paul E. McKenney <[email protected]>
Cc: John Fastabend <[email protected]>
Cc: Daniel Borkmann <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: <[email protected]>
---
net/core/skmsg.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/net/core/skmsg.c b/net/core/skmsg.c
index 56a99d0c9aa0..c92d6ccce610 100644
--- a/net/core/skmsg.c
+++ b/net/core/skmsg.c
@@ -580,7 +580,7 @@ void sk_psock_drop(struct sock *sk, struct sk_psock *psock)
write_unlock_bh(&sk->sk_callback_lock);
sk_psock_clear_state(psock, SK_PSOCK_TX_ENABLED);

- call_rcu_sched(&psock->rcu, sk_psock_destroy);
+ call_rcu(&psock->rcu, sk_psock_destroy);
}
EXPORT_SYMBOL_GPL(sk_psock_drop);

--
2.17.1


2018-11-11 19:47:20

by Paul E. McKenney

Subject: [PATCH tip/core/rcu 11/41] ethernet/sis: Replace synchronize_sched() with synchronize_rcu()

Now that synchronize_rcu() waits for preempt-disable regions of code
as well as RCU read-side critical sections, synchronize_sched() can be
replaced by synchronize_rcu(). This commit therefore makes this change.

Signed-off-by: Paul E. McKenney <[email protected]>
Cc: Francois Romieu <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: <[email protected]>
---
drivers/net/ethernet/sis/sis190.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/sis/sis190.c b/drivers/net/ethernet/sis/sis190.c
index c2c50522b96d..808cf9816673 100644
--- a/drivers/net/ethernet/sis/sis190.c
+++ b/drivers/net/ethernet/sis/sis190.c
@@ -1142,7 +1142,7 @@ static void sis190_down(struct net_device *dev)
if (!poll_locked)
poll_locked++;

- synchronize_sched();
+ synchronize_rcu();

} while (SIS_R32(IntrMask));

--
2.17.1


2018-11-11 19:47:22

by Paul E. McKenney

Subject: [PATCH tip/core/rcu 18/41] tracing: Replace synchronize_sched() and call_rcu_sched()

Now that synchronize_rcu() waits for preempt-disable regions of code
as well as RCU read-side critical sections, synchronize_sched() can
be replaced by synchronize_rcu(). Similarly, call_rcu_sched() can be
replaced by call_rcu(). This commit therefore makes these changes.
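
Most hunks below are instances of the unpublish-then-wait-then-free
sequence, condensed here from the clear_ftrace_pids() hunk. The
consolidation applies because many tracer read paths bracket their
lookups with preempt_disable_notrace() rather than rcu_read_lock():

        rcu_assign_pointer(tr->function_pids, NULL);    /* unpublish */
        synchronize_rcu();              /* drains preempt-disabled tracers too */
        trace_free_pid_list(pid_list);  /* now safe to reclaim */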

Signed-off-by: Paul E. McKenney <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: <[email protected]>
---
include/linux/tracepoint.h | 2 +-
kernel/trace/ftrace.c | 24 ++++++++++++------------
kernel/trace/ring_buffer.c | 12 ++++++------
kernel/trace/trace.c | 10 +++++-----
kernel/trace/trace_events_filter.c | 4 ++--
kernel/trace/trace_kprobe.c | 2 +-
kernel/tracepoint.c | 4 ++--
7 files changed, 29 insertions(+), 29 deletions(-)

diff --git a/include/linux/tracepoint.h b/include/linux/tracepoint.h
index 538ba1a58f5b..432080b59c26 100644
--- a/include/linux/tracepoint.h
+++ b/include/linux/tracepoint.h
@@ -82,7 +82,7 @@ int unregister_tracepoint_module_notifier(struct notifier_block *nb)
static inline void tracepoint_synchronize_unregister(void)
{
synchronize_srcu(&tracepoint_srcu);
- synchronize_sched();
+ synchronize_rcu();
}
#else
static inline void tracepoint_synchronize_unregister(void)
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index f536f601bd46..5b4f73e4fd56 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -173,7 +173,7 @@ static void ftrace_sync(struct work_struct *work)
{
/*
* This function is just a stub to implement a hard force
- * of synchronize_sched(). This requires synchronizing
+ * of synchronize_rcu(). This requires synchronizing
* tasks even in userspace and idle.
*
* Yes, function tracing is rude.
@@ -934,7 +934,7 @@ ftrace_profile_write(struct file *filp, const char __user *ubuf,
ftrace_profile_enabled = 0;
/*
* unregister_ftrace_profiler calls stop_machine
- * so this acts like an synchronize_sched.
+ * so this acts like an synchronize_rcu.
*/
unregister_ftrace_profiler();
}
@@ -1086,7 +1086,7 @@ struct ftrace_ops *ftrace_ops_trampoline(unsigned long addr)

/*
* Some of the ops may be dynamically allocated,
- * they are freed after a synchronize_sched().
+ * they are freed after a synchronize_rcu().
*/
preempt_disable_notrace();

@@ -1286,7 +1286,7 @@ static void free_ftrace_hash_rcu(struct ftrace_hash *hash)
{
if (!hash || hash == EMPTY_HASH)
return;
- call_rcu_sched(&hash->rcu, __free_ftrace_hash_rcu);
+ call_rcu(&hash->rcu, __free_ftrace_hash_rcu);
}

void ftrace_free_filter(struct ftrace_ops *ops)
@@ -1501,7 +1501,7 @@ static bool hash_contains_ip(unsigned long ip,
* the ip is not in the ops->notrace_hash.
*
* This needs to be called with preemption disabled as
- * the hashes are freed with call_rcu_sched().
+ * the hashes are freed with call_rcu().
*/
static int
ftrace_ops_test(struct ftrace_ops *ops, unsigned long ip, void *regs)
@@ -4496,7 +4496,7 @@ unregister_ftrace_function_probe_func(char *glob, struct trace_array *tr,
if (ftrace_enabled && !ftrace_hash_empty(hash))
ftrace_run_modify_code(&probe->ops, FTRACE_UPDATE_CALLS,
&old_hash_ops);
- synchronize_sched();
+ synchronize_rcu();

hlist_for_each_entry_safe(entry, tmp, &hhd, hlist) {
hlist_del(&entry->hlist);
@@ -5314,7 +5314,7 @@ ftrace_graph_release(struct inode *inode, struct file *file)
mutex_unlock(&graph_lock);

/* Wait till all users are no longer using the old hash */
- synchronize_sched();
+ synchronize_rcu();

free_ftrace_hash(old_hash);
}
@@ -5707,7 +5707,7 @@ void ftrace_release_mod(struct module *mod)
list_for_each_entry_safe(mod_map, n, &ftrace_mod_maps, list) {
if (mod_map->mod == mod) {
list_del_rcu(&mod_map->list);
- call_rcu_sched(&mod_map->rcu, ftrace_free_mod_map);
+ call_rcu(&mod_map->rcu, ftrace_free_mod_map);
break;
}
}
@@ -5927,7 +5927,7 @@ ftrace_mod_address_lookup(unsigned long addr, unsigned long *size,
struct ftrace_mod_map *mod_map;
const char *ret = NULL;

- /* mod_map is freed via call_rcu_sched() */
+ /* mod_map is freed via call_rcu() */
preempt_disable();
list_for_each_entry_rcu(mod_map, &ftrace_mod_maps, list) {
ret = ftrace_func_address_lookup(mod_map, addr, size, off, sym);
@@ -6262,7 +6262,7 @@ __ftrace_ops_list_func(unsigned long ip, unsigned long parent_ip,

/*
* Some of the ops may be dynamically allocated,
- * they must be freed after a synchronize_sched().
+ * they must be freed after a synchronize_rcu().
*/
preempt_disable_notrace();

@@ -6433,7 +6433,7 @@ static void clear_ftrace_pids(struct trace_array *tr)
rcu_assign_pointer(tr->function_pids, NULL);

/* Wait till all users are no longer using pid filtering */
- synchronize_sched();
+ synchronize_rcu();

trace_free_pid_list(pid_list);
}
@@ -6580,7 +6580,7 @@ ftrace_pid_write(struct file *filp, const char __user *ubuf,
rcu_assign_pointer(tr->function_pids, pid_list);

if (filtered_pids) {
- synchronize_sched();
+ synchronize_rcu();
trace_free_pid_list(filtered_pids);
} else if (pid_list) {
/* Register a probe to set whether to ignore the tracing of a task */
diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 65bd4616220d..4f3247a53259 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -1834,7 +1834,7 @@ int ring_buffer_resize(struct ring_buffer *buffer, unsigned long size,
* There could have been a race between checking
* record_disable and incrementing it.
*/
- synchronize_sched();
+ synchronize_rcu();
for_each_buffer_cpu(buffer, cpu) {
cpu_buffer = buffer->buffers[cpu];
rb_check_pages(cpu_buffer);
@@ -3151,7 +3151,7 @@ static bool rb_per_cpu_empty(struct ring_buffer_per_cpu *cpu_buffer)
* This prevents all writes to the buffer. Any attempt to write
* to the buffer after this will fail and return NULL.
*
- * The caller should call synchronize_sched() after this.
+ * The caller should call synchronize_rcu() after this.
*/
void ring_buffer_record_disable(struct ring_buffer *buffer)
{
@@ -3253,7 +3253,7 @@ bool ring_buffer_record_is_set_on(struct ring_buffer *buffer)
* This prevents all writes to the buffer. Any attempt to write
* to the buffer after this will fail and return NULL.
*
- * The caller should call synchronize_sched() after this.
+ * The caller should call synchronize_rcu() after this.
*/
void ring_buffer_record_disable_cpu(struct ring_buffer *buffer, int cpu)
{
@@ -4191,7 +4191,7 @@ EXPORT_SYMBOL_GPL(ring_buffer_read_prepare);
void
ring_buffer_read_prepare_sync(void)
{
- synchronize_sched();
+ synchronize_rcu();
}
EXPORT_SYMBOL_GPL(ring_buffer_read_prepare_sync);

@@ -4363,7 +4363,7 @@ void ring_buffer_reset_cpu(struct ring_buffer *buffer, int cpu)
atomic_inc(&cpu_buffer->record_disabled);

/* Make sure all commits have finished */
- synchronize_sched();
+ synchronize_rcu();

raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);

@@ -4496,7 +4496,7 @@ int ring_buffer_swap_cpu(struct ring_buffer *buffer_a,
goto out;

/*
- * We can't do a synchronize_sched here because this
+ * We can't do a synchronize_rcu here because this
* function can be called in atomic context.
* Normally this will be called from the same CPU as cpu.
* If not it's up to the caller to protect this.
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index ff1c4b20cd0a..51612b4a603f 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -1681,7 +1681,7 @@ void tracing_reset(struct trace_buffer *buf, int cpu)
ring_buffer_record_disable(buffer);

/* Make sure all commits have finished */
- synchronize_sched();
+ synchronize_rcu();
ring_buffer_reset_cpu(buffer, cpu);

ring_buffer_record_enable(buffer);
@@ -1698,7 +1698,7 @@ void tracing_reset_online_cpus(struct trace_buffer *buf)
ring_buffer_record_disable(buffer);

/* Make sure all commits have finished */
- synchronize_sched();
+ synchronize_rcu();

buf->time_start = buffer_ftrace_now(buf, buf->cpu);

@@ -2250,7 +2250,7 @@ void trace_buffered_event_disable(void)
preempt_enable();

/* Wait for all current users to finish */
- synchronize_sched();
+ synchronize_rcu();

for_each_tracing_cpu(cpu) {
free_page((unsigned long)per_cpu(trace_buffered_event, cpu));
@@ -5398,7 +5398,7 @@ static int tracing_set_tracer(struct trace_array *tr, const char *buf)
if (tr->current_trace->reset)
tr->current_trace->reset(tr);

- /* Current trace needs to be nop_trace before synchronize_sched */
+ /* Current trace needs to be nop_trace before synchronize_rcu */
tr->current_trace = &nop_trace;

#ifdef CONFIG_TRACER_MAX_TRACE
@@ -5412,7 +5412,7 @@ static int tracing_set_tracer(struct trace_array *tr, const char *buf)
* The update_max_tr is called from interrupts disabled
* so a synchronized_sched() is sufficient.
*/
- synchronize_sched();
+ synchronize_rcu();
free_snapshot(tr);
}
#endif
diff --git a/kernel/trace/trace_events_filter.c b/kernel/trace/trace_events_filter.c
index 84a65173b1e9..35f3aa55be85 100644
--- a/kernel/trace/trace_events_filter.c
+++ b/kernel/trace/trace_events_filter.c
@@ -1614,7 +1614,7 @@ static int process_system_preds(struct trace_subsystem_dir *dir,

/*
* The calls can still be using the old filters.
- * Do a synchronize_sched() and to ensure all calls are
+ * Do a synchronize_rcu() and to ensure all calls are
* done with them before we free them.
*/
tracepoint_synchronize_unregister();
@@ -1845,7 +1845,7 @@ int apply_subsystem_event_filter(struct trace_subsystem_dir *dir,
if (filter) {
/*
* No event actually uses the system filter
- * we can free it without synchronize_sched().
+ * we can free it without synchronize_rcu().
*/
__free_filter(system->filter);
system->filter = filter;
diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
index fec67188c4d2..adc153ab51c0 100644
--- a/kernel/trace/trace_kprobe.c
+++ b/kernel/trace/trace_kprobe.c
@@ -333,7 +333,7 @@ disable_trace_kprobe(struct trace_kprobe *tk, struct trace_event_file *file)
* event_call related objects, which will be accessed in
* the kprobe_trace_func/kretprobe_trace_func.
*/
- synchronize_sched();
+ synchronize_rcu();
kfree(link); /* Ignored if link == NULL */
}

diff --git a/kernel/tracepoint.c b/kernel/tracepoint.c
index a3be42304485..46f2ab1e08a9 100644
--- a/kernel/tracepoint.c
+++ b/kernel/tracepoint.c
@@ -92,7 +92,7 @@ static __init int release_early_probes(void)
while (early_probes) {
tmp = early_probes;
early_probes = tmp->next;
- call_rcu_sched(tmp, rcu_free_old_probes);
+ call_rcu(tmp, rcu_free_old_probes);
}

return 0;
@@ -123,7 +123,7 @@ static inline void release_probes(struct tracepoint_func *old)
* cover both cases. So let us chain the SRCU and sched RCU
* callbacks to wait for both grace periods.
*/
- call_rcu_sched(&tp_probes->rcu, rcu_free_old_probes);
+ call_rcu(&tp_probes->rcu, rcu_free_old_probes);
}
}

--
2.17.1


2018-11-11 19:47:31

by Paul E. McKenney

Subject: [PATCH tip/core/rcu 05/41] sched/membarrier: Replace synchronize_sched() with synchronize_rcu()

From: "Paul E. McKenney" <[email protected]>

Now that synchronize_rcu() waits for preempt-disable regions of code
as well as RCU read-side critical sections, the synchronize_sched()
in sys_membarrier() can be replaced by synchronize_rcu(). This commit
therefore makes this change.

Signed-off-by: Paul E. McKenney <[email protected]>
Cc: Mathieu Desnoyers <[email protected]>
---
kernel/sched/membarrier.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/membarrier.c b/kernel/sched/membarrier.c
index 76e0eaf4654e..388a7a6c1aa2 100644
--- a/kernel/sched/membarrier.c
+++ b/kernel/sched/membarrier.c
@@ -298,7 +298,7 @@ SYSCALL_DEFINE2(membarrier, int, cmd, int, flags)
if (tick_nohz_full_enabled())
return -EINVAL;
if (num_online_cpus() > 1)
- synchronize_sched();
+ synchronize_rcu();
return 0;
case MEMBARRIER_CMD_GLOBAL_EXPEDITED:
return membarrier_global_expedited();
--
2.17.1


2018-11-11 19:47:42

by Paul E. McKenney

Subject: [PATCH tip/core/rcu 10/41] drivers/ipmi: Replace synchronize_sched() with synchronize_rcu()

Now that synchronize_rcu() waits for preempt-disable regions of code
as well as RCU read-side critical sections, synchronize_sched() can be
replaced by synchronize_rcu(). This commit therefore makes this change.

Signed-off-by: Paul E. McKenney <[email protected]>
Cc: Corey Minyard <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: <[email protected]>
---
drivers/char/ipmi/ipmi_si_intf.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/char/ipmi/ipmi_si_intf.c b/drivers/char/ipmi/ipmi_si_intf.c
index 677618e6f1f7..dc8603d34320 100644
--- a/drivers/char/ipmi/ipmi_si_intf.c
+++ b/drivers/char/ipmi/ipmi_si_intf.c
@@ -2187,7 +2187,7 @@ static void shutdown_smi(void *send_info)
* handlers might have been running before we freed the
* interrupt.
*/
- synchronize_sched();
+ synchronize_rcu();

/*
* Timeouts are stopped, now make sure the interrupts are off
--
2.17.1


2018-11-11 19:47:44

by Paul E. McKenney

Subject: [PATCH tip/core/rcu 35/41] types: Remove call_rcu_bh() and call_rcu_sched()

Now that call_rcu()'s callback is not invoked until after bh-disable and
preempt-disable regions of code have completed (in addition to explicitly
marked RCU read-side critical sections), call_rcu() can be used in place
of call_rcu_bh() and call_rcu_sched(). This commit therefore removes
these two API members from the callback_head structure's header comment.
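
The guarantee being documented is that pointer alignment keeps the low
bits of ->next clear, so a future user (the comment names a prospective
call_rcu_lazy()) could tag callbacks through them. A sketch with a
purely hypothetical CB_TAG bit:

#include <linux/types.h>

#define CB_TAG 0x1UL    /* hypothetical low-bit flag, e.g. "lazy" */

static inline bool cb_tagged(struct callback_head *p)
{
        return (unsigned long)p & CB_TAG;
}

static inline struct callback_head *cb_untag(struct callback_head *p)
{
        return (struct callback_head *)((unsigned long)p & ~CB_TAG);
}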

Signed-off-by: Paul E. McKenney <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Pekka Enberg <[email protected]>
Cc: Masahiro Yamada <[email protected]>
Cc: Alexey Dobriyan <[email protected]>
---
include/linux/types.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/linux/types.h b/include/linux/types.h
index 9834e90aa010..c2615d6a019e 100644
--- a/include/linux/types.h
+++ b/include/linux/types.h
@@ -212,8 +212,8 @@ struct ustat {
* weird ABI and we need to ask it explicitly.
*
* The alignment is required to guarantee that bit 0 of @next will be
- * clear under normal conditions -- as long as we use call_rcu(),
- * call_rcu_bh(), call_rcu_sched(), or call_srcu() to queue callback.
+ * clear under normal conditions -- as long as we use call_rcu() or
+ * call_srcu() to queue the callback.
*
* This guarantee is important for few reasons:
* - future call_rcu_lazy() will make use of lower bits in the pointer;
--
2.17.1


2018-11-11 19:47:49

by Paul E. McKenney

Subject: [PATCH tip/core/rcu 16/41] cpufreq/cpufreq_governor: Replace synchronize_sched() with synchronize_rcu()

Now that synchronize_rcu() waits for preempt-disable regions of code
as well as RCU read-side critical sections, synchronize_sched() can be
replaced by synchronize_rcu(). This commit therefore makes this change.

Signed-off-by: Paul E. McKenney <[email protected]>
---
drivers/cpufreq/cpufreq_governor.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/cpufreq/cpufreq_governor.c b/drivers/cpufreq/cpufreq_governor.c
index 6d53f7d9fc7a..ffa9adeaba31 100644
--- a/drivers/cpufreq/cpufreq_governor.c
+++ b/drivers/cpufreq/cpufreq_governor.c
@@ -346,7 +346,7 @@ static inline void gov_clear_update_util(struct cpufreq_policy *policy)
for_each_cpu(i, policy->cpus)
cpufreq_remove_update_util_hook(i);

- synchronize_sched();
+ synchronize_rcu();
}

static struct policy_dbs_info *alloc_policy_dbs_info(struct cpufreq_policy *policy,
--
2.17.1


2018-11-11 19:47:57

by Paul E. McKenney

Subject: [PATCH tip/core/rcu 17/41] fs/file: Replace synchronize_sched() with synchronize_rcu()

Now that synchronize_rcu() waits for preempt-disable regions of code
as well as RCU read-side critical sections, synchronize_sched() can be
replaced by synchronize_rcu(). This commit therefore makes this change.

Signed-off-by: Paul E. McKenney <[email protected]>
Cc: Alexander Viro <[email protected]>
Cc: <[email protected]>
---
fs/file.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/file.c b/fs/file.c
index 7ffd6e9d103d..50304c7525ea 100644
--- a/fs/file.c
+++ b/fs/file.c
@@ -158,7 +158,7 @@ static int expand_fdtable(struct files_struct *files, unsigned int nr)
* or have finished their rcu_read_lock_sched() section.
*/
if (atomic_read(&files->count) > 1)
- synchronize_sched();
+ synchronize_rcu();

spin_lock(&files->file_lock);
if (!new_fdt)
--
2.17.1


2018-11-11 19:47:58

by Paul E. McKenney

Subject: [PATCH tip/core/rcu 39/41] net/decnet: Replace rcu_barrier_bh() with rcu_barrier()

Now that all RCU flavors have been consolidated, rcu_barrier_bh()
is but a synonym for rcu_barrier(). This commit therefore replaces
the former with the latter.

Signed-off-by: Paul E. McKenney <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: <[email protected]>
Cc: <[email protected]>
---
net/decnet/af_decnet.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/net/decnet/af_decnet.c b/net/decnet/af_decnet.c
index 7d6ff983ba2c..dbd0f7bae00a 100644
--- a/net/decnet/af_decnet.c
+++ b/net/decnet/af_decnet.c
@@ -2405,7 +2405,7 @@ static void __exit decnet_exit(void)

proto_unregister(&dn_proto);

- rcu_barrier_bh(); /* Wait for completion of call_rcu_bh()'s */
+ rcu_barrier(); /* Wait for completion of call_rcu()'s */
}
module_exit(decnet_exit);
#endif
--
2.17.1


2018-11-11 19:48:05

by Paul E. McKenney

Subject: [PATCH tip/core/rcu 13/41] ethernet/intel/ixgbe: Replace synchronize_sched() with synchronize_rcu()

Now that synchronize_rcu() waits for preempt-disable regions of code
as well as RCU read-side critical sections, synchronize_sched() can be
replaced by synchronize_rcu(). This commit therefore makes this change.

Signed-off-by: Paul E. McKenney <[email protected]>
Cc: Jeff Kirsher <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: <[email protected]>
---
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
index 113b38e0defb..e8095bdf46de 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
@@ -6077,9 +6077,9 @@ void ixgbe_down(struct ixgbe_adapter *adapter)
/* Disable Rx */
ixgbe_disable_rx(adapter);

- /* synchronize_sched() needed for pending XDP buffers to drain */
+ /* synchronize_rcu() needed for pending XDP buffers to drain */
if (adapter->xdp_ring[0])
- synchronize_sched();
+ synchronize_rcu();

ixgbe_irq_disable(adapter);

@@ -10476,7 +10476,7 @@ void ixgbe_txrx_ring_disable(struct ixgbe_adapter *adapter, int ring)
ixgbe_disable_rxr_hw(adapter, rx_ring);

if (xdp_ring)
- synchronize_sched();
+ synchronize_rcu();

/* Rx/Tx/XDP Tx share the same napi context. */
napi_disable(&rx_ring->q_vector->napi);
--
2.17.1


2018-11-11 19:48:07

by Paul E. McKenney

Subject: [PATCH tip/core/rcu 27/41] percpu-refcount: Replace call_rcu_sched() with call_rcu()

Now that call_rcu()'s callback is not invoked until after all
preempt-disable regions of code have completed (in addition to explicitly
marked RCU read-side critical sections), call_rcu() can be used in place
of call_rcu_sched(). This commit therefore makes that change.

Signed-off-by: Paul E. McKenney <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: Ming Lei <[email protected]>
Cc: Bart Van Assche <[email protected]>
Cc: Jens Axboe <[email protected]>
---
lib/percpu-refcount.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/percpu-refcount.c b/lib/percpu-refcount.c
index de10b8c0bff6..9877682e49c7 100644
--- a/lib/percpu-refcount.c
+++ b/lib/percpu-refcount.c
@@ -181,7 +181,7 @@ static void __percpu_ref_switch_to_atomic(struct percpu_ref *ref,
ref->confirm_switch = confirm_switch ?: percpu_ref_noop_confirm_switch;

percpu_ref_get(ref); /* put after confirmation */
- call_rcu_sched(&ref->rcu, percpu_ref_switch_to_atomic_rcu);
+ call_rcu(&ref->rcu, percpu_ref_switch_to_atomic_rcu);
}

static void __percpu_ref_switch_to_percpu(struct percpu_ref *ref)
--
2.17.1


2018-11-11 19:48:07

by Paul E. McKenney

Subject: [PATCH tip/core/rcu 34/41] percpu-rwsem: Replace synchronize_sched() with synchronize_rcu()

Now that synchronize_rcu() waits for preempt-disable regions of code
as well as RCU read-side critical sections, synchronize_sched() can be
replaced by synchronize_rcu(). This commit therefore makes this change,
even though it is but a comment.

Signed-off-by: Paul E. McKenney <[email protected]>
Cc: Dennis Zhou <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: Christoph Lameter <[email protected]>
---
include/linux/percpu-rwsem.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/percpu-rwsem.h b/include/linux/percpu-rwsem.h
index 79b99d653e03..71b75643c432 100644
--- a/include/linux/percpu-rwsem.h
+++ b/include/linux/percpu-rwsem.h
@@ -41,7 +41,7 @@ static inline void percpu_down_read_preempt_disable(struct percpu_rw_semaphore *
* cannot both change sem->state from readers_fast and start checking
* counters while we are here. So if we see !sem->state, we know that
* the writer won't be checking until we're past the preempt_enable()
- * and that one the synchronize_sched() is done, the writer will see
+ * and that once the synchronize_rcu() is done, the writer will see
* anything we did within this RCU-sched read-size critical section.
*/
__this_cpu_inc(*sem->read_count);
--
2.17.1


2018-11-11 19:48:15

by Paul E. McKenney

Subject: [PATCH tip/core/rcu 31/41] net/sched: Replace call_rcu_bh() and rcu_barrier_bh()

Now that call_rcu()'s callback is not invoked until after bh-disable
regions of code have completed (in addition to explicitly marked
RCU read-side critical sections), call_rcu() can be used in place
of call_rcu_bh(). Similarly, rcu_barrier() can be used in place of
rcu_barrier_bh(). This commit therefore makes these changes.
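
Two of the conversions below are rcu_barrier() rather than
synchronize_rcu() for a reason worth noting: mini_qdisc_pair_swap() is
about to reuse a buffer that a previously queued callback may still
reference, so it must wait for the callback itself, not merely for
readers. Condensed from the hunk:

        rcu_barrier();                  /* prior mini_qdisc_rcu_func() has run */
        miniq->filter_list = tp_head;   /* inactive buffer now safe to reuse */
        rcu_assign_pointer(*miniqp->p_miniq, miniq);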

Signed-off-by: Paul E. McKenney <[email protected]>
Cc: Jamal Hadi Salim <[email protected]>
Cc: Cong Wang <[email protected]>
Cc: Jiri Pirko <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: <[email protected]>
---
net/sched/sch_api.c | 2 +-
net/sched/sch_generic.c | 8 ++++----
2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c
index ca3b0f46de53..016e628c6ac9 100644
--- a/net/sched/sch_api.c
+++ b/net/sched/sch_api.c
@@ -540,7 +540,7 @@ void qdisc_put_stab(struct qdisc_size_table *tab)

if (--tab->refcnt == 0) {
list_del(&tab->list);
- call_rcu_bh(&tab->rcu, stab_kfree_rcu);
+ call_rcu(&tab->rcu, stab_kfree_rcu);
}
}
EXPORT_SYMBOL(qdisc_put_stab);
diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
index de1663f7d3ad..66ba2ce2320f 100644
--- a/net/sched/sch_generic.c
+++ b/net/sched/sch_generic.c
@@ -1372,7 +1372,7 @@ void mini_qdisc_pair_swap(struct mini_Qdisc_pair *miniqp,
if (!tp_head) {
RCU_INIT_POINTER(*miniqp->p_miniq, NULL);
/* Wait for flying RCU callback before it is freed. */
- rcu_barrier_bh();
+ rcu_barrier();
return;
}

@@ -1380,10 +1380,10 @@ void mini_qdisc_pair_swap(struct mini_Qdisc_pair *miniqp,
&miniqp->miniq1 : &miniqp->miniq2;

/* We need to make sure that readers won't see the miniq
- * we are about to modify. So wait until previous call_rcu_bh callback
+ * we are about to modify. So wait until previous call_rcu callback
* is done.
*/
- rcu_barrier_bh();
+ rcu_barrier();
miniq->filter_list = tp_head;
rcu_assign_pointer(*miniqp->p_miniq, miniq);

@@ -1392,7 +1392,7 @@ void mini_qdisc_pair_swap(struct mini_Qdisc_pair *miniqp,
* block potential new user of miniq_old until all readers
* are not seeing it.
*/
- call_rcu_bh(&miniq_old->rcu, mini_qdisc_rcu_func);
+ call_rcu(&miniq_old->rcu, mini_qdisc_rcu_func);
}
EXPORT_SYMBOL(mini_qdisc_pair_swap);

--
2.17.1


2018-11-11 19:48:25

by Paul E. McKenney

Subject: [PATCH tip/core/rcu 15/41] cpufreq/intel_pstate: Replace synchronize_sched() with synchronize_rcu()

Now that synchronize_rcu() waits for preempt-disable regions of code
as well as RCU read-side critical sections, synchronize_sched() can be
replaced by synchronize_rcu(). This commit therefore makes this change.

Signed-off-by: Paul E. McKenney <[email protected]>
Cc: Srinivas Pandruvada <[email protected]>
Cc: Len Brown <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Viresh Kumar <[email protected]>
Cc: <[email protected]>
---
drivers/cpufreq/intel_pstate.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
index 9578312e43f2..ed124d72db76 100644
--- a/drivers/cpufreq/intel_pstate.c
+++ b/drivers/cpufreq/intel_pstate.c
@@ -1930,7 +1930,7 @@ static void intel_pstate_clear_update_util_hook(unsigned int cpu)

cpufreq_remove_update_util_hook(cpu);
cpu_data->update_util_set = false;
- synchronize_sched();
+ synchronize_rcu();
}

static int intel_pstate_get_max_freq(struct cpudata *cpu)
--
2.17.1


2018-11-11 19:48:35

by Paul E. McKenney

Subject: [PATCH tip/core/rcu 14/41] drivers/vhost: Replace synchronize_rcu_bh() with synchronize_rcu()

Now that synchronize_rcu() waits for bh-disable regions of code as well
as RCU read-side critical sections, synchronize_rcu_bh() can be replaced
by synchronize_rcu(). This commit therefore makes this change.
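
Schematically, the pairing this relies on, with use_vq_resources() as a
hypothetical stand-in for vhost's bh-context callbacks:

#include <linux/bottom_half.h>
#include <linux/rcupdate.h>

struct vhost_net;                               /* private to net.c */
void use_vq_resources(struct vhost_net *n);     /* hypothetical */

/* Reader: a callback running with BH disabled. */
static void callback_path(struct vhost_net *n)
{
        local_bh_disable();
        use_vq_resources(n);
        local_bh_enable();
}

/*
 * Writer, on release: one consolidated grace period now waits for all
 * such bh-disabled regions, making the _bh variant unnecessary:
 *
 *      synchronize_rcu();
 *      vhost_net_flush(n);
 */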

Signed-off-by: Paul E. McKenney <[email protected]>
Cc: "Michael S. Tsirkin" <[email protected]>
Cc: Jason Wang <[email protected]>
Cc: <[email protected]>
Cc: <[email protected]>
Cc: <[email protected]>
---
drivers/vhost/net.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index ab11b2bee273..564ead864028 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -1359,7 +1359,7 @@ static int vhost_net_release(struct inode *inode, struct file *f)
if (rx_sock)
sockfd_put(rx_sock);
/* Make sure no callbacks are outstanding */
- synchronize_rcu_bh();
+ synchronize_rcu();
/* We do an extra flush before freeing memory,
* since jobs can re-queue themselves. */
vhost_net_flush(n);
--
2.17.1


2018-11-11 19:48:37

by Paul E. McKenney

Subject: [PATCH tip/core/rcu 37/41] livepatch: Replace synchronize_sched() with synchronize_rcu()

Now that synchronize_rcu() waits for preempt-disable regions of code
as well as RCU read-side critical sections, synchronize_sched() can be
replaced by synchronize_rcu(). This commit therefore makes this change,
even though it is but a comment.

Signed-off-by: Paul E. McKenney <[email protected]>
---
kernel/livepatch/patch.c | 4 ++--
kernel/livepatch/transition.c | 4 ++--
2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/kernel/livepatch/patch.c b/kernel/livepatch/patch.c
index 82d584225dc6..7702cb4064fc 100644
--- a/kernel/livepatch/patch.c
+++ b/kernel/livepatch/patch.c
@@ -61,7 +61,7 @@ static void notrace klp_ftrace_handler(unsigned long ip,
ops = container_of(fops, struct klp_ops, fops);

/*
- * A variant of synchronize_sched() is used to allow patching functions
+ * A variant of synchronize_rcu() is used to allow patching functions
* where RCU is not watching, see klp_synchronize_transition().
*/
preempt_disable_notrace();
@@ -72,7 +72,7 @@ static void notrace klp_ftrace_handler(unsigned long ip,
/*
* func should never be NULL because preemption should be disabled here
* and unregister_ftrace_function() does the equivalent of a
- * synchronize_sched() before the func_stack removal.
+ * synchronize_rcu() before the func_stack removal.
*/
if (WARN_ON_ONCE(!func))
goto unlock;
diff --git a/kernel/livepatch/transition.c b/kernel/livepatch/transition.c
index 5bc349805e03..304d5eb8a98c 100644
--- a/kernel/livepatch/transition.c
+++ b/kernel/livepatch/transition.c
@@ -52,7 +52,7 @@ static DECLARE_DELAYED_WORK(klp_transition_work, klp_transition_work_fn);

/*
* This function is just a stub to implement a hard force
- * of synchronize_sched(). This requires synchronizing
+ * of synchronize_rcu(). This requires synchronizing
* tasks even in userspace and idle.
*/
static void klp_sync(struct work_struct *work)
@@ -175,7 +175,7 @@ void klp_cancel_transition(void)
void klp_update_patch_state(struct task_struct *task)
{
/*
- * A variant of synchronize_sched() is used to allow patching functions
+ * A variant of synchronize_rcu() is used to allow patching functions
* where RCU is not watching, see klp_synchronize_transition().
*/
preempt_disable_notrace();
--
2.17.1


2018-11-11 19:48:40

by Paul E. McKenney

Subject: [PATCH tip/core/rcu 29/41] mm: Replace call_rcu_sched() with call_rcu()

Now that call_rcu()'s callback is not invoked until after all
preempt-disable regions of code have completed (in addition to explicitly
marked RCU read-side critical sections), call_rcu() can be used in place
of call_rcu_sched(). This commit therefore makes that change.

Signed-off-by: Paul E. McKenney <[email protected]>
---
mm/mmu_gather.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index 2a9fbc4a37d5..f2f03c655807 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -199,7 +199,7 @@ void tlb_table_flush(struct mmu_gather *tlb)

if (*batch) {
tlb_table_invalidate(tlb);
- call_rcu_sched(&(*batch)->rcu, tlb_remove_table_rcu);
+ call_rcu(&(*batch)->rcu, tlb_remove_table_rcu);
*batch = NULL;
}
}
--
2.17.1


2018-11-11 19:48:56

by Paul E. McKenney

Subject: [PATCH tip/core/rcu 20/41] kprobes: Replace synchronize_sched() with synchronize_rcu()

Now that synchronize_rcu() waits for preempt-disable regions of code
as well as RCU read-side critical sections, synchronize_sched() can be
replaced by synchronize_rcu(). This commit therefore makes this change.

Signed-off-by: Paul E. McKenney <[email protected]>
Cc: "Naveen N. Rao" <[email protected]>
Cc: Anil S Keshavamurthy <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: Masami Hiramatsu <[email protected]>
---
kernel/kprobes.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index 90e98e233647..08e31d863191 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -229,7 +229,7 @@ static int collect_garbage_slots(struct kprobe_insn_cache *c)
struct kprobe_insn_page *kip, *next;

/* Ensure no-one is interrupted on the garbages */
- synchronize_sched();
+ synchronize_rcu();

list_for_each_entry_safe(kip, next, &c->pages, list) {
int i;
@@ -1382,7 +1382,7 @@ static int register_aggr_kprobe(struct kprobe *orig_p, struct kprobe *p)
if (ret) {
ap->flags |= KPROBE_FLAG_DISABLED;
list_del_rcu(&p->list);
- synchronize_sched();
+ synchronize_rcu();
}
}
}
@@ -1597,7 +1597,7 @@ int register_kprobe(struct kprobe *p)
ret = arm_kprobe(p);
if (ret) {
hlist_del_rcu(&p->hlist);
- synchronize_sched();
+ synchronize_rcu();
goto out;
}
}
@@ -1776,7 +1776,7 @@ void unregister_kprobes(struct kprobe **kps, int num)
kps[i]->addr = NULL;
mutex_unlock(&kprobe_mutex);

- synchronize_sched();
+ synchronize_rcu();
for (i = 0; i < num; i++)
if (kps[i]->addr)
__unregister_kprobe_bottom(kps[i]);
@@ -1966,7 +1966,7 @@ void unregister_kretprobes(struct kretprobe **rps, int num)
rps[i]->kp.addr = NULL;
mutex_unlock(&kprobe_mutex);

- synchronize_sched();
+ synchronize_rcu();
for (i = 0; i < num; i++) {
if (rps[i]->kp.addr) {
__unregister_kprobe_bottom(&rps[i]->kp);
--
2.17.1


2018-11-11 19:49:02

by Paul E. McKenney

Subject: [PATCH tip/core/rcu 12/41] ethernet/realtek: Replace synchronize_sched() with synchronize_rcu()

Now that synchronize_rcu() waits for preempt-disable regions of code
as well as RCU read-side critical sections, synchronize_sched() can be
replaced by synchronize_rcu(). This commit therefore makes this change.

Signed-off-by: Paul E. McKenney <[email protected]>
Cc: Realtek linux nic maintainers <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: <[email protected]>
---
drivers/net/ethernet/realtek/8139too.c | 2 +-
drivers/net/ethernet/realtek/r8169.c | 4 ++--
2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/realtek/8139too.c b/drivers/net/ethernet/realtek/8139too.c
index ffd68a7bc9e1..69d752f0b621 100644
--- a/drivers/net/ethernet/realtek/8139too.c
+++ b/drivers/net/ethernet/realtek/8139too.c
@@ -1661,7 +1661,7 @@ static void rtl8139_tx_timeout_task (struct work_struct *work)

napi_disable(&tp->napi);
netif_stop_queue(dev);
- synchronize_sched();
+ synchronize_rcu();

netdev_dbg(dev, "Transmit timeout, status %02x %04x %04x media %02x\n",
RTL_R8(ChipCmd), RTL_R16(IntrStatus),
diff --git a/drivers/net/ethernet/realtek/r8169.c b/drivers/net/ethernet/realtek/r8169.c
index 1fd01688d37b..4f1d89f0dc24 100644
--- a/drivers/net/ethernet/realtek/r8169.c
+++ b/drivers/net/ethernet/realtek/r8169.c
@@ -5866,7 +5866,7 @@ static void rtl_reset_work(struct rtl8169_private *tp)

napi_disable(&tp->napi);
netif_stop_queue(dev);
- synchronize_sched();
+ synchronize_rcu();

rtl8169_hw_reset(tp);

@@ -6609,7 +6609,7 @@ static void rtl8169_down(struct net_device *dev)
rtl8169_rx_missed(dev);

/* Give a racing hard_start_xmit a few cycles to complete. */
- synchronize_sched();
+ synchronize_rcu();

rtl8169_tx_clear(tp);

--
2.17.1


2018-11-11 19:49:08

by Paul E. McKenney

[permalink] [raw]
Subject: [PATCH tip/core/rcu 24/41] modules: Replace synchronize_sched() and call_rcu_sched()

Now that synchronize_rcu() waits for preempt-disable regions of code
as well as RCU read-side critical sections, synchronize_sched() can
be replaced by synchronize_rcu(). Similarly, call_rcu_sched() can be
replaced by call_rcu(). This commit therefore makes these changes.
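
The asynchronous half of this relies on the rcu_head embedded in the
object being retired. A minimal sketch of the call_rcu() pattern,
under hypothetical widget names (nothing here is from kernel/module.c):

	#include <linux/kernel.h>
	#include <linux/rcupdate.h>
	#include <linux/slab.h>

	struct widget {
		int data;
		struct rcu_head rcu;	/* handle for the deferred free */
	};

	static void widget_free_rcu(struct rcu_head *head)
	{
		/* Recover the enclosing object from its rcu_head. */
		kfree(container_of(head, struct widget, rcu));
	}

	static void widget_retire(struct widget *w)
	{
		/* Runs widget_free_rcu() after a full grace period,
		 * which now also waits out preempt-disable regions. */
		call_rcu(&w->rcu, widget_free_rcu);
	}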

Signed-off-by: Paul E. McKenney <[email protected]>
Cc: Jessica Yu <[email protected]>
---
kernel/module.c | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/kernel/module.c b/kernel/module.c
index 49a405891587..99b46c32d579 100644
--- a/kernel/module.c
+++ b/kernel/module.c
@@ -2159,7 +2159,7 @@ static void free_module(struct module *mod)
/* Remove this module from bug list, this uses list_del_rcu */
module_bug_cleanup(mod);
/* Wait for RCU-sched synchronizing before releasing mod->list and buglist. */
- synchronize_sched();
+ synchronize_rcu();
mutex_unlock(&module_mutex);

/* This may be empty, but that's OK */
@@ -3507,15 +3507,15 @@ static noinline int do_init_module(struct module *mod)
/*
* We want to free module_init, but be aware that kallsyms may be
* walking this with preempt disabled. In all the failure paths, we
- * call synchronize_sched(), but we don't want to slow down the success
+ * call synchronize_rcu(), but we don't want to slow down the success
* path, so use actual RCU here.
* Note that module_alloc() on most architectures creates W+X page
* mappings which won't be cleaned up until do_free_init() runs. Any
* code such as mark_rodata_ro() which depends on those mappings to
* be cleaned up needs to sync with the queued work - ie
- * rcu_barrier_sched()
+ * rcu_barrier()
*/
- call_rcu_sched(&freeinit->rcu, do_free_init);
+ call_rcu(&freeinit->rcu, do_free_init);
mutex_unlock(&module_mutex);
wake_up_all(&module_wq);

@@ -3526,7 +3526,7 @@ static noinline int do_init_module(struct module *mod)
fail:
/* Try to protect us from buggy refcounters. */
mod->state = MODULE_STATE_GOING;
- synchronize_sched();
+ synchronize_rcu();
module_put(mod);
blocking_notifier_call_chain(&module_notify_list,
MODULE_STATE_GOING, mod);
@@ -3819,7 +3819,7 @@ static int load_module(struct load_info *info, const char __user *uargs,
ddebug_cleanup:
ftrace_release_mod(mod);
dynamic_debug_remove(mod, info->debug);
- synchronize_sched();
+ synchronize_rcu();
kfree(mod->args);
free_arch_cleanup:
module_arch_cleanup(mod);
@@ -3834,7 +3834,7 @@ static int load_module(struct load_info *info, const char __user *uargs,
mod_tree_remove(mod);
wake_up_all(&module_wq);
/* Wait for RCU-sched synchronizing before releasing mod->list. */
- synchronize_sched();
+ synchronize_rcu();
mutex_unlock(&module_mutex);
free_module:
/* Free lock-classes; relies on the preceding sync_rcu() */
--
2.17.1


2018-11-11 19:49:11

by Paul E. McKenney

[permalink] [raw]
Subject: [PATCH tip/core/rcu 09/41] crypto/pcrypt: Replace synchronize_rcu_bh() with synchronize_rcu()

Now that synchronize_rcu() waits for bh-disable regions of code as
well as RCU read-side critical sections, the synchronize_rcu_bh() in
pcrypt_cpumask_change_notify() can be replaced by synchronize_rcu().
This commit therefore makes this change.
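
The sequence in pcrypt_cpumask_change_notify() is the usual
publish-then-retire idiom. A minimal sketch of that idiom under
hypothetical cfg names (illustrative only, not pcrypt code; updates
are assumed to be serialized by the caller):

	#include <linux/rcupdate.h>
	#include <linux/slab.h>

	struct cfg { int val; };
	static struct cfg __rcu *cfg_ptr;

	static void cfg_replace(struct cfg *new)
	{
		struct cfg *old;

		old = rcu_dereference_protected(cfg_ptr, 1);
		rcu_assign_pointer(cfg_ptr, new);	/* publish */
		synchronize_rcu();	/* wait out readers of old, bh included */
		kfree(old);		/* now safe to free */
	}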

Signed-off-by: Paul E. McKenney <[email protected]>
Cc: Steffen Klassert <[email protected]>
Cc: <[email protected]>
---
crypto/pcrypt.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/crypto/pcrypt.c b/crypto/pcrypt.c
index f8ec3d4ba4a8..8eb3c4c9ff67 100644
--- a/crypto/pcrypt.c
+++ b/crypto/pcrypt.c
@@ -382,7 +382,7 @@ static int pcrypt_cpumask_change_notify(struct notifier_block *self,

cpumask_copy(new_mask->mask, cpumask->cbcpu);
rcu_assign_pointer(pcrypt->cb_cpumask, new_mask);
- synchronize_rcu_bh();
+ synchronize_rcu();

free_cpumask_var(old_mask->mask);
kfree(old_mask);
--
2.17.1


2018-11-11 19:49:19

by Paul E. McKenney

[permalink] [raw]
Subject: [PATCH tip/core/rcu 02/41] rcu: Fix rcu_{node,data} comments about gp_seq_needed

From: "Joel Fernandes (Google)" <[email protected]>

Recent changes have removed the old ->gp_seq_needed field from the
rcu_state structure, which in turn obsoleted a couple of comments in
the rcu_node and rcu_data structures. This commit therefore updates
these comments accordingly.

Signed-off-by: Joel Fernandes (Google) <[email protected]>
Cc: <[email protected]>
Signed-off-by: Paul E. McKenney <[email protected]>
---
kernel/rcu/tree.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h
index 57a937ac51c2..c3e2807a834a 100644
--- a/kernel/rcu/tree.h
+++ b/kernel/rcu/tree.h
@@ -57,7 +57,7 @@ struct rcu_node {
/* some rcu_state fields as well as */
/* following. */
unsigned long gp_seq; /* Track rsp->rcu_gp_seq. */
- unsigned long gp_seq_needed; /* Track rsp->rcu_gp_seq_needed. */
+ unsigned long gp_seq_needed; /* Track furthest future GP request. */
unsigned long completedqs; /* All QSes done for this node. */
unsigned long qsmask; /* CPUs or groups that need to switch in */
/* order for current grace period to proceed.*/
@@ -163,7 +163,7 @@ union rcu_noqs {
struct rcu_data {
/* 1) quiescent-state and grace-period handling : */
unsigned long gp_seq; /* Track rsp->rcu_gp_seq counter. */
- unsigned long gp_seq_needed; /* Track rsp->rcu_gp_seq_needed ctr. */
+ unsigned long gp_seq_needed; /* Track furthest future GP request. */
union rcu_noqs cpu_no_qs; /* No QSes yet for this CPU. */
bool core_needs_qs; /* Core waits for quiesc state. */
bool beenonline; /* CPU online at least once. */
--
2.17.1


2018-11-11 19:49:30

by Paul E. McKenney

[permalink] [raw]
Subject: [PATCH tip/core/rcu 07/41] s390/mm: Convert tlb_table_flush() to use call_rcu()

Now that call_rcu()'s callback is not invoked until after all
preempt-disable regions of code have completed (in addition to explicitly
marked RCU read-side critical sections), call_rcu() can be used in place
of call_rcu_sched(). This commit therefore makes that change.
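
The readers this depends on are plain preempt-disable regions, which
the consolidated RCU treats as read-side critical sections. A minimal
sketch of such a reader, using hypothetical table names (not s390 code):

	#include <linux/preempt.h>
	#include <linux/rcupdate.h>

	struct table { int nents; };
	static struct table __rcu *table_ptr;

	static void reader(void)
	{
		struct table *t;

		preempt_disable();	/* implicit read-side critical section */
		t = rcu_dereference_sched(table_ptr);
		if (t)
			(void)READ_ONCE(t->nents);
		preempt_enable();	/* the grace period may now complete */
	}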

Signed-off-by: Paul E. McKenney <[email protected]>
Cc: Martin Schwidefsky <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: <[email protected]>
---
arch/s390/mm/pgalloc.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/s390/mm/pgalloc.c b/arch/s390/mm/pgalloc.c
index 76d89ee8b428..da64e4b9324e 100644
--- a/arch/s390/mm/pgalloc.c
+++ b/arch/s390/mm/pgalloc.c
@@ -350,7 +350,7 @@ void tlb_table_flush(struct mmu_gather *tlb)
struct mmu_table_batch **batch = &tlb->batch;

if (*batch) {
- call_rcu_sched(&(*batch)->rcu, tlb_remove_table_rcu);
+ call_rcu(&(*batch)->rcu, tlb_remove_table_rcu);
*batch = NULL;
}
}
--
2.17.1


2018-11-11 19:49:58

by Paul E. McKenney

[permalink] [raw]
Subject: [PATCH tip/core/rcu 21/41] lockdep: Replace synchronize_sched() with synchronize_rcu()

Now that synchronize_rcu() waits for preempt-disable regions of code
as well as RCU read-side critical sections, synchronize_sched() can be
replaced by synchronize_rcu(). This commit therefore makes this change.

Signed-off-by: Paul E. McKenney <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Will Deacon <[email protected]>
---
kernel/locking/lockdep.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 1efada2dd9dd..ef27f98714c0 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -4195,7 +4195,7 @@ void lockdep_free_key_range(void *start, unsigned long size)
*
* sync_sched() is sufficient because the read-side is IRQ disable.
*/
- synchronize_sched();
+ synchronize_rcu();

/*
* XXX at this point we could return the resources to the pool;
--
2.17.1


2018-11-11 19:50:02

by Paul E. McKenney

[permalink] [raw]
Subject: [PATCH tip/core/rcu 33/41] net/bridge: Replace call_rcu_bh() and rcu_barrier_bh()

Now that call_rcu()'s callback is not invoked until after all bh-disable
regions of code have completed (in addition to explicitly marked
RCU read-side critical sections), call_rcu() can be used in place
of call_rcu_bh(). Similarly, rcu_barrier() can be used in place of
rcu_barrier_bh(). This commit therefore makes these changes.
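
The rcu_barrier() part is the usual teardown guard: it returns only
after every callback already queued by call_rcu() has been invoked.
A minimal sketch with placeholder helpers (retire_all_entries() and
free_infrastructure() are hypothetical, not bridge functions):

	#include <linux/rcupdate.h>

	static void retire_all_entries(void)
	{
		/* Placeholder: queues one call_rcu() callback per entry. */
	}

	static void free_infrastructure(void)
	{
		/* Placeholder: frees state those callbacks depend on. */
	}

	static void subsys_teardown(void)
	{
		retire_all_entries();
		rcu_barrier();		/* wait for every queued callback */
		free_infrastructure();	/* safe: no callback still runs */
	}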

Signed-off-by: Paul E. McKenney <[email protected]>
Cc: Roopa Prabhu <[email protected]>
Cc: Nikolay Aleksandrov <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: <[email protected]>
Cc: <[email protected]>
---
net/bridge/br_mdb.c | 2 +-
net/bridge/br_multicast.c | 14 +++++++-------
2 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/net/bridge/br_mdb.c b/net/bridge/br_mdb.c
index a7ea2d431714..596ec6e7df11 100644
--- a/net/bridge/br_mdb.c
+++ b/net/bridge/br_mdb.c
@@ -728,7 +728,7 @@ static int __br_mdb_del(struct net_bridge *br, struct br_mdb_entry *entry)
rcu_assign_pointer(*pp, p->next);
hlist_del_init(&p->mglist);
del_timer(&p->timer);
- call_rcu_bh(&p->rcu, br_multicast_free_pg);
+ call_rcu(&p->rcu, br_multicast_free_pg);
err = 0;

if (!mp->ports && !mp->host_joined &&
diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c
index 6bac0d6b7b94..0255223f2001 100644
--- a/net/bridge/br_multicast.c
+++ b/net/bridge/br_multicast.c
@@ -260,7 +260,7 @@ static void br_multicast_group_expired(struct timer_list *t)
hlist_del_rcu(&mp->hlist[mdb->ver]);
mdb->size--;

- call_rcu_bh(&mp->rcu, br_multicast_free_group);
+ call_rcu(&mp->rcu, br_multicast_free_group);

out:
spin_unlock(&br->multicast_lock);
@@ -291,7 +291,7 @@ static void br_multicast_del_pg(struct net_bridge *br,
del_timer(&p->timer);
br_mdb_notify(br->dev, p->port, &pg->addr, RTM_DELMDB,
p->flags);
- call_rcu_bh(&p->rcu, br_multicast_free_pg);
+ call_rcu(&p->rcu, br_multicast_free_pg);

if (!mp->ports && !mp->host_joined &&
netif_running(br->dev))
@@ -358,7 +358,7 @@ static int br_mdb_rehash(struct net_bridge_mdb_htable __rcu **mdbp, int max,
}

br_mdb_rehash_seq++;
- call_rcu_bh(&mdb->rcu, br_mdb_free);
+ call_rcu(&mdb->rcu, br_mdb_free);

out:
rcu_assign_pointer(*mdbp, mdb);
@@ -1629,7 +1629,7 @@ br_multicast_leave_group(struct net_bridge *br,
rcu_assign_pointer(*pp, p->next);
hlist_del_init(&p->mglist);
del_timer(&p->timer);
- call_rcu_bh(&p->rcu, br_multicast_free_pg);
+ call_rcu(&p->rcu, br_multicast_free_pg);
br_mdb_notify(br->dev, port, group, RTM_DELMDB,
p->flags);

@@ -2051,19 +2051,19 @@ void br_multicast_dev_del(struct net_bridge *br)
hlist_for_each_entry_safe(mp, n, &mdb->mhash[i],
hlist[ver]) {
del_timer(&mp->timer);
- call_rcu_bh(&mp->rcu, br_multicast_free_group);
+ call_rcu(&mp->rcu, br_multicast_free_group);
}
}

if (mdb->old) {
spin_unlock_bh(&br->multicast_lock);
- rcu_barrier_bh();
+ rcu_barrier();
spin_lock_bh(&br->multicast_lock);
WARN_ON(mdb->old);
}

mdb->old = mdb;
- call_rcu_bh(&mdb->rcu, br_mdb_free);
+ call_rcu(&mdb->rcu, br_mdb_free);

out:
spin_unlock_bh(&br->multicast_lock);
--
2.17.1


2018-11-11 19:50:07

by Paul E. McKenney

[permalink] [raw]
Subject: [PATCH tip/core/rcu 03/41] rcu: Eliminate synchronize_rcu_mult()

From: "Paul E. McKenney" <[email protected]>

Now that synchronize_rcu() waits for both RCU read-side critical
sections and preempt-disabled regions of code, the sole call to
synchronize_rcu_mult() can be replaced by synchronize_rcu().
This commit makes that change and removes synchronize_rcu_mult().
Note that _wait_rcu_gp() still supports synchronize_rcu_mult(),
and thus might be simplified in the future to take only
a single call_rcu() function rather than the current list of them.
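
For context, here is the sole remaining use and its replacement,
extracted from the kernel/sched/core.c hunk below with the diff
markers removed:

	/* Before: wait concurrently for RCU and RCU-sched grace periods. */
	synchronize_rcu_mult(call_rcu, call_rcu_sched);

	/* After: one flavor suffices, because synchronize_rcu() now also
	 * waits for preempt-disable regions of code. */
	synchronize_rcu();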

Signed-off-by: Paul E. McKenney <[email protected]>
---
include/linux/rcupdate_wait.h | 17 -----------------
kernel/rcu/update.c | 6 ++----
kernel/sched/core.c | 2 +-
3 files changed, 3 insertions(+), 22 deletions(-)

diff --git a/include/linux/rcupdate_wait.h b/include/linux/rcupdate_wait.h
index 8a16c3eb3dd0..c0578ba23c1a 100644
--- a/include/linux/rcupdate_wait.h
+++ b/include/linux/rcupdate_wait.h
@@ -31,21 +31,4 @@ do { \

#define wait_rcu_gp(...) _wait_rcu_gp(false, __VA_ARGS__)

-/**
- * synchronize_rcu_mult - Wait concurrently for multiple grace periods
- * @...: List of call_rcu() functions for different grace periods to wait on
- *
- * This macro waits concurrently for multiple types of RCU grace periods.
- * For example, synchronize_rcu_mult(call_rcu, call_rcu_tasks) would wait
- * on concurrent RCU and RCU-tasks grace periods. Waiting on a give SRCU
- * domain requires you to write a wrapper function for that SRCU domain's
- * call_srcu() function, supplying the corresponding srcu_struct.
- *
- * If Tiny RCU, tell _wait_rcu_gp() does not bother waiting for RCU,
- * given that anywhere synchronize_rcu_mult() can be called is automatically
- * a grace period.
- */
-#define synchronize_rcu_mult(...) \
- _wait_rcu_gp(IS_ENABLED(CONFIG_TINY_RCU), __VA_ARGS__)
-
#endif /* _LINUX_SCHED_RCUPDATE_WAIT_H */
diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c
index f203b94f6b5b..c729ca5e6ee2 100644
--- a/kernel/rcu/update.c
+++ b/kernel/rcu/update.c
@@ -335,8 +335,7 @@ void __wait_rcu_gp(bool checktiny, int n, call_rcu_func_t *crcu_array,
/* Initialize and register callbacks for each crcu_array element. */
for (i = 0; i < n; i++) {
if (checktiny &&
- (crcu_array[i] == call_rcu ||
- crcu_array[i] == call_rcu_bh)) {
+ (crcu_array[i] == call_rcu)) {
might_sleep();
continue;
}
@@ -352,8 +351,7 @@ void __wait_rcu_gp(bool checktiny, int n, call_rcu_func_t *crcu_array,
/* Wait for all callbacks to be invoked. */
for (i = 0; i < n; i++) {
if (checktiny &&
- (crcu_array[i] == call_rcu ||
- crcu_array[i] == call_rcu_bh))
+ (crcu_array[i] == call_rcu))
continue;
for (j = 0; j < i; j++)
if (crcu_array[j] == crcu_array[i])
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index f12225f26b70..ea12ebc57840 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5788,7 +5788,7 @@ int sched_cpu_deactivate(unsigned int cpu)
*
* Do sync before park smpboot threads to take care the rcu boost case.
*/
- synchronize_rcu_mult(call_rcu, call_rcu_sched);
+ synchronize_rcu();

if (!sched_smp_initialized)
return 0;
--
2.17.1


2018-11-11 19:50:12

by Paul E. McKenney

[permalink] [raw]
Subject: [PATCH tip/core/rcu 06/41] sparc/oprofile: Convert timer_stop() to use synchronize_rcu()

Now that synchronize_rcu() waits for preempt-disable regions of code
in addition to explicitly marked RCU read-side critical sections,
synchronize_rcu() can be used in place of synchronize_sched(). This
commit therefore makes that change.

Signed-off-by: Paul E. McKenney <[email protected]>
Cc: Robert Richter <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: <[email protected]>
Cc: <[email protected]>
---
arch/sparc/oprofile/init.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/sparc/oprofile/init.c b/arch/sparc/oprofile/init.c
index f9024bccff16..43730c9b1c86 100644
--- a/arch/sparc/oprofile/init.c
+++ b/arch/sparc/oprofile/init.c
@@ -53,7 +53,7 @@ static void timer_stop(void)
{
nmi_adjust_hz(1);
unregister_die_notifier(&profile_timer_exceptions_nb);
- synchronize_sched(); /* Allow already-started NMIs to complete. */
+ synchronize_rcu(); /* Allow already-started NMIs to complete. */
}

static int op_nmi_timer_init(struct oprofile_operations *ops)
--
2.17.1


2018-11-11 19:50:37

by Paul E. McKenney

[permalink] [raw]
Subject: [PATCH tip/core/rcu 28/41] slab: Replace synchronize_sched() with synchronize_rcu()

Now that synchronize_rcu() waits for preempt-disable regions of code
as well as RCU read-side critical sections, synchronize_sched() can be
replaced by synchronize_rcu(). This commit therefore makes this change.

Signed-off-by: Paul E. McKenney <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Pekka Enberg <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Joonsoo Kim <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: <[email protected]>
---
mm/slab.c | 4 ++--
mm/slab_common.c | 6 +++---
2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index 2a5654bb3b3f..3abb9feb3818 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -962,10 +962,10 @@ static int setup_kmem_cache_node(struct kmem_cache *cachep,
* To protect lockless access to n->shared during irq disabled context.
* If n->shared isn't NULL in irq disabled context, accessing to it is
* guaranteed to be valid until irq is re-enabled, because it will be
- * freed after synchronize_sched().
+ * freed after synchronize_rcu().
*/
if (old_shared && force_change)
- synchronize_sched();
+ synchronize_rcu();

fail:
kfree(old_shared);
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 7eb8dc136c1c..9c11e8a937d2 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -724,7 +724,7 @@ void slab_deactivate_memcg_cache_rcu_sched(struct kmem_cache *s,
css_get(&s->memcg_params.memcg->css);

s->memcg_params.deact_fn = deact_fn;
- call_rcu_sched(&s->memcg_params.deact_rcu_head, kmemcg_deactivate_rcufn);
+ call_rcu(&s->memcg_params.deact_rcu_head, kmemcg_deactivate_rcufn);
}

void memcg_deactivate_kmem_caches(struct mem_cgroup *memcg)
@@ -839,11 +839,11 @@ static void flush_memcg_workqueue(struct kmem_cache *s)
mutex_unlock(&slab_mutex);

/*
- * SLUB deactivates the kmem_caches through call_rcu_sched. Make
+ * SLUB deactivates the kmem_caches through call_rcu. Make
* sure all registered rcu callbacks have been invoked.
*/
if (IS_ENABLED(CONFIG_SLUB))
- rcu_barrier_sched();
+ rcu_barrier();

/*
* SLAB and SLUB create memcg kmem_caches through workqueue and SLUB
--
2.17.1


2018-11-11 19:50:48

by Paul E. McKenney

[permalink] [raw]
Subject: [PATCH tip/core/rcu 01/41] rcu: Remove unused rcu_state externs

From: "Joel Fernandes (Google)" <[email protected]>

The rcu_bh_state and rcu_sched_state variables were removed during the
RCU flavor consolidations, but external declarations remain in tree.h.
This commit therefore removes these obsolete declarations.

Signed-off-by: Joel Fernandes (Google) <[email protected]>
Cc: <[email protected]>
Signed-off-by: Paul E. McKenney <[email protected]>
---
kernel/rcu/tree.h | 11 -----------
1 file changed, 11 deletions(-)

diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h
index 703e19ff532d..57a937ac51c2 100644
--- a/kernel/rcu/tree.h
+++ b/kernel/rcu/tree.h
@@ -398,17 +398,6 @@ static const char *tp_rcu_varname __used __tracepoint_string = rcu_name;
#define RCU_NAME rcu_name
#endif /* #else #ifdef CONFIG_TRACING */

-/*
- * RCU implementation internal declarations:
- */
-extern struct rcu_state rcu_sched_state;
-
-extern struct rcu_state rcu_bh_state;
-
-#ifdef CONFIG_PREEMPT_RCU
-extern struct rcu_state rcu_preempt_state;
-#endif /* #ifdef CONFIG_PREEMPT_RCU */
-
int rcu_dynticks_snap(struct rcu_data *rdp);

#ifdef CONFIG_RCU_BOOST
--
2.17.1


2018-11-12 00:13:30

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [PATCH tip/core/rcu 23/41] sched: Replace synchronize_sched() with synchronize_rcu()

On Sun, Nov 11, 2018 at 11:43:52AM -0800, Paul E. McKenney wrote:
> Now that synchronize_rcu() waits for preempt-disable regions of code
> as well as RCU read-side critical sections, synchronize_sched() can be
> replaced by synchronize_rcu(). This commit therefore makes this change.

Yes, but it also waits for an actual RCU quiescent state, which makes
synchronize_rcu() potentially much more expensive than an actual
synchronize_sched().

So why are we doing this?

2018-11-12 00:46:14

by Paul E. McKenney

[permalink] [raw]
Subject: Re: [PATCH tip/core/rcu 23/41] sched: Replace synchronize_sched() with synchronize_rcu()

On Mon, Nov 12, 2018 at 01:12:33AM +0100, Peter Zijlstra wrote:
> On Sun, Nov 11, 2018 at 11:43:52AM -0800, Paul E. McKenney wrote:
> > Now that synchronize_rcu() waits for preempt-disable regions of code
> > as well as RCU read-side critical sections, synchronize_sched() can be
> > replaced by synchronize_rcu(). This commit therefore makes this change.
>
> Yes, but it also waits for an actual RCU quiescent state, which makes
> synchronize_rcu() potentially much more expensive than an actual
> synchronize_sched().

None of the readers have changed.

For the updaters, if CONFIG_PREEMPT=n, synchronize_rcu() and
synchronize_sched() always were one and the same. When CONFIG_PREEMPT=y,
synchronize_rcu() and synchronize_sched() are now one and the same.

> So why are we doing this?

Given that synchronize_rcu() and synchronize_sched() are now always one
and the same, this is a distinction without a difference. So we might
as well get rid of the _bh and _sched APIs. (See the tail end of current
mainline's include/linux/rcupdate.h.)
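
(Those tail-end definitions are simple pass-through wrappers, roughly
along these lines -- paraphrased here rather than quoted verbatim:)

	static inline void synchronize_rcu_bh(void)
	{
		synchronize_rcu();
	}

	static inline void call_rcu_bh(struct rcu_head *head, rcu_callback_t func)
	{
		call_rcu(head, func);
	}

	static inline void rcu_barrier_bh(void)
	{
		rcu_barrier();
	}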

If you are instead asking why the RCU flavors (RCU-bh, RCU-preempt,
and RCU-sched) got merged, it was due to a security incident stemming
from confusion between two of the flavors, with the resulting bug turning
out to be exploitable. Linus therefore requested that I do something
to make this not happen again, which I did.

Thanx, Paul


2018-11-12 00:54:32

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [PATCH tip/core/rcu 23/41] sched: Replace synchronize_sched() with synchronize_rcu()

On Sun, Nov 11, 2018 at 04:45:28PM -0800, Paul E. McKenney wrote:
> On Mon, Nov 12, 2018 at 01:12:33AM +0100, Peter Zijlstra wrote:
> > On Sun, Nov 11, 2018 at 11:43:52AM -0800, Paul E. McKenney wrote:
> > > Now that synchronize_rcu() waits for preempt-disable regions of code
> > > as well as RCU read-side critical sections, synchronize_sched() can be
> > > replaced by synchronize_rcu(). This commit therefore makes this change.
> >
> > Yes, but it also waits for an actual RCU quiescent state, which makes
> > synchronize_rcu() potentially much more expensive than an actual
> > synchronize_sched().
>
> None of the readers have changed.
>
> For the updaters, if CONFIG_PREEMPT=n, synchronize_rcu() and
> synchronize_sched() always were one and the same. When CONFIG_PREEMPT=y,
> synchronize_rcu() and synchronize_sched() are now one and the same.

The Changelog does not state this; and does the commit that makes that
happen state the regression potential?

> > So why are we doing this?
>
> Given that synchronize_rcu() and synchronize_sched() are now always one
> and the same, this is a distinction without a difference.

The Changelog did not state a reason for the patch. Therefore it is a
bad patch.

2018-11-12 01:48:20

by Paul E. McKenney

[permalink] [raw]
Subject: Re: [PATCH tip/core/rcu 23/41] sched: Replace synchronize_sched() with synchronize_rcu()

On Mon, Nov 12, 2018 at 01:53:29AM +0100, Peter Zijlstra wrote:
> On Sun, Nov 11, 2018 at 04:45:28PM -0800, Paul E. McKenney wrote:
> > On Mon, Nov 12, 2018 at 01:12:33AM +0100, Peter Zijlstra wrote:
> > > On Sun, Nov 11, 2018 at 11:43:52AM -0800, Paul E. McKenney wrote:
> > > > Now that synchronize_rcu() waits for preempt-disable regions of code
> > > > as well as RCU read-side critical sections, synchronize_sched() can be
> > > > replaced by synchronize_rcu(). This commit therefore makes this change.
> > >
> > > Yes, but it also waits for an actual RCU quiescent state, which makes
> > > synchronize_rcu() potentially much more expensive than an actual
> > > synchronize_sched().
> >
> > None of the readers have changed.
> >
> > For the updaters, if CONFIG_PREEMPT=n, synchronize_rcu() and
> > synchronize_sched() always were one and the same. When CONFIG_PREEMPT=y,
> > synchronize_rcu() and synchronize_sched() are now one and the same.
>
> The Changelog does not state this; and does the commit that makes that
> happen state the regression potential?

The Changelog says this:

Now that synchronize_rcu() waits for preempt-disable
regions of code as well as RCU read-side critical sections,
synchronize_sched() can be replaced by synchronize_rcu().
This commit therefore makes this change.

The "synchronize_rcu() waits for preempt-disable regions of code as
well as RCU read-side critical sections" seems pretty unambiguous to me.
Exactly what more are you wanting said there?

There were quite a few commits involved in making this happen. Perhaps
the most pertinent are these:

3e3100989869 ("rcu: Defer reporting RCU-preempt quiescent states when disabled")
45975c7d21a1 ("rcu: Define RCU-sched API in terms of RCU for Tree RCU PREEMPT builds")

Normal grace periods are almost always quite long compared to typical
read-side critical sections, preempt-disable regions of code, and so on.
So in the common case this should be OK. Or are you instead worried
about synchronize_sched_expedited()?

> > > So why are we doing this?
> >
> > Given that synchronize_rcu() and synchronize_sched() are now always one
> > and the same, this is a distinction without a difference.
>
> The Changelog did not state a reason for the patch. Therefore it is a
> bad patch.

??? Here is the current definition of synchronize_sched() in mainline:

static inline void synchronize_sched(void)
{
synchronize_rcu();
}

Thanx, Paul


2018-11-12 02:08:00

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [PATCH tip/core/rcu 23/41] sched: Replace synchronize_sched() with synchronize_rcu()

On Sun, Nov 11, 2018 at 05:47:36PM -0800, Paul E. McKenney wrote:
> On Mon, Nov 12, 2018 at 01:53:29AM +0100, Peter Zijlstra wrote:
> > On Sun, Nov 11, 2018 at 04:45:28PM -0800, Paul E. McKenney wrote:
> > > On Mon, Nov 12, 2018 at 01:12:33AM +0100, Peter Zijlstra wrote:
> > > > On Sun, Nov 11, 2018 at 11:43:52AM -0800, Paul E. McKenney wrote:
> > > > > Now that synchronize_rcu() waits for preempt-disable regions of code
> > > > > as well as RCU read-side critical sections, synchronize_sched() can be
> > > > > replaced by synchronize_rcu(). This commit therefore makes this change.
> > > >
> > > > Yes, but it also waits for an actual RCU quiescent state, which makes
> > > > synchronize_rcu() potentially much more expensive than an actual
> > > > synchronize_sched().
> > >
> > > None of the readers have changed.
> > >
> > > For the updaters, if CONFIG_PREEMPT=n, synchronize_rcu() and
> > > synchronize_sched() always were one and the same. When CONFIG_PREEMPT=y,
> > > synchronize_rcu() and synchronize_sched() are now one and the same.
> >
> > The Changelog does not state this; and does the commit that makes that
> > happen state the regression potential?
>
> The Changelog says this:
>
> Now that synchronize_rcu() waits for preempt-disable
> regions of code as well as RCU read-side critical sections,
> synchronize_sched() can be replaced by synchronize_rcu().
> This commit therefore makes this change.
>
> The "synchronize_rcu() waits for preempt-disable regions of code as
> well as RCU read-side critical sections" seems pretty unambiguous to me.
> Exactly what more are you wanting said there?

The quoted bit only states that synchronize_rcu() is sufficient; it does
not say it is equivalent and the patch is a nop. It also doesn't say
that the purpose is to get rid of the synchronize_sched() function.

> There were quite a few commits involved in making this happen. Perhaps
> the most pertinent are these:
>
> 3e3100989869 ("rcu: Defer reporting RCU-preempt quiescent states when disabled")
> 45975c7d21a1 ("rcu: Define RCU-sched API in terms of RCU for Tree RCU PREEMPT builds")

The latter; it does not mention that this will possibly make
synchronize_sched() quite a bit more expensive on PREEMPT=y builds :/

> Normal grace periods are almost always quite long compared to typical
> read-side critical sections, preempt-disable regions of code, and so on.
> So in the common case this should be OK. Or are you instead worried
> about synchronize_sched_expedited()?

No, I still feel expedited should not exist at all ;-)

But for PREEMPT=y synchronize_sched() can be quite a bit shorter than
synchronize_rcu(), since we don't have to wait for preempted read side
stuff.

> > > > So why are we doing this?
> > >
> > > Given that synchronize_rcu() and synchronize_sched() are now always one
> > > and the same, this is a distinction without a difference.
> >
> > The Changelog did not state a reason for the patch. Therefore it is a
> > bad patch.
>
> ??? Here is the current definition of synchronize_sched() in mainline:
>
> static inline void synchronize_sched(void)
> {
> synchronize_rcu();
> }

Again, the patch didn't say that.

If the Changelog would've read something like:

"Since synchronize_sched() is now equivalent to synchronize_rcu(),
replace the synchronize_sched() usage such that we can eventually remove
the interface."

It would've been clear that the patch is a nop and what the purpose
was.

2018-11-12 02:10:42

by Steven Rostedt

[permalink] [raw]
Subject: Re: [PATCH tip/core/rcu 18/41] tracing: Replace synchronize_sched() and call_rcu_sched()

On Sun, 11 Nov 2018 11:43:47 -0800
"Paul E. McKenney" <[email protected]> wrote:

> Now that synchronize_rcu() waits for preempt-disable regions of code
> as well as RCU read-side critical sections, synchronize_sched() can
> be replaced by synchronize_rcu(). Similarly, call_rcu_sched() can be
> replaced by call_rcu(). This commit therefore makes these changes.
>
> Signed-off-by: Paul E. McKenney <[email protected]>
> Cc: Steven Rostedt <[email protected]>
> Cc: Ingo Molnar <[email protected]>
> Cc: <[email protected]>


Acked-by: Steven Rostedt (VMware) <[email protected]>

-- Steve

2018-11-12 02:25:54

by Paul E. McKenney

[permalink] [raw]
Subject: Re: [PATCH tip/core/rcu 23/41] sched: Replace synchronize_sched() with synchronize_rcu()

On Mon, Nov 12, 2018 at 03:07:10AM +0100, Peter Zijlstra wrote:
> On Sun, Nov 11, 2018 at 05:47:36PM -0800, Paul E. McKenney wrote:
> > On Mon, Nov 12, 2018 at 01:53:29AM +0100, Peter Zijlstra wrote:
> > > On Sun, Nov 11, 2018 at 04:45:28PM -0800, Paul E. McKenney wrote:
> > > > On Mon, Nov 12, 2018 at 01:12:33AM +0100, Peter Zijlstra wrote:
> > > > > On Sun, Nov 11, 2018 at 11:43:52AM -0800, Paul E. McKenney wrote:
> > > > > > Now that synchronize_rcu() waits for preempt-disable regions of code
> > > > > > as well as RCU read-side critical sections, synchronize_sched() can be
> > > > > > replaced by synchronize_rcu(). This commit therefore makes this change.
> > > > >
> > > > > Yes, but it also waits for an actual RCU quiescent state, which makes
> > > > > synchronize_rcu() potentially much more expensive than an actual
> > > > > synchronize_sched().
> > > >
> > > > None of the readers have changed.
> > > >
> > > > For the updaters, if CONFIG_PREEMPT=n, synchronize_rcu() and
> > > > synchronize_sched() always were one and the same. When CONFIG_PREEMPT=y,
> > > > synchronize_rcu() and synchronize_sched() are now one and the same.
> > >
> > > The Changelog does not state this; and does the commit that makes that
> > > happen state the regression potential?
> >
> > The Changelog says this:
> >
> > Now that synchronize_rcu() waits for preempt-disable
> > regions of code as well as RCU read-side critical sections,
> > synchronize_sched() can be replaced by synchronize_rcu().
> > This commit therefore makes this change.
> >
> > The "synchronize_rcu() waits for preempt-disable regions of code as
> > well as RCU read-side critical sections" seems pretty unambiguous to me.
> > Exactly what more are you wanting said there?
>
> The quoted bit only states that synchronize_rcu() is sufficient; it does
> not say it is equivalent and the patch is a nop. It also doesn't say
> that the purpose is to get rid of the synchronize_sched() function.
>
> > There were quite a few commits involved in making this happen. Perhaps
> > the most pertinent are these:
> >
> > 3e3100989869 ("rcu: Defer reporting RCU-preempt quiescent states when disabled")
> > 45975c7d21a1 ("rcu: Define RCU-sched API in terms of RCU for Tree RCU PREEMPT builds")
>
> The latter; it does not mention that this will possibly make
> synchronize_sched() quite a bit more expensive on PREEMPT=y builds :/

In theory, sure. In practice, people have switched any number of
things from RCU-sched to RCU and back without problems.

> > Normal grace periods are almost always quite long compared to typical
> > read-side critical sections, preempt-disable regions of code, and so on.
> > So in the common case this should be OK. Or are you instead worried
> > about synchronize_sched_expedited()?
>
> No, I still feel expedited should not exist at all ;-)

I figured as much. ;-)

> But for PREEMPT=y synchronize_sched() can be quite a bit shorter than
> synchronize_rcu(), since we don't have to wait for preempted read side
> stuff.

Again, there are quite a few places that have managed that transition
without issue. Why do you expect this change to have problems that have
not been seen elsewhere?

> > > > > So why are we doing this?
> > > >
> > > > Given that synchronize_rcu() and synchronize_sched() are now always one
> > > > and the same, this is a distinction without a difference.
> > >
> > > The Changelog did not state a reason for the patch. Therefore it is a
> > > bad patch.
> >
> > ??? Here is the current definition of synchronize_sched() in mainline:
> >
> > static inline void synchronize_sched(void)
> > {
> > synchronize_rcu();
> > }
>
> Again, the patch didn't say that.
>
> If the Changelog would've read something like:
>
> "Since synchronize_sched() is now equivalent to synchronize_rcu(),
> replace the synchronize_sched() usage such that we can eventually remove
> the interface."
>
> It would've been clear that the patch is a nop and what the purpose
> was.

I can easily make that change.

Thanx, Paul


2018-11-12 03:01:29

by Masami Hiramatsu

[permalink] [raw]
Subject: Re: [PATCH tip/core/rcu 20/41] kprobes: Replace synchronize_sched() with synchronize_rcu()

On Sun, 11 Nov 2018 11:43:49 -0800
"Paul E. McKenney" <[email protected]> wrote:

> Now that synchronize_rcu() waits for preempt-disable regions of code
> as well as RCU read-side critical sections, synchronize_sched() can be
> replaced by synchronize_rcu(). This commit therefore makes this change.

Do you mean that synchronize_rcu() can ensure that any interrupt handler
(which runs in preempt-disabled state) has completed (even on a non-preemptive
kernel)?
If so, I agree with these changes.

Thank you,

>
> Signed-off-by: Paul E. McKenney <[email protected]>
> Cc: "Naveen N. Rao" <[email protected]>
> Cc: Anil S Keshavamurthy <[email protected]>
> Cc: "David S. Miller" <[email protected]>
> Cc: Masami Hiramatsu <[email protected]>
> ---
> kernel/kprobes.c | 10 +++++-----
> 1 file changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/kernel/kprobes.c b/kernel/kprobes.c
> index 90e98e233647..08e31d863191 100644
> --- a/kernel/kprobes.c
> +++ b/kernel/kprobes.c
> @@ -229,7 +229,7 @@ static int collect_garbage_slots(struct kprobe_insn_cache *c)
> struct kprobe_insn_page *kip, *next;
>
> /* Ensure no-one is interrupted on the garbages */
> - synchronize_sched();
> + synchronize_rcu();
>
> list_for_each_entry_safe(kip, next, &c->pages, list) {
> int i;
> @@ -1382,7 +1382,7 @@ static int register_aggr_kprobe(struct kprobe *orig_p, struct kprobe *p)
> if (ret) {
> ap->flags |= KPROBE_FLAG_DISABLED;
> list_del_rcu(&p->list);
> - synchronize_sched();
> + synchronize_rcu();
> }
> }
> }
> @@ -1597,7 +1597,7 @@ int register_kprobe(struct kprobe *p)
> ret = arm_kprobe(p);
> if (ret) {
> hlist_del_rcu(&p->hlist);
> - synchronize_sched();
> + synchronize_rcu();
> goto out;
> }
> }
> @@ -1776,7 +1776,7 @@ void unregister_kprobes(struct kprobe **kps, int num)
> kps[i]->addr = NULL;
> mutex_unlock(&kprobe_mutex);
>
> - synchronize_sched();
> + synchronize_rcu();
> for (i = 0; i < num; i++)
> if (kps[i]->addr)
> __unregister_kprobe_bottom(kps[i]);
> @@ -1966,7 +1966,7 @@ void unregister_kretprobes(struct kretprobe **rps, int num)
> rps[i]->kp.addr = NULL;
> mutex_unlock(&kprobe_mutex);
>
> - synchronize_sched();
> + synchronize_rcu();
> for (i = 0; i < num; i++) {
> if (rps[i]->kp.addr) {
> __unregister_kprobe_bottom(&rps[i]->kp);
> --
> 2.17.1
>


--
Masami Hiramatsu <[email protected]>

2018-11-12 03:25:19

by Paul E. McKenney

[permalink] [raw]
Subject: Re: [PATCH tip/core/rcu 20/41] kprobes: Replace synchronize_sched() with synchronize_rcu()

On Mon, Nov 12, 2018 at 12:00:48PM +0900, Masami Hiramatsu wrote:
> On Sun, 11 Nov 2018 11:43:49 -0800
> "Paul E. McKenney" <[email protected]> wrote:
>
> > Now that synchronize_rcu() waits for preempt-disable regions of code
> > as well as RCU read-side critical sections, synchronize_sched() can be
> > replaced by synchronize_rcu(). This commit therefore makes this change.
>
> Do you mean that synchronize_rcu() can ensure that any interrupt handler
> (which runs in preempt-disabled state) has completed (even on a non-preemptive
> kernel)?

Yes, but only as of this merge window. See this commit:

3e3100989869 ("rcu: Defer reporting RCU-preempt quiescent states when disabled")

Don't try this in v4.19 or earlier, but v4.20 and later is OK. ;-)
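
In other words, on v4.20 and later a pairing like the following is
guaranteed to work. This is only a sketch with made-up names, not
code from any tree:

	#include <linux/interrupt.h>
	#include <linux/rcupdate.h>
	#include <linux/slab.h>

	struct state { int v; };
	static struct state __rcu *state_ptr;

	static irqreturn_t my_irq(int irq, void *dev)
	{
		struct state *s;

		/* Hard-irq context runs with preemption disabled, so on
		 * v4.20+ it is an implicit read-side critical section. */
		s = rcu_dereference_sched(state_ptr);
		if (s)
			(void)READ_ONCE(s->v);
		return IRQ_HANDLED;
	}

	static void state_clear(void)
	{
		struct state *old = rcu_dereference_protected(state_ptr, 1);

		RCU_INIT_POINTER(state_ptr, NULL);
		synchronize_rcu();	/* waits out any running my_irq() */
		kfree(old);
	}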

Thanx, Paul

> If so, I agree with these changes.
>
> Thank you,
>
> >
> > Signed-off-by: Paul E. McKenney <[email protected]>
> > Cc: "Naveen N. Rao" <[email protected]>
> > Cc: Anil S Keshavamurthy <[email protected]>
> > Cc: "David S. Miller" <[email protected]>
> > Cc: Masami Hiramatsu <[email protected]>
> > ---
> > kernel/kprobes.c | 10 +++++-----
> > 1 file changed, 5 insertions(+), 5 deletions(-)
> >
> > diff --git a/kernel/kprobes.c b/kernel/kprobes.c
> > index 90e98e233647..08e31d863191 100644
> > --- a/kernel/kprobes.c
> > +++ b/kernel/kprobes.c
> > @@ -229,7 +229,7 @@ static int collect_garbage_slots(struct kprobe_insn_cache *c)
> > struct kprobe_insn_page *kip, *next;
> >
> > /* Ensure no-one is interrupted on the garbages */
> > - synchronize_sched();
> > + synchronize_rcu();
> >
> > list_for_each_entry_safe(kip, next, &c->pages, list) {
> > int i;
> > @@ -1382,7 +1382,7 @@ static int register_aggr_kprobe(struct kprobe *orig_p, struct kprobe *p)
> > if (ret) {
> > ap->flags |= KPROBE_FLAG_DISABLED;
> > list_del_rcu(&p->list);
> > - synchronize_sched();
> > + synchronize_rcu();
> > }
> > }
> > }
> > @@ -1597,7 +1597,7 @@ int register_kprobe(struct kprobe *p)
> > ret = arm_kprobe(p);
> > if (ret) {
> > hlist_del_rcu(&p->hlist);
> > - synchronize_sched();
> > + synchronize_rcu();
> > goto out;
> > }
> > }
> > @@ -1776,7 +1776,7 @@ void unregister_kprobes(struct kprobe **kps, int num)
> > kps[i]->addr = NULL;
> > mutex_unlock(&kprobe_mutex);
> >
> > - synchronize_sched();
> > + synchronize_rcu();
> > for (i = 0; i < num; i++)
> > if (kps[i]->addr)
> > __unregister_kprobe_bottom(kps[i]);
> > @@ -1966,7 +1966,7 @@ void unregister_kretprobes(struct kretprobe **rps, int num)
> > rps[i]->kp.addr = NULL;
> > mutex_unlock(&kprobe_mutex);
> >
> > - synchronize_sched();
> > + synchronize_rcu();
> > for (i = 0; i < num; i++) {
> > if (rps[i]->kp.addr) {
> > __unregister_kprobe_bottom(&rps[i]->kp);
> > --
> > 2.17.1
> >
>
>
> --
> Masami Hiramatsu <[email protected]>
>


2018-11-12 09:02:11

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [PATCH tip/core/rcu 23/41] sched: Replace synchronize_sched() with synchronize_rcu()

On Sun, Nov 11, 2018 at 06:24:55PM -0800, Paul E. McKenney wrote:

> > > There were quite a few commits involved in making this happen. Perhaps
> > > the most pertinent are these:
> > >
> > > 3e3100989869 ("rcu: Defer reporting RCU-preempt quiescent states when disabled")
> > > 45975c7d21a1 ("rcu: Define RCU-sched API in terms of RCU for Tree RCU PREEMPT builds")
> >
> > The latter; it does not mention that this will possibly make
> > synchronize_sched() quite a bit more expensive on PREEMPT=y builds :/
>
> In theory, sure. In practice, people have switched any number of
> things from RCU-sched to RCU and back without problems.

Still, better safe than sorry. It was a rather big change in behaviour,
so it wouldn't have been strange to call that out.

> > But for PREEMPT=y synchronize_sched() can be quite a bit shorter than
> > synchronize_rcu(), since we don't have to wait for preempted read side
> > stuff.
>
> Again, there are quite a few places that have managed that transition
> without issue. Why do you expect this change to have problems that have
> not been seen elsewhere?

I'm not, I'm just taking issue with the Changelog.

> > Again, the patch didn't say that.
> >
> > If the Changelog would've read something like:
> >
> > "Since synchronize_sched() is now equivalent to synchronize_rcu(),
> > replace the synchronize_sched() usage such that we can eventually remove
> > the interface."
> >
> > It would've been clear that the patch is a nop and what the purpose
> > was.
>
> I can easily make that change.

Please, sufficient doesn't imply necessary, etc. A changelog should
always clarify why we do the patch.

2018-11-12 12:49:36

by Jessica Yu

[permalink] [raw]
Subject: Re: [PATCH tip/core/rcu 24/41] modules: Replace synchronize_sched() and call_rcu_sched()

+++ Paul E. McKenney [11/11/18 11:43 -0800]:
>Now that synchronize_rcu() waits for preempt-disable regions of code
>as well as RCU read-side critical sections, synchronize_sched() can
>be replaced by synchronize_rcu(). Similarly, call_rcu_sched() can be
>replaced by call_rcu(). This commit therefore makes these changes.
>
>Signed-off-by: Paul E. McKenney <[email protected]>
>Cc: Jessica Yu <[email protected]>

Acked-by: Jessica Yu <[email protected]>

Thanks!

>---
> kernel/module.c | 14 +++++++-------
> 1 file changed, 7 insertions(+), 7 deletions(-)
>
>diff --git a/kernel/module.c b/kernel/module.c
>index 49a405891587..99b46c32d579 100644
>--- a/kernel/module.c
>+++ b/kernel/module.c
>@@ -2159,7 +2159,7 @@ static void free_module(struct module *mod)
> /* Remove this module from bug list, this uses list_del_rcu */
> module_bug_cleanup(mod);
> /* Wait for RCU-sched synchronizing before releasing mod->list and buglist. */
>- synchronize_sched();
>+ synchronize_rcu();
> mutex_unlock(&module_mutex);
>
> /* This may be empty, but that's OK */
>@@ -3507,15 +3507,15 @@ static noinline int do_init_module(struct module *mod)
> /*
> * We want to free module_init, but be aware that kallsyms may be
> * walking this with preempt disabled. In all the failure paths, we
>- * call synchronize_sched(), but we don't want to slow down the success
>+ * call synchronize_rcu(), but we don't want to slow down the success
> * path, so use actual RCU here.
> * Note that module_alloc() on most architectures creates W+X page
> * mappings which won't be cleaned up until do_free_init() runs. Any
> * code such as mark_rodata_ro() which depends on those mappings to
> * be cleaned up needs to sync with the queued work - ie
>- * rcu_barrier_sched()
>+ * rcu_barrier()
> */
>- call_rcu_sched(&freeinit->rcu, do_free_init);
>+ call_rcu(&freeinit->rcu, do_free_init);
> mutex_unlock(&module_mutex);
> wake_up_all(&module_wq);
>
>@@ -3526,7 +3526,7 @@ static noinline int do_init_module(struct module *mod)
> fail:
> /* Try to protect us from buggy refcounters. */
> mod->state = MODULE_STATE_GOING;
>- synchronize_sched();
>+ synchronize_rcu();
> module_put(mod);
> blocking_notifier_call_chain(&module_notify_list,
> MODULE_STATE_GOING, mod);
>@@ -3819,7 +3819,7 @@ static int load_module(struct load_info *info, const char __user *uargs,
> ddebug_cleanup:
> ftrace_release_mod(mod);
> dynamic_debug_remove(mod, info->debug);
>- synchronize_sched();
>+ synchronize_rcu();
> kfree(mod->args);
> free_arch_cleanup:
> module_arch_cleanup(mod);
>@@ -3834,7 +3834,7 @@ static int load_module(struct load_info *info, const char __user *uargs,
> mod_tree_remove(mod);
> wake_up_all(&module_wq);
> /* Wait for RCU-sched synchronizing before releasing mod->list. */
>- synchronize_sched();
>+ synchronize_rcu();
> mutex_unlock(&module_mutex);
> free_module:
> /* Free lock-classes; relies on the preceding sync_rcu() */
>--
>2.17.1
>

2018-11-12 14:09:14

by Mathieu Desnoyers

[permalink] [raw]
Subject: Re: [PATCH tip/core/rcu 0/41] More RCU flavor consolidation cleanup for v4.21/v5.0

----- On Nov 11, 2018, at 2:41 PM, paulmck [email protected] wrote:

> Hello!
>
> This series does additional cleanup for the RCU flavor consolidation,
> focusing primarily on uses of old API members, for example, so that
> call_rcu_bh() becomes call_rcu(). There are also a few straggling
> internal-to-RCU cleanups.
>
> 1. Remove unused rcu_state externs, courtesy of Joel Fernandes.
>
> 2. Fix rcu_{node,data} comments about gp_seq_needed, courtesy of
> Joel Fernandes.
>
> 3. Eliminate synchronize_rcu_mult() and its sole caller.
>
> 4. Consolidate the RCU update functions invoked by sync.c.
>
> 5-41. Replace old flavorful RCU API calls with the corresponding
> vanilla calls.

Hi Paul,

Just a heads up: we might want to spell out warnings in very big letters
for anyone trying to backport code using RCU from post-4.21 kernels
back to older kernels. I fear that newer code will build just fine
on older kernels, but will spectacularly fail in hard-to-debug ways at
runtime.

Renaming synchronize_rcu() and call_rcu() to something that did not
exist in prior kernels would prevent that. It may not be as pretty
though.
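
To make the hazard concrete, consider this hypothetical pattern,
written against v4.20+ semantics (illustrative names only):

	#include <linux/preempt.h>
	#include <linux/rcupdate.h>
	#include <linux/slab.h>

	struct obj { int x; };
	static struct obj __rcu *gp;

	static void reader(void)
	{
		struct obj *p;

		preempt_disable();	/* no rcu_read_lock() anywhere */
		p = rcu_dereference_sched(gp);
		if (p)
			(void)READ_ONCE(p->x);
		preempt_enable();
	}

	static void updater(void)
	{
		struct obj *old = rcu_dereference_protected(gp, 1);

		RCU_INIT_POINTER(gp, NULL);
		/* v4.20+: waits out the preempt-disabled reader above.
		 * v4.19 and earlier with PREEMPT=y: it does not, so the
		 * kfree() below can race with that reader. */
		synchronize_rcu();
		kfree(old);
	}

It builds unchanged on older kernels, which is exactly the problem.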

Thanks,

Mathieu


--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com

2018-11-12 14:18:45

by Paul E. McKenney

[permalink] [raw]
Subject: Re: [PATCH tip/core/rcu 23/41] sched: Replace synchronize_sched() with synchronize_rcu()

On Mon, Nov 12, 2018 at 10:00:47AM +0100, Peter Zijlstra wrote:
> On Sun, Nov 11, 2018 at 06:24:55PM -0800, Paul E. McKenney wrote:
>
> > > > There were quite a few commits involved in making this happen. Perhaps
> > > > the most pertinent are these:
> > > >
> > > > 3e3100989869 ("rcu: Defer reporting RCU-preempt quiescent states when disabled")
> > > > 45975c7d21a1 ("rcu: Define RCU-sched API in terms of RCU for Tree RCU PREEMPT builds")
> > >
> > > The latter; it does not mention that this will possibly make
> > > synchronize_sched() quite a bit more expensive on PREEMPT=y builds :/
> >
> > In theory, sure. In practice, people have switched any number of
> > things from RCU-sched to RCU and back without problems.
>
> Still, better safe than sorry. It was a rather big change in behaviour,
> so it wouldn't have been strange to call that out.

This guy:

45975c7d21a1 ("rcu: Define RCU-sched API in terms of RCU for Tree RCU PREEMPT builds")

Has a commit log that says:

Now that RCU-preempt knows about preemption disabling, its
implementation of synchronize_rcu() works for synchronize_sched(),
and likewise for the other RCU-sched update-side API members.
This commit therefore confines the RCU-sched update-side code
to CONFIG_PREEMPT=n builds, and defines RCU-sched's update-side
API members in terms of those of RCU-preempt.

That last phrase seems pretty explicit. What am I missing here?

Not that it matters, given that I know of no way to change a mainlined
commit log. I suppose I could ask Jon if he would be willing to take
a 2018 RCU API LWN article, if that would help.

> > > But for PREEMPT=y synchronize_sched() can be quite a bit shorter than
> > > synchronize_rcu(), since we don't have to wait for preempted read side
> > > stuff.
> >
> > Again, there are quite a few places that have managed that transition
> > without issue. Why do you expect this change to have problems that have
> > not been seen elsewhere?
>
> I'm not, I'm just taking issue with the Changelog.

OK, good.

> > > Again, the patch didn't say that.
> > >
> > > If the Changelog would've read something like:
> > >
> > > "Since synchronize_sched() is now equivalent to synchronize_rcu(),
> > > replace the synchronize_sched() usage such that we can eventually remove
> > > the interface."
> > >
> > > It would've been clear that the patch is a nop and what the purpose
> > > was.
> >
> > I can easily make that change.
>
> Please, sufficient doesn't imply necessary etc.. A changelog should
> always clarify why we do the patch.

??? Did you mean to say "necessary doesn't imply sufficient"? If so,
what else do you feel is missing?

If not, color me confused.

Thanx, Paul


2018-11-12 15:39:38

by Corey Minyard

[permalink] [raw]
Subject: Re: [PATCH tip/core/rcu 10/41] drivers/ipmi: Replace synchronize_sched() with synchronize_rcu()

On 11/11/18 1:43 PM, Paul E. McKenney wrote:
> Now that synchronize_rcu() waits for preempt-disable regions of code
> as well as RCU read-side critical sections, synchronize_sched() can be
> replaced by synchronize_rcu(). This commit therefore makes this change.


Assuming the above is true, this looks fine to me.

Acked-by: Corey MInyard <[email protected]>

Or I can take it in my tree, either way.

-corey


> Signed-off-by: Paul E. McKenney <[email protected]>
> Cc: Corey Minyard <[email protected]>
> Cc: Arnd Bergmann <[email protected]>
> Cc: Greg Kroah-Hartman <[email protected]>
> Cc: <[email protected]>
> ---
> drivers/char/ipmi/ipmi_si_intf.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/char/ipmi/ipmi_si_intf.c b/drivers/char/ipmi/ipmi_si_intf.c
> index 677618e6f1f7..dc8603d34320 100644
> --- a/drivers/char/ipmi/ipmi_si_intf.c
> +++ b/drivers/char/ipmi/ipmi_si_intf.c
> @@ -2187,7 +2187,7 @@ static void shutdown_smi(void *send_info)
> * handlers might have been running before we freed the
> * interrupt.
> */
> - synchronize_sched();
> + synchronize_rcu();
>
> /*
> * Timeouts are stopped, now make sure the interrupts are off



2018-11-12 16:02:55

by Paul E. McKenney

[permalink] [raw]
Subject: Re: [PATCH tip/core/rcu 0/41] More RCU flavor consolidation cleanup for v4.21/v5.0

On Mon, Nov 12, 2018 at 09:07:50AM -0500, Mathieu Desnoyers wrote:
> ----- On Nov 11, 2018, at 2:41 PM, paulmck [email protected] wrote:
>
> > Hello!
> >
> > This series does additional cleanup for the RCU flavor consolidation,
> > focusing primarily on uses of old API members, for example, so that
> > call_rcu_bh() becomes call_rcu(). There are also a few straggling
> > internal-to-RCU cleanups.
> >
> > 1. Remove unused rcu_state externs, courtesy of Joel Fernandes.
> >
> > 2. Fix rcu_{node,data} comments about gp_seq_needed, courtesy of
> > Joel Fernandes.
> >
> > 3. Eliminate synchronize_rcu_mult() and its sole caller.
> >
> > 4. Consolidate the RCU update functions invoked by sync.c.
> >
> > 5-41. Replace old flavorful RCU API calls with the corresponding
> > vanilla calls.
>
> Hi Paul,
>
> Just a heads up: we might want to spell out warnings in very big letters
> for anyone trying to backport code using RCU from post-4.21 kernels
> back to older kernels. I fear that newer code will build just fine
> on older kernels, but will spectacularly fail in hard-to-debug ways at
> runtime.
>
> Renaming synchronize_rcu() and call_rcu() to something that did not
> exist in prior kernels would prevent that. It may not be as pretty
> though.

From v4.20 rather than v4.21, but yes. Would it make sense to have Sasha
automatically flag -stable candidates going back past that boundary that
contain call_rcu(), synchronize_rcu(), etc.? Adding Sasha on CC, and
I might be able to touch base with him this week.

Thanx, Paul


2018-11-12 16:05:42

by Paul E. McKenney

[permalink] [raw]
Subject: Re: [PATCH tip/core/rcu 10/41] drivers/ipmi: Replace synchronize_sched() with synchronize_rcu()

On Mon, Nov 12, 2018 at 09:38:55AM -0600, Corey Minyard wrote:
> On 11/11/18 1:43 PM, Paul E. McKenney wrote:
> >Now that synchronize_rcu() waits for preempt-disable regions of code
> >as well as RCU read-side critical sections, synchronize_sched() can be
> >replaced by synchronize_rcu(). This commit therefore makes this change.
>
> Assuming the above is true, this looks fine to me.
>
> Acked-by: Corey MInyard <[email protected]>

Thank you!

> Or I can take it in my tree, either way.

I will carry it unless/until I send it upstream (this coming merge
window) or I see it already upstream.

So your choice. ;-)

Thanx, Paul

> -corey
>
>
> >Signed-off-by: Paul E. McKenney <[email protected]>
> >Cc: Corey Minyard <[email protected]>
> >Cc: Arnd Bergmann <[email protected]>
> >Cc: Greg Kroah-Hartman <[email protected]>
> >Cc: <[email protected]>
> >---
> > drivers/char/ipmi/ipmi_si_intf.c | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> >diff --git a/drivers/char/ipmi/ipmi_si_intf.c b/drivers/char/ipmi/ipmi_si_intf.c
> >index 677618e6f1f7..dc8603d34320 100644
> >--- a/drivers/char/ipmi/ipmi_si_intf.c
> >+++ b/drivers/char/ipmi/ipmi_si_intf.c
> >@@ -2187,7 +2187,7 @@ static void shutdown_smi(void *send_info)
> > * handlers might have been running before we freed the
> > * interrupt.
> > */
> >- synchronize_sched();
> >+ synchronize_rcu();
> > /*
> > * Timeouts are stopped, now make sure the interrupts are off
>
>


2018-11-12 16:18:38

by Corey Minyard

[permalink] [raw]
Subject: Re: [PATCH tip/core/rcu 10/41] drivers/ipmi: Replace synchronize_sched() with synchronize_rcu()

On 11/12/18 10:04 AM, Paul E. McKenney wrote:
> On Mon, Nov 12, 2018 at 09:38:55AM -0600, Corey Minyard wrote:
>> On 11/11/18 1:43 PM, Paul E. McKenney wrote:
>>> Now that synchronize_rcu() waits for preempt-disable regions of code
>>> as well as RCU read-side critical sections, synchronize_sched() can be
>>> replaced by synchronize_rcu(). This commit therefore makes this change.
>> Assuming the above is true, this looks fine to me.
>>
>> Acked-by: Corey MInyard <[email protected]>


Dang, that's

Acked-by: Corey Minyard <[email protected]>


> Thank you!
>
>> Or I can take it in my tree, either way.
> I will carry it unless/until I send it upstream (this coming merge
> window) or I see it already upstream.
>
> So your choice. ;-)


Ok, if you don't mind, you can carry it.

Thanks,

-corey


> Thanx, Paul
>
>> -corey
>>
>>
>>> Signed-off-by: Paul E. McKenney <[email protected]>
>>> Cc: Corey Minyard <[email protected]>
>>> Cc: Arnd Bergmann <[email protected]>
>>> Cc: Greg Kroah-Hartman <[email protected]>
>>> Cc: <[email protected]>
>>> ---
>>> drivers/char/ipmi/ipmi_si_intf.c | 2 +-
>>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/drivers/char/ipmi/ipmi_si_intf.c b/drivers/char/ipmi/ipmi_si_intf.c
>>> index 677618e6f1f7..dc8603d34320 100644
>>> --- a/drivers/char/ipmi/ipmi_si_intf.c
>>> +++ b/drivers/char/ipmi/ipmi_si_intf.c
>>> @@ -2187,7 +2187,7 @@ static void shutdown_smi(void *send_info)
>>> * handlers might have been running before we freed the
>>> * interrupt.
>>> */
>>> - synchronize_sched();
>>> + synchronize_rcu();
>>> /*
>>> * Timeouts are stopped, now make sure the interrupts are off
>>


2018-11-12 16:33:01

by Paul E. McKenney

[permalink] [raw]
Subject: Re: [PATCH tip/core/rcu 24/41] modules: Replace synchronize_sched() and call_rcu_sched()

On Mon, Nov 12, 2018 at 01:48:52PM +0100, Jessica Yu wrote:
> +++ Paul E. McKenney [11/11/18 11:43 -0800]:
> >Now that synchronize_rcu() waits for preempt-disable regions of code
> >as well as RCU read-side critical sections, synchronize_sched() can
> >be replaced by synchronize_rcu(). Similarly, call_rcu_sched() can be
> >replaced by call_rcu(). This commit therefore makes these changes.
> >
> >Signed-off-by: Paul E. McKenney <[email protected]>
> >Cc: Jessica Yu <[email protected]>
>
> Acked-by: Jessica Yu <[email protected]>

Applied, thank you!

Thanx, Paul

> Thanks!
>
> >---
> >kernel/module.c | 14 +++++++-------
> >1 file changed, 7 insertions(+), 7 deletions(-)
> >
> >diff --git a/kernel/module.c b/kernel/module.c
> >index 49a405891587..99b46c32d579 100644
> >--- a/kernel/module.c
> >+++ b/kernel/module.c
> >@@ -2159,7 +2159,7 @@ static void free_module(struct module *mod)
> > /* Remove this module from bug list, this uses list_del_rcu */
> > module_bug_cleanup(mod);
> > /* Wait for RCU-sched synchronizing before releasing mod->list and buglist. */
> >- synchronize_sched();
> >+ synchronize_rcu();
> > mutex_unlock(&module_mutex);
> >
> > /* This may be empty, but that's OK */
> >@@ -3507,15 +3507,15 @@ static noinline int do_init_module(struct module *mod)
> > /*
> > * We want to free module_init, but be aware that kallsyms may be
> > * walking this with preempt disabled. In all the failure paths, we
> >- * call synchronize_sched(), but we don't want to slow down the success
> >+ * call synchronize_rcu(), but we don't want to slow down the success
> > * path, so use actual RCU here.
> > * Note that module_alloc() on most architectures creates W+X page
> > * mappings which won't be cleaned up until do_free_init() runs. Any
> > * code such as mark_rodata_ro() which depends on those mappings to
> > * be cleaned up needs to sync with the queued work - ie
> >- * rcu_barrier_sched()
> >+ * rcu_barrier()
> > */
> >- call_rcu_sched(&freeinit->rcu, do_free_init);
> >+ call_rcu(&freeinit->rcu, do_free_init);
> > mutex_unlock(&module_mutex);
> > wake_up_all(&module_wq);
> >
> >@@ -3526,7 +3526,7 @@ static noinline int do_init_module(struct module *mod)
> >fail:
> > /* Try to protect us from buggy refcounters. */
> > mod->state = MODULE_STATE_GOING;
> >- synchronize_sched();
> >+ synchronize_rcu();
> > module_put(mod);
> > blocking_notifier_call_chain(&module_notify_list,
> > MODULE_STATE_GOING, mod);
> >@@ -3819,7 +3819,7 @@ static int load_module(struct load_info *info, const char __user *uargs,
> > ddebug_cleanup:
> > ftrace_release_mod(mod);
> > dynamic_debug_remove(mod, info->debug);
> >- synchronize_sched();
> >+ synchronize_rcu();
> > kfree(mod->args);
> > free_arch_cleanup:
> > module_arch_cleanup(mod);
> >@@ -3834,7 +3834,7 @@ static int load_module(struct load_info *info, const char __user *uargs,
> > mod_tree_remove(mod);
> > wake_up_all(&module_wq);
> > /* Wait for RCU-sched synchronizing before releasing mod->list. */
> >- synchronize_sched();
> >+ synchronize_rcu();
> > mutex_unlock(&module_mutex);
> > free_module:
> > /* Free lock-classes; relies on the preceding sync_rcu() */
> >--
> >2.17.1
> >
>
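
The call_rcu_sched()-to-call_rcu() conversions in this patch all follow
the same shape; a minimal sketch under the consolidated semantics
(struct blob and its helpers are hypothetical, not from kernel/module.c):

struct blob {
        struct rcu_head rcu;
        char data[64];
};

static void blob_free_cb(struct rcu_head *head)
{
        kfree(container_of(head, struct blob, rcu));
}

static void blob_retire(struct blob *b)
{
        call_rcu(&b->rcu, blob_free_cb);        /* Was call_rcu_sched(). */
}

static void blob_shutdown(void)
{
        /* Was rcu_barrier_sched(): wait for queued callbacks such as
         * blob_free_cb() to run before, e.g., module text goes away. */
        rcu_barrier();
}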


2018-11-12 16:39:46

by Paul E. McKenney

[permalink] [raw]
Subject: Re: [PATCH tip/core/rcu 10/41] drivers/ipmi: Replace synchronize_sched() with synchronize_rcu()

On Mon, Nov 12, 2018 at 10:15:41AM -0600, Corey Minyard wrote:
> On 11/12/18 10:04 AM, Paul E. McKenney wrote:
> >On Mon, Nov 12, 2018 at 09:38:55AM -0600, Corey Minyard wrote:
> >>On 11/11/18 1:43 PM, Paul E. McKenney wrote:
> >>>Now that synchronize_rcu() waits for preempt-disable regions of code
> >>>as well as RCU read-side critical sections, synchronize_sched() can be
> >>>replaced by synchronize_rcu(). This commit therefore makes this change.
> >>Assuming the above is true, this looks fine to me.
> >>
> >>Acked-by: Corey MInyard <[email protected]>
>
> Dang, that's
>
> Acked-by: Corey Minyard <[email protected]>

Just in time, applied, thank you! ;-)

> >Thank you!
> >
> >>Or I can take it in my tree, either way.
> >I will carry it unless/until I send it upstream (this coming merge
> >window) or I see it already upstream.
> >
> >So your choice. ;-)
>
> Ok, if you don't mind, you can carry it.

Sounds good!

Thanx, Paul

> Thanks,
>
> -corey
>
>
> > Thanx, Paul
> >
> >>-corey
> >>
> >>
> >>>Signed-off-by: Paul E. McKenney <[email protected]>
> >>>Cc: Corey Minyard <[email protected]>
> >>>Cc: Arnd Bergmann <[email protected]>
> >>>Cc: Greg Kroah-Hartman <[email protected]>
> >>>Cc: <[email protected]>
> >>>---
> >>> drivers/char/ipmi/ipmi_si_intf.c | 2 +-
> >>> 1 file changed, 1 insertion(+), 1 deletion(-)
> >>>
> >>>diff --git a/drivers/char/ipmi/ipmi_si_intf.c b/drivers/char/ipmi/ipmi_si_intf.c
> >>>index 677618e6f1f7..dc8603d34320 100644
> >>>--- a/drivers/char/ipmi/ipmi_si_intf.c
> >>>+++ b/drivers/char/ipmi/ipmi_si_intf.c
> >>>@@ -2187,7 +2187,7 @@ static void shutdown_smi(void *send_info)
> >>> * handlers might have been running before we freed the
> >>> * interrupt.
> >>> */
> >>>- synchronize_sched();
> >>>+ synchronize_rcu();
> >>> /*
> >>> * Timeouts are stopped, now make sure the interrupts are off
> >>
>


2018-11-12 18:18:46

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [PATCH tip/core/rcu 23/41] sched: Replace synchronize_sched() with synchronize_rcu()

On Mon, Nov 12, 2018 at 05:28:52AM -0800, Paul E. McKenney wrote:
> On Mon, Nov 12, 2018 at 10:00:47AM +0100, Peter Zijlstra wrote:

> > Still, better safe than sorry. It was a rather big change in behaviour,
> > so it wouldn't have been strange to call that out.
>
> This guy:
>
> 45975c7d21a1 ("rcu: Define RCU-sched API in terms of RCU for Tree RCU PREEMPT builds")
>
> Has a commit log that says:
>
> Now that RCU-preempt knows about preemption disabling, its
> implementation of synchronize_rcu() works for synchronize_sched(),
> and likewise for the other RCU-sched update-side API members.
> This commit therefore confines the RCU-sched update-side code
> to CONFIG_PREEMPT=n builds, and defines RCU-sched's update-side
> API members in terms of those of RCU-preempt.
>
> That last phrase seems pretty explicit. What am I missing here?

That does not explicitly state that because RCU-preempt
synchronize_rcu() can take _much_ longer, the new synchronize_sched()
can now take _much_ longer too.

So when someone bisects a problem to this commit and reads the
Changelog, he might get the impression that this slowdown was unexpected.

> Not that it matters, given that I know of no way to change a mainlined
> commit log. I suppose I could ask Jon if he would be willing to take
> a 2018 RCU API LWN article, if that would help.

Yes, it is water under the bridge; but Changelogs should be explicit
about behavioural changes.

And while the merged RCU has the semantic behaviour required, the timing
behaviour did change significantly.

> > > > Again, the patch didn't say that.
> > > >
> > > > If the Changelog would've read something like:
> > > >
> > > > "Since synchronize_sched() is now equivalent to synchronize_rcu(),
> > > > replace the synchronize_sched() usage such that we can eventually remove
> > > > the interface."
> > > >
> > > > It would've been clear that the patch is a nop and what the purpose
> > > > was.
> > >
> > > I can easily make that change.
> >
> > Please, sufficient doesn't imply necessary etc. A changelog should
> > always clarify why we do the patch.
>
> ??? Did you mean to say "necessary doesn't imply sufficient"? If so,
> what else do you feel is missing?

No, I meant to say that your original Changelog only states that
sync_rcu now covers rcu-sched behaviour. Which means that the change is
sufficient.

It completely and utterly fails to explain _why_ you're doing the
change. I.e., you do not address why it is necessary.

A Changelog should always explain why the change is needed.

In this case because you want to get rid of the sync_sched() api.
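
To make the timing point concrete, a sketch of a reader under
CONFIG_PREEMPT=y (gp and use() are placeholders):

rcu_read_lock();
p = rcu_dereference(gp);
/* This region can be preempted, possibly for a long time, and the
 * consolidated synchronize_rcu() must wait for it. The old
 * synchronize_sched() waited only for preempt-disable regions, which
 * the kernel itself cannot preempt (hypervisor preemption aside, as
 * discussed later in the thread). */
use(p);
rcu_read_unlock();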

2018-11-12 20:21:20

by Steven Rostedt

[permalink] [raw]
Subject: Re: [PATCH tip/core/rcu 23/41] sched: Replace synchronize_sched() with synchronize_rcu()

On Mon, 12 Nov 2018 19:17:41 +0100
Peter Zijlstra <[email protected]> wrote:

> On Mon, Nov 12, 2018 at 05:28:52AM -0800, Paul E. McKenney wrote:
> > On Mon, Nov 12, 2018 at 10:00:47AM +0100, Peter Zijlstra wrote:
>
> > > Still, better safe than sorry. It was a rather big change in behaviour,
> > > so it wouldn't have been strange to call that out.
> >
> > This guy:
> >
> > 45975c7d21a1 ("rcu: Define RCU-sched API in terms of RCU for Tree RCU PREEMPT builds")
> >
> > Has a commit log that says:
> >
> > Now that RCU-preempt knows about preemption disabling, its
> > implementation of synchronize_rcu() works for synchronize_sched(),
> > and likewise for the other RCU-sched update-side API members.
> > This commit therefore confines the RCU-sched update-side code
> > to CONFIG_PREEMPT=n builds, and defines RCU-sched's update-side
> > API members in terms of those of RCU-preempt.
> >
> > That last phrase seems pretty explicit. What am I missing here?
>
> That does not explicitly state that because RCU-preempt
> synchronize_rcu() can take _much_ longer, the new synchronize_sched()
> can now take _much_ longer too.

I'm curious. Are there measurements to see how much longer they can
take? Saying "_much_ longer" would require that one has done the
timings to see what the actual impact is.

>
> So when someone bisects a problem to this commit and reads the
> Changelog, he might get the impression that this slowdown was unexpected.

It may well be unexpected. What are the timing differences between a
normal synchronize_rcu() and a synchronize_sched()? Of course, 99% of users
won't see any difference, as 99% don't run CONFIG_PREEMPT.

-- Steve

2018-11-12 21:41:56

by Sasha Levin

[permalink] [raw]
Subject: Re: [PATCH tip/core/rcu 0/41] More RCU flavor consolidation cleanup for v4.21/v5.0

On Mon, Nov 12, 2018 at 08:01:37AM -0800, Paul E. McKenney wrote:
>On Mon, Nov 12, 2018 at 09:07:50AM -0500, Mathieu Desnoyers wrote:
>> ----- On Nov 11, 2018, at 2:41 PM, paulmck [email protected] wrote:
>>
>> > Hello!
>> >
>> > This series does additional cleanup for the RCU flavor consolidation,
>> > focusing primarily on uses of old API members, for example, so that
>> > call_rcu_bh() becomes call_rcu(). There are also a few straggling
>> > internal-to-RCU cleanups.
>> >
>> > 1. Remove unused rcu_state externs, courtesy of Joel Fernandes.
>> >
>> > 2. Fix rcu_{node,data} comments about gp_seq_needed, courtesy of
>> > Joel Fernandes.
>> >
>> > 3. Eliminate synchronize_rcu_mult() and its sole caller.
>> >
>> > 4. Consolidate the RCU update functions invoked by sync.c.
>> >
>> > 5-41. Replace old flavorful RCU API calls with the corresponding
>> > vanilla calls.
>>
>> Hi Paul,
>>
>> Just a heads up: we might want to spell out warnings in very big letters
>> for anyone trying to backport code using RCU from post-4.21 kernels
>> back to older kernels. I fear that newer code will build just fine
>> on older kernels, but will spectacularly fail in hard-to-debug ways at
>> runtime.
>>
>> Renaming synchronize_rcu() and call_rcu() to something that did not
>> exist in prior kernels would prevent that. It may not be as pretty
>> though.
>
From v4.20 rather than v4.21, but yes. Would it make sense to have Sasha
>automatically flag -stable candidates going back past that boundary that
>contain call_rcu(), synchronize_rcu(), etc.? Adding Sasha on CC, and
>I might be able to touch base with him this week.

We had a similar issue recently with a vfs change
(https://patchwork.kernel.org/patch/10604339/) leading to potentially
the same results as described above; we took it as-is to avoid such
issues in the future, though that was a much smaller change than what's
proposed here.

We can look into a good way to solve this. While I can alert on
post-4.20 stable-tagged patches that touch RCU, do you really want to be
dealing with this for the next 10+ years? It'll also mean each of those
patches will need a manual backport.

Let's talk at Plumbers :)

--
Thanks,
Sasha
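
The hazard Mathieu is describing, as a sketch (gp, use(), oldp, and
free_cb are placeholders):

/* Reader in a bh-disable region: */
local_bh_disable();
p = rcu_dereference(gp);
use(p);
local_bh_enable();

/* Updater written against v4.20+ semantics: */
rcu_assign_pointer(gp, newp);
call_rcu(&oldp->rcu, free_cb);  /* On v4.19 and earlier (CONFIG_PREEMPT=y)
                                 * this does NOT wait for the bh-disable
                                 * region above; a backport would need
                                 * call_rcu_bh(). It builds either way,
                                 * so the failure shows up only as a
                                 * runtime use-after-free. */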

2018-11-12 22:17:57

by Paul E. McKenney

[permalink] [raw]
Subject: Re: [PATCH tip/core/rcu 0/41] More RCU flavor consolidation cleanup for v4.21/v5.0

On Mon, Nov 12, 2018 at 04:40:23PM -0500, Sasha Levin wrote:
> On Mon, Nov 12, 2018 at 08:01:37AM -0800, Paul E. McKenney wrote:
> >On Mon, Nov 12, 2018 at 09:07:50AM -0500, Mathieu Desnoyers wrote:
> >>----- On Nov 11, 2018, at 2:41 PM, paulmck [email protected] wrote:
> >>
> >>> Hello!
> >>>
> >>> This series does additional cleanup for the RCU flavor consolidation,
> >>> focusing primarily on uses of old API members, for example, so that
> >>> call_rcu_bh() becomes call_rcu(). There are also a few straggling
> >>> internal-to-RCU cleanups.
> >>>
> >>> 1. Remove unused rcu_state externs, courtesy of Joel Fernandes.
> >>>
> >>> 2. Fix rcu_{node,data} comments about gp_seq_needed, courtesy of
> >>> Joel Fernandes.
> >>>
> >>> 3. Eliminate synchronize_rcu_mult() and its sole caller.
> >>>
> >>> 4. Consolidate the RCU update functions invoked by sync.c.
> >>>
> >>> 5-41. Replace old flavorful RCU API calls with the corresponding
> >>> vanilla calls.
> >>
> >>Hi Paul,
> >>
> >>Just a heads up: we might want to spell out warnings in very big letters
> >>for anyone trying to backport code using RCU from post-4.21 kernels
> >>back to older kernels. I fear that newer code will build just fine
> >>on older kernels, but will spectacularly fail in hard-to-debug ways at
> >>runtime.
> >>
> >>Renaming synchronize_rcu() and call_rcu() to something that did not
> >>exist in prior kernels would prevent that. It may not be as pretty
> >>though.
> >
> From v4.20 rather than v4.21, but yes. Would it make sense to have Sasha
> >automatically flag -stable candidates going back past that boundary that
> >contain call_rcu(), synchronize_rcu(), etc.? Adding Sasha on CC, and
> >I might be able to touch base with him this week.
>
> We had a similar issue recently with a vfs change
> (https://patchwork.kernel.org/patch/10604339/) leading to potentially
> the same results as described above; we took it as-is to avoid such
> issues in the future, though that was a much smaller change than what's
> proposed here.
>
> We can look into a good way to solve this. While I can alert on
> post-4.20 stable-tagged patches that touch RCU, do you really want to be
> dealing with this for the next 10+ years? It'll also mean each of those
> patches will need a manual backport.
>
> Let's talk at Plumbers :)

Sounds like a plan! ;-)

Thanx, Paul


2018-11-12 22:23:22

by Paul E. McKenney

[permalink] [raw]
Subject: Re: [PATCH tip/core/rcu 23/41] sched: Replace synchronize_sched() with synchronize_rcu()

On Mon, Nov 12, 2018 at 07:17:41PM +0100, Peter Zijlstra wrote:
> On Mon, Nov 12, 2018 at 05:28:52AM -0800, Paul E. McKenney wrote:
> > On Mon, Nov 12, 2018 at 10:00:47AM +0100, Peter Zijlstra wrote:
>
> > > Still, better safe than sorry. It was a rather big change in behaviour,
> > > so it wouldn't have been strange to call that out.
> >
> > This guy:
> >
> > 45975c7d21a1 ("rcu: Define RCU-sched API in terms of RCU for Tree RCU PREEMPT builds")
> >
> > Has a commit log that says:
> >
> > Now that RCU-preempt knows about preemption disabling, its
> > implementation of synchronize_rcu() works for synchronize_sched(),
> > and likewise for the other RCU-sched update-side API members.
> > This commit therefore confines the RCU-sched update-side code
> > to CONFIG_PREEMPT=n builds, and defines RCU-sched's update-side
> > API members in terms of those of RCU-preempt.
> >
> > That last phrase seems pretty explicit. What am I missing here?
>
> That does not explicitly state that because RCU-preempt
> synchronize_rcu() can take _much_ longer, the new synchronize_sched()
> can now take _much_ longer too.
>
> So when someone bisects a problem to this commit and reads the
> Changelog, he might get the impression that this slowdown was unexpected.

Of course, a preempt_disable() section of code can still be preempted
by the underlying hypervisor, so in a surprisingly large fraction of
the installed base, there really isn't that much difference.

> > Not that it matters, given that I know of no way to change a mainlined
> > commit log. I suppose I could ask Jon if he would be willing to take
> > a 2018 RCU API LWN article, if that would help.
>
> Yes, it is water under the bridge; but Changelogs should be explicit
> about behavioural changes.
>
> And while the merged RCU has the semantic behaviour required, the timing
> behaviour did change significantly.

When running on bare metal, potentially. From what I see, preemption
of RCU read-side critical sections is the exception rather than the rule.
And again, when running on hypervisors, even irq-disable regions of code
can be preempted. (And yes, there is work in flight to allow RCU to deal
with this.)

> > > > > Again, the patch didn't say that.
> > > > >
> > > > > If the Changelog would've read something like:
> > > > >
> > > > > "Since synchronize_sched() is now equivalent to synchronize_rcu(),
> > > > > replace the synchronize_sched() usage such that we can eventually remove
> > > > > the interface."
> > > > >
> > > > > It would've been clear that the patch is a nop and what the purpose
> > > > > was.
> > > >
> > > > I can easily make that change.
> > >
> > > Please, sufficient doesn't imply necessary etc. A changelog should
> > > always clarify why we do the patch.
> >
> > ??? Did you mean to say "necessary doesn't imply sufficient"? If so,
> > what else do you feel is missing?
>
> No, I meant to say that your original Changelog only states that
> sync_rcu now covers rcu-sched behaviour. Which means that the change is
> sufficient.
>
> It completely and utterly fails to explain _why_ you're doing the
> change. I.e., you do not address why it is necessary.
>
> A Changelog should always explain why the change is needed.
>
> In this case because you want to get rid of the sync_sched() api.

Right, which is stated in your suggested wording above. So I am still
not seeing what you want added to this:

"Since synchronize_sched() is now equivalent to synchronize_rcu(),
replace the synchronize_sched() usage such that we can eventually
remove the interface."

Thanx, Paul


2018-11-13 00:31:20

by Nikolay Aleksandrov

[permalink] [raw]
Subject: Re: [PATCH tip/core/rcu 33/41] net/bridge: Replace call_rcu_bh() and rcu_barrier_bh()

On 11/11/18 9:44 PM, Paul E. McKenney wrote:
> Now that call_rcu()'s callback is not invoked until after all bh-disable
> regions of code have completed (in addition to explicitly marked
> RCU read-side critical sections), call_rcu() can be used in place
> of call_rcu_bh(). Similarly, rcu_barrier() can be used in place of
> rcu_barrier_bh(). This commit therefore makes these changes.
>
> Signed-off-by: Paul E. McKenney <[email protected]>
> Cc: Roopa Prabhu <[email protected]>
> Cc: Nikolay Aleksandrov <[email protected]>
> Cc: "David S. Miller" <[email protected]>
> Cc: <[email protected]>
> Cc: <[email protected]>
> ---
> net/bridge/br_mdb.c | 2 +-
> net/bridge/br_multicast.c | 14 +++++++-------
> 2 files changed, 8 insertions(+), 8 deletions(-)
>

Really like this change, makes life simpler.

Acked-by: Nikolay Aleksandrov <[email protected]>
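
For reference, the shape of the bridge-side conversion, sketched with
hypothetical names (mc_entry and friends are not the actual net/bridge
structures):

struct mc_entry {
        struct hlist_node node;
        struct rcu_head rcu;
};

static void mc_free_cb(struct rcu_head *head)
{
        kfree(container_of(head, struct mc_entry, rcu));
}

static void mc_del(struct mc_entry *e)
{
        hlist_del_rcu(&e->node);
        call_rcu(&e->rcu, mc_free_cb);  /* Was call_rcu_bh(): softirq-time
                                         * lookups are now covered by
                                         * plain call_rcu(). */
}

static void mc_teardown(void)
{
        rcu_barrier();                  /* Was rcu_barrier_bh(). */
}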


2018-11-13 15:49:12

by Tejun Heo

[permalink] [raw]
Subject: Re: [PATCH tip/core/rcu 25/41] workqueue: Replace call_rcu_sched() with call_rcu()

On Sun, Nov 11, 2018 at 11:43:54AM -0800, Paul E. McKenney wrote:
> Now that call_rcu()'s callback is not invoked until after all
> preempt-disable regions of code have completed (in addition to explicitly
> marked RCU read-side critical sections), call_rcu() can be used in place
> of call_rcu_sched(). This commit therefore makes that change.
>
> Signed-off-by: Paul E. McKenney <[email protected]>
> Cc: Tejun Heo <[email protected]>
> Cc: Lai Jiangshan <[email protected]>

Acked-by: Tejun Heo <[email protected]>

Thanks.

--
tejun

2018-11-13 15:51:20

by Tejun Heo

[permalink] [raw]
Subject: Re: [PATCH tip/core/rcu 34/41] percpu-rwsem: Replace synchronize_sched() with synchronize_rcu()

On Sun, Nov 11, 2018 at 11:44:03AM -0800, Paul E. McKenney wrote:
> Now that synchronize_rcu() waits for preempt-disable regions of code
> as well as RCU read-side critical sections, synchronize_sched() can be
> replaced by synchronize_rcu(). This commit therefore makes this change,
> even though it is but a comment.
>
> Signed-off-by: Paul E. McKenney <[email protected]>
> Cc: Dennis Zhou <[email protected]>
> Cc: Tejun Heo <[email protected]>
> Cc: Christoph Lameter <[email protected]>

Acked-by: Tejun Heo <[email protected]>

Thanks.

--
tejun

2018-11-13 15:51:40

by Tejun Heo

[permalink] [raw]
Subject: Re: [PATCH tip/core/rcu 27/41] percpu-refcount: Replace call_rcu_sched() with call_rcu()

On Sun, Nov 11, 2018 at 11:43:56AM -0800, Paul E. McKenney wrote:
> Now that call_rcu()'s callback is not invoked until after all
> preempt-disable regions of code have completed (in addition to explicitly
> marked RCU read-side critical sections), call_rcu() can be used in place
> of call_rcu_sched(). This commit therefore makes that change.
>
> Signed-off-by: Paul E. McKenney <[email protected]>
> Cc: Tejun Heo <[email protected]>
> Cc: Ming Lei <[email protected]>
> Cc: Bart Van Assche <[email protected]>
> Cc: Jens Axboe <[email protected]>

Acked-by: Tejun Heo <[email protected]>

Thanks.

--
tejun

2018-11-13 15:53:05

by Tejun Heo

[permalink] [raw]
Subject: Re: [PATCH tip/core/rcu 36/41] cgroups: Replace synchronize_sched() with synchronize_rcu()

On Sun, Nov 11, 2018 at 11:44:05AM -0800, Paul E. McKenney wrote:
> Now that synchronize_rcu() waits for preempt-disable regions of code
> as well as RCU read-side critical sections, synchronize_sched() can be
> replaced by synchronize_rcu(). This commit therefore makes this change,
> even though it is but a comment.
>
> Signed-off-by: Paul E. McKenney <[email protected]>
> Cc: Tejun Heo <[email protected]>
> Cc: Jens Axboe <[email protected]>
> Cc: Dennis Zhou <[email protected]>
> Cc: Johannes Weiner <[email protected]>
> Cc: "Dennis Zhou (Facebook)" <[email protected]>

Acked-by: Tejun Heo <[email protected]>

Thanks.

--
tejun

2018-11-13 16:55:27

by Bowers, AndrewX

[permalink] [raw]
Subject: RE: [Intel-wired-lan] [PATCH tip/core/rcu 13/41] ethernet/intel/ixgbe: Replace synchronize_sched() with synchronize_rcu()

> -----Original Message-----
> From: Intel-wired-lan [mailto:[email protected]] On
> Behalf Of Paul E. McKenney
> Sent: Sunday, November 11, 2018 11:44 AM
> To: [email protected]
> Cc: [email protected]; David S. Miller <[email protected]>;
> [email protected]; [email protected]; [email protected];
> [email protected]; [email protected]; [email protected];
> [email protected]; [email protected];
> [email protected]; [email protected];
> [email protected]; [email protected]; akpm@linux-
> foundation.org; Paul E. McKenney <[email protected]>;
> [email protected]
> Subject: [Intel-wired-lan] [PATCH tip/core/rcu 13/41] ethernet/intel/ixgbe:
> Replace synchronize_sched() with synchronize_rcu()
>
> Now that synchronize_rcu() waits for preempt-disable regions of code as
> well as RCU read-side critical sections, synchronize_sched() can be replaced
> by synchronize_rcu(). This commit therefore makes this change.
>
> Signed-off-by: Paul E. McKenney <[email protected]>
> Cc: Jeff Kirsher <[email protected]>
> Cc: "David S. Miller" <[email protected]>
> Cc: <[email protected]>
> ---
> drivers/net/ethernet/intel/ixgbe/ixgbe_main.c | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)

Tested-by: Andrew Bowers <[email protected]>



2018-11-13 18:09:44

by Masami Hiramatsu

[permalink] [raw]
Subject: Re: [PATCH tip/core/rcu 20/41] kprobes: Replace synchronize_sched() with synchronize_rcu()

On Sun, 11 Nov 2018 19:19:16 -0800
"Paul E. McKenney" <[email protected]> wrote:

> On Mon, Nov 12, 2018 at 12:00:48PM +0900, Masami Hiramatsu wrote:
> > On Sun, 11 Nov 2018 11:43:49 -0800
> > "Paul E. McKenney" <[email protected]> wrote:
> >
> > > Now that synchronize_rcu() waits for preempt-disable regions of code
> > > as well as RCU read-side critical sections, synchronize_sched() can be
> > > replaced by synchronize_rcu(). This commit therefore makes this change.
> >
> > Do you mean synchronize_rcu() can ensure that any interrupt handler
> > (which should run in a preempt-disabled state) has finished (even on a
> > non-preemptive kernel)?
>
> Yes, but only as of this merge window. See this commit:
>
> 3e3100989869 ("rcu: Defer reporting RCU-preempt quiescent states when disabled")

OK, I also found that those are now the same.

45975c7d21a1 ("rcu: Define RCU-sched API in terms of RCU for Tree RCU PREEMPT builds")

Acked-by: Masami Hiramatsu <[email protected]>

Thank you!

>
> Don't try this in v4.19 or earlier, but v4.20 and later is OK. ;-)
>
> Thanx, Paul
>
> > If so, I agree with these changes.
> >
> > Thank you,
> >
> > >
> > > Signed-off-by: Paul E. McKenney <[email protected]>
> > > Cc: "Naveen N. Rao" <[email protected]>
> > > Cc: Anil S Keshavamurthy <[email protected]>
> > > Cc: "David S. Miller" <[email protected]>
> > > Cc: Masami Hiramatsu <[email protected]>
> > > ---
> > > kernel/kprobes.c | 10 +++++-----
> > > 1 file changed, 5 insertions(+), 5 deletions(-)
> > >
> > > diff --git a/kernel/kprobes.c b/kernel/kprobes.c
> > > index 90e98e233647..08e31d863191 100644
> > > --- a/kernel/kprobes.c
> > > +++ b/kernel/kprobes.c
> > > @@ -229,7 +229,7 @@ static int collect_garbage_slots(struct kprobe_insn_cache *c)
> > > struct kprobe_insn_page *kip, *next;
> > >
> > > /* Ensure no-one is interrupted on the garbages */
> > > - synchronize_sched();
> > > + synchronize_rcu();
> > >
> > > list_for_each_entry_safe(kip, next, &c->pages, list) {
> > > int i;
> > > @@ -1382,7 +1382,7 @@ static int register_aggr_kprobe(struct kprobe *orig_p, struct kprobe *p)
> > > if (ret) {
> > > ap->flags |= KPROBE_FLAG_DISABLED;
> > > list_del_rcu(&p->list);
> > > - synchronize_sched();
> > > + synchronize_rcu();
> > > }
> > > }
> > > }
> > > @@ -1597,7 +1597,7 @@ int register_kprobe(struct kprobe *p)
> > > ret = arm_kprobe(p);
> > > if (ret) {
> > > hlist_del_rcu(&p->hlist);
> > > - synchronize_sched();
> > > + synchronize_rcu();
> > > goto out;
> > > }
> > > }
> > > @@ -1776,7 +1776,7 @@ void unregister_kprobes(struct kprobe **kps, int num)
> > > kps[i]->addr = NULL;
> > > mutex_unlock(&kprobe_mutex);
> > >
> > > - synchronize_sched();
> > > + synchronize_rcu();
> > > for (i = 0; i < num; i++)
> > > if (kps[i]->addr)
> > > __unregister_kprobe_bottom(kps[i]);
> > > @@ -1966,7 +1966,7 @@ void unregister_kretprobes(struct kretprobe **rps, int num)
> > > rps[i]->kp.addr = NULL;
> > > mutex_unlock(&kprobe_mutex);
> > >
> > > - synchronize_sched();
> > > + synchronize_rcu();
> > > for (i = 0; i < num; i++) {
> > > if (rps[i]->kp.addr) {
> > > __unregister_kprobe_bottom(&rps[i]->kp);
> > > --
> > > 2.17.1
> > >
> >
> >
> > --
> > Masami Hiramatsu <[email protected]>
> >
>


--
Masami Hiramatsu <[email protected]>
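
The question and answer above boil down to the following pattern,
simplified from the unregister paths in the quoted diff (error handling
omitted):

mutex_lock(&kprobe_mutex);
hlist_del_rcu(&p->hlist);
mutex_unlock(&kprobe_mutex);

/* Kprobe handlers run with interrupts and preemption disabled, and
 * such regions are now read-side critical sections as far as
 * synchronize_rcu() is concerned, so once it returns no handler can
 * still be using *p. */
synchronize_rcu();      /* Was synchronize_sched(). */
__unregister_kprobe_bottom(p);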

2018-11-13 19:22:52

by Paul E. McKenney

[permalink] [raw]
Subject: Re: [PATCH tip/core/rcu 25/41] workqueue: Replace call_rcu_sched() with call_rcu()

On Tue, Nov 13, 2018 at 07:48:03AM -0800, Tejun Heo wrote:
> On Sun, Nov 11, 2018 at 11:43:54AM -0800, Paul E. McKenney wrote:
> > Now that call_rcu()'s callback is not invoked until after all
> > preempt-disable regions of code have completed (in addition to explicitly
> > marked RCU read-side critical sections), call_rcu() can be used in place
> > of call_rcu_sched(). This commit therefore makes that change.
> >
> > Signed-off-by: Paul E. McKenney <[email protected]>
> > Cc: Tejun Heo <[email protected]>
> > Cc: Lai Jiangshan <[email protected]>
>
> Acked-by: Tejun Heo <[email protected]>

Applied all four, thank you!

Thanx, Paul


2018-11-13 19:24:18

by Paul E. McKenney

[permalink] [raw]
Subject: Re: [PATCH tip/core/rcu 20/41] kprobes: Replace synchronize_sched() with synchronize_rcu()

On Tue, Nov 13, 2018 at 10:08:36AM -0800, Masami Hiramatsu wrote:
> On Sun, 11 Nov 2018 19:19:16 -0800
> "Paul E. McKenney" <[email protected]> wrote:
>
> > On Mon, Nov 12, 2018 at 12:00:48PM +0900, Masami Hiramatsu wrote:
> > > On Sun, 11 Nov 2018 11:43:49 -0800
> > > "Paul E. McKenney" <[email protected]> wrote:
> > >
> > > > Now that synchronize_rcu() waits for preempt-disable regions of code
> > > > as well as RCU read-side critical sections, synchronize_sched() can be
> > > > replaced by synchronize_rcu(). This commit therefore makes this change.
> > >
> > > Do you mean synchronize_rcu() can ensure that any interrupt handler
> > > (which should run in a preempt-disabled state) has finished (even on a
> > > non-preemptive kernel)?
> >
> > Yes, but only as of this merge window. See this commit:
> >
> > 3e3100989869 ("rcu: Defer reporting RCU-preempt quiescent states when disabled")
>
> OK, I also found that those are now the same.
>
> 45975c7d21a1 ("rcu: Define RCU-sched API in terms of RCU for Tree RCU PREEMPT builds")
>
> Acked-by: Masami Hiramatsu <[email protected]>

Applied, thank you!

Thanx, Paul

> Thank you!
>
> >
> > Don't try this in v4.19 or earlier, but v4.20 and later is OK. ;-)
> >
> > Thanx, Paul
> >
> > > If so, I agree with these changes.
> > >
> > > Thank you,
> > >
> > > >
> > > > Signed-off-by: Paul E. McKenney <[email protected]>
> > > > Cc: "Naveen N. Rao" <[email protected]>
> > > > Cc: Anil S Keshavamurthy <[email protected]>
> > > > Cc: "David S. Miller" <[email protected]>
> > > > Cc: Masami Hiramatsu <[email protected]>
> > > > ---
> > > > kernel/kprobes.c | 10 +++++-----
> > > > 1 file changed, 5 insertions(+), 5 deletions(-)
> > > >
> > > > diff --git a/kernel/kprobes.c b/kernel/kprobes.c
> > > > index 90e98e233647..08e31d863191 100644
> > > > --- a/kernel/kprobes.c
> > > > +++ b/kernel/kprobes.c
> > > > @@ -229,7 +229,7 @@ static int collect_garbage_slots(struct kprobe_insn_cache *c)
> > > > struct kprobe_insn_page *kip, *next;
> > > >
> > > > /* Ensure no-one is interrupted on the garbages */
> > > > - synchronize_sched();
> > > > + synchronize_rcu();
> > > >
> > > > list_for_each_entry_safe(kip, next, &c->pages, list) {
> > > > int i;
> > > > @@ -1382,7 +1382,7 @@ static int register_aggr_kprobe(struct kprobe *orig_p, struct kprobe *p)
> > > > if (ret) {
> > > > ap->flags |= KPROBE_FLAG_DISABLED;
> > > > list_del_rcu(&p->list);
> > > > - synchronize_sched();
> > > > + synchronize_rcu();
> > > > }
> > > > }
> > > > }
> > > > @@ -1597,7 +1597,7 @@ int register_kprobe(struct kprobe *p)
> > > > ret = arm_kprobe(p);
> > > > if (ret) {
> > > > hlist_del_rcu(&p->hlist);
> > > > - synchronize_sched();
> > > > + synchronize_rcu();
> > > > goto out;
> > > > }
> > > > }
> > > > @@ -1776,7 +1776,7 @@ void unregister_kprobes(struct kprobe **kps, int num)
> > > > kps[i]->addr = NULL;
> > > > mutex_unlock(&kprobe_mutex);
> > > >
> > > > - synchronize_sched();
> > > > + synchronize_rcu();
> > > > for (i = 0; i < num; i++)
> > > > if (kps[i]->addr)
> > > > __unregister_kprobe_bottom(kps[i]);
> > > > @@ -1966,7 +1966,7 @@ void unregister_kretprobes(struct kretprobe **rps, int num)
> > > > rps[i]->kp.addr = NULL;
> > > > mutex_unlock(&kprobe_mutex);
> > > >
> > > > - synchronize_sched();
> > > > + synchronize_rcu();
> > > > for (i = 0; i < num; i++) {
> > > > if (rps[i]->kp.addr) {
> > > > __unregister_kprobe_bottom(&rps[i]->kp);
> > > > --
> > > > 2.17.1
> > > >
> > >
> > >
> > > --
> > > Masami Hiramatsu <[email protected]>
> > >
> >
>
>
> --
> Masami Hiramatsu <[email protected]>
>


2018-11-16 05:59:46

by Herbert Xu

[permalink] [raw]
Subject: Re: [PATCH tip/core/rcu 09/41] crypto/pcrypt: Replace synchronize_rcu_bh() with synchronize_rcu()

Paul E. McKenney <[email protected]> wrote:
> Now that synchronize_rcu() waits for bh-disable regions of code as
> well as RCU read-side critical sections, the synchronize_rcu_bh() in
> pcrypt_cpumask_change_notify() can be replaced by synchronize_rcu().
> This commit therefore makes this change.
>
> Signed-off-by: Paul E. McKenney <[email protected]>
> Cc: Steffen Klassert <[email protected]>
> Cc: <[email protected]>

Acked-by: Herbert Xu <[email protected]>
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
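
The pcrypt change follows the usual publish-then-synchronize shape; a
simplified sketch (field and variable names are approximate, not the
actual pcrypt structures):

new_mask = kzalloc(sizeof(*new_mask), GFP_KERNEL);
if (!new_mask)
        return -ENOMEM;
/* ... populate new_mask ... */

old_mask = rcu_dereference_protected(pcrypt->cb_cpumask, 1);
rcu_assign_pointer(pcrypt->cb_cpumask, new_mask);
synchronize_rcu();      /* Was synchronize_rcu_bh(): the readers run in
                         * bh context, which plain synchronize_rcu()
                         * now waits for. */
kfree(old_mask);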

2018-11-26 20:01:25

by Paul E. McKenney

[permalink] [raw]
Subject: Re: [PATCH tip/core/rcu 23/41] sched: Replace synchronize_sched() with synchronize_rcu()

On Mon, Nov 12, 2018 at 02:21:12PM -0800, Paul E. McKenney wrote:
> On Mon, Nov 12, 2018 at 07:17:41PM +0100, Peter Zijlstra wrote:
> > On Mon, Nov 12, 2018 at 05:28:52AM -0800, Paul E. McKenney wrote:
> > > On Mon, Nov 12, 2018 at 10:00:47AM +0100, Peter Zijlstra wrote:
> >
> > > > Still, better safe than sorry. It was a rather big change in behaviour,
> > > > so it wouldn't have been strange to call that out.
> > >
> > > This guy:
> > >
> > > 45975c7d21a1 ("rcu: Define RCU-sched API in terms of RCU for Tree RCU PREEMPT builds")
> > >
> > > Has a commit log that says:
> > >
> > > Now that RCU-preempt knows about preemption disabling, its
> > > implementation of synchronize_rcu() works for synchronize_sched(),
> > > and likewise for the other RCU-sched update-side API members.
> > > This commit therefore confines the RCU-sched update-side code
> > > to CONFIG_PREEMPT=n builds, and defines RCU-sched's update-side
> > > API members in terms of those of RCU-preempt.
> > >
> > > That last phrase seems pretty explicit. What am I missing here?
> >
> > That does not explicitly state that because RCU-preempt
> > synchronize_rcu() can take _much_ longer, the new synchronize_sched()
> > can now take _much_ longer too.
> >
> > So when someone bisects a problem to this commit and reads the
> > Changelog, he might get the impression that this slowdown was unexpected.
>
> Of course, a preempt_disable() section of code can still be preempted
> by the underlying hypervisor, so in a surprisingly large fraction of
> the installed base, there really isn't that much difference.
>
> > > Not that it matters, given that I know of no way to change a mainlined
> > > commit log. I suppose I could ask Jon if he would be willing to take
> > > a 2018 RCU API LWN article, if that would help.
> >
> > Yes, it is water under the bridge; but Changelogs should be explicit
> > about behavioural changes.
> >
> > And while the merged RCU has the semantic behaviour required, the timing
> > behaviour did change significantly.
>
> When running on bare metal, potentially. From what I see, preemption
> of RCU read-side critical sections is the exception rather than the rule.
> And again, when running on hypervisors, even irq-disable regions of code
> can be preempted. (And yes, there is work in flight to allow RCU to deal
> with this.)
>
> > > > > > Again, the patch didn't say that.
> > > > > >
> > > > > > If the Changelog would've read something like:
> > > > > >
> > > > > > "Since synchronize_sched() is now equivalent to synchronize_rcu(),
> > > > > > replace the synchronize_sched() usage such that we can eventually remove
> > > > > > the interface."
> > > > > >
> > > > > > It would've been clear that the patch is a nop and what the purpose
> > > > > > was.
> > > > >
> > > > > I can easily make that change.
> > > >
> > > > Please, sufficient doesn't imply necessary etc. A changelog should
> > > > always clarify why we do the patch.
> > >
> > > ??? Did you mean to say "necessary doesn't imply sufficient"? If so,
> > > what else do you feel is missing?
> >
> > No, I meant to say that your original Changelog only states that
> > sync_rcu now covers rcu-sched behaviour. Which means that the change is
> > sufficient.
> >
> > It completely and utterly fails to explain _why_ you're doing the
> > > change. I.e., you do not address why it is necessary.
> >
> > A Changelog should always explain why the change is needed.
> >
> > In this case because you want to get rid of the sync_sched() api.
>
> Right, which is stated in your suggested wording above. So I am still
> not seeing what you want added to this:
>
> "Since synchronize_sched() is now equivalent to synchronize_rcu(),
> replace the synchronize_sched() usage such that we can eventually
> remove the interface."

Finally getting back to this. I removed this commit from the group that
I intend to send in next week's -tip pull request, and updated its commit
log as shown below. Does this work for you?

Thanx, Paul

------------------------------------------------------------------------

commit 52ffe7fbe615e8989f054432c76a7e43b8c35607
Author: Paul E. McKenney <[email protected]>
Date: Tue Nov 6 19:13:54 2018 -0800

sched: Replace synchronize_sched() with synchronize_rcu()

Now that synchronize_rcu() waits for preempt-disable regions of
code as well as RCU read-side critical sections, synchronize_sched()
can be replaced by synchronize_rcu(); in fact, synchronize_sched()
is now completely equivalent to synchronize_rcu(). This commit
therefore replaces synchronize_sched() with synchronize_rcu() so that
synchronize_sched() can eventually be removed entirely.

Signed-off-by: Paul E. McKenney <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>

diff --git a/kernel/sched/cpufreq.c b/kernel/sched/cpufreq.c
index 5e54cbcae673..90fee8e01280 100644
--- a/kernel/sched/cpufreq.c
+++ b/kernel/sched/cpufreq.c
@@ -51,8 +51,8 @@ EXPORT_SYMBOL_GPL(cpufreq_add_update_util_hook);
*
* Clear the update_util_data pointer for the given CPU.
*
- * Callers must use RCU-sched callbacks to free any memory that might be
- * accessed via the old update_util_data pointer or invoke synchronize_sched()
+ * Callers must use RCU callbacks to free any memory that might be
+ * accessed via the old update_util_data pointer or invoke synchronize_rcu()
* right after this function to avoid use-after-free.
*/
void cpufreq_remove_update_util_hook(int cpu)
diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index 3fffad3bc8a8..6a1bb76afbd1 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -839,7 +839,7 @@ static void sugov_stop(struct cpufreq_policy *policy)
for_each_cpu(cpu, policy->cpus)
cpufreq_remove_update_util_hook(cpu);

- synchronize_sched();
+ synchronize_rcu();

if (!policy->fast_switch_enabled) {
irq_work_sync(&sg_policy->irq_work);


2018-11-27 17:22:59

by Paul E. McKenney

[permalink] [raw]
Subject: Re: [PATCH tip/core/rcu 09/41] crypto/pcrypt: Replace synchronize_rcu_bh() with synchronize_rcu()

On Fri, Nov 16, 2018 at 01:56:40PM +0800, Herbert Xu wrote:
> Paul E. McKenney <[email protected]> wrote:
> > Now that synchronize_rcu() waits for bh-disable regions of code as
> > well as RCU read-side critical sections, the synchronize_rcu_bh() in
> > pcrypt_cpumask_change_notify() can be replaced by synchronize_rcu().
> > This commit therefore makes this change.
> >
> > Signed-off-by: Paul E. McKenney <[email protected]>
> > Cc: Steffen Klassert <[email protected]>
> > Cc: <[email protected]>
>
> Acked-by: Herbert Xu <[email protected]>

Applied, thank you!

Thanx, Paul