Date: Mon, 31 Jul 2017 15:21:59 -0400 (EDT)
From: Paolo Bonzini <pbonzini@redhat.com>
To: Peter Zijlstra
Cc: torvalds@linux-foundation.org, linux-kernel@vger.kernel.org,
    tglx@linutronix.de, mingo@kernel.org, dvyukov@google.com,
    hpa@zytor.com, linux-tip-commits@vger.kernel.org,
    Steven Rostedt, Dima Zavin
Message-ID: <1660601690.358772.1501528919840.JavaMail.zimbra@redhat.com>
In-Reply-To: <20170731154556.w7c3vfdgottrcwi5@hirez.programming.kicks-ass.net>
References: <1466527937-69798-1-git-send-email-pbonzini@redhat.com>
 <20170731093612.7v23kxkk47i56io6@hirez.programming.kicks-ass.net>
 <671056c7-5fc7-60c0-4035-b11d43d95bf1@redhat.com>
 <20170731133349.gphnv5pqkn2xibwf@hirez.programming.kicks-ass.net>
 <645f28cb-56c8-7298-568f-357a1b8929b9@redhat.com>
 <20170731154556.w7c3vfdgottrcwi5@hirez.programming.kicks-ass.net>
Subject: Re: [tip:locking/urgent] locking/static_key: Fix concurrent static_key_slow_inc()

> > > Isn't that what we have static_key_enable() for? Which btw also uses
> > > static_key_count() outside of the mutex.
> >
> > Yes, they should be fixed and net/ can then use static_key_enable.
>
> Right, let me try and fix _enable().

Here is what I scribbled before leaving the office.  (Still missing:
documentation for how to use static_key_enabled/count, and testing.
A rough userspace sketch of the double-check pattern is appended after
the series.)

>From c0bdcc89e26fb16cdc564485232bebcd1e0cc102 Mon Sep 17 00:00:00 2001
From: Paolo Bonzini <pbonzini@redhat.com>
Date: Mon, 31 Jul 2017 16:46:04 +0200
Subject: [PATCH 1/3] jump_labels: fix concurrent static_key_enable/disable()

static_key_enable/disable try to cap the static key count to 0/1.
However, their reads of key->enabled happen outside jump_label_lock,
so they cannot really guarantee that.

Rewrite them to do a quick check for an already enabled (respectively,
already disabled) key, and then recheck under the jump label lock.
Unlike static_key_slow_inc/dec, a failed check under the jump label
lock does not modify key->enabled.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 include/linux/jump_label.h | 22 +++++++++--------
 kernel/jump_label.c        | 59 +++++++++++++++++++++++++++++-----------------
 2 files changed, 49 insertions(+), 32 deletions(-)

diff --git a/include/linux/jump_label.h b/include/linux/jump_label.h
index 2afd74b9d844..8fc5649476ca 100644
--- a/include/linux/jump_label.h
+++ b/include/linux/jump_label.h
@@ -234,22 +234,24 @@ static inline int jump_label_apply_nops(struct module *mod)
 
 static inline void static_key_enable(struct static_key *key)
 {
-	int count = static_key_count(key);
-
-	WARN_ON_ONCE(count < 0 || count > 1);
+	STATIC_KEY_CHECK_USE();
 
-	if (!count)
-		static_key_slow_inc(key);
+	if (atomic_read(&key->enabled) != 0) {
+		WARN_ON_ONCE(atomic_read(&key->enabled) != 1);
+		return;
+	}
+	atomic_set(&key->enabled, 1);
 }
 
 static inline void static_key_disable(struct static_key *key)
 {
-	int count = static_key_count(key);
-
-	WARN_ON_ONCE(count < 0 || count > 1);
+	STATIC_KEY_CHECK_USE();
 
-	if (count)
-		static_key_slow_dec(key);
+	if (atomic_read(&key->enabled) != 1) {
+		WARN_ON_ONCE(atomic_read(&key->enabled) != 0);
+		return;
+	}
+	atomic_set(&key->enabled, 0);
 }
 
 #define STATIC_KEY_INIT_TRUE	{ .enabled = ATOMIC_INIT(1) }
diff --git a/kernel/jump_label.c b/kernel/jump_label.c
index d11c506a6ac3..47a3e927b87e 100644
--- a/kernel/jump_label.c
+++ b/kernel/jump_label.c
@@ -79,28 +79,6 @@ int static_key_count(struct static_key *key)
 }
 EXPORT_SYMBOL_GPL(static_key_count);
 
-void static_key_enable(struct static_key *key)
-{
-	int count = static_key_count(key);
-
-	WARN_ON_ONCE(count < 0 || count > 1);
-
-	if (!count)
-		static_key_slow_inc(key);
-}
-EXPORT_SYMBOL_GPL(static_key_enable);
-
-void static_key_disable(struct static_key *key)
-{
-	int count = static_key_count(key);
-
-	WARN_ON_ONCE(count < 0 || count > 1);
-
-	if (count)
-		static_key_slow_dec(key);
-}
-EXPORT_SYMBOL_GPL(static_key_disable);
-
 void static_key_slow_inc(struct static_key *key)
 {
 	int v, v1;
@@ -139,6 +117,43 @@ void static_key_slow_inc(struct static_key *key)
 }
 EXPORT_SYMBOL_GPL(static_key_slow_inc);
 
+void static_key_enable(struct static_key *key)
+{
+	STATIC_KEY_CHECK_USE();
+	if (atomic_read(&key->enabled) != 0) {
+		WARN_ON_ONCE(atomic_read(&key->enabled) != 1);
+		return;
+	}
+
+	cpus_read_lock();
+	jump_label_lock();
+	if (atomic_read(&key->enabled) == 0) {
+		atomic_set(&key->enabled, -1);
+		jump_label_update(key);
+		atomic_set(&key->enabled, 1);
+	}
+	jump_label_unlock();
+	cpus_read_unlock();
+}
+EXPORT_SYMBOL_GPL(static_key_enable);
+
+void static_key_disable(struct static_key *key)
+{
+	STATIC_KEY_CHECK_USE();
+	if (atomic_read(&key->enabled) != 1) {
+		WARN_ON_ONCE(atomic_read(&key->enabled) != 0);
+		return;
+	}
+
+	cpus_read_lock();
+	jump_label_lock();
+	if (atomic_cmpxchg(&key->enabled, 1, 0))
+		jump_label_update(key);
+	jump_label_unlock();
+	cpus_read_unlock();
+}
+EXPORT_SYMBOL_GPL(static_key_disable);
+
 static void __static_key_slow_dec(struct static_key *key,
 		unsigned long rate_limit, struct delayed_work *work)
 {
-- 
2.13.3

>From 89d89520915bb12fd330069ca8aed32a0c216ab6 Mon Sep 17 00:00:00 2001
From: Paolo Bonzini <pbonzini@redhat.com>
Date: Mon, 31 Jul 2017 16:48:05 +0200
Subject: [PATCH 2/3] jump_labels: do not use unserialized static_key_enabled

Any use of key->enabled (that is static_key_enabled and static_key_count)
outside jump_label_lock should handle its own serialization.  The only
two that are not doing so are the UDP encapsulation static keys.  Change
them to use static_key_enable, which now correctly tests key->enabled
under the jump label lock.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 net/ipv4/udp.c | 3 +--
 net/ipv6/udp.c | 3 +--
 2 files changed, 2 insertions(+), 4 deletions(-)

diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
index b057653ceca9..9b305776fe14 100644
--- a/net/ipv4/udp.c
+++ b/net/ipv4/udp.c
@@ -1811,8 +1811,7 @@ static int __udp_queue_rcv_skb(struct sock *sk, struct sk_buff *skb)
 static struct static_key udp_encap_needed __read_mostly;
 void udp_encap_enable(void)
 {
-	if (!static_key_enabled(&udp_encap_needed))
-		static_key_slow_inc(&udp_encap_needed);
+	static_key_enable(&udp_encap_needed);
 }
 EXPORT_SYMBOL(udp_encap_enable);
 
diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
index 4a3e65626e8b..27057c41d648 100644
--- a/net/ipv6/udp.c
+++ b/net/ipv6/udp.c
@@ -569,8 +569,7 @@ static __inline__ void udpv6_err(struct sk_buff *skb,
 static struct static_key udpv6_encap_needed __read_mostly;
 void udpv6_encap_enable(void)
 {
-	if (!static_key_enabled(&udpv6_encap_needed))
-		static_key_slow_inc(&udpv6_encap_needed);
+	static_key_enable(&udpv6_encap_needed);
 }
 EXPORT_SYMBOL(udpv6_encap_enable);
 
-- 
2.13.3

>From 7d8f5a373c0357663683197dfc64f20abf31a892 Mon Sep 17 00:00:00 2001
From: Paolo Bonzini <pbonzini@redhat.com>
Date: Mon, 31 Jul 2017 16:52:07 +0200
Subject: [PATCH 3/3] cpuset: make nr_cpusets private

Any use of key->enabled (that is static_key_enabled and static_key_count)
outside jump_label_lock should handle its own serialization.  In the case
of cpusets_enabled_key, the key is always incremented/decremented under
cpuset_mutex, and hence the same rule applies to nr_cpusets.  The rule
*is* respected currently, but the mutex is static so nr_cpusets should
be static too.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 include/linux/cpuset.h | 6 ------
 kernel/cgroup/cpuset.c | 7 +++++++
 2 files changed, 7 insertions(+), 6 deletions(-)

diff --git a/include/linux/cpuset.h b/include/linux/cpuset.h
index 119a3f9604b0..cedcc910b7a7 100644
--- a/include/linux/cpuset.h
+++ b/include/linux/cpuset.h
@@ -24,12 +24,6 @@ static inline bool cpusets_enabled(void)
 	return static_branch_unlikely(&cpusets_enabled_key);
 }
 
-static inline int nr_cpusets(void)
-{
-	/* jump label reference count + the top-level cpuset */
-	return static_key_count(&cpusets_enabled_key.key) + 1;
-}
-
 static inline void cpuset_inc(void)
 {
 	static_branch_inc(&cpusets_enabled_key);
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index ca8376e5008c..6ddca2480276 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -576,6 +576,13 @@ static void update_domain_attr_tree(struct sched_domain_attr *dattr,
 	rcu_read_unlock();
 }
 
+/* Must be called with cpuset_mutex held. */
+static inline int nr_cpusets(void)
+{
+	/* jump label reference count + the top-level cpuset */
+	return static_key_count(&cpusets_enabled_key.key) + 1;
+}
+
 /*
  * generate_sched_domains()
  *
-- 
2.13.3
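
Since the usage documentation is still missing, here is a rough userspace
analogue of the double-check that the new static_key_enable/disable do,
just to show the ordering.  C11 atomics and a pthread mutex stand in for
key->enabled and jump_label_lock(); key_enabled, label_lock and
update_branches() are made-up names for the sketch, not kernel code:

/*
 * Userspace sketch of the double-checked enable/disable from patch 1.
 * "key_enabled" stands in for key->enabled, "label_lock" for
 * jump_label_lock(), and update_branches() is a placeholder for
 * jump_label_update().  Illustration only, not kernel code.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int key_enabled;	/* 0 = disabled, 1 = enabled, -1 = update in progress */
static pthread_mutex_t label_lock = PTHREAD_MUTEX_INITIALIZER;

static void update_branches(void)
{
	/* placeholder for jump_label_update() */
	printf("patching branches\n");
}

static void key_enable(void)
{
	/* Quick check: if the key is already enabled, do nothing. */
	if (atomic_load(&key_enabled) != 0)
		return;

	pthread_mutex_lock(&label_lock);
	/* Recheck under the lock so only one caller patches the branches. */
	if (atomic_load(&key_enabled) == 0) {
		atomic_store(&key_enabled, -1);	/* mark "update in progress", as the patch does */
		update_branches();
		atomic_store(&key_enabled, 1);
	}
	pthread_mutex_unlock(&label_lock);
}

static void key_disable(void)
{
	/* Quick check: already disabled, nothing to do. */
	if (atomic_load(&key_enabled) != 1)
		return;

	pthread_mutex_lock(&label_lock);
	/* cmpxchg-style recheck: only the caller that flips 1 -> 0 patches. */
	int expected = 1;
	if (atomic_compare_exchange_strong(&key_enabled, &expected, 0))
		update_branches();
	pthread_mutex_unlock(&label_lock);
}

int main(void)
{
	key_enable();
	key_enable();	/* second call sees key_enabled == 1 and returns early */
	key_disable();
	return 0;
}

The point is that the only transitions away from 0 and 1 happen with
label_lock held, which is what the patch achieves with jump_label_lock()
(plus cpus_read_lock() around the actual code patching).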
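
In the same spirit, a sketch of the rule that patch 3 documents: the raw
count is only meaningful if the reader holds the same mutex that
serializes the writers (cpuset_mutex in the real code).  The *_analogue
names below are invented for the example:

/*
 * Userspace sketch of the nr_cpusets() rule from patch 3: readers of
 * the raw reference count must hold the same mutex as the writers.
 * Names are invented; this is not the kernel implementation.
 */
#include <pthread.h>
#include <stdio.h>

static int cpusets_count;	/* stands in for the key's reference count */
static pthread_mutex_t cpuset_mutex_analogue = PTHREAD_MUTEX_INITIALIZER;

/* Must be called with cpuset_mutex_analogue held, like nr_cpusets(). */
static int nr_cpusets_analogue(void)
{
	/* reference count + the top-level cpuset */
	return cpusets_count + 1;
}

int main(void)
{
	/* cpuset code holds cpuset_mutex around both the writes and the reads */
	pthread_mutex_lock(&cpuset_mutex_analogue);
	cpusets_count++;	/* like cpuset_inc() */
	printf("%d cpusets\n", nr_cpusets_analogue());
	pthread_mutex_unlock(&cpuset_mutex_analogue);
	return 0;
}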