Date: Fri, 6 Jul 2018 12:58:57 +0200
From: Sebastian Andrzej Siewior
To: Joe Korty
Cc: Julia Cartwright, tglx@linutronix.de, rostedt@goodmis.org,
	linux-rt-users@vger.kernel.org, linux-kernel@vger.kernel.org,
	Peter Zijlstra
Subject: [PATCH RT v2] sched/migrate_disable: fallback to preempt_disable() instead barrier()
Message-ID: <20180706105857.5tgi5irdxdryet64@linutronix.de>
References: <20180704173519.GA24614@zipoli.concurrent-rt.com>
 <20180705155034.s6q2lsqc3o7srzwp@linutronix.de>
 <20180705161807.GA5800@zipoli.concurrent-rt.com>
 <20180705165937.5orx3md3krg4akaz@linutronix.de>
In-Reply-To: <20180705165937.5orx3md3krg4akaz@linutronix.de>

On SMP + !RT migrate_disable() is still around. It is not part of
spin_lock() anymore, so it has almost no users. However, the futex code
has a workaround for the !in_atomic() part of migrate_disable() which
fails because the matching migrate_disable() is no longer part of
spin_lock().

On !SMP + !RT migrate_disable() is reduced to barrier(). This is not
optimal because we have a few spots where a "preempt_disable()"
statement was replaced with "migrate_disable()".

We also used the migrate_disable counter to figure out whether a
sleeping lock is acquired, so that RCU does not complain about
schedule() during rcu_read_lock() while a sleeping lock is held. This
has changed: we no longer use it for that and now have a sleeping_lock
counter for the RCU purpose.

This means we can now:

- for SMP + RT_BASE
  the full migrate_disable() implementation is kept, nothing changes
  here.

- for !SMP + RT_BASE
  the migration counting is no longer required. It used to ensure that
  the task is not migrated to another CPU and that this CPU remains
  online. !SMP ensures that already. Move it to CONFIG_SCHED_DEBUG so
  the counting is done for debugging purposes only.

- for all other cases, including !RT,
  fall back to preempt_disable(). The only remaining users of
  migrate_disable() are those which were converted from
  preempt_disable() and the futex workaround, which is already in the
  preempt_disable() section due to the spin_lock that is held.

Cc: stable-rt@vger.kernel.org
Reported-by: joe.korty@concurrent-rt.com
Signed-off-by: Sebastian Andrzej Siewior
---
v1…v2: limit migrate_disable to RT only. Use preempt_disable() for !RT
       if migrate_disable() is used.

 include/linux/preempt.h |    6 +++---
 include/linux/sched.h   |    4 ++--
 kernel/sched/core.c     |   23 +++++++++++------------
 kernel/sched/debug.c    |    2 +-
 4 files changed, 17 insertions(+), 18 deletions(-)

--- a/include/linux/preempt.h
+++ b/include/linux/preempt.h
@@ -204,7 +204,7 @@ do { \
 
 #define preemptible()	(preempt_count() == 0 && !irqs_disabled())
 
-#ifdef CONFIG_SMP
+#if defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT_BASE)
 
 extern void migrate_disable(void);
 extern void migrate_enable(void);
@@ -221,8 +221,8 @@ static inline int __migrate_disabled(str
 }
 
 #else
-#define migrate_disable()		barrier()
-#define migrate_enable()		barrier()
+#define migrate_disable()		preempt_disable()
+#define migrate_enable()		preempt_enable()
 static inline int __migrate_disabled(struct task_struct *p)
 {
 	return 0;
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -645,7 +645,7 @@ struct task_struct {
 	int				nr_cpus_allowed;
 	const cpumask_t			*cpus_ptr;
 	cpumask_t			cpus_mask;
-#if defined(CONFIG_PREEMPT_COUNT) && defined(CONFIG_SMP)
+#if defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT_BASE)
 	int				migrate_disable;
 	int				migrate_disable_update;
 # ifdef CONFIG_SCHED_DEBUG
@@ -653,8 +653,8 @@ struct task_struct {
 # endif
 
 #elif !defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT_BASE)
-	int				migrate_disable;
 # ifdef CONFIG_SCHED_DEBUG
+	int				migrate_disable;
 	int				migrate_disable_atomic;
 # endif
 #endif
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1059,7 +1059,7 @@ void set_cpus_allowed_common(struct task
 	p->nr_cpus_allowed = cpumask_weight(new_mask);
 }
 
-#if defined(CONFIG_PREEMPT_COUNT) && defined(CONFIG_SMP)
+#if defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT_BASE)
 int __migrate_disabled(struct task_struct *p)
 {
 	return p->migrate_disable;
@@ -1098,7 +1098,7 @@ static void __do_set_cpus_allowed_tail(s
 
 void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
 {
-#if defined(CONFIG_PREEMPT_COUNT) && defined(CONFIG_SMP)
+#if defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT_BASE)
 	if (__migrate_disabled(p)) {
 		lockdep_assert_held(&p->pi_lock);
 
@@ -1171,7 +1171,7 @@ static int __set_cpus_allowed_ptr(struct
 	if (cpumask_test_cpu(task_cpu(p), new_mask) || __migrate_disabled(p))
 		goto out;
 
-#if defined(CONFIG_PREEMPT_COUNT) && defined(CONFIG_SMP)
+#if defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT_BASE)
 	if (__migrate_disabled(p)) {
 		p->migrate_disable_update = 1;
 		goto out;
@@ -7134,7 +7134,7 @@ const u32 sched_prio_to_wmult[40] = {
 /*  15 */ 119304647, 148102320, 186737708, 238609294, 286331153,
 };
 
-#if defined(CONFIG_PREEMPT_COUNT) && defined(CONFIG_SMP)
+#if defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT_BASE)
 
 static inline void
 update_nr_migratory(struct task_struct *p, long delta)
@@ -7282,45 +7282,44 @@ EXPORT_SYMBOL(migrate_enable);
 #elif !defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT_BASE)
 void migrate_disable(void)
 {
+#ifdef CONFIG_SCHED_DEBUG
 	struct task_struct *p = current;
 
 	if (in_atomic() || irqs_disabled()) {
-#ifdef CONFIG_SCHED_DEBUG
 		p->migrate_disable_atomic++;
-#endif
 		return;
 	}
-#ifdef CONFIG_SCHED_DEBUG
+
 	if (unlikely(p->migrate_disable_atomic)) {
 		tracing_off();
 		WARN_ON_ONCE(1);
 	}
-#endif
 
 	p->migrate_disable++;
+#endif
+	barrier();
 }
 EXPORT_SYMBOL(migrate_disable);
 
 void migrate_enable(void)
 {
+#ifdef CONFIG_SCHED_DEBUG
 	struct task_struct *p = current;
 
 	if (in_atomic() || irqs_disabled()) {
-#ifdef CONFIG_SCHED_DEBUG
 		p->migrate_disable_atomic--;
-#endif
 		return;
 	}
 
-#ifdef CONFIG_SCHED_DEBUG
 	if (unlikely(p->migrate_disable_atomic)) {
 		tracing_off();
 		WARN_ON_ONCE(1);
 	}
-#endif
 
 	WARN_ON_ONCE(p->migrate_disable <= 0);
 	p->migrate_disable--;
+#endif
+	barrier();
 }
 EXPORT_SYMBOL(migrate_enable);
 #endif
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -1030,7 +1030,7 @@ void proc_sched_show_task(struct task_st
 		P(dl.runtime);
 		P(dl.deadline);
 	}
-#if defined(CONFIG_PREEMPT_COUNT) && defined(CONFIG_SMP)
+#if defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT_BASE)
 	P(migrate_disable);
 #endif
 	P(nr_cpus_allowed);
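
For illustration only, here is a minimal stand-alone sketch of what a
migrate_disable()/migrate_enable() pair boils down to after this change.
It is not kernel code: the stub counters, the config switches and their
values are assumptions made for the example, and the !SMP + RT_BASE
case (barrier() plus debug-only counting) is left out for brevity.

/*
 * Stand-alone sketch, not the kernel implementation.  Flip the two
 * CONFIG_* values to model the different configurations.
 */
#include <stdio.h>

#define CONFIG_SMP		1
#define CONFIG_PREEMPT_RT_BASE	0

static int preempt_count_stub;		/* stands in for the preempt counter */
static int migrate_disable_stub;	/* stands in for p->migrate_disable */

#define preempt_disable()	(preempt_count_stub++)
#define preempt_enable()	(preempt_count_stub--)

#if CONFIG_SMP && CONFIG_PREEMPT_RT_BASE
/* SMP + RT: a real per-task counter keeps the task on its current CPU. */
# define migrate_disable()	(migrate_disable_stub++)
# define migrate_enable()	(migrate_disable_stub--)
#else
/* All other cases, including !RT: plain preemption disabling. */
# define migrate_disable()	preempt_disable()
# define migrate_enable()	preempt_enable()
#endif

int main(void)
{
	migrate_disable();
	/* prints preempt=1 migrate=0 on !RT, preempt=0 migrate=1 on SMP + RT */
	printf("preempt=%d migrate=%d\n",
	       preempt_count_stub, migrate_disable_stub);
	migrate_enable();
	return 0;
}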