From: Frederic Weisbecker
To: LKML <linux-kernel@vger.kernel.org>
Cc: Frederic Weisbecker, Sebastian Andrzej Siewior, Peter Zijlstra,
    "David S. Miller", Linus Torvalds, Thomas Gleixner,
    "Paul E. McKenney", Ingo Molnar, Frederic Weisbecker,
    Mauro Carvalho Chehab
Subject: [RFC PATCH 25/30] softirq: Push down softirq mask to __local_bh_disable_ip()
Date: Thu, 11 Oct 2018 01:12:12 +0200
Message-Id: <1539213137-13953-26-git-send-email-frederic@kernel.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1539213137-13953-1-git-send-email-frederic@kernel.org>
References: <1539213137-13953-1-git-send-email-frederic@kernel.org>

Now that all callers are ready, we can push the softirq enabled mask down
to the core from callers such as spin_lock_bh(), local_bh_disable(),
rcu_read_lock_bh(), etc. The mask is applied to the CPU's vector enabled
mask in __local_bh_disable_ip(), which returns the old value so that it
can be restored by __local_bh_enable_ip().

Signed-off-by: Frederic Weisbecker
Cc: Ingo Molnar
Cc: Sebastian Andrzej Siewior
Cc: Thomas Gleixner
Cc: Peter Zijlstra
Cc: Linus Torvalds
Cc: David S. Miller
Cc: Mauro Carvalho Chehab
Cc: Paul E. McKenney
---
 include/linux/bottom_half.h      | 19 ++++++++++---------
 include/linux/rwlock_api_smp.h   | 14 ++++++++------
 include/linux/spinlock_api_smp.h | 10 +++++-----
 kernel/softirq.c                 | 28 +++++++++++++++++++---------
 4 files changed, 42 insertions(+), 29 deletions(-)

diff --git a/include/linux/bottom_half.h b/include/linux/bottom_half.h
index 31fcdae..f8a68c8 100644
--- a/include/linux/bottom_half.h
+++ b/include/linux/bottom_half.h
@@ -37,9 +37,10 @@ enum

 #ifdef CONFIG_TRACE_IRQFLAGS
-extern void __local_bh_disable_ip(unsigned long ip, unsigned int cnt);
+extern unsigned int __local_bh_disable_ip(unsigned long ip, unsigned int cnt,
+					  unsigned int mask);
 #else
-static __always_inline void __local_bh_disable_ip(unsigned long ip, unsigned int cnt)
+static __always_inline unsigned int __local_bh_disable_ip(unsigned long ip, unsigned int cnt)
 {
 	preempt_count_add(cnt);
 	barrier();
@@ -48,21 +49,21 @@ static __always_inline void __local_bh_disable_ip(unsigned long ip, unsigned int

 static inline unsigned int local_bh_disable(unsigned int mask)
 {
-	__local_bh_disable_ip(_THIS_IP_, SOFTIRQ_DISABLE_OFFSET);
-	return 0;
+	return __local_bh_disable_ip(_THIS_IP_, SOFTIRQ_DISABLE_OFFSET, mask);
 }

-extern void local_bh_enable_no_softirq(void);
-extern void __local_bh_enable_ip(unsigned long ip, unsigned int cnt);
+extern void local_bh_enable_no_softirq(unsigned int bh);
+extern void __local_bh_enable_ip(unsigned long ip,
+				 unsigned int cnt, unsigned int bh);

-static inline void local_bh_enable_ip(unsigned long ip)
+static inline void local_bh_enable_ip(unsigned long ip, unsigned int bh)
 {
-	__local_bh_enable_ip(ip, SOFTIRQ_DISABLE_OFFSET);
+	__local_bh_enable_ip(ip, SOFTIRQ_DISABLE_OFFSET, bh);
 }

 static inline void local_bh_enable(unsigned int bh)
 {
-	__local_bh_enable_ip(_THIS_IP_, SOFTIRQ_DISABLE_OFFSET);
+	__local_bh_enable_ip(_THIS_IP_, SOFTIRQ_DISABLE_OFFSET, bh);
 }

 extern void local_bh_disable_all(void);
diff --git a/include/linux/rwlock_api_smp.h b/include/linux/rwlock_api_smp.h
index fb66489..90ba7bf 100644
--- a/include/linux/rwlock_api_smp.h
+++ b/include/linux/rwlock_api_smp.h
@@ -173,10 +173,11 @@ static inline void __raw_read_lock_irq(rwlock_t *lock)
 static inline unsigned int __raw_read_lock_bh(rwlock_t *lock,
 					      unsigned int mask)
 {
-	__local_bh_disable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET);
+	unsigned int bh;
+	bh = __local_bh_disable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET, mask);
 	rwlock_acquire_read(&lock->dep_map, 0, 0, _RET_IP_);
 	LOCK_CONTENDED(lock, do_raw_read_trylock, do_raw_read_lock);
-	return 0;
+	return bh;
 }

 static inline unsigned long __raw_write_lock_irqsave(rwlock_t *lock)
@@ -202,10 +203,11 @@ static inline void __raw_write_lock_irq(rwlock_t *lock)
 static inline unsigned int __raw_write_lock_bh(rwlock_t *lock,
 					       unsigned int mask)
 {
-	__local_bh_disable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET);
+	unsigned int bh;
+	bh = __local_bh_disable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET, mask);
 	rwlock_acquire(&lock->dep_map, 0, 0, _RET_IP_);
 	LOCK_CONTENDED(lock, do_raw_write_trylock, do_raw_write_lock);
-	return 0;
+	return bh;
 }

 static inline void __raw_write_lock(rwlock_t *lock)
@@ -253,7 +255,7 @@ static inline void __raw_read_unlock_bh(rwlock_t *lock,
 {
 	rwlock_release(&lock->dep_map, 1, _RET_IP_);
 	do_raw_read_unlock(lock);
-	__local_bh_enable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET);
+	__local_bh_enable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET, bh);
 }

 static inline void __raw_write_unlock_irqrestore(rwlock_t *lock,
@@ -278,7 +280,7 @@ static inline void __raw_write_unlock_bh(rwlock_t *lock,
 {
 	rwlock_release(&lock->dep_map, 1, _RET_IP_);
 	do_raw_write_unlock(lock);
-	__local_bh_enable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET);
+	__local_bh_enable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET, bh);
 }

 #endif /* __LINUX_RWLOCK_API_SMP_H */
diff --git a/include/linux/spinlock_api_smp.h b/include/linux/spinlock_api_smp.h
index 42bbf68..6602a56 100644
--- a/include/linux/spinlock_api_smp.h
+++ b/include/linux/spinlock_api_smp.h
@@ -132,9 +132,9 @@ static inline void __raw_spin_lock_irq(raw_spinlock_t *lock)
 static inline unsigned int __raw_spin_lock_bh(raw_spinlock_t *lock,
 					      unsigned int mask)
 {
-	unsigned int bh = 0;
+	unsigned int bh;

-	__local_bh_disable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET);
+	bh = __local_bh_disable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET, mask);
 	spin_acquire(&lock->dep_map, 0, 0, _RET_IP_);
 	LOCK_CONTENDED(lock, do_raw_spin_trylock, do_raw_spin_lock);
@@ -179,19 +179,19 @@ static inline void __raw_spin_unlock_bh(raw_spinlock_t *lock,
 {
 	spin_release(&lock->dep_map, 1, _RET_IP_);
 	do_raw_spin_unlock(lock);
-	__local_bh_enable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET);
+	__local_bh_enable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET, bh);
 }

 static inline int __raw_spin_trylock_bh(raw_spinlock_t *lock,
					 unsigned int *bh,
					 unsigned int mask)
 {
-	__local_bh_disable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET);
+	*bh = __local_bh_disable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET, mask);
 	if (do_raw_spin_trylock(lock)) {
 		spin_acquire(&lock->dep_map, 0, 1, _RET_IP_);
 		return 1;
 	}
-	__local_bh_enable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET);
+	__local_bh_enable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET, *bh);
 	return 0;
 }
diff --git a/kernel/softirq.c b/kernel/softirq.c
index 22cc0a7..e2435b0 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -107,13 +107,16 @@ static bool ksoftirqd_running(unsigned long pending)
  * where hardirqs are disabled legitimately:
  */
 #ifdef CONFIG_TRACE_IRQFLAGS
-void __local_bh_disable_ip(unsigned long ip, unsigned int cnt)
+unsigned int __local_bh_disable_ip(unsigned long ip, unsigned int cnt,
+				   unsigned int mask)
 {
 	unsigned long flags;
+	unsigned int enabled;

 	WARN_ON_ONCE(in_irq());

 	raw_local_irq_save(flags);
+
 	/*
 	 * The preempt tracer hooks into preempt_count_add and will break
 	 * lockdep because it calls back into lockdep after SOFTIRQ_OFFSET
@@ -127,6 +130,9 @@ void __local_bh_disable_ip(unsigned long ip, unsigned int cnt)
 	 */
 	if (softirq_count() == (cnt & SOFTIRQ_MASK))
 		trace_softirqs_off(ip);
+
+	enabled = local_softirq_enabled();
+	softirq_enabled_nand(mask);
 	raw_local_irq_restore(flags);

 	if (preempt_count() == cnt) {
@@ -135,6 +141,7 @@ void __local_bh_disable_ip(unsigned long ip, unsigned int cnt)
 #endif
 		trace_preempt_off(CALLER_ADDR0, get_lock_parent_ip());
 	}
+	return enabled;
 }
 EXPORT_SYMBOL(__local_bh_disable_ip);
 #endif /* CONFIG_TRACE_IRQFLAGS */
@@ -143,11 +150,13 @@ EXPORT_SYMBOL(__local_bh_disable_ip);
  * Special-case - softirqs can safely be enabled by __do_softirq(),
  * without processing still-pending softirqs:
  */
-void local_bh_enable_no_softirq(void)
+void local_bh_enable_no_softirq(unsigned int bh)
 {
 	WARN_ON_ONCE(in_irq());
 	lockdep_assert_irqs_disabled();

+	softirq_enabled_set(bh);
+
 	if (preempt_count() == SOFTIRQ_DISABLE_OFFSET)
 		trace_preempt_on(CALLER_ADDR0, get_lock_parent_ip());
@@ -155,17 +164,18 @@ void local_bh_enable_no_softirq(void)
 		trace_softirqs_on(_RET_IP_);

 	__preempt_count_sub(SOFTIRQ_DISABLE_OFFSET);
-
 }
 EXPORT_SYMBOL(local_bh_enable_no_softirq);

-void __local_bh_enable_ip(unsigned long ip, unsigned int cnt)
+void __local_bh_enable_ip(unsigned long ip, unsigned int cnt, unsigned int bh)
 {
 	WARN_ON_ONCE(in_irq());
 	lockdep_assert_irqs_enabled();
 #ifdef CONFIG_TRACE_IRQFLAGS
 	local_irq_disable();
 #endif
+	softirq_enabled_set(bh);
+
 	/*
 	 * Are softirqs going to be turned on now:
 	 */
@@ -177,6 +187,7 @@ void __local_bh_enable_ip(unsigned long ip, unsigned int cnt)
 	 */
 	preempt_count_sub(cnt - 1);
+
 	if (unlikely(!in_interrupt() && local_softirq_pending())) {
 		/*
 		 * Run softirq if any pending. And do it in its own stack
@@ -246,9 +257,6 @@ static void local_bh_exit(void)
 	__preempt_count_sub(SOFTIRQ_OFFSET);
 }

-
-
-
 /*
  * We restart softirq processing for at most MAX_SOFTIRQ_RESTART times,
  * but break the loop if need_resched() is set or after 2 ms.
@@ -395,15 +403,17 @@ asmlinkage __visible void do_softirq(void)
  */
 void irq_enter(void)
 {
+	unsigned int bh;
+
 	rcu_irq_enter();
 	if (is_idle_task(current) && !in_interrupt()) {
 		/*
 		 * Prevent raise_softirq from needlessly waking up ksoftirqd
 		 * here, as softirq will be serviced on return from interrupt.
 		 */
-		local_bh_disable(SOFTIRQ_ALL_MASK);
+		bh = local_bh_disable(SOFTIRQ_ALL_MASK);
 		tick_irq_enter();
-		local_bh_enable_no_softirq();
+		local_bh_enable_no_softirq(bh);
 	}
 	__irq_enter();
--
2.7.4