From: paulmck@kernel.org
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, mingo@kernel.org,
    jiangshanlai@gmail.com, dipankar@in.ibm.com, akpm@linux-foundation.org,
    mathieu.desnoyers@efficios.com, josh@joshtriplett.org,
    tglx@linutronix.de, peterz@infradead.org, rostedt@goodmis.org,
    dhowells@redhat.com, edumazet@google.com, fweisbec@gmail.com,
    oleg@redhat.com, joel@joelfernandes.org, bigeasy@linutronix.de,
    Uladzislau Rezki <urezki@gmail.com>,
    "Paul E. McKenney" <paulmck@kernel.org>
Subject: [PATCH tip/core/rcu 02/17] rcu/tree: Keep kfree_rcu() awake during lock contention
Date: Wed, 24 Jun 2020 13:12:11 -0700
Message-Id: <20200624201226.21197-2-paulmck@kernel.org>
X-Mailer: git-send-email 2.9.5
In-Reply-To: <20200624201200.GA28901@paulmck-ThinkPad-P72>
References: <20200624201200.GA28901@paulmck-ThinkPad-P72>

From: "Joel Fernandes (Google)" <joel@joelfernandes.org>

On PREEMPT_RT kernels, the krcp spinlock is converted to an rt-mutex,
which causes kfree_rcu() callers to sleep.  This makes kfree_rcu()
unusable for callers in purely atomic contexts, such as non-threaded
IRQ handlers and regions under raw spinlocks.  Fix this by converting
the spinlock to a raw spinlock.  All code paths have been vetted, and
there is no reason to believe that the raw spinlock will hurt RT
latencies: it is never held for long.

Cc: bigeasy@linutronix.de
Cc: Uladzislau Rezki <urezki@gmail.com>
Reviewed-by: Uladzislau Rezki <urezki@gmail.com>
Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
---
 kernel/rcu/tree.c | 30 +++++++++++++++---------------
 1 file changed, 15 insertions(+), 15 deletions(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 912d466..64592b4 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -2998,7 +2998,7 @@ struct kfree_rcu_cpu {
 	struct kfree_rcu_bulk_data *bhead;
 	struct kfree_rcu_bulk_data *bcached;
 	struct kfree_rcu_cpu_work krw_arr[KFREE_N_BATCHES];
-	spinlock_t lock;
+	raw_spinlock_t lock;
 	struct delayed_work monitor_work;
 	bool monitor_todo;
 	bool initialized;
@@ -3031,12 +3031,12 @@ static void kfree_rcu_work(struct work_struct *work)
 	krwp = container_of(to_rcu_work(work),
			    struct kfree_rcu_cpu_work, rcu_work);
 	krcp = krwp->krcp;
-	spin_lock_irqsave(&krcp->lock, flags);
+	raw_spin_lock_irqsave(&krcp->lock, flags);
 	head = krwp->head_free;
 	krwp->head_free = NULL;
 	bhead = krwp->bhead_free;
 	krwp->bhead_free = NULL;
-	spin_unlock_irqrestore(&krcp->lock, flags);
+	raw_spin_unlock_irqrestore(&krcp->lock, flags);
 
 	/* "bhead" is now private, so traverse locklessly. */
 	for (; bhead; bhead = bnext) {
@@ -3139,14 +3139,14 @@ static inline void kfree_rcu_drain_unlock(struct kfree_rcu_cpu *krcp,
 	krcp->monitor_todo = false;
 	if (queue_kfree_rcu_work(krcp)) {
 		// Success! Our job is done here.
-		spin_unlock_irqrestore(&krcp->lock, flags);
+		raw_spin_unlock_irqrestore(&krcp->lock, flags);
 		return;
 	}
 
 	// Previous RCU batch still in progress, try again later.
 	krcp->monitor_todo = true;
 	schedule_delayed_work(&krcp->monitor_work, KFREE_DRAIN_JIFFIES);
-	spin_unlock_irqrestore(&krcp->lock, flags);
+	raw_spin_unlock_irqrestore(&krcp->lock, flags);
 }
 
 /*
@@ -3159,11 +3159,11 @@ static void kfree_rcu_monitor(struct work_struct *work)
 	struct kfree_rcu_cpu *krcp = container_of(work, struct kfree_rcu_cpu,
						 monitor_work.work);
 
-	spin_lock_irqsave(&krcp->lock, flags);
+	raw_spin_lock_irqsave(&krcp->lock, flags);
 	if (krcp->monitor_todo)
 		kfree_rcu_drain_unlock(krcp, flags);
 	else
-		spin_unlock_irqrestore(&krcp->lock, flags);
+		raw_spin_unlock_irqrestore(&krcp->lock, flags);
 }
 
 static inline bool
@@ -3234,7 +3234,7 @@ void kfree_call_rcu(struct rcu_head *head, rcu_callback_t func)
 	local_irq_save(flags);	// For safely calling this_cpu_ptr().
 	krcp = this_cpu_ptr(&krc);
 	if (krcp->initialized)
-		spin_lock(&krcp->lock);
+		raw_spin_lock(&krcp->lock);
 
 	// Queue the object but don't yet schedule the batch.
 	if (debug_rcu_head_queue(head)) {
@@ -3265,7 +3265,7 @@ void kfree_call_rcu(struct rcu_head *head, rcu_callback_t func)
 
 unlock_return:
 	if (krcp->initialized)
-		spin_unlock(&krcp->lock);
+		raw_spin_unlock(&krcp->lock);
 	local_irq_restore(flags);
 }
 EXPORT_SYMBOL_GPL(kfree_call_rcu);
@@ -3297,11 +3297,11 @@ kfree_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
 		struct kfree_rcu_cpu *krcp = per_cpu_ptr(&krc, cpu);
 
 		count = krcp->count;
-		spin_lock_irqsave(&krcp->lock, flags);
+		raw_spin_lock_irqsave(&krcp->lock, flags);
 		if (krcp->monitor_todo)
 			kfree_rcu_drain_unlock(krcp, flags);
 		else
-			spin_unlock_irqrestore(&krcp->lock, flags);
+			raw_spin_unlock_irqrestore(&krcp->lock, flags);
 
 		sc->nr_to_scan -= count;
 		freed += count;
@@ -3328,15 +3328,15 @@ void __init kfree_rcu_scheduler_running(void)
 	for_each_online_cpu(cpu) {
 		struct kfree_rcu_cpu *krcp = per_cpu_ptr(&krc, cpu);
 
-		spin_lock_irqsave(&krcp->lock, flags);
+		raw_spin_lock_irqsave(&krcp->lock, flags);
 		if (!krcp->head || krcp->monitor_todo) {
-			spin_unlock_irqrestore(&krcp->lock, flags);
+			raw_spin_unlock_irqrestore(&krcp->lock, flags);
 			continue;
 		}
 		krcp->monitor_todo = true;
 		schedule_delayed_work_on(cpu, &krcp->monitor_work,
					 KFREE_DRAIN_JIFFIES);
-		spin_unlock_irqrestore(&krcp->lock, flags);
+		raw_spin_unlock_irqrestore(&krcp->lock, flags);
 	}
 }
 
@@ -4232,7 +4232,7 @@ static void __init kfree_rcu_batch_init(void)
 	for_each_possible_cpu(cpu) {
 		struct kfree_rcu_cpu *krcp = per_cpu_ptr(&krc, cpu);
 
-		spin_lock_init(&krcp->lock);
+		raw_spin_lock_init(&krcp->lock);
 		for (i = 0; i < KFREE_N_BATCHES; i++) {
 			INIT_RCU_WORK(&krcp->krw_arr[i].rcu_work, kfree_rcu_work);
 			krcp->krw_arr[i].krcp = krcp;
-- 
2.9.5
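
P.S. For readers less familiar with PREEMPT_RT lock substitution: on RT
kernels, spinlock_t is replaced by a sleeping rt-mutex, while
raw_spinlock_t remains a true spinning lock.  Below is a minimal sketch
(hypothetical code, not part of this patch; struct foo, foo_lock, and
foo_release() are invented for illustration) of the kind of caller the
commit message has in mind: kfree_rcu() invoked while a raw spinlock is
held, where sleeping is forbidden even on PREEMPT_RT.

#include <linux/spinlock.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct foo {
	int data;
	struct rcu_head rh;
};

/* A raw spinlock stays a spinning lock on PREEMPT_RT. */
static DEFINE_RAW_SPINLOCK(foo_lock);

static void foo_release(struct foo *fp)
{
	unsigned long flags;

	/* Sleeping is forbidden from here until the unlock. */
	raw_spin_lock_irqsave(&foo_lock, flags);
	/* ... unlink fp from some shared structure ... */
	kfree_rcu(fp, rh);	/* must not sleep, even on PREEMPT_RT */
	raw_spin_unlock_irqrestore(&foo_lock, flags);
}

With krcp->lock as a spinlock_t, the path inside kfree_call_rcu() could
sleep on RT, making a call sequence like the above illegal; with the
raw_spinlock_t from this patch it is safe.  The same constraint applies
to kfree_rcu() calls from non-threaded IRQ handlers.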