From: "Joel Fernandes (Google)" <joel@joelfernandes.org>
To: linux-kernel@vger.kernel.org
Cc: "Joel Fernandes (Google)", urezki@gmail.com, bigeasy@linutronix.de,
    Josh Triplett, Lai Jiangshan, Mathieu Desnoyers, "Paul E. McKenney",
    rcu@vger.kernel.org, Steven Rostedt
Subject: [PATCH rcu/dev -fixes 1/4] rcu/tree: Keep kfree_rcu() awake during lock contention
Date: Mon, 20 Apr 2020 11:38:34 -0400
Message-Id: <20200420153837.194532-2-joel@joelfernandes.org>
In-Reply-To: <20200420153837.194532-1-joel@joelfernandes.org>
References: <20200420153837.194532-1-joel@joelfernandes.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

On PREEMPT_RT kernels, contending on the krcp spinlock can cause
sleeping, because on these kernels the spinlock is converted to an
rt-mutex. To avoid breaking current or future users of kfree_rcu()
that cannot sleep, make use of raw spinlocks, which are not subject
to such conversion. Vetting all code paths gives no reason to believe
that the raw spinlock will be held for a long time, so PREEMPT_RT
should not suffer from lengthy acquisitions of the lock.

Cc: urezki@gmail.com
Cc: bigeasy@linutronix.de
Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
---
 kernel/rcu/tree.c | 30 +++++++++++++++---------------
 1 file changed, 15 insertions(+), 15 deletions(-)
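For reviewers less familiar with PREEMPT_RT locking, here is a minimal
sketch of the pattern this patch applies. It is not part of the patch;
my_lock, my_data and update_any_context() are made-up names used only
for illustration:

#include <linux/spinlock.h>

/* Stays a true busy-waiting spinlock even on PREEMPT_RT kernels. */
static DEFINE_RAW_SPINLOCK(my_lock);
static int my_data;

static void update_any_context(int val)
{
	unsigned long flags;

	/*
	 * A plain spinlock_t is converted to an rt-mutex on PREEMPT_RT,
	 * so taking it may sleep, which is illegal from a context that
	 * cannot sleep. The raw variant always spins, at the cost of
	 * keeping interrupts and preemption off across the critical
	 * section, so that section must stay short.
	 */
	raw_spin_lock_irqsave(&my_lock, flags);
	my_data = val;
	raw_spin_unlock_irqrestore(&my_lock, flags);
}

That cost is the trade-off the commit message vets: raw critical
sections cannot be preempted on RT, so they must be provably short.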
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index f288477ee1c26..cf68d3d9f5b81 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -2905,7 +2905,7 @@ struct kfree_rcu_cpu {
 	struct kfree_rcu_bulk_data *bhead;
 	struct kfree_rcu_bulk_data *bcached;
 	struct kfree_rcu_cpu_work krw_arr[KFREE_N_BATCHES];
-	spinlock_t lock;
+	raw_spinlock_t lock;
 	struct delayed_work monitor_work;
 	bool monitor_todo;
 	bool initialized;
@@ -2939,12 +2939,12 @@ static void kfree_rcu_work(struct work_struct *work)
 	krwp = container_of(to_rcu_work(work),
 			    struct kfree_rcu_cpu_work, rcu_work);
 	krcp = krwp->krcp;
-	spin_lock_irqsave(&krcp->lock, flags);
+	raw_spin_lock_irqsave(&krcp->lock, flags);
 	head = krwp->head_free;
 	krwp->head_free = NULL;
 	bhead = krwp->bhead_free;
 	krwp->bhead_free = NULL;
-	spin_unlock_irqrestore(&krcp->lock, flags);
+	raw_spin_unlock_irqrestore(&krcp->lock, flags);
 
 	/* "bhead" is now private, so traverse locklessly. */
 	for (; bhead; bhead = bnext) {
@@ -3047,14 +3047,14 @@ static inline void kfree_rcu_drain_unlock(struct kfree_rcu_cpu *krcp,
 	krcp->monitor_todo = false;
 	if (queue_kfree_rcu_work(krcp)) {
 		// Success! Our job is done here.
-		spin_unlock_irqrestore(&krcp->lock, flags);
+		raw_spin_unlock_irqrestore(&krcp->lock, flags);
 		return;
 	}
 
 	// Previous RCU batch still in progress, try again later.
 	krcp->monitor_todo = true;
 	schedule_delayed_work(&krcp->monitor_work, KFREE_DRAIN_JIFFIES);
-	spin_unlock_irqrestore(&krcp->lock, flags);
+	raw_spin_unlock_irqrestore(&krcp->lock, flags);
 }
 
 /*
@@ -3067,11 +3067,11 @@ static void kfree_rcu_monitor(struct work_struct *work)
 	struct kfree_rcu_cpu *krcp = container_of(work, struct kfree_rcu_cpu,
 						 monitor_work.work);
 
-	spin_lock_irqsave(&krcp->lock, flags);
+	raw_spin_lock_irqsave(&krcp->lock, flags);
 	if (krcp->monitor_todo)
 		kfree_rcu_drain_unlock(krcp, flags);
 	else
-		spin_unlock_irqrestore(&krcp->lock, flags);
+		raw_spin_unlock_irqrestore(&krcp->lock, flags);
 }
 
 static inline bool
@@ -3142,7 +3142,7 @@ void kfree_call_rcu(struct rcu_head *head, rcu_callback_t func)
 	local_irq_save(flags);	// For safely calling this_cpu_ptr().
 	krcp = this_cpu_ptr(&krc);
 	if (krcp->initialized)
-		spin_lock(&krcp->lock);
+		raw_spin_lock(&krcp->lock);
 
 	// Queue the object but don't yet schedule the batch.
 	if (debug_rcu_head_queue(head)) {
@@ -3173,7 +3173,7 @@ void kfree_call_rcu(struct rcu_head *head, rcu_callback_t func)
 
 unlock_return:
 	if (krcp->initialized)
-		spin_unlock(&krcp->lock);
+		raw_spin_unlock(&krcp->lock);
 	local_irq_restore(flags);
 }
 EXPORT_SYMBOL_GPL(kfree_call_rcu);
@@ -3205,11 +3205,11 @@ kfree_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
 		struct kfree_rcu_cpu *krcp = per_cpu_ptr(&krc, cpu);
 
 		count = krcp->count;
-		spin_lock_irqsave(&krcp->lock, flags);
+		raw_spin_lock_irqsave(&krcp->lock, flags);
 		if (krcp->monitor_todo)
 			kfree_rcu_drain_unlock(krcp, flags);
 		else
-			spin_unlock_irqrestore(&krcp->lock, flags);
+			raw_spin_unlock_irqrestore(&krcp->lock, flags);
 
 		sc->nr_to_scan -= count;
 		freed += count;
@@ -3236,15 +3236,15 @@ void __init kfree_rcu_scheduler_running(void)
 	for_each_online_cpu(cpu) {
 		struct kfree_rcu_cpu *krcp = per_cpu_ptr(&krc, cpu);
 
-		spin_lock_irqsave(&krcp->lock, flags);
+		raw_spin_lock_irqsave(&krcp->lock, flags);
 		if (!krcp->head || krcp->monitor_todo) {
-			spin_unlock_irqrestore(&krcp->lock, flags);
+			raw_spin_unlock_irqrestore(&krcp->lock, flags);
 			continue;
 		}
 		krcp->monitor_todo = true;
 		schedule_delayed_work_on(cpu, &krcp->monitor_work,
 					 KFREE_DRAIN_JIFFIES);
-		spin_unlock_irqrestore(&krcp->lock, flags);
+		raw_spin_unlock_irqrestore(&krcp->lock, flags);
 	}
 }
 
@@ -4140,7 +4140,7 @@ static void __init kfree_rcu_batch_init(void)
 	for_each_possible_cpu(cpu) {
 		struct kfree_rcu_cpu *krcp = per_cpu_ptr(&krc, cpu);
 
-		spin_lock_init(&krcp->lock);
+		raw_spin_lock_init(&krcp->lock);
 		for (i = 0; i < KFREE_N_BATCHES; i++) {
 			INIT_RCU_WORK(&krcp->krw_arr[i].rcu_work, kfree_rcu_work);
 			krcp->krw_arr[i].krcp = krcp;
-- 
2.26.1.301.g55bc3eb7cb9-goog