From: "Uladzislau Rezki (Sony)"
To: LKML, linux-mm@kvack.org
Cc: Andrew Morton, "Paul E. McKenney", "Theodore Y. Ts'o", Matthew Wilcox,
    Joel Fernandes, RCU, Uladzislau Rezki, Oleksiy Avramchenko,
    bigeasy@linutronix.de
Subject: [PATCH 01/24] rcu/tree: Keep kfree_rcu() awake during lock contention
Date: Tue, 28 Apr 2020 22:58:40 +0200
Message-Id: <20200428205903.61704-2-urezki@gmail.com>
In-Reply-To: <20200428205903.61704-1-urezki@gmail.com>
References: <20200428205903.61704-1-urezki@gmail.com>

From: "Joel Fernandes (Google)"

On PREEMPT_RT kernels, contending on the krcp spinlock can cause
sleeping, because on these kernels the spinlock is converted to an
rt-mutex.
To prevent breakage of possible usage of kfree_rcu() now or in the
future, make use of raw spinlocks, which are not subject to such
conversions. Vetting all code paths, there is no reason to believe
that the raw spinlock will be held for a long time, so PREEMPT_RT
should not suffer from lengthy acquisitions of the lock.

Cc: bigeasy@linutronix.de
Cc: Uladzislau Rezki
Reviewed-by: Uladzislau Rezki
Signed-off-by: Joel Fernandes (Google)
Signed-off-by: Uladzislau Rezki (Sony)
---
 kernel/rcu/tree.c | 30 +++++++++++++++---------------
 1 file changed, 15 insertions(+), 15 deletions(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index f288477ee1c2..cf68d3d9f5b8 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -2905,7 +2905,7 @@ struct kfree_rcu_cpu {
 	struct kfree_rcu_bulk_data *bhead;
 	struct kfree_rcu_bulk_data *bcached;
 	struct kfree_rcu_cpu_work krw_arr[KFREE_N_BATCHES];
-	spinlock_t lock;
+	raw_spinlock_t lock;
 	struct delayed_work monitor_work;
 	bool monitor_todo;
 	bool initialized;
@@ -2939,12 +2939,12 @@ static void kfree_rcu_work(struct work_struct *work)
 	krwp = container_of(to_rcu_work(work),
 			    struct kfree_rcu_cpu_work, rcu_work);
 	krcp = krwp->krcp;
-	spin_lock_irqsave(&krcp->lock, flags);
+	raw_spin_lock_irqsave(&krcp->lock, flags);
 	head = krwp->head_free;
 	krwp->head_free = NULL;
 	bhead = krwp->bhead_free;
 	krwp->bhead_free = NULL;
-	spin_unlock_irqrestore(&krcp->lock, flags);
+	raw_spin_unlock_irqrestore(&krcp->lock, flags);
 
 	/* "bhead" is now private, so traverse locklessly. */
 	for (; bhead; bhead = bnext) {
@@ -3047,14 +3047,14 @@ static inline void kfree_rcu_drain_unlock(struct kfree_rcu_cpu *krcp,
 	krcp->monitor_todo = false;
 	if (queue_kfree_rcu_work(krcp)) {
 		// Success! Our job is done here.
-		spin_unlock_irqrestore(&krcp->lock, flags);
+		raw_spin_unlock_irqrestore(&krcp->lock, flags);
 		return;
 	}
 
 	// Previous RCU batch still in progress, try again later.
 	krcp->monitor_todo = true;
 	schedule_delayed_work(&krcp->monitor_work, KFREE_DRAIN_JIFFIES);
-	spin_unlock_irqrestore(&krcp->lock, flags);
+	raw_spin_unlock_irqrestore(&krcp->lock, flags);
 }
 
 /*
@@ -3067,11 +3067,11 @@ static void kfree_rcu_monitor(struct work_struct *work)
 	struct kfree_rcu_cpu *krcp = container_of(work, struct kfree_rcu_cpu,
 						 monitor_work.work);
 
-	spin_lock_irqsave(&krcp->lock, flags);
+	raw_spin_lock_irqsave(&krcp->lock, flags);
 	if (krcp->monitor_todo)
 		kfree_rcu_drain_unlock(krcp, flags);
 	else
-		spin_unlock_irqrestore(&krcp->lock, flags);
+		raw_spin_unlock_irqrestore(&krcp->lock, flags);
 }
 
 static inline bool
@@ -3142,7 +3142,7 @@ void kfree_call_rcu(struct rcu_head *head, rcu_callback_t func)
 	local_irq_save(flags);	// For safely calling this_cpu_ptr().
 	krcp = this_cpu_ptr(&krc);
 	if (krcp->initialized)
-		spin_lock(&krcp->lock);
+		raw_spin_lock(&krcp->lock);
 
 	// Queue the object but don't yet schedule the batch.
 	if (debug_rcu_head_queue(head)) {
@@ -3173,7 +3173,7 @@ void kfree_call_rcu(struct rcu_head *head, rcu_callback_t func)
 
 unlock_return:
 	if (krcp->initialized)
-		spin_unlock(&krcp->lock);
+		raw_spin_unlock(&krcp->lock);
 	local_irq_restore(flags);
 }
 EXPORT_SYMBOL_GPL(kfree_call_rcu);
@@ -3205,11 +3205,11 @@ kfree_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
 		struct kfree_rcu_cpu *krcp = per_cpu_ptr(&krc, cpu);
 
 		count = krcp->count;
-		spin_lock_irqsave(&krcp->lock, flags);
+		raw_spin_lock_irqsave(&krcp->lock, flags);
 		if (krcp->monitor_todo)
 			kfree_rcu_drain_unlock(krcp, flags);
 		else
-			spin_unlock_irqrestore(&krcp->lock, flags);
+			raw_spin_unlock_irqrestore(&krcp->lock, flags);
 
 		sc->nr_to_scan -= count;
 		freed += count;
@@ -3236,15 +3236,15 @@ void __init kfree_rcu_scheduler_running(void)
 	for_each_online_cpu(cpu) {
 		struct kfree_rcu_cpu *krcp = per_cpu_ptr(&krc, cpu);
 
-		spin_lock_irqsave(&krcp->lock, flags);
+		raw_spin_lock_irqsave(&krcp->lock, flags);
 		if (!krcp->head || krcp->monitor_todo) {
-			spin_unlock_irqrestore(&krcp->lock, flags);
+			raw_spin_unlock_irqrestore(&krcp->lock, flags);
 			continue;
 		}
 		krcp->monitor_todo = true;
 		schedule_delayed_work_on(cpu, &krcp->monitor_work,
 					 KFREE_DRAIN_JIFFIES);
-		spin_unlock_irqrestore(&krcp->lock, flags);
+		raw_spin_unlock_irqrestore(&krcp->lock, flags);
 	}
 }
 
@@ -4140,7 +4140,7 @@ static void __init kfree_rcu_batch_init(void)
 	for_each_possible_cpu(cpu) {
 		struct kfree_rcu_cpu *krcp = per_cpu_ptr(&krc, cpu);
 
-		spin_lock_init(&krcp->lock);
+		raw_spin_lock_init(&krcp->lock);
 		for (i = 0; i < KFREE_N_BATCHES; i++) {
 			INIT_RCU_WORK(&krcp->krw_arr[i].rcu_work, kfree_rcu_work);
 			krcp->krw_arr[i].krcp = krcp;
-- 
2.20.1