From: Michel Lespinasse
To: Linux-MM, linux-kernel@vger.kernel.org, Andrew Morton
Cc: kernel-team@fb.com, Laurent Dufour, Jerome Glisse, Peter Zijlstra, Michal Hocko, Vlastimil Babka, Davidlohr Bueso, Matthew Wilcox, Liam Howlett, Rik van Riel, Paul McKenney, Song Liu, Suren Baghdasaryan, Minchan Kim, Joel Fernandes, David Rientjes, Axel Rasmussen, Andy Lutomirski, Michel Lespinasse
Subject: [PATCH v2 22/35] percpu-rwsem: enable percpu_sem destruction in atomic context
Date: Fri, 28 Jan 2022 05:09:53 -0800
Message-Id: <20220128131006.67712-23-michel@lespinasse.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20220128131006.67712-1-michel@lespinasse.org>
References: <20220128131006.67712-1-michel@lespinasse.org>

From: Suren Baghdasaryan

Calling percpu_free_rwsem in atomic context results in a "scheduling while atomic"
bug being triggered:

  BUG: scheduling while atomic: klogd/158/0x00000002
  ...
  __schedule_bug+0x191/0x290
  schedule_debug+0x97/0x180
  __schedule+0xdc/0xba0
  schedule+0xda/0x250
  schedule_timeout+0x92/0x2d0
  __wait_for_common+0x25b/0x430
  wait_for_completion+0x1f/0x30
  rcu_barrier+0x440/0x4f0
  rcu_sync_dtor+0xaa/0x190
  percpu_free_rwsem+0x41/0x80

Introduce the percpu_rwsem_async_destroy function to perform semaphore
destruction in a worker thread.

Signed-off-by: Suren Baghdasaryan
Signed-off-by: Michel Lespinasse
---
 include/linux/percpu-rwsem.h  | 13 ++++++++++++-
 kernel/locking/percpu-rwsem.c | 32 ++++++++++++++++++++++++++++++++
 2 files changed, 44 insertions(+), 1 deletion(-)

diff --git a/include/linux/percpu-rwsem.h b/include/linux/percpu-rwsem.h
index 5fda40f97fe9..bf1668fc9c5e 100644
--- a/include/linux/percpu-rwsem.h
+++ b/include/linux/percpu-rwsem.h
@@ -13,7 +13,14 @@ struct percpu_rw_semaphore {
 	struct rcu_sync		rss;
 	unsigned int __percpu	*read_count;
 	struct rcuwait		writer;
-	wait_queue_head_t	waiters;
+	/*
+	 * destroy_list_entry is used during object destruction when waiters
+	 * can't be used, therefore reusing the same space.
+	 */
+	union {
+		wait_queue_head_t	waiters;
+		struct list_head	destroy_list_entry;
+	};
 	atomic_t		block;
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
 	struct lockdep_map	dep_map;
@@ -127,8 +134,12 @@ extern void percpu_up_write(struct percpu_rw_semaphore *);
 extern int __percpu_init_rwsem(struct percpu_rw_semaphore *,
 				const char *, struct lock_class_key *);
 
+/* Can't be called in atomic context. */
 extern void percpu_free_rwsem(struct percpu_rw_semaphore *);
 
+/* Invokes percpu_free_rwsem and frees the semaphore from a worker thread. */
+extern void percpu_rwsem_async_destroy(struct percpu_rw_semaphore *sem);
+
 #define percpu_init_rwsem(sem)					\
 ({								\
 	static struct lock_class_key rwsem_key;			\
diff --git a/kernel/locking/percpu-rwsem.c b/kernel/locking/percpu-rwsem.c
index 70a32a576f3f..a3d37bf83c60 100644
--- a/kernel/locking/percpu-rwsem.c
+++ b/kernel/locking/percpu-rwsem.c
@@ -7,6 +7,7 @@
 #include
 #include
 #include
+#include
 #include
 
 int __percpu_init_rwsem(struct percpu_rw_semaphore *sem,
@@ -268,3 +269,34 @@ void percpu_up_write(struct percpu_rw_semaphore *sem)
 	rcu_sync_exit(&sem->rss);
 }
 EXPORT_SYMBOL_GPL(percpu_up_write);
+
+static LIST_HEAD(destroy_list);
+static DEFINE_SPINLOCK(destroy_list_lock);
+
+static void destroy_list_workfn(struct work_struct *work)
+{
+	struct percpu_rw_semaphore *sem, *sem2;
+	LIST_HEAD(to_destroy);
+
+	spin_lock(&destroy_list_lock);
+	list_splice_init(&destroy_list, &to_destroy);
+	spin_unlock(&destroy_list_lock);
+
+	if (list_empty(&to_destroy))
+		return;
+
+	list_for_each_entry_safe(sem, sem2, &to_destroy, destroy_list_entry) {
+		percpu_free_rwsem(sem);
+		kfree(sem);
+	}
+}
+
+static DECLARE_WORK(destroy_list_work, destroy_list_workfn);
+
+void percpu_rwsem_async_destroy(struct percpu_rw_semaphore *sem)
+{
+	spin_lock(&destroy_list_lock);
+	list_add_tail(&sem->destroy_list_entry, &destroy_list);
+	spin_unlock(&destroy_list_lock);
+	schedule_work(&destroy_list_work);
+}
-- 
2.20.1
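
For readers browsing the archive, here is a minimal, hypothetical caller-side
sketch (not part of the patch) of how the new helper is meant to be used: an
object whose release path may run in atomic context allocates its
percpu_rw_semaphore separately with kmalloc/kzalloc and hands it to
percpu_rwsem_async_destroy() instead of calling percpu_free_rwsem() and
kfree() directly. "struct foo", foo_init() and foo_release() are made-up
names for illustration only.

/*
 * Illustrative only, not part of the patch: a made-up "struct foo" whose
 * release path can run in atomic context (e.g. from an RCU callback or
 * under a spinlock).  The rwsem is allocated separately because
 * percpu_rwsem_async_destroy() kfree()s it from the worker, so the caller
 * must not free it again.
 */
#include <linux/percpu-rwsem.h>
#include <linux/slab.h>

struct foo {
	struct percpu_rw_semaphore *rwsem;
	/* ... other fields ... */
};

static int foo_init(struct foo *f)
{
	int err;

	f->rwsem = kzalloc(sizeof(*f->rwsem), GFP_KERNEL);
	if (!f->rwsem)
		return -ENOMEM;

	err = percpu_init_rwsem(f->rwsem);
	if (err)
		kfree(f->rwsem);
	return err;
}

/* Safe to call in atomic context. */
static void foo_release(struct foo *f)
{
	/*
	 * percpu_free_rwsem() would sleep in rcu_sync_dtor(), so defer the
	 * teardown (and the kfree of the rwsem) to the worker instead.
	 */
	percpu_rwsem_async_destroy(f->rwsem);
	f->rwsem = NULL;
}

The design relies on schedule_work() being callable from atomic context;
the only sleeping step, rcu_sync_dtor() via percpu_free_rwsem(), then runs
in process context inside destroy_list_workfn().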