From: John Ogness <john.ogness@linutronix.de>
To: Jens Axboe
Cc: Sebastian Andrzej Siewior, Thomas Gleixner,
    linux-block@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 1/2] block: remove unnecessary ioc nested locking
Date: Fri, 19 Jun 2020 17:23:17 +0206
Message-Id: <20200619151718.22338-2-john.ogness@linutronix.de>
In-Reply-To: <20200619151718.22338-1-john.ogness@linutronix.de>
References: <20200619151718.22338-1-john.ogness@linutronix.de>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The legacy CFQ IO scheduler could call put_io_context() in its
exit_icq() elevator callback. This led to a lockdep warning, which
was fixed in commit d8c66c5d5924 ("block: fix lockdep warning on
io_context release put_io_context()") by using a nested subclass
for the ioc spinlock. However, with commit f382fb0bcef4 ("block:
remove legacy IO schedulers") the CFQ IO scheduler no longer exists.

The BFQ IO scheduler also implements the exit_icq() elevator
callback but does not call put_io_context().

The nested subclass for the ioc spinlock is no longer needed. Since
it existed as an exception and no longer applies, remove the nested
subclass usage.

Signed-off-by: John Ogness <john.ogness@linutronix.de>
---
 block/blk-ioc.c | 26 ++++++--------------------
 1 file changed, 6 insertions(+), 20 deletions(-)

diff --git a/block/blk-ioc.c b/block/blk-ioc.c
index 9df50fb507ca..5dbcfa1b872e 100644
--- a/block/blk-ioc.c
+++ b/block/blk-ioc.c
@@ -96,15 +96,7 @@ static void ioc_release_fn(struct work_struct *work)
 {
 	struct io_context *ioc = container_of(work, struct io_context,
 					      release_work);
-	unsigned long flags;
-
-	/*
-	 * Exiting icq may call into put_io_context() through elevator
-	 * which will trigger lockdep warning. The ioc's are guaranteed to
-	 * be different, use a different locking subclass here. Use
-	 * irqsave variant as there's no spin_lock_irq_nested().
-	 */
-	spin_lock_irqsave_nested(&ioc->lock, flags, 1);
+	spin_lock_irq(&ioc->lock);
 
 	while (!hlist_empty(&ioc->icq_list)) {
 		struct io_cq *icq = hlist_entry(ioc->icq_list.first,
@@ -115,13 +107,13 @@ static void ioc_release_fn(struct work_struct *work)
 			ioc_destroy_icq(icq);
 			spin_unlock(&q->queue_lock);
 		} else {
-			spin_unlock_irqrestore(&ioc->lock, flags);
+			spin_unlock_irq(&ioc->lock);
 			cpu_relax();
-			spin_lock_irqsave_nested(&ioc->lock, flags, 1);
+			spin_lock_irq(&ioc->lock);
 		}
 	}
 
-	spin_unlock_irqrestore(&ioc->lock, flags);
+	spin_unlock_irq(&ioc->lock);
 
 	kmem_cache_free(iocontext_cachep, ioc);
 }
@@ -170,7 +162,6 @@ void put_io_context(struct io_context *ioc)
  */
 void put_io_context_active(struct io_context *ioc)
 {
-	unsigned long flags;
 	struct io_cq *icq;
 
 	if (!atomic_dec_and_test(&ioc->active_ref)) {
@@ -178,19 +169,14 @@ void put_io_context_active(struct io_context *ioc)
 		return;
 	}
 
-	/*
-	 * Need ioc lock to walk icq_list and q lock to exit icq. Perform
-	 * reverse double locking. Read comment in ioc_release_fn() for
-	 * explanation on the nested locking annotation.
-	 */
-	spin_lock_irqsave_nested(&ioc->lock, flags, 1);
+	spin_lock_irq(&ioc->lock);
 	hlist_for_each_entry(icq, &ioc->icq_list, ioc_node) {
 		if (icq->flags & ICQ_EXITED)
 			continue;
 
 		ioc_exit_icq(icq);
 	}
-	spin_unlock_irqrestore(&ioc->lock, flags);
+	spin_unlock_irq(&ioc->lock);
 
 	put_io_context(ioc);
 }
-- 
2.20.1
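
For context on what the removed annotation did: lockdep classifies
locks by lock class rather than by lock instance, so acquiring two
different ioc->lock instances nested is reported as possible recursive
locking unless the inner acquisition is annotated with a distinct
subclass. The following minimal, hypothetical module sketch (not part
of this patch; demo_ctx, a and b are made-up names) illustrates the
annotation that commit d8c66c5d5924 relied on, using the same
spin_lock_irqsave_nested() API and subclass value 1
(SINGLE_DEPTH_NESTING) that this patch removes:

	#include <linux/module.h>
	#include <linux/spinlock.h>

	/* Every demo_ctx instance shares one lockdep lock class. */
	struct demo_ctx {
		spinlock_t lock;
	};

	static struct demo_ctx a, b;

	static int __init demo_init(void)
	{
		unsigned long flags, flags2;

		spin_lock_init(&a.lock);
		spin_lock_init(&b.lock);

		spin_lock_irqsave(&a.lock, flags);
		/*
		 * a.lock and b.lock are distinct objects of the same
		 * lock class. Without the _nested() annotation, lockdep
		 * would flag this as possible recursive locking.
		 */
		spin_lock_irqsave_nested(&b.lock, flags2,
					 SINGLE_DEPTH_NESTING);
		spin_unlock_irqrestore(&b.lock, flags2);
		spin_unlock_irqrestore(&a.lock, flags);

		return 0;
	}

	static void __exit demo_exit(void) { }

	module_init(demo_init);
	module_exit(demo_exit);
	MODULE_LICENSE("GPL");

The irqsave variant was only ever used here because there is no
spin_lock_irq_nested(); ioc_release_fn() runs from a workqueue with
interrupts enabled, so once the subclass is gone, plain
spin_lock_irq()/spin_unlock_irq() suffices, as the patch does.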