Date: Thu, 7 Dec 2023 09:11:09 +0800
From: Ming Lei
To: Yury Norov
Cc: Thomas Gleixner, Andrew Morton, linux-kernel@vger.kernel.org,
    Keith Busch, linux-nvme@lists.infradead.org, linux-block@vger.kernel.org,
    Yi Zhang, Guangwu Zhang, Chengming Zhou, Jens Axboe
Subject: Re: [PATCH V4 resend] lib/group_cpus.c: avoid to acquire cpu hotplug lock in group_cpus_evenly
References: <20231120083559.285174-1-ming.lei@redhat.com>

On Wed, Dec 06, 2023 at 04:41:44PM -0800, Yury Norov wrote:
> Hi Ming,
> 
> On Mon, Nov 20, 2023 at 04:35:59PM +0800, Ming Lei wrote:
> > group_cpus_evenly() can be called from a storage driver's error handler,
> > such as the nvme driver's, which may run during CPU hotplug, when a
> > storage queue has to drain its pending IOs because all CPUs associated
> > with the queue are offline and the queue is becoming inactive. Handling
> > that IO requires the error handler to make forward progress.
> > 
> > A deadlock then results:
> > 
> > 1) inside the CPU hotplug handler, the CPU hotplug lock is held, and
> > blk-mq's handler is waiting for inflight IO
> > 
> > 2) the error handler is waiting for the CPU hotplug lock
> > 
> > 3) the inflight IO can't be completed by blk-mq's CPU hotplug handler
> > because error handling can't make forward progress.
> > 
> > Solve the deadlock by not holding the CPU hotplug lock in
> > group_cpus_evenly(), which spreads groups in two stages: 1) the first
> > stage is over all present CPUs; 2) the second stage is over all other
> > CPUs.
> > 
> > It turns out the two-stage spread just needs a consistent
> > 'cpu_present_mask', so remove the CPU hotplug lock and store the mask
> > in a local cache instead. This doesn't change correctness, because all
> > CPUs are still covered.
> > 
> > Cc: Keith Busch
> > Cc: linux-nvme@lists.infradead.org
> > Cc: linux-block@vger.kernel.org
> > Reported-by: Yi Zhang
> > Reported-by: Guangwu Zhang
> > Tested-by: Guangwu Zhang
> > Reviewed-by: Chengming Zhou
> > Reviewed-by: Jens Axboe
> > Signed-off-by: Ming Lei
> > ---
> >  lib/group_cpus.c | 22 ++++++++++++++++------
> >  1 file changed, 16 insertions(+), 6 deletions(-)
> > 
> > diff --git a/lib/group_cpus.c b/lib/group_cpus.c
> > index aa3f6815bb12..ee272c4cefcc 100644
> > --- a/lib/group_cpus.c
> > +++ b/lib/group_cpus.c
> > @@ -366,13 +366,25 @@ struct cpumask *group_cpus_evenly(unsigned int numgrps)
> >  	if (!masks)
> >  		goto fail_node_to_cpumask;
> >  
> > -	/* Stabilize the cpumasks */
> > -	cpus_read_lock();
> >  	build_node_to_cpumask(node_to_cpumask);
> >  
> > +	/*
> > +	 * Make a local cache of 'cpu_present_mask', so the two stages
> > +	 * spread can observe consistent 'cpu_present_mask' without holding
> > +	 * cpu hotplug lock, then we can reduce deadlock risk with cpu
> > +	 * hotplug code.
> > +	 *
> > +	 * Here CPU hotplug may happen when reading `cpu_present_mask`, and
> > +	 * we can live with the case because it only affects that hotplug
> > +	 * CPU is handled in the 1st or 2nd stage, and either way is correct
> > +	 * from API user viewpoint since 2-stage spread is sort of
> > +	 * optimization.
> > +	 */
> > +	cpumask_copy(npresmsk, data_race(cpu_present_mask));
> 
> Now that you initialize the npresmsk explicitly, you can allocate it
> using alloc_cpumask_var().

Indeed, but that code is there before this patch, and it isn't related to
this fix.

> 
> The same actually holds for nmsk too, and even before this patch. Maybe
> fix it in a separate prepending patch?

Yeah, 'nmsk' is similar to 'npresmsk', and it isn't a fix, just an
optimization. group_cpus_evenly() only runs in the slow path, so this kind
of micro-optimization isn't urgent and should be done in a standalone
patch; we can even live without it.
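For readers following along, the standalone cleanup Yury is suggesting would
look roughly like the sketch below. It is only an illustration: the allocation
code is paraphrased from lib/group_cpus.c rather than quoted, and the error
path is simplified. The point is just that a mask which is fully overwritten
by cpumask_copy() before it is read does not need a zeroing allocation.

	/* Illustrative sketch of the suggested cleanup, not the upstream code */
	cpumask_var_t npresmsk;

	/*
	 * alloc_cpumask_var() is enough here: every bit of npresmsk is
	 * written by cpumask_copy() before the mask is read, so the memset
	 * done by zalloc_cpumask_var() is redundant.
	 */
	if (!alloc_cpumask_var(&npresmsk, GFP_KERNEL))
		return NULL;

	cpumask_copy(npresmsk, data_race(cpu_present_mask));
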
> 
> > +
> >  	/* grouping present CPUs first */
> >  	ret = __group_cpus_evenly(curgrp, numgrps, node_to_cpumask,
> > -				  cpu_present_mask, nmsk, masks);
> > +				  npresmsk, nmsk, masks);
> >  	if (ret < 0)
> >  		goto fail_build_affinity;
> >  	nr_present = ret;
> > @@ -387,15 +399,13 @@ struct cpumask *group_cpus_evenly(unsigned int numgrps)
> >  		curgrp = 0;
> >  	else
> >  		curgrp = nr_present;
> > -	cpumask_andnot(npresmsk, cpu_possible_mask, cpu_present_mask);
> > +	cpumask_andnot(npresmsk, cpu_possible_mask, npresmsk);
> >  	ret = __group_cpus_evenly(curgrp, numgrps, node_to_cpumask,
> >  				  npresmsk, nmsk, masks);
> 
> The first thing the helper does is check whether npresmsk is empty.
> cpumask_andnot() returns false in that case. So, assuming the present
> cpumask in the previous call can't be empty, we can save a few cycles by
> dropping the corresponding check in the helper and doing this:
> 
> 	if (cpumask_andnot(npresmsk, cpu_possible_mask, npresmsk) == 0) {
> 		nr_others = 0;
> 		goto fail_build_affinity;
> 	}
> 
> 	ret = __group_cpus_evenly(curgrp, numgrps, node_to_cpumask,
> 				  npresmsk, nmsk, masks);
> 
> Although it's not directly related to this patch. So, if you fix
> zalloc_cpumask_var(), the patch looks good to me.

I'd rather not make things complicated; as mentioned, this API only runs in
the slow path.

> 
> Reviewed-by: Yury Norov

Thanks for the review!

Thanks,
Ming
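To summarize the flow being reviewed, the following is a condensed sketch of
the two-stage spread in group_cpus_evenly() after this patch. It paraphrases
the quoted diff; allocation, error handling and group bookkeeping are
omitted, so it is not the literal lib/group_cpus.c code.

	/*
	 * Snapshot the present CPUs once, without cpus_read_lock(); a racing
	 * hotplug event only decides whether that CPU is handled in stage 1
	 * or stage 2, and either way every CPU is still covered.
	 */
	cpumask_copy(npresmsk, data_race(cpu_present_mask));

	/* Stage 1: spread groups over the snapshotted present CPUs. */
	nr_present = __group_cpus_evenly(curgrp, numgrps, node_to_cpumask,
					 npresmsk, nmsk, masks);

	/*
	 * Stage 2: spread over the remaining possible-but-not-present CPUs,
	 * reusing npresmsk as "possible minus snapshotted present".
	 */
	cpumask_andnot(npresmsk, cpu_possible_mask, npresmsk);
	nr_others = __group_cpus_evenly(curgrp, numgrps, node_to_cpumask,
					npresmsk, nmsk, masks);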