Date: Thu, 20 Jan 2022 18:22:00 -0800
From: "Darrick J. Wong"
To: Shiyang Ruan
Cc: Christoph Hellwig, Dan Williams, Linux Kernel Mailing List, linux-xfs,
	Linux NVDIMM, Linux MM, linux-fsdevel, david, Jane Chu
Subject: Re: [PATCH v9 02/10] dax: Introduce holder for dax_device
Message-ID: <20220121022200.GG13563@magnolia>
References: <20220105181230.GC398655@magnolia>
 <20220105185626.GE398655@magnolia>
 <20220105224727.GG398655@magnolia>
 <20220105235407.GN656707@magnolia>
 <76f5ed28-2df9-890e-0674-3ef2f18e2c2f@fujitsu.com>
In-Reply-To: <76f5ed28-2df9-890e-0674-3ef2f18e2c2f@fujitsu.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, Jan 21, 2022 at 09:26:52AM +0800, Shiyang Ruan wrote:
> On 2022/1/20 16:46, Christoph Hellwig wrote:
> > On Wed, Jan 05, 2022 at 04:12:04PM -0800, Dan Williams wrote:
> > > We ended up with explicit callbacks after hch balked at a notifier
> > > call-chain, but I think we're back to that now.  The partition mistake
> > > might be unfixable, but at least bdev_dax_pgoff() is dead.  Notifier
> > > call chains have their own locking so, Ruan, this still does not need
> > > to touch dax_read_lock().
> >
> > I think we have a few options here:
> >
> >  (1) don't allow error notifications on partitions.  An error return from
> >      the holder registration with proper error handling in the file
> >      system would give us that

Hm, so that means XFS can only support dax+pmem when there aren't
partitions in use?  Ew.

> >  (2) extend the holder mechanism to cover a range

I don't think I was around for the part where "hch balked at a notifier
call chain" -- what were the objections there, specifically?  I would
hope that pmem problems would be infrequent enough that the locking
contention (or rcu expiration) wouldn't be an issue...?

> >  (3) bite the bullet and create a new stacked dax_device for each
> >      partition
> >
> > I think (1) is the best option for now.  If people really do need
> > partitions we'll have to go for (3)
>
> Yes, I agree.  I'm doing it the first way right now.
>
> I think that since we can use namespaces to divide a big NVDIMM into
> multiple pmems, a partition on a pmem seems not so meaningful.

I'll try to find out what will happen if pmem suddenly stops supporting
partitions...

--D

>
> --
> Thanks,
> Ruan.
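[The notifier call-chain pattern Dan refers to can be sketched in miniature. This is a simplified userspace model of the idea -- each holder registers a callback that is walked when a media error is reported -- not the real dax/pmem kernel API; every name below is illustrative.]

```c
#include <stddef.h>

/* Simplified userspace model of a kernel-style notifier call chain, as
 * floated for dax holder error notification.  All names are illustrative;
 * this is not the actual dax/pmem API. */

struct notifier_block {
	int (*notify)(struct notifier_block *nb, unsigned long off, size_t len);
	struct notifier_block *next;
};

struct holder_chain {
	struct notifier_block *head;
};

/* Register a holder's callback on the chain. */
static void holder_register(struct holder_chain *c, struct notifier_block *nb)
{
	nb->next = c->head;
	c->head = nb;
}

/* Walk the chain on a media error; returns how many holders handled it. */
static int notify_media_error(struct holder_chain *c, unsigned long off,
			      size_t len)
{
	struct notifier_block *nb;
	int handled = 0;

	for (nb = c->head; nb; nb = nb->next)
		handled += nb->notify(nb, off, len);
	return handled;
}

static int fs_errors;	/* notifications seen by the "filesystem" holder */

static int fs_notify(struct notifier_block *nb, unsigned long off, size_t len)
{
	(void)nb; (void)off; (void)len;
	fs_errors++;	/* a real fs would reverse-map off/len to files here */
	return 1;
}

/* Demo: one registered holder observes one reported error. */
int run_demo(void)
{
	struct holder_chain chain = { 0 };
	static struct notifier_block fs_nb;

	fs_nb.notify = fs_notify;
	holder_register(&chain, &fs_nb);
	return notify_media_error(&chain, 4096, 512);
}
```

[The point of the pattern is that the chain walk is serialized by the chain's own lock (or RCU in the kernel's atomic variant), so callers need not hold device-wide state such as dax_read_lock() across the notification.]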