From: Dan Williams
Date: Mon, 11 Apr 2022 17:08:21 -0700
Message-ID:
Subject: Re: [PATCH v7 4/6] dax: add DAX_RECOVERY flag and .recovery_write dev_pgmap_ops
To: Jane Chu
Cc: david, "Darrick J. Wong", Christoph Hellwig, Vishal L Verma, Dave Jiang, Alasdair Kergon, Mike Snitzer, device-mapper development, "Weiny, Ira", Matthew Wilcox, Vivek Goyal, linux-fsdevel, Linux NVDIMM, Linux Kernel Mailing List, linux-xfs, X86 ML
In-Reply-To: <20220405194747.2386619-5-jane.chu@oracle.com>
References: <20220405194747.2386619-1-jane.chu@oracle.com> <20220405194747.2386619-5-jane.chu@oracle.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Apr 5, 2022 at 12:48 PM Jane Chu wrote:
>
> Introduce DAX_RECOVERY flag to dax_direct_access().
> The flag is not set by default in dax_direct_access() such that the
> helper does not translate a pmem range to a kernel virtual address if
> the range contains uncorrectable errors. When the flag is set, the
> helper ignores the UEs and returns the kernel virtual address so that
> the caller may get on with data recovery via write.
>
> Also introduce a new dev_pagemap_ops .recovery_write function. The
> function is applicable to FSDAX devices only. The device's page backend
> driver provides a .recovery_write function if the device has an
> underlying mechanism to clear the uncorrectable errors on the fly.
>
> Signed-off-by: Jane Chu
> ---
>  drivers/dax/super.c             | 17 ++++++++--
>  drivers/md/dm-linear.c          |  4 +--
>  drivers/md/dm-log-writes.c      |  5 +--
>  drivers/md/dm-stripe.c          |  4 +--
>  drivers/md/dm-target.c          |  2 +-
>  drivers/md/dm-writecache.c      |  5 +--
>  drivers/md/dm.c                 |  5 +--
>  drivers/nvdimm/pmem.c           | 57 +++++++++++++++++++++++++++------
>  drivers/nvdimm/pmem.h           |  2 +-
>  drivers/s390/block/dcssblk.c    |  4 +--
>  fs/dax.c                        | 24 ++++++++++----
>  fs/fuse/dax.c                   |  4 +--
>  include/linux/dax.h             | 11 +++++--
>  include/linux/device-mapper.h   |  2 +-
>  include/linux/memremap.h        |  7 ++++
>  tools/testing/nvdimm/pmem-dax.c |  2 +-
>  16 files changed, 116 insertions(+), 39 deletions(-)
>
> diff --git a/drivers/dax/super.c b/drivers/dax/super.c
> index 0211e6f7b47a..8252858cd25a 100644
> --- a/drivers/dax/super.c
> +++ b/drivers/dax/super.c
> @@ -13,6 +13,7 @@
>  #include
>  #include
>  #include
> +#include
>  #include "dax-private.h"
>
>  /**
> @@ -117,6 +118,7 @@ enum dax_device_flags {
>   * @dax_dev: a dax_device instance representing the logical memory range
>   * @pgoff: offset in pages from the start of the device to translate
>   * @nr_pages: number of consecutive pages caller can handle relative to @pfn
> + * @flags: by default 0, set to DAX_RECOVERY to kick start dax recovery
>   * @kaddr: output parameter that returns a virtual address mapping of pfn
>   * @pfn: output parameter that returns an absolute pfn translation of @pgoff
>   *
> @@ -124,7 +126,7 @@ enum dax_device_flags {
>   * pages accessible at the device relative @pgoff.
>   */
>  long dax_direct_access(struct dax_device *dax_dev, pgoff_t pgoff, long nr_pages,
> -		void **kaddr, pfn_t *pfn)
> +		int flags, void **kaddr, pfn_t *pfn)
>  {
>  	long avail;
>
> @@ -137,7 +139,7 @@ long dax_direct_access(struct dax_device *dax_dev, pgoff_t pgoff, long nr_pages,
>  	if (nr_pages < 0)
>  		return -EINVAL;
>
> -	avail = dax_dev->ops->direct_access(dax_dev, pgoff, nr_pages,
> +	avail = dax_dev->ops->direct_access(dax_dev, pgoff, nr_pages, flags,
>  			kaddr, pfn);
>  	if (!avail)
>  		return -ERANGE;
> @@ -194,6 +196,17 @@ int dax_zero_page_range(struct dax_device *dax_dev, pgoff_t pgoff,
>  }
>  EXPORT_SYMBOL_GPL(dax_zero_page_range);
>
> +size_t dax_recovery_write(struct dax_device *dax_dev, pgoff_t pgoff,
> +		pfn_t pfn, void *addr, size_t bytes, struct iov_iter *iter)
> +{
> +	struct dev_pagemap *pgmap = get_dev_pagemap(pfn_t_to_pfn(pfn), NULL);
> +
> +	if (!pgmap || !pgmap->ops || !pgmap->ops->recovery_write)
> +		return 0;
> +	return pgmap->ops->recovery_write(pgmap, pgoff, addr, bytes, iter);
> +}
> +EXPORT_SYMBOL_GPL(dax_recovery_write);
> +
>  #ifdef CONFIG_ARCH_HAS_PMEM_API
>  void arch_wb_cache_pmem(void *addr, size_t size);
>  void dax_flush(struct dax_device *dax_dev, void *addr, size_t size)
> diff --git a/drivers/md/dm-linear.c b/drivers/md/dm-linear.c
> index 76b486e4d2be..9e6d8bdf3b2a 100644
> --- a/drivers/md/dm-linear.c
> +++ b/drivers/md/dm-linear.c
> @@ -172,11 +172,11 @@ static struct dax_device *linear_dax_pgoff(struct dm_target *ti, pgoff_t *pgoff)
>  }
>
>  static long linear_dax_direct_access(struct dm_target *ti, pgoff_t pgoff,
> -		long nr_pages, void **kaddr, pfn_t *pfn)
> +		long nr_pages, int flags, void **kaddr, pfn_t *pfn)
>  {
>  	struct dax_device *dax_dev = linear_dax_pgoff(ti, &pgoff);
>
> -	return dax_direct_access(dax_dev, pgoff, nr_pages, kaddr, pfn);
> +	return dax_direct_access(dax_dev, pgoff, nr_pages, flags, kaddr, pfn);
>  }
>
>  static int linear_dax_zero_page_range(struct dm_target *ti, pgoff_t pgoff,
> diff --git a/drivers/md/dm-log-writes.c b/drivers/md/dm-log-writes.c
> index c9d036d6bb2e..e23f062ade5f 100644
> --- a/drivers/md/dm-log-writes.c
> +++ b/drivers/md/dm-log-writes.c
> @@ -889,11 +889,12 @@ static struct dax_device *log_writes_dax_pgoff(struct dm_target *ti,
>  }
>
>  static long log_writes_dax_direct_access(struct dm_target *ti, pgoff_t pgoff,
> -		long nr_pages, void **kaddr, pfn_t *pfn)
> +		long nr_pages, int flags,
> +		void **kaddr, pfn_t *pfn)
>  {
>  	struct dax_device *dax_dev = log_writes_dax_pgoff(ti, &pgoff);
>
> -	return dax_direct_access(dax_dev, pgoff, nr_pages, kaddr, pfn);
> +	return dax_direct_access(dax_dev, pgoff, nr_pages, flags, kaddr, pfn);
>  }
>
>  static int log_writes_dax_zero_page_range(struct dm_target *ti, pgoff_t pgoff,
> diff --git a/drivers/md/dm-stripe.c b/drivers/md/dm-stripe.c
> index c81d331d1afe..b89339c78702 100644
> --- a/drivers/md/dm-stripe.c
> +++ b/drivers/md/dm-stripe.c
> @@ -315,11 +315,11 @@ static struct dax_device *stripe_dax_pgoff(struct dm_target *ti, pgoff_t *pgoff)
>  }
>
>  static long stripe_dax_direct_access(struct dm_target *ti, pgoff_t pgoff,
> -		long nr_pages, void **kaddr, pfn_t *pfn)
> +		long nr_pages, int flags, void **kaddr, pfn_t *pfn)
>  {
>  	struct dax_device *dax_dev = stripe_dax_pgoff(ti, &pgoff);
>
> -	return dax_direct_access(dax_dev, pgoff, nr_pages, kaddr, pfn);
> +	return dax_direct_access(dax_dev, pgoff, nr_pages, flags, kaddr, pfn);
>  }
>
>  static int stripe_dax_zero_page_range(struct dm_target *ti, pgoff_t pgoff,
> diff --git a/drivers/md/dm-target.c b/drivers/md/dm-target.c
> index 64dd0b34fcf4..24b1e5628f3a 100644
> --- a/drivers/md/dm-target.c
> +++ b/drivers/md/dm-target.c
> @@ -142,7 +142,7 @@ static void io_err_release_clone_rq(struct request *clone,
>  }
>
>  static long io_err_dax_direct_access(struct dm_target *ti, pgoff_t pgoff,
> -		long nr_pages, void **kaddr, pfn_t *pfn)
> +		long nr_pages, int flags, void **kaddr, pfn_t *pfn)
>  {
>  	return -EIO;
>  }
> diff --git a/drivers/md/dm-writecache.c b/drivers/md/dm-writecache.c
> index 5630b470ba42..180ca8fa383e 100644
> --- a/drivers/md/dm-writecache.c
> +++ b/drivers/md/dm-writecache.c
> @@ -286,7 +286,8 @@ static int persistent_memory_claim(struct dm_writecache *wc)
>
>  	id = dax_read_lock();
>
> -	da = dax_direct_access(wc->ssd_dev->dax_dev, offset, p, &wc->memory_map, &pfn);
> +	da = dax_direct_access(wc->ssd_dev->dax_dev, offset, p, 0,
> +			&wc->memory_map, &pfn);
>  	if (da < 0) {
>  		wc->memory_map = NULL;
>  		r = da;
> @@ -309,7 +310,7 @@ static int persistent_memory_claim(struct dm_writecache *wc)
>  		do {
>  			long daa;
>  			daa = dax_direct_access(wc->ssd_dev->dax_dev, offset + i, p - i,
> -					NULL, &pfn);
> +					0, NULL, &pfn);
>  			if (daa <= 0) {
>  				r = daa ? daa : -EINVAL;
>  				goto err3;
> diff --git a/drivers/md/dm.c b/drivers/md/dm.c
> index ad2e0bbeb559..a8c697bb6603 100644
> --- a/drivers/md/dm.c
> +++ b/drivers/md/dm.c
> @@ -1087,7 +1087,8 @@ static struct dm_target *dm_dax_get_live_target(struct mapped_device *md,
>  }
>
>  static long dm_dax_direct_access(struct dax_device *dax_dev, pgoff_t pgoff,
> -		long nr_pages, void **kaddr, pfn_t *pfn)
> +		long nr_pages, int flags, void **kaddr,
> +		pfn_t *pfn)
>  {
>  	struct mapped_device *md = dax_get_private(dax_dev);
>  	sector_t sector = pgoff * PAGE_SECTORS;
> @@ -1105,7 +1106,7 @@ static long dm_dax_direct_access(struct dax_device *dax_dev, pgoff_t pgoff,
>  	if (len < 1)
>  		goto out;
>  	nr_pages = min(len, nr_pages);
> -	ret = ti->type->direct_access(ti, pgoff, nr_pages, kaddr, pfn);
> +	ret = ti->type->direct_access(ti, pgoff, nr_pages, flags, kaddr, pfn);
>
>  out:
>  	dm_put_live_table(md, srcu_idx);
> diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
> index 30c71a68175b..0400c5a7ba39 100644
> --- a/drivers/nvdimm/pmem.c
> +++ b/drivers/nvdimm/pmem.c
> @@ -238,12 +238,23 @@ static int pmem_rw_page(struct block_device *bdev, sector_t sector,
>
>  /* see "strong" declaration in tools/testing/nvdimm/pmem-dax.c */
>  __weak long __pmem_direct_access(struct pmem_device *pmem, pgoff_t pgoff,
> -		long nr_pages, void **kaddr, pfn_t *pfn)
> +		long nr_pages, int flags, void **kaddr, pfn_t *pfn)
>  {
>  	resource_size_t offset = PFN_PHYS(pgoff) + pmem->data_offset;
> +	sector_t sector = PFN_PHYS(pgoff) >> SECTOR_SHIFT;
> +	unsigned int num = PFN_PHYS(nr_pages) >> SECTOR_SHIFT;
> +	struct badblocks *bb = &pmem->bb;
> +	sector_t first_bad;
> +	int num_bad;
> +	bool bad_in_range;
> +	long actual_nr;
> +
> +	if (!bb->count)
> +		bad_in_range = false;
> +	else
> +		bad_in_range = !!badblocks_check(bb, sector, num, &first_bad, &num_bad);

Why all this change...

>
> -	if (unlikely(is_bad_pmem(&pmem->bb, PFN_PHYS(pgoff) / 512,
> -			PFN_PHYS(nr_pages))))

...instead of adding "&& !(flags & DAX_RECOVERY)" to this statement?

> +	if (bad_in_range && !(flags & DAX_RECOVERY))
>  		return -EIO;
>
>  	if (kaddr)
> @@ -251,13 +262,26 @@ __weak long __pmem_direct_access(struct pmem_device *pmem, pgoff_t pgoff,
>  	if (pfn)
>  		*pfn = phys_to_pfn_t(pmem->phys_addr + offset, pmem->pfn_flags);
>
> +	if (!bad_in_range) {
> +		/*
> +		 * If badblock is present but not in the range, limit known
> +		 * good range to the requested range.
> +		 */
> +		if (bb->count)
> +			return nr_pages;
> +		return PHYS_PFN(pmem->size - pmem->pfn_pad - offset);
> +	}
> +
>  	/*
> -	 * If badblocks are present, limit known good range to the
> -	 * requested range.
> +	 * In case poison is found in the given range and the DAX_RECOVERY
> +	 * flag is set, the recovery stride is set to kernel page size
> +	 * because the underlying driver and firmware clear poison functions
> +	 * don't appear to handle large chunks (such as 2MiB) reliably.
>  	 */
> -	if (unlikely(pmem->bb.count))
> -		return nr_pages;
> -	return PHYS_PFN(pmem->size - pmem->pfn_pad - offset);
> +	actual_nr = PHYS_PFN(PAGE_ALIGN((first_bad - sector) << SECTOR_SHIFT));
> +	dev_dbg(pmem->bb.dev, "start sector(%llu), nr_pages(%ld), first_bad(%llu), actual_nr(%ld)\n",
> +			sector, nr_pages, first_bad, actual_nr);
> +	return (actual_nr == 0) ? 1 : actual_nr;

Similar feedback as above that this is more change than I would expect.
I think just adding...

	if (flags & DAX_RECOVERY)
		return 1;

...before the typical return path is enough.