From: Dan Williams
Date: Fri, 20 Aug 2021 13:51:22 -0700
Subject: Re: [PATCH RESEND v6 4/9] pmem,mm: Implement ->memory_failure in pmem driver
In-Reply-To: <20210730100158.3117319-5-ruansy.fnst@fujitsu.com>
References: <20210730100158.3117319-1-ruansy.fnst@fujitsu.com>
    <20210730100158.3117319-5-ruansy.fnst@fujitsu.com>
To: Shiyang Ruan
Cc: Linux Kernel Mailing List, linux-xfs, Linux NVDIMM, Linux MM,
    linux-fsdevel, device-mapper development, "Darrick J. Wong", david,
    Christoph Hellwig, Alasdair Kergon, Mike Snitzer
Wong" , david , Christoph Hellwig , Alasdair Kergon , Mike Snitzer Content-Type: text/plain; charset="UTF-8" Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Fri, Jul 30, 2021 at 3:02 AM Shiyang Ruan wrote: > > With dax_holder notify support, we are able to notify the memory failure > from pmem driver to upper layers. If there is something not support in > the notify routine, memory_failure will fall back to the generic hanlder. How about: "Any layer can return -EOPNOTSUPP to force memory_failure() to fall back to its generic implementation." > > Signed-off-by: Shiyang Ruan > --- > drivers/nvdimm/pmem.c | 13 +++++++++++++ > mm/memory-failure.c | 14 ++++++++++++++ > 2 files changed, 27 insertions(+) > > diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c > index 1e0615b8565e..fea4ffc333b8 100644 > --- a/drivers/nvdimm/pmem.c > +++ b/drivers/nvdimm/pmem.c > @@ -362,9 +362,22 @@ static void pmem_release_disk(void *__pmem) > del_gendisk(pmem->disk); > } > > +static int pmem_pagemap_memory_failure(struct dev_pagemap *pgmap, > + unsigned long pfn, unsigned long nr_pfns, int flags) > +{ > + struct pmem_device *pmem = > + container_of(pgmap, struct pmem_device, pgmap); > + loff_t offset = PFN_PHYS(pfn) - pmem->phys_addr - pmem->data_offset; > + > + return dax_holder_notify_failure(pmem->dax_dev, offset, > + page_size(pfn_to_page(pfn)) * nr_pfns, I do not understand the usage of page_size() here? memory_failure() assumes PAGE_SIZE pages. DAX pages also do not populate the compound metadata yet, but even if they did I would expect memory_failure() to be responsible for doing something like: pgmap->ops->memory_failure(pgmap, pfn, size >> PAGE_SHIFT, flags); ...where @size is calculated from dev_pagemap_mapping_shift(). > + &flags); Why is the local flags variable passed by reference? At a minimum the memory_failure() flags should be translated to a new set dax-notify flags, because memory_failure() will not be the only user of this notification interface. See NVDIMM_REVALIDATE_POISON, and the discussion Dave and I had about using this notification to signal unsafe hot-removal of a memory device. > +} > + > static const struct dev_pagemap_ops fsdax_pagemap_ops = { > .kill = pmem_pagemap_kill, > .cleanup = pmem_pagemap_cleanup, > + .memory_failure = pmem_pagemap_memory_failure, > }; > > static int pmem_attach_disk(struct device *dev, > diff --git a/mm/memory-failure.c b/mm/memory-failure.c > index 3bdfcb45f66e..ab3eda335acd 100644 > --- a/mm/memory-failure.c > +++ b/mm/memory-failure.c > @@ -1600,6 +1600,20 @@ static int memory_failure_dev_pagemap(unsigned long pfn, int flags, > */ > SetPageHWPoison(page); > > + /* > + * Call driver's implementation to handle the memory failure, otherwise > + * fall back to generic handler. > + */ > + if (pgmap->ops->memory_failure) { > + rc = pgmap->ops->memory_failure(pgmap, pfn, 1, flags); > + /* > + * Fall back to generic handler too if operation is not > + * supported inside the driver/device/filesystem. > + */ > + if (rc != EOPNOTSUPP) > + goto out; > + } > + > mf_generic_kill_procs(pfn, flags); > out: > /* drop pgmap ref acquired in caller */ > -- > 2.32.0 > > >