Date: Wed, 22 Sep 2021 18:09:15 -0700
From: "Darrick J. Wong"
To: Christoph Hellwig
Cc: Dan Williams, Vishal Verma, Dave Jiang, Mike Snitzer, Matthew Wilcox,
    linux-xfs@vger.kernel.org, nvdimm@lists.linux.dev,
    linux-fsdevel@vger.kernel.org, linux-ext4@vger.kernel.org
Subject: [PATCH] dax: remove silly single-page limitation in dax_zero_page_range
Message-ID: <20210923010915.GQ570615@magnolia>

From: Darrick J. Wong

It's totally silly that the dax zero_page_range implementations are
required to accept a page count, but one of the four implementations
silently ignores the page count and the wrapper itself errors out if you
try to do more than one page.  Fix the nvdimm implementation to loop
over the page count and remove the artificial limitation.

Signed-off-by: Darrick J. Wong
---
 drivers/dax/super.c   |  7 -------
 drivers/nvdimm/pmem.c | 14 +++++++++++---
 2 files changed, 11 insertions(+), 10 deletions(-)

diff --git a/drivers/dax/super.c b/drivers/dax/super.c
index fc89e91beea7..ca61a01f9ccd 100644
--- a/drivers/dax/super.c
+++ b/drivers/dax/super.c
@@ -353,13 +353,6 @@ int dax_zero_page_range(struct dax_device *dax_dev, pgoff_t pgoff,
 {
 	if (!dax_alive(dax_dev))
 		return -ENXIO;
-	/*
-	 * There are no callers that want to zero more than one page as of now.
-	 * Once users are there, this check can be removed after the
-	 * device mapper code has been updated to split ranges across targets.
-	 */
-	if (nr_pages != 1)
-		return -EIO;
 
 	return dax_dev->ops->zero_page_range(dax_dev, pgoff, nr_pages);
 }
diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
index 72de88ff0d30..3ef40bf74168 100644
--- a/drivers/nvdimm/pmem.c
+++ b/drivers/nvdimm/pmem.c
@@ -288,10 +288,18 @@ static int pmem_dax_zero_page_range(struct dax_device *dax_dev, pgoff_t pgoff,
 		size_t nr_pages)
 {
 	struct pmem_device *pmem = dax_get_private(dax_dev);
+	int ret = 0;
 
-	return blk_status_to_errno(pmem_do_write(pmem, ZERO_PAGE(0), 0,
-			PFN_PHYS(pgoff) >> SECTOR_SHIFT,
-			PAGE_SIZE));
+	for (; nr_pages > 0 && ret == 0; pgoff++, nr_pages--) {
+		blk_status_t status;
+
+		status = pmem_do_write(pmem, ZERO_PAGE(0), 0,
+				PFN_PHYS(pgoff) >> SECTOR_SHIFT,
+				PAGE_SIZE);
+		ret = blk_status_to_errno(status);
+	}
+
+	return ret;
 }
 
 static long pmem_dax_direct_access(struct dax_device *dax_dev,