From: gregkh@linuxfoundation.org
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Shinichiro Kawasaki,
    Christoph Hellwig, Johannes Thumshirn, Jens Axboe
Subject: [PATCH 5.10 184/290] block: Discard page cache of zone reset target range
Date: Mon, 15 Mar 2021 14:54:37 +0100
Message-Id: <20210315135548.137253859@linuxfoundation.org>
In-Reply-To: <20210315135541.921894249@linuxfoundation.org>
References: <20210315135541.921894249@linuxfoundation.org>
X-Mailer: git-send-email 2.30.2
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Greg Kroah-Hartman

From: Shin'ichiro Kawasaki

commit e5113505904ea1c1c0e1f92c1cfa91fbf4da1694 upstream.

When a zone reset ioctl and a data read race for the same zone on a zoned
block device, the read leaves stale page cache behind even though the zone
reset ioctl zero-clears all the zone data on the device. To avoid reading
non-zero data from the stale page cache after a zone reset, discard the
page cache of the reset target zones in blkdev_zone_mgmt_ioctl(). Introduce
the helper function blkdev_truncate_zone_range() to discard the page cache,
and call it both before and after the zone reset, in the same manner as
fallocate does.

This patch can be applied back to the stable kernel version v5.10.y.
Rework is needed for older stable kernels.
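For context, a minimal userspace sketch of the ioctl path this patch touches
(not part of the patch itself): it issues BLKRESETZONE with a struct
blk_zone_range and then does a buffered read of the reset range, which is the
access pattern the commit message describes. The device path /dev/nullb0 and
the zone size of 524288 sectors are assumptions for illustration only.

/*
 * Sketch only: reset the first zone, then read it back through the page
 * cache. With this patch applied, the ioctl drops any cached pages for the
 * range, so the read returns zeroed data from the device.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/blkzoned.h>

int main(void)
{
	const char *dev = "/dev/nullb0";	/* assumed zoned test device */
	struct blk_zone_range zrange;
	char buf[4096];
	int fd;

	fd = open(dev, O_RDWR);
	if (fd < 0) {
		perror("open");
		return EXIT_FAILURE;
	}

	zrange.sector = 0;		/* first zone */
	zrange.nr_sectors = 524288;	/* assumed zone size: 256 MiB */
	if (ioctl(fd, BLKRESETZONE, &zrange) < 0) {
		perror("BLKRESETZONE");
		close(fd);
		return EXIT_FAILURE;
	}

	/* Buffered read of the reset range. */
	if (pread(fd, buf, sizeof(buf), 0) < 0)
		perror("pread");

	close(fd);
	return EXIT_SUCCESS;
}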
Signed-off-by: Shin'ichiro Kawasaki
Fixes: 3ed05a987e0f ("blk-zoned: implement ioctls")
Cc: stable@vger.kernel.org # 5.10+
Reviewed-by: Christoph Hellwig
Reviewed-by: Johannes Thumshirn
Link: https://lore.kernel.org/r/20210311072546.678999-1-shinichiro.kawasaki@wdc.com
Signed-off-by: Jens Axboe
Signed-off-by: Greg Kroah-Hartman
---
 block/blk-zoned.c |   38 ++++++++++++++++++++++++++++++++++++--
 1 file changed, 36 insertions(+), 2 deletions(-)

--- a/block/blk-zoned.c
+++ b/block/blk-zoned.c
@@ -318,6 +318,22 @@ int blkdev_report_zones_ioctl(struct blo
 	return 0;
 }
 
+static int blkdev_truncate_zone_range(struct block_device *bdev, fmode_t mode,
+				      const struct blk_zone_range *zrange)
+{
+	loff_t start, end;
+
+	if (zrange->sector + zrange->nr_sectors <= zrange->sector ||
+	    zrange->sector + zrange->nr_sectors > get_capacity(bdev->bd_disk))
+		/* Out of range */
+		return -EINVAL;
+
+	start = zrange->sector << SECTOR_SHIFT;
+	end = ((zrange->sector + zrange->nr_sectors) << SECTOR_SHIFT) - 1;
+
+	return truncate_bdev_range(bdev, mode, start, end);
+}
+
 /*
  * BLKRESETZONE, BLKOPENZONE, BLKCLOSEZONE and BLKFINISHZONE ioctl processing.
  * Called from blkdev_ioctl.
@@ -329,6 +345,7 @@ int blkdev_zone_mgmt_ioctl(struct block_
 	struct request_queue *q;
 	struct blk_zone_range zrange;
 	enum req_opf op;
+	int ret;
 
 	if (!argp)
 		return -EINVAL;
@@ -352,6 +369,11 @@ int blkdev_zone_mgmt_ioctl(struct block_
 	switch (cmd) {
 	case BLKRESETZONE:
 		op = REQ_OP_ZONE_RESET;
+
+		/* Invalidate the page cache, including dirty pages. */
+		ret = blkdev_truncate_zone_range(bdev, mode, &zrange);
+		if (ret)
+			return ret;
 		break;
 	case BLKOPENZONE:
 		op = REQ_OP_ZONE_OPEN;
@@ -366,8 +388,20 @@ int blkdev_zone_mgmt_ioctl(struct block_
 		return -ENOTTY;
 	}
 
-	return blkdev_zone_mgmt(bdev, op, zrange.sector, zrange.nr_sectors,
-				GFP_KERNEL);
+	ret = blkdev_zone_mgmt(bdev, op, zrange.sector, zrange.nr_sectors,
+			       GFP_KERNEL);
+
+	/*
+	 * Invalidate the page cache again for zone reset: writes can only be
+	 * direct for zoned devices so concurrent writes would not add any page
+	 * to the page cache after/during reset. The page cache may be filled
+	 * again due to concurrent reads though and dropping the pages for
+	 * these is fine.
+	 */
+	if (!ret && cmd == BLKRESETZONE)
+		ret = blkdev_truncate_zone_range(bdev, mode, &zrange);
+
+	return ret;
 }
 
 static inline unsigned long *blk_alloc_zone_bitmap(int node,
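To make the out-of-range check and the byte-offset math in
blkdev_truncate_zone_range() above concrete, here is a small standalone
sketch (illustration only, not kernel code). SECTOR_SHIFT is 9 in the kernel
(512-byte sectors); zone_range_to_bytes() and the capacity value are
hypothetical stand-ins for the in-kernel helper and get_capacity().

#include <stdint.h>
#include <stdio.h>

#define SECTOR_SHIFT 9

static int zone_range_to_bytes(uint64_t sector, uint64_t nr_sectors,
			       uint64_t capacity,
			       int64_t *start, int64_t *end)
{
	/* Reject a zero-length or wrapping range and ranges past the device end. */
	if (sector + nr_sectors <= sector ||
	    sector + nr_sectors > capacity)
		return -1;

	/* Same math as the helper: first byte through last byte of the range. */
	*start = sector << SECTOR_SHIFT;
	*end = ((sector + nr_sectors) << SECTOR_SHIFT) - 1;
	return 0;
}

int main(void)
{
	int64_t start, end;

	/* Example: a 256 MiB zone starting at sector 524288 on a 2 GiB device. */
	if (!zone_range_to_bytes(524288, 524288, 4194304, &start, &end))
		printf("truncate byte range [%lld, %lld]\n",
		       (long long)start, (long long)end);
	return 0;
}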