From: Christoph Hellwig
To: Jens Axboe
Cc: dm-devel@redhat.com, linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-um@lists.infradead.org, linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com, nbd@other.debian.org, ceph-devel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org, linux-raid@vger.kernel.org,
	linux-mmc@vger.kernel.org, linux-mtd@lists.infradead.org,
	linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org, target-devel@vger.kernel.org,
	linux-btrfs@vger.kernel.org, linux-ext4@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net, cluster-devel@redhat.com,
	jfs-discussion@lists.sourceforge.net, linux-nilfs@vger.kernel.org,
	ntfs3@lists.linux.dev, ocfs2-devel@oss.oracle.com, linux-mm@kvack.org,
	"Martin K. Petersen"
Subject: [PATCH 13/27] block: add a bdev_fua helper
Date: Sat, 9 Apr 2022 06:50:29 +0200
Message-Id: <20220409045043.23593-14-hch@lst.de>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220409045043.23593-1-hch@lst.de>
References: <20220409045043.23593-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a helper to check the FUA flag based on the block_device instead of
having to poke into the block layer internal request_queue.

Signed-off-by: Christoph Hellwig
Reviewed-by: Martin K. Petersen
---
 drivers/block/rnbd/rnbd-srv.c       | 3 +--
 drivers/target/target_core_iblock.c | 3 +--
 fs/iomap/direct-io.c                | 3 +--
 include/linux/blkdev.h              | 6 +++++-
 4 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/drivers/block/rnbd/rnbd-srv.c b/drivers/block/rnbd/rnbd-srv.c
index f8cc3c5fecb4b..beaef43a67b9d 100644
--- a/drivers/block/rnbd/rnbd-srv.c
+++ b/drivers/block/rnbd/rnbd-srv.c
@@ -533,7 +533,6 @@ static void rnbd_srv_fill_msg_open_rsp(struct rnbd_msg_open_rsp *rsp,
 					struct rnbd_srv_sess_dev *sess_dev)
 {
 	struct rnbd_dev *rnbd_dev = sess_dev->rnbd_dev;
-	struct request_queue *q = bdev_get_queue(rnbd_dev->bdev);
 
 	rsp->hdr.type = cpu_to_le16(RNBD_MSG_OPEN_RSP);
 	rsp->device_id =
@@ -560,7 +559,7 @@ static void rnbd_srv_fill_msg_open_rsp(struct rnbd_msg_open_rsp *rsp,
 	rsp->cache_policy = 0;
 	if (bdev_write_cache(rnbd_dev->bdev))
 		rsp->cache_policy |= RNBD_WRITEBACK;
-	if (blk_queue_fua(q))
+	if (bdev_fua(rnbd_dev->bdev))
 		rsp->cache_policy |= RNBD_FUA;
 }
 
diff --git a/drivers/target/target_core_iblock.c b/drivers/target/target_core_iblock.c
index 03013e85ffc03..c4a903b8a47fc 100644
--- a/drivers/target/target_core_iblock.c
+++ b/drivers/target/target_core_iblock.c
@@ -727,14 +727,13 @@ iblock_execute_rw(struct se_cmd *cmd, struct scatterlist *sgl, u32 sgl_nents,
 
 	if (data_direction == DMA_TO_DEVICE) {
 		struct iblock_dev *ib_dev = IBLOCK_DEV(dev);
-		struct request_queue *q = bdev_get_queue(ib_dev->ibd_bd);
 		/*
 		 * Force writethrough using REQ_FUA if a volatile write cache
 		 * is not enabled, or if initiator set the Force Unit Access bit.
 		 */
 		opf = REQ_OP_WRITE;
 		miter_dir = SG_MITER_TO_SG;
-		if (test_bit(QUEUE_FLAG_FUA, &q->queue_flags)) {
+		if (bdev_fua(ib_dev->ibd_bd)) {
 			if (cmd->se_cmd_flags & SCF_FUA)
 				opf |= REQ_FUA;
 			else if (!bdev_write_cache(ib_dev->ibd_bd))
diff --git a/fs/iomap/direct-io.c b/fs/iomap/direct-io.c
index b08f5dc31780d..62da020d02a11 100644
--- a/fs/iomap/direct-io.c
+++ b/fs/iomap/direct-io.c
@@ -265,8 +265,7 @@ static loff_t iomap_dio_bio_iter(const struct iomap_iter *iter,
 		 * cache flushes on IO completion.
 		 */
 		if (!(iomap->flags & (IOMAP_F_SHARED|IOMAP_F_DIRTY)) &&
-		    (dio->flags & IOMAP_DIO_WRITE_FUA) &&
-		    blk_queue_fua(bdev_get_queue(iomap->bdev)))
+		    (dio->flags & IOMAP_DIO_WRITE_FUA) && bdev_fua(iomap->bdev))
 			use_fua = true;
 	}
 
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 807a49aa5a27a..075b16d4560e7 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -602,7 +602,6 @@ bool blk_queue_flag_test_and_set(unsigned int flag, struct request_queue *q);
 			     REQ_FAILFAST_DRIVER))
 #define blk_queue_quiesced(q)	test_bit(QUEUE_FLAG_QUIESCED, &(q)->queue_flags)
 #define blk_queue_pm_only(q)	atomic_read(&(q)->pm_only)
-#define blk_queue_fua(q)	test_bit(QUEUE_FLAG_FUA, &(q)->queue_flags)
 #define blk_queue_registered(q)	test_bit(QUEUE_FLAG_REGISTERED, &(q)->queue_flags)
 #define blk_queue_nowait(q)	test_bit(QUEUE_FLAG_NOWAIT, &(q)->queue_flags)
 
@@ -1336,6 +1335,11 @@ static inline bool bdev_write_cache(struct block_device *bdev)
 	return test_bit(QUEUE_FLAG_WC, &bdev_get_queue(bdev)->queue_flags);
 }
 
+static inline bool bdev_fua(struct block_device *bdev)
+{
+	return test_bit(QUEUE_FLAG_FUA, &bdev_get_queue(bdev)->queue_flags);
+}
+
 static inline enum blk_zoned_model bdev_zoned_model(struct block_device *bdev)
 {
 	struct request_queue *q = bdev_get_queue(bdev);
-- 
2.30.2
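
[Not part of the patch: a minimal sketch of the kind of decision bdev_fua()
enables for a caller, without poking at bdev_get_queue(bdev)->queue_flags
directly. The function choose_write_opf() and its durability policy are
illustrative assumptions, not kernel API; it simply mirrors the logic the
target_core_iblock.c and iomap callers above already use.]

/* Illustrative sketch only, assuming the bdev_fua()/bdev_write_cache()
 * helpers from this series. */
#include <linux/blkdev.h>
#include <linux/blk_types.h>

static unsigned int choose_write_opf(struct block_device *bdev,
				     bool want_durable)
{
	unsigned int opf = REQ_OP_WRITE;

	if (!want_durable)
		return opf;

	/* No volatile write cache: completed writes are already stable. */
	if (!bdev_write_cache(bdev))
		return opf;

	/* Device honours Force Unit Access on individual writes. */
	if (bdev_fua(bdev))
		return opf | REQ_FUA;

	/*
	 * Otherwise the caller still needs an explicit cache flush
	 * (e.g. REQ_PREFLUSH on a later bio or blkdev_issue_flush())
	 * to make the data durable.
	 */
	return opf;
}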