From: Zhang Yi
Subject: [PATCH v2 12/14] fs/buffer: remove ll_rw_block() helper
Date: Thu, 1 Sep 2022 21:35:03 +0800
Message-ID: <20220901133505.2510834-13-yi.zhang@huawei.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20220901133505.2510834-1-yi.zhang@huawei.com>
References: <20220901133505.2510834-1-yi.zhang@huawei.com>
X-Mailing-List: linux-ext4@vger.kernel.org

Now that all ll_rw_block() users have been replaced with the new, safe
helpers, we can simply remove it.

Signed-off-by: Zhang Yi
Reviewed-by: Jan Kara
---
 fs/buffer.c                 | 63 +++----------------------------------
 include/linux/buffer_head.h |  1 -
 2 files changed, 4 insertions(+), 60 deletions(-)

diff --git a/fs/buffer.c b/fs/buffer.c
index aec568b3ae52..2cccc7586b99 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -152,7 +152,7 @@ static void __end_buffer_read_notouch(struct buffer_head *bh, int uptodate)
 
 /*
  * Default synchronous end-of-IO handler..  Just mark it up-to-date and
- * unlock the buffer. This is what ll_rw_block uses too.
+ * unlock the buffer.
  */
 void end_buffer_read_sync(struct buffer_head *bh, int uptodate)
 {
@@ -491,8 +491,8 @@ int inode_has_buffers(struct inode *inode)
  * all already-submitted IO to complete, but does not queue any new
  * writes to the disk.
  *
- * To do O_SYNC writes, just queue the buffer writes with ll_rw_block as
- * you dirty the buffers, and then use osync_inode_buffers to wait for
+ * To do O_SYNC writes, just queue the buffer writes with write_dirty_buffer
+ * as you dirty the buffers, and then use osync_inode_buffers to wait for
  * completion.  Any other dirty buffers which are not yet queued for
  * write will not be flushed to disk by the osync.
  */
@@ -1806,7 +1806,7 @@ int __block_write_full_page(struct inode *inode, struct page *page,
 		/*
 		 * The page was marked dirty, but the buffers were
 		 * clean.  Someone wrote them back by hand with
-		 * ll_rw_block/submit_bh.  A rare case.
+		 * write_dirty_buffer/submit_bh.  A rare case.
 		 */
 		end_page_writeback(page);
 
@@ -2713,61 +2713,6 @@ int submit_bh(blk_opf_t opf, struct buffer_head *bh)
 }
 EXPORT_SYMBOL(submit_bh);
 
-/**
- * ll_rw_block: low-level access to block devices (DEPRECATED)
- * @opf: block layer request operation and flags.
- * @nr: number of &struct buffer_heads in the array
- * @bhs: array of pointers to &struct buffer_head
- *
- * ll_rw_block() takes an array of pointers to &struct buffer_heads, and
- * requests an I/O operation on them, either a %REQ_OP_READ or a %REQ_OP_WRITE.
- * @opf contains flags modifying the detailed I/O behavior, most notably
- * %REQ_RAHEAD.
- *
- * This function drops any buffer that it cannot get a lock on (with the
- * BH_Lock state bit), any buffer that appears to be clean when doing a write
- * request, and any buffer that appears to be up-to-date when doing read
- * request.  Further it marks as clean buffers that are processed for
- * writing (the buffer cache won't assume that they are actually clean
- * until the buffer gets unlocked).
- *
- * ll_rw_block sets b_end_io to simple completion handler that marks
- * the buffer up-to-date (if appropriate), unlocks the buffer and wakes
- * any waiters.
- *
- * All of the buffers must be for the same device, and must also be a
- * multiple of the current approved size for the device.
- */
-void ll_rw_block(const blk_opf_t opf, int nr, struct buffer_head *bhs[])
-{
-	const enum req_op op = opf & REQ_OP_MASK;
-	int i;
-
-	for (i = 0; i < nr; i++) {
-		struct buffer_head *bh = bhs[i];
-
-		if (!trylock_buffer(bh))
-			continue;
-		if (op == REQ_OP_WRITE) {
-			if (test_clear_buffer_dirty(bh)) {
-				bh->b_end_io = end_buffer_write_sync;
-				get_bh(bh);
-				submit_bh(opf, bh);
-				continue;
-			}
-		} else {
-			if (!buffer_uptodate(bh)) {
-				bh->b_end_io = end_buffer_read_sync;
-				get_bh(bh);
-				submit_bh(opf, bh);
-				continue;
-			}
-		}
-		unlock_buffer(bh);
-	}
-}
-EXPORT_SYMBOL(ll_rw_block);
-
 void write_dirty_buffer(struct buffer_head *bh, blk_opf_t op_flags)
 {
 	lock_buffer(bh);
diff --git a/include/linux/buffer_head.h b/include/linux/buffer_head.h
index 6d09785bed9f..b415d8bc2a09 100644
--- a/include/linux/buffer_head.h
+++ b/include/linux/buffer_head.h
@@ -223,7 +223,6 @@ struct buffer_head *alloc_buffer_head(gfp_t gfp_flags);
 void free_buffer_head(struct buffer_head * bh);
 void unlock_buffer(struct buffer_head *bh);
 void __lock_buffer(struct buffer_head *bh);
-void ll_rw_block(blk_opf_t, int, struct buffer_head * bh[]);
 int sync_dirty_buffer(struct buffer_head *bh);
 int __sync_dirty_buffer(struct buffer_head *bh, blk_opf_t op_flags);
 void write_dirty_buffer(struct buffer_head *bh, blk_opf_t op_flags);
-- 
2.31.1
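
Background sketch (not part of the patch): the hazard in the removed body above is
that ll_rw_block() silently skips any buffer it cannot trylock, so a caller that
waits on the buffer afterwards may never have had a read submitted for it and can
see stale data or a spurious -EIO. The minimal sketch below illustrates the kind of
caller-side conversion the earlier patches in this series perform for a synchronous
read. It uses only functions visible in this patch (lock_buffer(), get_bh(),
end_buffer_read_sync(), submit_bh(), wait_on_buffer()); the function names
read_block_unsafe()/read_block_safe() are hypothetical, and it is assumed (not
shown here) that the series' dedicated wrappers such as bh_read()/bh_readahead()
encapsulate essentially this pattern.

/* Before: ll_rw_block() may skip a locked buffer entirely, so the
 * wait below can complete without any read ever being submitted. */
static int read_block_unsafe(struct buffer_head *bh)
{
	if (!buffer_uptodate(bh)) {
		ll_rw_block(REQ_OP_READ, 1, &bh);
		wait_on_buffer(bh);
	}
	return buffer_uptodate(bh) ? 0 : -EIO;
}

/* After: lock the buffer ourselves, so by the time we wait the read has
 * either been submitted or the buffer was found up to date. */
static int read_block_safe(struct buffer_head *bh)
{
	if (buffer_uptodate(bh))
		return 0;

	lock_buffer(bh);
	if (buffer_uptodate(bh)) {		/* raced with another reader */
		unlock_buffer(bh);
		return 0;
	}
	get_bh(bh);				/* reference is dropped by the end_io handler */
	bh->b_end_io = end_buffer_read_sync;	/* marks up to date and unlocks */
	submit_bh(REQ_OP_READ, bh);
	wait_on_buffer(bh);
	return buffer_uptodate(bh) ? 0 : -EIO;
}

The point of the conversion is that the caller, not ll_rw_block(), now decides what
happens when the buffer is already locked or already up to date, which is what makes
the subsequent wait_on_buffer() meaningful.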