From: Christoph Hellwig
To: Jens Axboe
Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 6/8] block: move guard_bio_eod to bio.c
Date: Wed, 25 Mar 2020 12:03:36 +0100
Message-Id: <20200325110338.1029232-7-hch@lst.de>
In-Reply-To: <20200325110338.1029232-1-hch@lst.de>
References: <20200325110338.1029232-1-hch@lst.de>
X-Mailer: git-send-email 2.25.1

This is bio layer functionality and not related to buffer heads.

Signed-off-by: Christoph Hellwig
---
 block/bio.c         | 43 +++++++++++++++++++++++++++++++++++++++++++
 fs/buffer.c         | 43 -------------------------------------------
 fs/internal.h       |  1 -
 include/linux/bio.h |  1 +
 4 files changed, 44 insertions(+), 44 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index 209715765a7a..28d8055c04e2 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -588,6 +588,49 @@ void bio_truncate(struct bio *bio, unsigned new_size)
 	bio->bi_iter.bi_size = new_size;
 }
 
+/**
+ * guard_bio_eod - truncate a BIO to fit the block device
+ * @bio:	bio to truncate
+ *
+ * This allows us to do IO even on the odd last sectors of a device, even if the
+ * block size is some multiple of the physical sector size.
+ *
+ * We'll just truncate the bio to the size of the device, and clear the end of
+ * the buffer head manually.  Truly out-of-range accesses will turn into actual
+ * I/O errors, this only handles the "we need to be able to do I/O at the final
+ * sector" case.
+ */
+void guard_bio_eod(struct bio *bio)
+{
+	sector_t maxsector;
+	struct hd_struct *part;
+
+	rcu_read_lock();
+	part = __disk_get_part(bio->bi_disk, bio->bi_partno);
+	if (part)
+		maxsector = part_nr_sects_read(part);
+	else
+		maxsector = get_capacity(bio->bi_disk);
+	rcu_read_unlock();
+
+	if (!maxsector)
+		return;
+
+	/*
+	 * If the *whole* IO is past the end of the device,
+	 * let it through, and the IO layer will turn it into
+	 * an EIO.
+	 */
+	if (unlikely(bio->bi_iter.bi_sector >= maxsector))
+		return;
+
+	maxsector -= bio->bi_iter.bi_sector;
+	if (likely((bio->bi_iter.bi_size >> 9) <= maxsector))
+		return;
+
+	bio_truncate(bio, maxsector << 9);
+}
+
 /**
  * bio_put - release a reference to a bio
  * @bio:	bio to release reference to
diff --git a/fs/buffer.c b/fs/buffer.c
index b8d28370cfd7..3f5758e01e40 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -3019,49 +3019,6 @@ static void end_bio_bh_io_sync(struct bio *bio)
 	bio_put(bio);
 }
 
-/*
- * This allows us to do IO even on the odd last sectors
- * of a device, even if the block size is some multiple
- * of the physical sector size.
- *
- * We'll just truncate the bio to the size of the device,
- * and clear the end of the buffer head manually.
- *
- * Truly out-of-range accesses will turn into actual IO
- * errors, this only handles the "we need to be able to
- * do IO at the final sector" case.
- */
-void guard_bio_eod(struct bio *bio)
-{
-	sector_t maxsector;
-	struct hd_struct *part;
-
-	rcu_read_lock();
-	part = __disk_get_part(bio->bi_disk, bio->bi_partno);
-	if (part)
-		maxsector = part_nr_sects_read(part);
-	else
-		maxsector = get_capacity(bio->bi_disk);
-	rcu_read_unlock();
-
-	if (!maxsector)
-		return;
-
-	/*
-	 * If the *whole* IO is past the end of the device,
-	 * let it through, and the IO layer will turn it into
-	 * an EIO.
-	 */
-	if (unlikely(bio->bi_iter.bi_sector >= maxsector))
-		return;
-
-	maxsector -= bio->bi_iter.bi_sector;
-	if (likely((bio->bi_iter.bi_size >> 9) <= maxsector))
-		return;
-
-	bio_truncate(bio, maxsector << 9);
-}
-
 static int submit_bh_wbc(int op, int op_flags, struct buffer_head *bh,
 		enum rw_hint write_hint, struct writeback_control *wbc)
 {
diff --git a/fs/internal.h b/fs/internal.h
index f3f280b952a3..4d37912a5587 100644
--- a/fs/internal.h
+++ b/fs/internal.h
@@ -38,7 +38,6 @@ static inline int __sync_blockdev(struct block_device *bdev, int wait)
 /*
  * buffer.c
  */
-extern void guard_bio_eod(struct bio *bio);
 extern int __block_write_begin_int(struct page *page, loff_t pos, unsigned len,
 		get_block_t *get_block, struct iomap *iomap);
diff --git a/include/linux/bio.h b/include/linux/bio.h
index 853d92ceee64..a430e9c1c2d2 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -471,6 +471,7 @@ extern struct bio *bio_copy_user_iov(struct request_queue *,
 extern int bio_uncopy_user(struct bio *);
 void zero_fill_bio_iter(struct bio *bio, struct bvec_iter iter);
 void bio_truncate(struct bio *bio, unsigned new_size);
+void guard_bio_eod(struct bio *bio);
 
 static inline void zero_fill_bio(struct bio *bio)
 {
-- 
2.25.1