Date: Tue, 1 Sep 2020 06:34:26 +0100
From: Christoph Hellwig
To: Matthew Wilcox
Cc: Christoph Hellwig, linux-xfs@vger.kernel.org,
        linux-fsdevel@vger.kernel.org, "Darrick J. Wong",
        linux-block@vger.kernel.org, linux-mm@kvack.org,
        linux-kernel@vger.kernel.org
Subject: Re: [PATCH 04/11] block: Add bio_for_each_thp_segment_all
Message-ID: <20200901053426.GB24560@infradead.org>
References: <20200824151700.16097-1-willy@infradead.org>
 <20200824151700.16097-5-willy@infradead.org>
 <20200827084431.GA15909@infradead.org>
 <20200831194837.GJ14765@casper.infradead.org>
In-Reply-To: <20200831194837.GJ14765@casper.infradead.org>

On Mon, Aug 31, 2020 at 08:48:37PM +0100, Matthew Wilcox wrote:
> static void iomap_read_end_io(struct bio *bio)
> {
>         int i, error = blk_status_to_errno(bio->bi_status);
>
>         for (i = 0; i < bio->bi_vcnt; i++) {
>                 struct bio_vec *bvec = &bio->bi_io_vec[i];

This should probably use bio_for_each_bvec_all instead of directly
poking into the bio.  I'd also be tempted to move the loop body into a
separate helper, but that's just a slight stylistic preference.

>                 size_t offset = bvec->bv_offset;
>                 size_t length = bvec->bv_len;
>                 struct page *page = bvec->bv_page;
>
>                 while (length > 0) {
>                         size_t count = thp_size(page) - offset;
>
>                         if (count > length)
>                                 count = length;
>                         iomap_read_page_end_io(page, offset, count, error);
>                         page += (offset + count) / PAGE_SIZE;

Shouldn't the PAGE_SIZE here be thp_size?

> Maybe I'm missing something important here, but it's significantly
> simpler code -- iomap_read_end_io() goes down from 816 bytes to 560
> bytes (256 bytes less!)  iomap_read_page_end_io is inlined into it
> both before and after.

Yes, that's exactly why I think avoiding bio_for_each_segment_all is a
good idea in general.

> There is some weirdness going on with regards to bv_offset that I don't
> quite understand.  In the original bvec_advance:
>
>         bv->bv_page = bvec->bv_page + (bvec->bv_offset >> PAGE_SHIFT);
>         bv->bv_offset = bvec->bv_offset & ~PAGE_MASK;
>
> which I cargo-culted into bvec_thp_advance as:
>
>         bv->bv_page = thp_head(bvec->bv_page +
>                         (bvec->bv_offset >> PAGE_SHIFT));
>         page_size = thp_size(bv->bv_page);
>         bv->bv_offset = bvec->bv_offset -
>                         (bv->bv_page - bvec->bv_page) * PAGE_SIZE;
>
> Is it possible to have a bvec with an offset that is larger than the
> size of bv_page?  That doesn't seem like a useful thing to do, but
> if that needs to be supported, then the code up top doesn't do that.
> We maybe gain a little bit by counting length down to 0 instead of
> counting it up to bv_len.  I dunno; reading the code over now, it
> doesn't seem like that much of a difference.

Drivers can absolutely see a bv_offset that is larger than the page
size, due to bio splitting.  However, the submitting file system should
never see one unless it creates one itself, which would be stupid.

And yes, eventually bv_page and bv_offset should be replaced with a

        phys_addr_t             bv_phys;

and life would become simpler in many places (and the bvec would shrink
for most common setups as well).
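For illustration, such a bvec might look something like the sketch
below.  This is purely hypothetical (bvec_page and bvec_offset are
names invented here, not existing kernel helpers):

        /*
         * Hypothetical bio_vec carrying just a physical address and a
         * length; the page and the offset into it are derived on demand.
         */
        struct bio_vec {
                phys_addr_t     bv_phys;        /* start of the segment */
                unsigned int    bv_len;         /* length in bytes */
        };

        static inline struct page *bvec_page(const struct bio_vec *bv)
        {
                return pfn_to_page(bv->bv_phys >> PAGE_SHIFT);
        }

        static inline unsigned int bvec_offset(const struct bio_vec *bv)
        {
                return offset_in_page(bv->bv_phys);
        }

Callers that read bv->bv_page and bv->bv_offset today would go through
the accessors instead, and the offset-larger-than-a-page ambiguity
discussed above would presumably disappear.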
For now I'd end up with something like:

static void iomap_read_end_bvec(struct page *page, size_t offset,
                size_t length, int error)
{
        while (length > 0) {
                size_t page_size = thp_size(page);
                size_t count = min(page_size - offset, length);

                iomap_read_page_end_io(page, offset, count, error);

                page += (offset + count) / page_size;
                length -= count;
                offset = 0;
        }
}

static void iomap_read_end_io(struct bio *bio)
{
        int i, error = blk_status_to_errno(bio->bi_status);
        struct bio_vec *bvec;

        bio_for_each_bvec_all(bvec, bio, i)
                iomap_read_end_bvec(bvec->bv_page, bvec->bv_offset,
                                bvec->bv_len, error);
        bio_put(bio);
}

and maybe even merge iomap_read_page_end_io into iomap_read_end_bvec.
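If the two were merged as suggested, the result might look roughly like
the sketch below.  It assumes the mainline shape of
iomap_read_page_end_io (mark the byte range uptodate, or flag the page
in error, and unlock the page once the last outstanding read finishes);
the one-decrement-per-segment read_count accounting is an assumption
here, and the THP series may well account differently:

static void iomap_read_end_bvec(struct page *page, size_t offset,
                size_t length, int error)
{
        while (length > 0) {
                size_t page_size = thp_size(page);
                size_t count = min(page_size - offset, length);
                struct iomap_page *iop = to_iomap_page(page);

                if (unlikely(error)) {
                        ClearPageUptodate(page);
                        SetPageError(page);
                } else {
                        iomap_set_range_uptodate(page, offset, count);
                }
                /* Assumed: one read_count decrement per completed segment. */
                if (!iop || atomic_dec_and_test(&iop->read_count))
                        unlock_page(page);

                page += (offset + count) / page_size;
                length -= count;
                offset = 0;
        }
}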