Date: Mon, 23 Jan 2023 06:49:21 -0800
From: Christoph Hellwig
To: David Howells
Cc: Christoph Hellwig, Al Viro, Matthew Wilcox, Jens Axboe, Jan Kara, Jeff Layton, Logan Gunthorpe, linux-fsdevel@vger.kernel.org, linux-block@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v7 6/8] block: Make bio structs pin pages rather
than ref'ing if appropriate
References: <20230120175556.3556978-1-dhowells@redhat.com> <20230120175556.3556978-7-dhowells@redhat.com> <3813654.1674473320@warthog.procyon.org.uk>
In-Reply-To: <3813654.1674473320@warthog.procyon.org.uk>

On Mon, Jan 23, 2023 at 11:28:40AM +0000, David Howells wrote:
> void __bio_release_pages(struct bio *bio, bool mark_dirty)
> {
> 	unsigned int gup_flags = bio_to_gup_flags(bio);
> 	struct bvec_iter_all iter_all;
> 	struct bio_vec *bvec;
>
> 	bio_for_each_segment_all(bvec, bio, iter_all) {
> 		if (mark_dirty && !PageCompound(bvec->bv_page))
> 			set_page_dirty_lock(bvec->bv_page);
>>>> 		page_put_unpin(bvec->bv_page, gup_flags);
> 	}
> }
>
> That ought to be a call to bio_release_page(), but the optimiser doesn't
> want to inline it :-/

Why?  __bio_release_pages is the fast path, no need to force using
bio_release_page which is otherwise only used for error cleanup.
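[For readers following the thread: the structure under discussion can be modeled roughly like this. This is a standalone userspace sketch, not the actual kernel code — `page_put_unpin()` and `bio_to_gup_flags()` come from the quoted patch, and the `FOLL_PIN` branch below is an assumption about their behaviour (pinned pages drop a pin count, ref'd pages drop a plain reference).]

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in for the kernel's gup flag, for illustration only. */
#define FOLL_PIN 0x1

/* Toy model of struct page's release-relevant state. */
struct page {
	int refcount;
	int pincount;
	bool dirty;
};

/* Model of page_put_unpin(): drop a pin if the bio pinned its pages,
 * otherwise drop a plain reference. */
static void page_put_unpin(struct page *page, unsigned int gup_flags)
{
	if (gup_flags & FOLL_PIN)
		page->pincount--;
	else
		page->refcount--;
}

/* Fast path, shaped like the quoted __bio_release_pages(): walk every
 * page of a "bio" and release each one inline, rather than calling a
 * separate per-page helper for each iteration. */
static void bio_release_pages_model(struct page *pages, int n,
				    unsigned int gup_flags, bool mark_dirty)
{
	for (int i = 0; i < n; i++) {
		if (mark_dirty)
			pages[i].dirty = true;
		page_put_unpin(&pages[i], gup_flags);
	}
}
```

The design point Christoph is making: the loop above is the hot path, so there is no need to force every iteration through a separate `bio_release_page()` helper, which is otherwise only needed for error cleanup, especially if the compiler declines to inline it.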