MIME-Version: 1.0
References: <20210723174131.180813-1-hsiangkao@linux.alibaba.com>
	<20210725221639.426565-1-agruenba@redhat.com>
In-Reply-To:
From: Andreas Gruenbacher
Date: Mon, 26 Jul 2021 09:22:41 +0200
Message-ID:
Subject: Re: [PATCH v7] iomap: make inline data support more flexible
To: Andreas Gruenbacher, Christoph Hellwig, "Darrick J. Wong",
	Matthew Wilcox, Huang Jianan, linux-erofs@lists.ozlabs.org,
	linux-fsdevel, LKML, Andreas Gruenbacher
Content-Type: text/plain; charset="UTF-8"
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, Jul 26, 2021 at 4:36 AM Gao Xiang wrote:
> On Mon, Jul 26, 2021 at 12:16:39AM +0200, Andreas Gruenbacher wrote:
> > Here's a fixed and cleaned up version that passes fstests on gfs2.
> >
> > I see no reason why the combination of tail packing + writing should
> > cause any issues, so in my opinion, the check that disables that
> > combination in iomap_write_begin_inline should still be removed.
>
> Since there is no such fs with tail-packing write support, I'll just
> make a wild guess. For example:
> 1) the tail-end block was not inlined, so iomap_write_end() dirtied
>    the whole page (or buffer) for page writeback;
> 2) then it was truncated into a tail-packing inline block, so the last
>    extent (page) became INLINE but stayed dirty;
> 3) during the later page writeback for dirty pages,
>        if (WARN_ON_ONCE(wpc->iomap.type == IOMAP_INLINE))
>    would be triggered in iomap_writepage_map() for such a dirty page.
>
> As Matthew pointed out before,
> https://lore.kernel.org/r/YPrms0fWPwEZGNAL@casper.infradead.org/
> currently tail-packing inline doesn't interact with page writeback,
> but I'm afraid a fs that supports tail-packing writes would need to
> reconsider the whole picture of how page and inode writeback work,
> and what the pattern is with tail-packing.
>
> > It turns out that returning the number of bytes copied from
> > iomap_read_inline_data is a bit irritating: the function is really
> > used for filling the page, but that's not always the "progress"
> > we're looking for. In the iomap_readpage case, we actually need to
> > advance by an entire page, but in the iomap_file_buffered_write
> > case, we need to advance by the length parameter of
> > iomap_write_actor or less. So I've changed that back.
> >
> > I've also renamed iomap_inline_buf to iomap_inline_data and I've
> > turned iomap_inline_data_size_valid into iomap_within_inline_data,
> > which seems more useful to me.
> >
> > Thanks,
> > Andreas
> >
> > --
> >
> > Subject: [PATCH] iomap: Support tail packing
> >
> > The existing inline data support only works for cases where the
> > entire file is stored as inline data. For larger files, EROFS stores
> > the initial blocks separately and then can pack a small tail
> > adjacent to the inode. Generalise inline data to allow for tail
> > packing. Tails may not cross a page boundary in memory.
> >
> > We currently have no filesystems that support tail packing and
> > writing, so that case is currently disabled (see
> > iomap_write_begin_inline). I'm not aware of any reason why this code
> > path shouldn't work, however.
> >
> > Cc: Christoph Hellwig
> > Cc: Darrick J. Wong
> > Cc: Matthew Wilcox
> > Cc: Andreas Gruenbacher
> > Tested-by: Huang Jianan # erofs
> > Signed-off-by: Gao Xiang
> > ---
> >  fs/iomap/buffered-io.c | 34 +++++++++++++++++++++++-----------
> >  fs/iomap/direct-io.c   | 11 ++++++-----
> >  include/linux/iomap.h  | 22 +++++++++++++++++++++-
> >  3 files changed, 50 insertions(+), 17 deletions(-)
> >
> > diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> > index 87ccb3438bec..334bf98fdd4a 100644
> > --- a/fs/iomap/buffered-io.c
> > +++ b/fs/iomap/buffered-io.c
> > @@ -205,25 +205,29 @@ struct iomap_readpage_ctx {
> >  	struct readahead_control *rac;
> >  };
> >
> > -static void
> > -iomap_read_inline_data(struct inode *inode, struct page *page,
> > +static int iomap_read_inline_data(struct inode *inode, struct page *page,
> >  		struct iomap *iomap)
> >  {
> > -	size_t size = i_size_read(inode);
> > +	size_t size = i_size_read(inode) - iomap->offset;
>
> I wonder why you use i_size / iomap->offset here,

This function is supposed to copy the inline or tail data at
iomap->inline_data into the page passed to it. Logically, the inline
data starts at iomap->offset and extends until i_size_read(inode).
Relative to the page, the inline data starts at offset 0 and extends
until i_size_read(inode) - iomap->offset. It's as simple as that.

> and why you completely ignore the iomap->length field returned by the
> fs.

In the iomap_readpage case (iomap_begin with flags == 0), iomap->length
will be the amount of data up to the end of the inode. In the
iomap_file_buffered_write case (iomap_begin with flags == IOMAP_WRITE),
iomap->length will be the size of iomap->inline_data. (For extending
writes, we need to write beyond the current end of inode.)
So iomap->length isn't all that useful for iomap_read_inline_data.

> Using i_size here instead of iomap->length seems like coupling to me
> in the beginning (even though currently, in practice, there is some
> limitation.)

And what is that?

Thanks,
Andreas