From: David Howells
To: Jeff Layton, Steve French
Cc: David Howells, Matthew Wilcox, Marc Dionne, Paulo Alcantara,
    Shyam Prasad N, Tom Talpey, Dominique Martinet, Eric Van Hensbergen,
    Ilya Dryomov, Christian Brauner, linux-cachefs@redhat.com,
    linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org,
    linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org,
    v9fs@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
    netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v3 41/59] netfs: Provide a launder_folio implementation
Date: Thu, 7 Dec 2023 21:21:48 +0000
Message-ID: <20231207212206.1379128-42-dhowells@redhat.com>
In-Reply-To: <20231207212206.1379128-1-dhowells@redhat.com>
References: <20231207212206.1379128-1-dhowells@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Provide a launder_folio implementation for netfslib.

Signed-off-by: David Howells
cc: Jeff Layton
cc: linux-cachefs@redhat.com
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
---
 fs/netfs/buffered_write.c    | 74 ++++++++++++++++++++++++++++++++++++
 fs/netfs/main.c              |  1 +
 include/linux/netfs.h        |  2 +
 include/trace/events/netfs.h |  3 ++
 4 files changed, 80 insertions(+)

diff --git a/fs/netfs/buffered_write.c b/fs/netfs/buffered_write.c
index bffa508945cb..a71a9af1b880 100644
--- a/fs/netfs/buffered_write.c
+++ b/fs/netfs/buffered_write.c
@@ -1124,3 +1124,77 @@ int netfs_writepages(struct address_space *mapping,
 	return ret;
 }
 EXPORT_SYMBOL(netfs_writepages);
+
+/*
+ * Deal with the disposition of a laundered folio.
+ */
+static void netfs_cleanup_launder_folio(struct netfs_io_request *wreq)
+{
+	if (wreq->error) {
+		pr_notice("R=%08x Laundering error %d\n", wreq->debug_id, wreq->error);
+		mapping_set_error(wreq->mapping, wreq->error);
+	}
+}
+
+/**
+ * netfs_launder_folio - Clean up a dirty folio that's being invalidated
+ * @folio: The folio to clean
+ *
+ * This is called to write back a folio that's being invalidated when an inode
+ * is getting torn down.  Ideally, writepages would be used instead.
+ */
+int netfs_launder_folio(struct folio *folio)
+{
+	struct netfs_io_request *wreq;
+	struct address_space *mapping = folio->mapping;
+	struct netfs_folio *finfo = netfs_folio_info(folio);
+	struct netfs_group *group = netfs_folio_group(folio);
+	struct bio_vec bvec;
+	unsigned long long i_size = i_size_read(mapping->host);
+	unsigned long long start = folio_pos(folio);
+	size_t offset = 0, len;
+	int ret = 0;
+
+	if (finfo) {
+		offset = finfo->dirty_offset;
+		start += offset;
+		len = finfo->dirty_len;
+	} else {
+		len = folio_size(folio);
+	}
+	len = min_t(unsigned long long, len, i_size - start);
+
+	wreq = netfs_alloc_request(mapping, NULL, start, len, NETFS_LAUNDER_WRITE);
+	if (IS_ERR(wreq)) {
+		ret = PTR_ERR(wreq);
+		goto out;
+	}
+
+	if (!folio_clear_dirty_for_io(folio))
+		goto out_put;
+
+	trace_netfs_folio(folio, netfs_folio_trace_launder);
+
+	_debug("launder %llx-%llx", start, start + len - 1);
+
+	/* Speculatively write to the cache.  We have to fix this up later if
+	 * the store fails.
+	 */
+	wreq->cleanup = netfs_cleanup_launder_folio;
+
+	bvec_set_folio(&bvec, folio, len, offset);
+	iov_iter_bvec(&wreq->iter, ITER_SOURCE, &bvec, 1, len);
+	__set_bit(NETFS_RREQ_UPLOAD_TO_SERVER, &wreq->flags);
+	ret = netfs_begin_write(wreq, true, netfs_write_trace_launder);
+
+out_put:
+	folio_detach_private(folio);
+	netfs_put_group(group);
+	kfree(finfo);
+	netfs_put_request(wreq, false, netfs_rreq_trace_put_return);
+out:
+	folio_wait_fscache(folio);
+	_leave(" = %d", ret);
+	return ret;
+}
+EXPORT_SYMBOL(netfs_launder_folio);
diff --git a/fs/netfs/main.c b/fs/netfs/main.c
index 122264d5c254..1e43bc73e130 100644
--- a/fs/netfs/main.c
+++ b/fs/netfs/main.c
@@ -33,6 +33,7 @@ static const char *netfs_origins[nr__netfs_io_origin] = {
 	[NETFS_READPAGE]		= "RP",
 	[NETFS_READ_FOR_WRITE]		= "RW",
 	[NETFS_WRITEBACK]		= "WB",
+	[NETFS_LAUNDER_WRITE]		= "LW",
 	[NETFS_RMW_READ]		= "RM",
 	[NETFS_UNBUFFERED_WRITE]	= "UW",
 	[NETFS_DIO_READ]		= "DR",
diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index a9898c99e2ba..5ceb03abf1ff 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -232,6 +232,7 @@ enum netfs_io_origin {
 	NETFS_READPAGE,			/* This read is a synchronous read */
 	NETFS_READ_FOR_WRITE,		/* This read is to prepare a write */
 	NETFS_WRITEBACK,		/* This write was triggered by writepages */
+	NETFS_LAUNDER_WRITE,		/* This is triggered by ->launder_folio() */
 	NETFS_RMW_READ,			/* This is an unbuffered read for RMW */
 	NETFS_UNBUFFERED_WRITE,		/* This is an unbuffered write */
 	NETFS_DIO_READ,			/* This is a direct I/O read */
@@ -424,6 +425,7 @@ int netfs_unpin_writeback(struct inode *inode, struct writeback_control *wbc);
 void netfs_clear_inode_writeback(struct inode *inode, const void *aux);
 void netfs_invalidate_folio(struct folio *folio, size_t offset, size_t length);
 bool netfs_release_folio(struct folio *folio, gfp_t gfp);
+int netfs_launder_folio(struct folio *folio);
 
 /* VMA operations API. */
 vm_fault_t netfs_page_mkwrite(struct vm_fault *vmf, struct netfs_group *netfs_group);
diff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h
index 4dc5fcc7b0a4..94669fad4b7a 100644
--- a/include/trace/events/netfs.h
+++ b/include/trace/events/netfs.h
@@ -25,6 +25,7 @@
 
 #define netfs_write_traces					\
 	EM(netfs_write_trace_dio_write,		"DIO-WRITE")	\
+	EM(netfs_write_trace_launder,		"LAUNDER  ")	\
 	EM(netfs_write_trace_unbuffered_write,	"UNB-WRITE")	\
 	E_(netfs_write_trace_writeback,		"WRITEBACK")
 
@@ -33,6 +34,7 @@
 	EM(NETFS_READPAGE,			"RP")		\
 	EM(NETFS_READ_FOR_WRITE,		"RW")		\
 	EM(NETFS_WRITEBACK,			"WB")		\
+	EM(NETFS_LAUNDER_WRITE,			"LW")		\
 	EM(NETFS_RMW_READ,			"RM")		\
 	EM(NETFS_UNBUFFERED_WRITE,		"UW")		\
 	EM(NETFS_DIO_READ,			"DR")		\
@@ -132,6 +134,7 @@
 	EM(netfs_folio_trace_end_copy,		"end-copy")	\
 	EM(netfs_folio_trace_filled_gaps,	"filled-gaps")	\
 	EM(netfs_folio_trace_kill,		"kill")		\
+	EM(netfs_folio_trace_launder,		"launder")	\
 	EM(netfs_folio_trace_mkwrite,		"mkwrite")	\
 	EM(netfs_folio_trace_mkwrite_plus,	"mkwrite+")	\
 	EM(netfs_folio_trace_read_gaps,		"read-gaps")	\
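
For context, a filesystem picking this up would point its address_space_operations at the new helper, alongside the other netfs hooks.  A minimal sketch follows; the struct name is hypothetical and only .launder_folio is added by this patch, the other entries being the usual netfs/filemap helpers shown purely for illustration:

/* Hypothetical usage sketch, not part of this patch: wiring
 * netfs_launder_folio into a netfs-backed filesystem's aops.
 */
static const struct address_space_operations example_netfs_aops = {
	.read_folio		= netfs_read_folio,
	.readahead		= netfs_readahead,
	.dirty_folio		= filemap_dirty_folio,
	.writepages		= netfs_writepages,
	.release_folio		= netfs_release_folio,
	.invalidate_folio	= netfs_invalidate_folio,
	.launder_folio		= netfs_launder_folio,	/* added by this patch */
};

The VFS invokes ->launder_folio (e.g. from invalidate_inode_pages2_range()) when a dirty folio has to be written back before it can be dropped, which is why the request is built and torn down synchronously here rather than going through writepages.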