From: Greg
Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Alistair Popple, Jan Kara, "Matthew Wilcox (Oracle)", Sasha Levin
Subject: [PATCH 5.18 102/181] filemap: Fix serialization adding transparent huge pages to page cache
Date: Mon, 27 Jun 2022 13:21:15 +0200
Message-Id: <20220627111947.655795167@linuxfoundation.org>
In-Reply-To: <20220627111944.553492442@linuxfoundation.org>
References: <20220627111944.553492442@linuxfoundation.org>

From: Alistair Popple

[ Upstream commit 00fa15e0d56482e32d8ca1f51d76b0ee00afb16b ]

Commit 793917d997df ("mm/readahead: Add large folio readahead") introduced
support for using large folios for file-backed pages if the filesystem
supports it.

page_cache_ra_order() was introduced to allocate and add these large
folios to the page cache. However, adding pages to the page cache should
be serialized against truncation and hole punching by taking
invalidate_lock. Not doing so can lead to data races resulting in stale
data getting added to the page cache and marked up-to-date. See commit
730633f0b7f9 ("mm: Protect operations adding pages to page cache with
invalidate_lock") for more details.

This issue was found by inspection, but a testcase revealed it was
possible to observe in practice on XFS. Fix this by taking invalidate_lock
in page_cache_ra_order(), to mirror what is done for the non-THP case in
page_cache_ra_unbounded().
Signed-off-by: Alistair Popple
Fixes: 793917d997df ("mm/readahead: Add large folio readahead")
Reviewed-by: Jan Kara
Signed-off-by: Matthew Wilcox (Oracle)
Signed-off-by: Sasha Levin
---
 mm/readahead.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/mm/readahead.c b/mm/readahead.c
index 4a60cdb64262..38635af5bab7 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -508,6 +508,7 @@ void page_cache_ra_order(struct readahead_control *ractl,
 			new_order--;
 	}
 
+	filemap_invalidate_lock_shared(mapping);
 	while (index <= limit) {
 		unsigned int order = new_order;
 
@@ -534,6 +535,7 @@ void page_cache_ra_order(struct readahead_control *ractl,
 	}
 
 	read_pages(ractl);
+	filemap_invalidate_unlock_shared(mapping);
 
 	/*
 	 * If there were already pages in the page cache, then we may have
-- 
2.35.1