From: Roman Smirnov <r.smirnov@omp.ru>
To: Greg Kroah-Hartman
CC: Roman Smirnov, "Matthew Wilcox (Oracle)", Andrew Morton, Alexey Khoroshilov,
    Sergey Shtylyov, Karina Yankevich, John Hubbard, Christoph Hellwig
Subject: [PATCH 5.10 1/2] mm/truncate: Inline invalidate_complete_page() into its one caller
Date: Thu, 11 Jan 2024 14:37:46 +0000
Message-ID: <20240111143747.4418-2-r.smirnov@omp.ru>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20240111143747.4418-1-r.smirnov@omp.ru>
References: <20240111143747.4418-1-r.smirnov@omp.ru>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
From: "Matthew Wilcox (Oracle)"

Commit 1b8ddbeeb9b819e62b7190115023ce858a159f5c upstream.

invalidate_inode_page() is the only caller of invalidate_complete_page()
and inlining it reveals that the first check is unnecessary (because we
hold the page locked, and we just retrieved the mapping from the page).
Actually, it does make a difference, in that tail pages no longer fail
at this check, so it's now possible to remove a tail page from a mapping.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: John Hubbard
Reviewed-by: Christoph Hellwig
Signed-off-by: Roman Smirnov
---
 kernel/futex/core.c |  2 +-
 mm/truncate.c       | 31 +++++--------------------------
 2 files changed, 6 insertions(+), 27 deletions(-)

diff --git a/kernel/futex/core.c b/kernel/futex/core.c
index cde0ca876b93..cbbebc3de1d3 100644
--- a/kernel/futex/core.c
+++ b/kernel/futex/core.c
@@ -578,7 +578,7 @@ static int get_futex_key(u32 __user *uaddr, bool fshared, union futex_key *key,
          * found it, but truncated or holepunched or subjected to
          * invalidate_complete_page2 before we got the page lock (also
          * cases which we are happy to fail). And we hold a reference,
-         * so refcount care in invalidate_complete_page's remove_mapping
+         * so refcount care in invalidate_inode_page's remove_mapping
          * prevents drop_caches from setting mapping to NULL beneath us.
          *
          * The case we do have to guard against is when memory pressure made
diff --git a/mm/truncate.c b/mm/truncate.c
index 8914ca4ce4b1..03998fd86e4a 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -190,30 +190,6 @@ static void truncate_cleanup_page(struct page *page)
         ClearPageMappedToDisk(page);
 }
 
-/*
- * This is for invalidate_mapping_pages(). That function can be called at
- * any time, and is not supposed to throw away dirty pages. But pages can
- * be marked dirty at any time too, so use remove_mapping which safely
- * discards clean, unused pages.
- *
- * Returns non-zero if the page was successfully invalidated.
- */
-static int
-invalidate_complete_page(struct address_space *mapping, struct page *page)
-{
-        int ret;
-
-        if (page->mapping != mapping)
-                return 0;
-
-        if (page_has_private(page) && !try_to_release_page(page, 0))
-                return 0;
-
-        ret = remove_mapping(mapping, page);
-
-        return ret;
-}
-
 int truncate_inode_page(struct address_space *mapping, struct page *page)
 {
         VM_BUG_ON_PAGE(PageTail(page), page);
@@ -258,7 +234,10 @@ int invalidate_inode_page(struct page *page)
                 return 0;
         if (page_mapped(page))
                 return 0;
-        return invalidate_complete_page(mapping, page);
+        if (page_has_private(page) && !try_to_release_page(page, 0))
+                return 0;
+
+        return remove_mapping(mapping, page);
 }
 
 /**
@@ -645,7 +624,7 @@ void invalidate_mapping_pagevec(struct address_space *mapping,
 }
 
 /*
- * This is like invalidate_complete_page(), except it ignores the page's
+ * This is like invalidate_inode_page(), except it ignores the page's
  * refcount. We do this because invalidate_inode_pages2() needs stronger
  * invalidation guarantees, and cannot afford to leave pages behind because
  * shrink_page_list() has a temp ref on them, or because they're transiently
-- 
2.34.1
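
A note for reviewers, placed below the signature marker so it does not affect
git am: after this patch, invalidate_inode_page() in mm/truncate.c reads
roughly as sketched below. Only the last four lines are taken from the hunk
above; the earlier checks are assumed from the surrounding v5.10 code and are
shown only to make the inlining easier to review, not copied from the tree.

    int invalidate_inode_page(struct page *page)
    {
            struct address_space *mapping = page_mapping(page);

            /* Checks assumed from the pre-existing 5.10 function body. */
            if (!mapping)
                    return 0;
            if (PageDirty(page) || PageWriteback(page))
                    return 0;
            if (page_mapped(page))
                    return 0;

            /*
             * Inlined from invalidate_complete_page(); the old
             * page->mapping != mapping check is dropped because the page
             * is locked and the mapping was just read from it.
             */
            if (page_has_private(page) && !try_to_release_page(page, 0))
                    return 0;

            return remove_mapping(mapping, page);
    }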