From: Linus Torvalds
To: Hugh Dickins, Johannes Weiner, Andrew Morton
Cc: 
linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 3/4] mm: mmu_gather: prepare to gather encoded page pointers with flags
Date: Tue, 8 Nov 2022 11:41:38 -0800
Message-Id: <20221108194139.57604-3-torvalds@linux-foundation.org>
X-Mailer: git-send-email 2.38.1.284.gfd9468d787

This is purely a preparatory patch that makes all the data structures
ready for encoding flags with the mmu_gather page pointers.

The code currently always sets the flag to zero and doesn't use it yet,
but it now tracks the type state throughout. The next step will be to
actually start using it.
Signed-off-by: Linus Torvalds
---
 include/asm-generic/tlb.h |  2 +-
 include/linux/swap.h      |  2 +-
 mm/mmu_gather.c           |  4 ++--
 mm/swap_state.c           | 11 ++++-------
 4 files changed, 8 insertions(+), 11 deletions(-)

diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index 492dce43236e..faca23e87278 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -242,7 +242,7 @@ struct mmu_gather_batch {
 	struct mmu_gather_batch	*next;
 	unsigned int		nr;
 	unsigned int		max;
-	struct page		*pages[];
+	struct encoded_page	*encoded_pages[];
 };
 
 #define MAX_GATHER_BATCH	\
diff --git a/include/linux/swap.h b/include/linux/swap.h
index a18cf4b7c724..40e418e3461b 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -470,7 +470,7 @@ static inline unsigned long total_swapcache_pages(void)
 extern void free_swap_cache(struct page *page);
 extern void free_page_and_swap_cache(struct page *);
-extern void free_pages_and_swap_cache(struct page **, int);
+extern void free_pages_and_swap_cache(struct encoded_page **, int);
 
 /* linux/mm/swapfile.c */
 extern atomic_long_t nr_swap_pages;
 extern long total_swap_pages;
diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index add4244e5790..57b7850c1b5e 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -48,7 +48,7 @@ static void tlb_batch_pages_flush(struct mmu_gather *tlb)
 	struct mmu_gather_batch *batch;
 
 	for (batch = &tlb->local; batch && batch->nr; batch = batch->next) {
-		struct page **pages = batch->pages;
+		struct encoded_page **pages = batch->encoded_pages;
 
 		do {
 			/*
@@ -92,7 +92,7 @@ bool __tlb_remove_page_size(struct mmu_gather *tlb, struct page *page, int page_
 	 * Add the page and check if we are full. If so
 	 * force a flush.
 	 */
-	batch->pages[batch->nr++] = page;
+	batch->encoded_pages[batch->nr++] = encode_page(page, 0);
 	if (batch->nr == batch->max) {
 		if (!tlb_next_batch(tlb))
 			return true;
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 438d0676c5be..8bf08c313872 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -303,15 +303,12 @@ void free_page_and_swap_cache(struct page *page)
  * Passed an array of pages, drop them all from swapcache and then release
  * them.  They are removed from the LRU and freed if this is their last use.
  */
-void free_pages_and_swap_cache(struct page **pages, int nr)
+void free_pages_and_swap_cache(struct encoded_page **pages, int nr)
 {
-	struct page **pagep = pages;
-	int i;
-
 	lru_add_drain();
-	for (i = 0; i < nr; i++)
-		free_swap_cache(pagep[i]);
-	release_pages(pagep, nr);
+	for (int i = 0; i < nr; i++)
+		free_swap_cache(encoded_page_ptr(pages[i]));
+	release_pages(pages, nr);
 }
 
 static inline bool swap_use_vma_readahead(void)
-- 
2.38.1.284.gfd9468d787