From: alexs@kernel.org
To: Andrew Morton, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	izik.eidus@ravellosystems.com, willy@infradead.org,
	aarcange@redhat.com, chrisw@sous-sol.org, hughd@google.com,
	david@redhat.com
Cc: "Alex Shi (tencent)"
Subject: [PATCH 03/10] mm/ksm: use folio in try_to_merge_one_page
Date: Tue, 4 Jun 2024 12:24:45 +0800
Message-ID:
<20240604042454.2012091-4-alexs@kernel.org>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240604042454.2012091-1-alexs@kernel.org>
References: <20240604042454.2012091-1-alexs@kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: "Alex Shi (tencent)"

scan_get_next_rmap_item() now actually returns a folio, so the pages
passed down the call path to try_to_merge_one_page() are in fact
folios. So let's use folio instead of page in this function, saving a
few compound-page checks in the callee functions. A 'page' variable is
kept here since the flush functions do not support folios yet.

Signed-off-by: Alex Shi (tencent)
---
 mm/ksm.c | 61 ++++++++++++++++++++++++++++++++------------------------
 1 file changed, 35 insertions(+), 26 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index e2fdb9dd98e2..21bfa9bfb210 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1462,24 +1462,29 @@ static int replace_page(struct vm_area_struct *vma, struct page *page,
 }
 
 /*
- * try_to_merge_one_page - take two pages and merge them into one
- * @vma: the vma that holds the pte pointing to page
- * @page: the PageAnon page that we want to replace with kpage
- * @kpage: the PageKsm page that we want to map instead of page,
- *	or NULL the first time when we want to use page as kpage.
+ * try_to_merge_one_page - take two folios and merge them into one
+ * @vma: the vma that holds the pte pointing to folio
+ * @folio: the PageAnon page that we want to replace with kfolio
+ * @kfolio: the PageKsm page that we want to map instead of folio,
+ *	or NULL the first time when we want to use folio as kfolio.
  *
- * This function returns 0 if the pages were merged, -EFAULT otherwise.
+ * This function returns 0 if the folios were merged, -EFAULT otherwise.
  */
-static int try_to_merge_one_page(struct vm_area_struct *vma, struct page *page,
-				 struct ksm_rmap_item *rmap_item, struct page *kpage)
+static int try_to_merge_one_page(struct vm_area_struct *vma, struct folio *folio,
+				 struct ksm_rmap_item *rmap_item, struct folio *kfolio)
 {
 	pte_t orig_pte = __pte(0);
 	int err = -EFAULT;
+	struct page *page = folio_page(folio, 0);
+	struct page *kpage;
 
-	if (page == kpage)			/* ksm page forked */
+	if (kfolio)
+		kpage = folio_page(kfolio, 0);
+
+	if (folio == kfolio)			/* ksm page forked */
 		return 0;
-	if (!PageAnon(page))
+	if (!folio_test_anon(folio))
 		goto out;
 
 	/*
@@ -1489,11 +1494,11 @@ static int try_to_merge_one_page(struct vm_area_struct *vma, struct page *page,
 	 * prefer to continue scanning and merging different pages,
 	 * then come back to this page when it is unlocked.
 	 */
-	if (!trylock_page(page))
+	if (!folio_trylock(folio))
 		goto out;
 
-	if (PageTransCompound(page)) {
-		if (split_huge_page(page))
+	if (folio_test_large(folio)) {
+		if (split_folio(folio))
 			goto out_unlock;
 	}
 
@@ -1506,35 +1511,36 @@ static int try_to_merge_one_page(struct vm_area_struct *vma, struct page *page,
 	 * ptes are necessarily already write-protected. But in either
 	 * case, we need to lock and check page_count is not raised.
 	 */
-	if (write_protect_page(vma, page_folio(page), &orig_pte) == 0) {
-		if (!kpage) {
+	if (write_protect_page(vma, folio, &orig_pte) == 0) {
+		if (!kfolio) {
 			/*
 			 * While we hold page lock, upgrade page from
 			 * PageAnon+anon_vma to PageKsm+NULL stable_node:
 			 * stable_tree_insert() will update stable_node.
 			 */
-			folio_set_stable_node(page_folio(page), NULL);
-			mark_page_accessed(page);
+			folio_set_stable_node(folio, NULL);
+			folio_mark_accessed(folio);
 			/*
 			 * Page reclaim just frees a clean page with no dirty
 			 * ptes: make sure that the ksm page would be swapped.
 			 */
-			if (!PageDirty(page))
-				SetPageDirty(page);
+			if (!folio_test_dirty(folio))
+				folio_set_dirty(folio);
 			err = 0;
 		} else if (pages_identical(page, kpage))
 			err = replace_page(vma, page, kpage, orig_pte);
 	}
 
 out_unlock:
-	unlock_page(page);
+	folio_unlock(folio);
 out:
 	return err;
 }
 
 /*
  * try_to_merge_with_ksm_page - like try_to_merge_two_pages,
- * but no new kernel page is allocated: kpage must already be a ksm page.
+ * but no new kernel page is allocated, kpage is a ksm page or NULL
+ * if we use page as the first ksm page.
  *
  * This function returns 0 if the pages were merged, -EFAULT otherwise.
  */
@@ -1544,13 +1550,17 @@ static int try_to_merge_with_ksm_page(struct ksm_rmap_item *rmap_item,
 	struct mm_struct *mm = rmap_item->mm;
 	struct vm_area_struct *vma;
 	int err = -EFAULT;
+	struct folio *kfolio = NULL;
 
 	mmap_read_lock(mm);
 	vma = find_mergeable_vma(mm, rmap_item->address);
 	if (!vma)
 		goto out;
 
-	err = try_to_merge_one_page(vma, page, rmap_item, kpage);
+	if (kpage)
+		kfolio = page_folio(kpage);
+
+	err = try_to_merge_one_page(vma, page_folio(page), rmap_item, kfolio);
 	if (err)
 		goto out;
 
@@ -2385,8 +2395,8 @@ static void cmp_and_merge_page(struct page *page, struct ksm_rmap_item *rmap_ite
 	mmap_read_lock(mm);
 	vma = find_mergeable_vma(mm, rmap_item->address);
 	if (vma) {
-		err = try_to_merge_one_page(vma, page, rmap_item,
-					    ZERO_PAGE(rmap_item->address));
+		err = try_to_merge_one_page(vma, page_folio(page), rmap_item,
+					    page_folio(ZERO_PAGE(rmap_item->address)));
 		trace_ksm_merge_one_page(
 			page_to_pfn(ZERO_PAGE(rmap_item->address)),
 			rmap_item, mm, err);
@@ -2671,8 +2681,7 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
 		rmap_item = get_next_rmap_item(mm_slot,
 					       ksm_scan.rmap_list, ksm_scan.address);
 		if (rmap_item) {
-			ksm_scan.rmap_list =
-					&rmap_item->rmap_list;
+			ksm_scan.rmap_list = &rmap_item->rmap_list;
 			if (should_skip_rmap_item(*page, rmap_item))
 				goto next_page;
-- 
2.43.0