From: Barry Song
Date: Mon, 26 Dec 2022 22:40:49 +1300
Subject: Re: [PATCH v3 04/14] mm/rmap: Break COW PTE in rmap walking
To: Chih-En Lin
Cc: Andrew Morton, Qi Zheng, David Hildenbrand, Matthew Wilcox,
    Christophe Leroy, John Hubbard, Nadav Amit, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, Steven Rostedt, Masami Hiramatsu, Peter Zijlstra,
    Ingo Molnar, Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin,
    Jiri Olsa, Namhyung Kim, Yang Shi, Peter Xu, Zach O'Keefe,
    Liam R. Howlett, Alex Sierra, Xianting Tian, Colin Cross,
    Suren Baghdasaryan, Pasha Tatashin, Suleiman Souhlal, Brian Geffon,
    Yu Zhao, Tong Tiangen, Liu Shixin, Li kunyu, Anshuman Khandual,
    Vlastimil Babka, Hugh Dickins, Minchan Kim, Miaohe Lin, Gautam Menghani,
    Catalin Marinas, Mark Brown, Will Deacon, Eric W. Biederman,
    Thomas Gleixner, Sebastian Andrzej Siewior, Andy Lutomirski, Fenghua Yu,
    Barret Rhoden, Davidlohr Bueso, Jason A. Donenfeld, Dinglan Peng,
    Pedro Fonseca, Jim Huang, Huichun Feng
In-Reply-To: <20221220072743.3039060-5-shiyn.lin@gmail.com>
References: <20221220072743.3039060-1-shiyn.lin@gmail.com>
    <20221220072743.3039060-5-shiyn.lin@gmail.com>

On Tue, Dec 20, 2022 at 8:25 PM Chih-En Lin wrote:
>
> Some features (unmap, migrate, device exclusive, mkclean, etc.) might
> modify the PTE entry via rmap. Add a new page_vma_mapped_walk flag,
> PVMW_BREAK_COW_PTE, to indicate that the rmap walk should break the
> COW PTE.
>
> Signed-off-by: Chih-En Lin
> ---
>  include/linux/rmap.h |  2 ++
>  mm/migrate.c         |  3 ++-
>  mm/page_vma_mapped.c |  2 ++
>  mm/rmap.c            | 12 +++++++-----
>  mm/vmscan.c          |  7 ++++++-
>  5 files changed, 19 insertions(+), 7 deletions(-)
>
> diff --git a/include/linux/rmap.h b/include/linux/rmap.h
> index bd3504d11b155..d0f07e5519736 100644
> --- a/include/linux/rmap.h
> +++ b/include/linux/rmap.h
> @@ -368,6 +368,8 @@ int make_device_exclusive_range(struct mm_struct *mm, unsigned long start,
>  #define PVMW_SYNC		(1 << 0)
>  /* Look for migration entries rather than present PTEs */
>  #define PVMW_MIGRATION		(1 << 1)
> +/* Break COW-ed PTE during walking */
> +#define PVMW_BREAK_COW_PTE	(1 << 2)
>
>  struct page_vma_mapped_walk {
>  	unsigned long pfn;
> diff --git a/mm/migrate.c b/mm/migrate.c
> index dff333593a8ae..a4be7e04c9b09 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -174,7 +174,8 @@ void putback_movable_pages(struct list_head *l)
>  static bool remove_migration_pte(struct folio *folio,
>  		struct vm_area_struct *vma, unsigned long addr, void *old)
>  {
> -	DEFINE_FOLIO_VMA_WALK(pvmw, old, vma, addr, PVMW_SYNC | PVMW_MIGRATION);
> +	DEFINE_FOLIO_VMA_WALK(pvmw, old, vma, addr,
> +			      PVMW_SYNC | PVMW_MIGRATION | PVMW_BREAK_COW_PTE);
>
>  	while (page_vma_mapped_walk(&pvmw)) {
>  		rmap_t rmap_flags = RMAP_NONE;
> diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
> index 93e13fc17d3cb..5dfc9236dc505 100644
> --- a/mm/page_vma_mapped.c
> +++ b/mm/page_vma_mapped.c
> @@ -251,6 +251,8 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
>  			step_forward(pvmw, PMD_SIZE);
>  			continue;
>  		}
> +		if (pvmw->flags & PVMW_BREAK_COW_PTE)
> +			break_cow_pte(vma, pvmw->pmd, pvmw->address);
>  		if (!map_pte(pvmw))
>  			goto next_pte;
>  this_pte:
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 2ec925e5fa6a9..b1b7dcbd498be 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -807,7 +807,8 @@ static bool folio_referenced_one(struct folio *folio,
>  		struct vm_area_struct *vma, unsigned long address, void *arg)
>  {
>  	struct folio_referenced_arg *pra = arg;
> -	DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, 0);
> +	/* it will clear the entry, so we should break COW PTE. */
> +	DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, PVMW_BREAK_COW_PTE);

What do you mean by breaking COW PTE? In the memory reclamation case we
are only checking and clearing the page referenced bit in the PTE, so do
we really need to break COW?
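The operations at issue look roughly like the following, paraphrased
from mm/rmap.c rather than quoted from this series (the patch's comment
above suggests the author treats even the accessed-bit clear as a PTE
write):

	/* folio_referenced_one(): clear the young/accessed bit in place */
	if (ptep_clear_flush_young_notify(vma, address, pvmw.pte))
		referenced++;

	/* try_to_unmap_one(): clear the whole entry, keeping the old value */
	pteval = ptep_clear_flush(vma, address, pvmw.pte);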
>  	int referenced = 0;
>
>  	while (page_vma_mapped_walk(&pvmw)) {
> @@ -1012,7 +1013,8 @@ static int page_vma_mkclean_one(struct page_vma_mapped_walk *pvmw)
>  static bool page_mkclean_one(struct folio *folio, struct vm_area_struct *vma,
>  			     unsigned long address, void *arg)
>  {
> -	DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, PVMW_SYNC);
> +	DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address,
> +			      PVMW_SYNC | PVMW_BREAK_COW_PTE);
>  	int *cleaned = arg;
>
>  	*cleaned += page_vma_mkclean_one(&pvmw);
> @@ -1471,7 +1473,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>  		     unsigned long address, void *arg)
>  {
>  	struct mm_struct *mm = vma->vm_mm;
> -	DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, 0);
> +	DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, PVMW_BREAK_COW_PTE);
>  	pte_t pteval;
>  	struct page *subpage;
>  	bool anon_exclusive, ret = true;
> @@ -1842,7 +1844,7 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
>  		unsigned long address, void *arg)
>  {
>  	struct mm_struct *mm = vma->vm_mm;
> -	DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, 0);
> +	DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, PVMW_BREAK_COW_PTE);
>  	pte_t pteval;
>  	struct page *subpage;
>  	bool anon_exclusive, ret = true;
> @@ -2195,7 +2197,7 @@ static bool page_make_device_exclusive_one(struct folio *folio,
>  		struct vm_area_struct *vma, unsigned long address, void *priv)
>  {
>  	struct mm_struct *mm = vma->vm_mm;
> -	DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, 0);
> +	DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, PVMW_BREAK_COW_PTE);
>  	struct make_exclusive_args *args = priv;
>  	pte_t pteval;
>  	struct page *subpage;
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 026199c047e0e..980d2056adfd1 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1781,6 +1781,10 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
>  			}
>  		}
>
> +		/*
> +		 * Break COW PTE since checking the reference
> +		 * of folio might modify the PTE.
> +		 */
>  		if (!ignore_references)
>  			references = folio_check_references(folio, sc);
>
> @@ -1864,7 +1868,8 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
>
>  		/*
>  		 * The folio is mapped into the page tables of one or more
> -		 * processes. Try to unmap it here.
> +		 * processes. Try to unmap it here. Also, since it will write
> +		 * to the page tables, break COW PTE if they are.
>  		 */
>  		if (folio_mapped(folio)) {
>  			enum ttu_flags flags = TTU_BATCH_FLUSH;
> --
> 2.37.3
>

Thanks
Barry
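[A note for readers outside the thread: "breaking" a COW PTE in this
series means giving the current mm a private copy of a PTE table that
fork() left shared, before anything writes through it. The sketch below
only outlines that contract, using the signature visible at the
page_vma_mapped_walk() call site; cow_pte_count() and pte_table_copy()
are hypothetical stand-ins, not helpers from the series:]

	/*
	 * Sketch only: if the PTE table under @pmd is still shared
	 * after a COW fork, replace it with a private copy so the
	 * caller may safely write PTEs through it. cow_pte_count()
	 * and pte_table_copy() are hypothetical names, not the
	 * series' actual implementation.
	 */
	static int break_cow_pte(struct vm_area_struct *vma, pmd_t *pmd,
				 unsigned long addr)
	{
		if (!pmd || pmd_none(*pmd))
			return 0;
		if (cow_pte_count(pmd) > 1)
			return pte_table_copy(vma, pmd, addr);
		return 0;
	}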