From: Chris Li
Date: Fri, 24 Nov 2023 01:10:07 -0800
Subject: Re: [PATCH 11/24] mm/swap: also handle swapcache lookup in swapin_readahead
To: Kairui Song
Cc: linux-mm@kvack.org, Andrew Morton, "Huang, Ying", David Hildenbrand,
    Hugh Dickins, Johannes Weiner, Matthew Wilcox, Michal Hocko,
    linux-kernel@vger.kernel.org
References: <20231119194740.94101-1-ryncsn@gmail.com> <20231119194740.94101-12-ryncsn@gmail.com>
On Fri, Nov 24, 2023 at 12:42 AM Kairui Song wrote:
>
> On Wed, Nov 22, 2023 at 00:07, Chris Li wrote:
> >
> > On Sun, Nov 19, 2023 at 11:48 AM Kairui Song wrote:
> > >
> > > From: Kairui Song
> > >
> > > No feature change, just prepare for later commits.
> >
> > You need to have a proper commit message explaining why this change needs
> > to happen. "Preparing" is too generic; it does not give any real information.
> > For example, it seems you want to remove one swap cache lookup because
> > swapin_readahead() already has it?
> >
> > I am a bit puzzled by this patch. It shuffles a lot of sensitive code,
> > but I do not see the value. It seems like this patch should be merged
> > with the later patch that depends on it, so they can be judged together.
> >
> > >
> > > Signed-off-by: Kairui Song
> > > ---
> > >  mm/memory.c     | 61 +++++++++++++++++++++++--------------------------
> > >  mm/swap.h       | 10 ++++++--
> > >  mm/swap_state.c | 26 +++++++++++++--------
> > >  mm/swapfile.c   | 30 +++++++++++-------------
> > >  4 files changed, 66 insertions(+), 61 deletions(-)
> > >
> > > diff --git a/mm/memory.c b/mm/memory.c
> > > index f4237a2e3b93..22af9f3e8c75 100644
> > > --- a/mm/memory.c
> > > +++ b/mm/memory.c
> > > @@ -3786,13 +3786,13 @@ static vm_fault_t handle_pte_marker(struct vm_fault *vmf)
> > >  vm_fault_t do_swap_page(struct vm_fault *vmf)
> > >  {
> > >         struct vm_area_struct *vma = vmf->vma;
> > > -       struct folio *swapcache, *folio = NULL;
> > > +       struct folio *swapcache = NULL, *folio = NULL;
> > > +       enum swap_cache_result cache_result;
> > >         struct page *page;
> > >         struct swap_info_struct *si = NULL;
> > >         rmap_t rmap_flags = RMAP_NONE;
> > >         bool exclusive = false;
> > >         swp_entry_t entry;
> > > -       bool swapcached;
> > >         pte_t pte;
> > >         vm_fault_t ret = 0;
> > >
> > > @@ -3850,42 +3850,37 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> > >         if (unlikely(!si))
> > >                 goto out;
> > >
> > > -       folio = swap_cache_get_folio(entry, vma, vmf->address);
> > > -       if (folio)
> > > -               page = folio_file_page(folio, swp_offset(entry));
> > > -       swapcache = folio;
> >
> > Is the motivation that swapin_readahead() already has a swap cache lookup,
> > so you remove this lookup here?
>
> Yes, the cache lookup is moved into and shared by swapin_readahead(),
> and that also makes it possible to have the lookup return a shadow when
> the entry is not a page, so another shadow lookup can be saved on the
> synchronous (ZRAM) swapin path. This improves ZRAM performance by ~4%
> with a 10G ZRAM, and should improve it further as the cache tree grows
> larger.
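
To make sure I follow, a rough sketch of the follow-up you describe might
look like the hypothetical helper below (the name and shape are invented
here, not part of this patch): a single xarray walk that hands back either
the cached folio or the shadow value, so the sync (ZRAM) path does not need
a second lookup just for the shadow.

/*
 * Hypothetical sketch only: return the folio if the entry is cached,
 * otherwise return the shadow value (or NULL) from the same lookup.
 */
static void *swap_cache_get_folio_or_shadow(swp_entry_t entry,
					    struct folio **foliop)
{
	struct address_space *address_space = swap_address_space(entry);
	void *p = xa_load(&address_space->i_pages, swp_offset(entry));

	if (!p || xa_is_value(p)) {
		*foliop = NULL;		/* not cached; p may be a shadow */
		return p;
	}
	*foliop = (struct folio *)p;	/* cache hit */
	return NULL;
}
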
> > > -
> > > -       if (!folio) {
> > > -               page = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE,
> > > -                                       vmf, &swapcached);
> > > -               if (page) {
> > > -                       folio = page_folio(page);
> > > -                       if (swapcached)
> > > -                               swapcache = folio;
> > > -               } else {
> > > +       page = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE,
> > > +                               vmf, &cache_result);
> > > +       if (page) {
> > > +               folio = page_folio(page);
> > > +               if (cache_result != SWAP_CACHE_HIT) {
> > > +                       /* Had to read the page from swap area: Major fault */
> > > +                       ret = VM_FAULT_MAJOR;
> > > +                       count_vm_event(PGMAJFAULT);
> > > +                       count_memcg_event_mm(vma->vm_mm, PGMAJFAULT);
> > > +               }
> > > +               if (cache_result != SWAP_CACHE_BYPASS)
> > > +                       swapcache = folio;
> > > +               if (PageHWPoison(page)) {
> >
> > There is a lot of code shuffle here. From the diff it is hard to tell
> > whether it is doing the same thing as before.
> >
> > >                         /*
> > > -                        * Back out if somebody else faulted in this pte
> > > -                        * while we released the pte lock.
> > > +                        * hwpoisoned dirty swapcache pages are kept for killing
> > > +                        * owner processes (which may be unknown at hwpoison time)
> > >                          */
> > > -                       vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
> > > -                                       vmf->address, &vmf->ptl);
> > > -                       if (likely(vmf->pte &&
> > > -                                  pte_same(ptep_get(vmf->pte), vmf->orig_pte)))
> > > -                               ret = VM_FAULT_OOM;
> > > -                       goto unlock;
> > > +                       ret = VM_FAULT_HWPOISON;
> > > +                       goto out_release;
> > >                 }
> > > -
> > > -               /* Had to read the page from swap area: Major fault */
> > > -               ret = VM_FAULT_MAJOR;
> > > -               count_vm_event(PGMAJFAULT);
> > > -               count_memcg_event_mm(vma->vm_mm, PGMAJFAULT);
> > > -       } else if (PageHWPoison(page)) {
> > > +       } else {
> > >                 /*
> > > -                * hwpoisoned dirty swapcache pages are kept for killing
> > > -                * owner processes (which may be unknown at hwpoison time)
> > > +                * Back out if somebody else faulted in this pte
> > > +                * while we released the pte lock.
> > >                  */
> > > -               ret = VM_FAULT_HWPOISON;
> > > -               goto out_release;
> > > +               vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
> > > +                                       vmf->address, &vmf->ptl);
> > > +               if (likely(vmf->pte &&
> > > +                          pte_same(ptep_get(vmf->pte), vmf->orig_pte)))
> > > +                       ret = VM_FAULT_OOM;
> > > +               goto unlock;
> > >         }
> > >
> > >         ret |= folio_lock_or_retry(folio, vmf);
> > > diff --git a/mm/swap.h b/mm/swap.h
> > > index a9a654af791e..ac9136eee690 100644
> > > --- a/mm/swap.h
> > > +++ b/mm/swap.h
> > > @@ -30,6 +30,12 @@ extern struct address_space *swapper_spaces[];
> > >         (&swapper_spaces[swp_type(entry)][swp_offset(entry) \
> > >                 >> SWAP_ADDRESS_SPACE_SHIFT])
> > >
> > > +enum swap_cache_result {
> > > +       SWAP_CACHE_HIT,
> > > +       SWAP_CACHE_MISS,
> > > +       SWAP_CACHE_BYPASS,
> > > +};
> >
> > Does any function later care about CACHE_BYPASS?
> >
> > Again, better to introduce it together with the function that uses it.
> > Don't introduce it for "just in case I might use it later".
>
> Yes, callers in shmem will also need to know whether the page is cached
> in swap, and they need a value to indicate the bypass case. I can add
> some comments here to document the usage.

I also commented on the later patch: because you do the lookup without the
folio locked, the swap cache can change by the time you use "*result". I
suspect some of the swap cache lookups will need to be added back to handle
the race before locking. That is another reason to introduce this field
together with its user side; it makes the usage easier to reason about in
one patch.
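
To illustrate the race I am worried about, this is roughly the kind of
re-check under the folio lock that do_swap_page() relies on today (a sketch
from memory, not an exact quote of the tree), which an unlocked lookup in
swapin_readahead() cannot replace on its own:

	ret |= folio_lock_or_retry(folio, vmf);
	if (ret & VM_FAULT_RETRY)
		goto out_release;

	if (swapcache) {
		/*
		 * The swap cache lookup was done without the folio lock, so
		 * folio_free_swap() or swapoff could have released the
		 * swapcache from under us; re-validate before trusting the
		 * earlier lookup.
		 */
		if (unlikely(!folio_test_swapcache(folio) ||
			     page_swap_entry(page).val != entry.val))
			goto out_page;
	}

Anything derived from the unlocked lookup, including the cache_result value,
has to stay consistent with that re-check.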
BTW, one way to flatten the development history of a patch series is to
squash the branch into one big patch, then copy/paste from that big patch to
introduce each sub patch step by step. That way each sub patch stays closer
to the latest version of the code. Just something for you to consider.

> > > +
> > >  void show_swap_cache_info(void);
> > >  bool add_to_swap(struct folio *folio);
> > >  void *get_shadow_from_swap_cache(swp_entry_t entry);
> > > @@ -55,7 +61,7 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> > >  struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t flag,
> > >                                     struct mempolicy *mpol, pgoff_t ilx);
> > >  struct page *swapin_readahead(swp_entry_t entry, gfp_t flag,
> > > -                             struct vm_fault *vmf, bool *swapcached);
> > > +                             struct vm_fault *vmf, enum swap_cache_result *result);
> > >
> > >  static inline unsigned int folio_swap_flags(struct folio *folio)
> > >  {
> > > @@ -92,7 +98,7 @@ static inline struct page *swap_cluster_readahead(swp_entry_t entry,
> > >  }
> > >
> > >  static inline struct page *swapin_readahead(swp_entry_t swp, gfp_t gfp_mask,
> > > -                       struct vm_fault *vmf, bool *swapcached)
> > > +                       struct vm_fault *vmf, enum swap_cache_result *result)
> > >  {
> > >         return NULL;
> > >  }
> > > diff --git a/mm/swap_state.c b/mm/swap_state.c
> > > index d87c20f9f7ec..e96d63bf8a22 100644
> > > --- a/mm/swap_state.c
> > > +++ b/mm/swap_state.c
> > > @@ -908,8 +908,7 @@ static struct page *swapin_no_readahead(swp_entry_t entry, gfp_t gfp_mask,
> > >   * @entry: swap entry of this memory
> > >   * @gfp_mask: memory allocation flags
> > >   * @vmf: fault information
> > > - * @swapcached: pointer to a bool used as indicator if the
> > > - *              page is swapped in through swapcache.
> > > + * @result: a return value to indicate swap cache usage.
> > >   *
> > >   * Returns the struct page for entry and addr, after queueing swapin.
> > >   *
> > > @@ -918,30 +917,39 @@ static struct page *swapin_no_readahead(swp_entry_t entry, gfp_t gfp_mask,
> > >   * or vma-based(ie, virtual address based on faulty address) readahead.
> > >   */
> > >  struct page *swapin_readahead(swp_entry_t entry, gfp_t gfp_mask,
> > > -                               struct vm_fault *vmf, bool *swapcached)
> > > +                               struct vm_fault *vmf, enum swap_cache_result *result)
> > >  {
> > > +       enum swap_cache_result cache_result;
> > >         struct swap_info_struct *si;
> > >         struct mempolicy *mpol;
> > > +       struct folio *folio;
> > >         struct page *page;
> > >         pgoff_t ilx;
> > > -       bool cached;
> > > +
> > > +       folio = swap_cache_get_folio(entry, vmf->vma, vmf->address);
> > > +       if (folio) {
> > > +               page = folio_file_page(folio, swp_offset(entry));
> > > +               cache_result = SWAP_CACHE_HIT;
> > > +               goto done;
> > > +       }
> > >
> > >         si = swp_swap_info(entry);
> > >         mpol = get_vma_policy(vmf->vma, vmf->address, 0, &ilx);
> > >         if (swap_use_no_readahead(si, swp_offset(entry))) {
> > >                 page = swapin_no_readahead(entry, gfp_mask, mpol, ilx, vmf->vma->vm_mm);
> > > -               cached = false;
> > > +               cache_result = SWAP_CACHE_BYPASS;
> > >         } else if (swap_use_vma_readahead(si)) {
> > >                 page = swap_vma_readahead(entry, gfp_mask, mpol, ilx, vmf);
> > > -               cached = true;
> > > +               cache_result = SWAP_CACHE_MISS;
> > >         } else {
> > >                 page = swap_cluster_readahead(entry, gfp_mask, mpol, ilx);
> > > -               cached = true;
> > > +               cache_result = SWAP_CACHE_MISS;
> > >         }
> > >         mpol_cond_put(mpol);
> > >
> > > -       if (swapcached)
> > > -               *swapcached = cached;
> > > +done:
> > > +       if (result)
> > > +               *result = cache_result;
> > >
> > >         return page;
> > >  }
> > > diff --git a/mm/swapfile.c b/mm/swapfile.c
> > > index 01c3f53b6521..b6d57fff5e21 100644
> > > --- a/mm/swapfile.c
> > > +++ b/mm/swapfile.c
> > > @@ -1822,13 +1822,21 @@ static int unuse_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
> > >
> > >         si = swap_info[type];
> > >         do {
> > > -               struct folio *folio;
> > > +               struct page *page;
> > >                 unsigned long offset;
> > >                 unsigned char swp_count;
> > > +               struct folio *folio = NULL;
> > >                 swp_entry_t entry;
> > >                 int ret;
> > >                 pte_t ptent;
> > >
> > > +               struct vm_fault vmf = {
> > > +                       .vma = vma,
> > > +                       .address = addr,
> > > +                       .real_address = addr,
> > > +                       .pmd = pmd,
> > > +               };
> >
> > Is this code move caused by skipping the swap cache lookup here?
>
> Yes.
>
> > This is very sensitive code related to swap cache racing. It needs
> > very careful review. Better not to shuffle it for no good reason.
>
> Thanks for the suggestion. I'll try to avoid this kind of shuffling, but
> since the cache lookup is moved into swapin_readahead(), some changes in
> the original callers are unavoidable...

Yes, I agree it is sometimes unavoidable.

Chris