From: Chris Li
Date: Tue, 12 Dec 2023 18:22:07 -0800
Subject: Re: [PATCH 18/24] mm/swap: introduce a helper non fault swapin
To: Kairui Song
Cc: linux-mm@kvack.org, Andrew Morton, "Huang, Ying", David Hildenbrand,
 Hugh Dickins, Johannes Weiner, Matthew Wilcox, Michal Hocko,
 linux-kernel@vger.kernel.org
References: <20231119194740.94101-1-ryncsn@gmail.com>
 <20231119194740.94101-19-ryncsn@gmail.com>
On Tue, Nov 28, 2023 at 3:22 AM Kairui Song wrote:
>
> > >          /*
> > >           * Make sure huge_gfp is always more limited than limit_gfp.
> > >           * Some of the flags set permissions, while others set limitations.
> > > @@ -1854,9 +1838,12 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
> > >  {
> > >          struct address_space *mapping = inode->i_mapping;
> > >          struct shmem_inode_info *info = SHMEM_I(inode);
> > > -        struct swap_info_struct *si;
> > > +        enum swap_cache_result result;
> > >          struct folio *folio = NULL;
> > > +        struct mempolicy *mpol;
> > > +        struct page *page;
> > >          swp_entry_t swap;
> > > +        pgoff_t ilx;
> > >          int error;
> > >
> > >          VM_BUG_ON(!*foliop || !xa_is_value(*foliop));
> > > @@ -1866,34 +1853,30 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
> > >          if (is_poisoned_swp_entry(swap))
> > >                  return -EIO;
> > >
> > > -        si = get_swap_device(swap);
> > > -        if (!si) {
> > > +        mpol = shmem_get_pgoff_policy(info, index, 0, &ilx);
> > > +        page = swapin_page_non_fault(swap, gfp, mpol, ilx, fault_mm, &result);
>
> Hi Chris,
>
> I've been trying to address these issues in V2. Most issues in the other
> patches have a straightforward solution, and some could be discussed in
> separate series, but I came up with some thoughts here:
>
> >
> > Notice this "result" CAN be outdated. E.g. after this call, the swap
> > cache can be changed by another thread generating the swap page fault
> > and installing the folio into the swap cache or removing it.
>
> This is true, and it seems a potential race also exists before this
> series for the direct (no swapcache) swap-in path (do_swap_page), if I
> understand it correctly:

I just noticed I had missed this email while I was cleaning up my email
archive. Sorry for the late reply; traveling did not help either.

I am not aware of any swap-in race bugs in the existing code. Races, yes;
but if you discover a code path where a race actually causes a bug, please
report it.

> In the do_swap_page path, multiple processes could swap the page in at
> the same time (a page mapped only once can still be shared by
> sub-threads), and they could get different folios. The later pte lock
> and pte_same check is not enough, because while one process is not
> holding the pte lock, another process could read the page in, swap_free
> the entry, then swap the page out again using the same entry: an ABA
> problem. The race is not likely to happen in reality, but it is possible
> in theory.

Have you taken into account that, while the page is locked, it cannot be
removed from the swap cache? I think the swap cache find-and-get function
returns the page locked, so the swap cache cannot change the mapping as
long as the page is still locked.

> Same issue for shmem here: there are
> shmem_confirm_swap/shmem_add_to_page_cache checks later to prevent
> re-installing into the shmem mapping for a direct swap-in, but they are
> also not enough. Another process could read the page in and re-swap it
> out using the same entry, so the mapping entry appears unchanged during
> that time window. Still very unlikely to happen in reality, but not
> impossible.

Please take another look with the page lock behavior in mind, and report
back if you still think there is a race bug in the existing code. We can
then take a closer look at the concurrent call stacks needed to trigger
the bug.
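Roughly, the ordering I am describing looks like this. This is only a
simplified sketch, not the actual mm/memory.c code; the helper name
swapin_check_sketch() and the exact return values are illustrative:

#include <linux/mm.h>
#include <linux/swap.h>
#include <linux/swapops.h>
#include <linux/pagemap.h>

/*
 * Sketch: the folio found in the swap cache is locked before it is used,
 * and a locked folio cannot be deleted from the swap cache, so its swap
 * entry cannot be freed and reused underneath us.  The pte_same() check
 * under the page table lock then rejects any fault that somebody else
 * already resolved.
 */
static vm_fault_t swapin_check_sketch(struct vm_fault *vmf, swp_entry_t entry,
				      struct folio *folio)
{
	vm_fault_t ret = 0;

	folio_lock(folio);

	/* Is this still the swap cache folio for the entry we faulted on? */
	if (!folio_test_swapcache(folio) || folio->swap.val != entry.val) {
		ret = VM_FAULT_RETRY;		/* illustrative: caller retries */
		goto out_unlock;
	}

	/* Retake the pte lock and make sure the pte did not change. */
	vmf->pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd,
				       vmf->address, &vmf->ptl);
	if (!vmf->pte || !pte_same(ptep_get(vmf->pte), vmf->orig_pte)) {
		ret = 0;			/* someone else handled the fault */
		goto out_unmap;
	}

	/* ... install the folio while both the folio lock and pte lock are held ... */

out_unmap:
	if (vmf->pte)
		pte_unmap_unlock(vmf->pte, vmf->ptl);
out_unlock:
	folio_unlock(folio);
	return ret;
}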
Chris

>
> When the swapcache is used there is no such issue, since the swap lock
> and swap_map are used to sync all readers, and while one reader is still
> holding the folio, the entry is locked through the swapcache; or, if a
> folio is removed from the swapcache, folio_test_swapcache will fail and
> the reader can retry.
>
> I'm trying to come up with better locking for direct swap-in; am I
> missing anything here? Correct me if I get it wrong...
>
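For reference, the swapcache-backed retry described above roughly follows
the pattern below. Again, this is only a sketch: swapcache_lookup_sketch()
is an illustrative stand-in for the real swap cache lookup, not an
existing helper.

#include <linux/mm.h>
#include <linux/swap.h>
#include <linux/swapops.h>
#include <linux/pagemap.h>

/* Hypothetical stand-in for the real swap cache lookup. */
struct folio *swapcache_lookup_sketch(swp_entry_t entry);

/*
 * Sketch of the retry: as long as the reader holds the locked folio and
 * it is still in the swap cache for this entry, the entry cannot be
 * freed and reused; if the folio was removed from the swap cache, the
 * check fails and the caller simply retries.
 */
static struct folio *swapin_retry_sketch(swp_entry_t entry)
{
	struct folio *folio;

	for (;;) {
		folio = swapcache_lookup_sketch(entry);
		if (!folio)
			return NULL;	/* not cached; caller reads it in */

		folio_lock(folio);
		if (folio_test_swapcache(folio) && folio->swap.val == entry.val)
			return folio;	/* locked and still valid */

		/* Raced with reclaim or swap_free(); drop it and retry. */
		folio_unlock(folio);
		folio_put(folio);
	}
}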