Date: Sun, 6 Nov 2022 13:06:19 -0800 (PST)
From: Hugh Dickins
To: Linus Torvalds
Cc: Johannes Weiner, Stephen Rothwell, Alexander Gordeev, Peter Zijlstra,
    Will Deacon, Aneesh Kumar, Nick Piggin, Heiko Carstens, Vasily Gorbik,
    Christian Borntraeger, Sven Schnelle, Nadav Amit, Jann Horn,
    John Hubbard, X86 ML, Matthew Wilcox, Andrew Morton, kernel list,
    Linux-MM, Andrea Arcangeli, Kirill A. Shutemov, Joerg Roedel,
    Uros Bizjak, Alistair Popple, linux-arch
Subject: Re: mm: delay rmap removal until after TLB flush
Message-ID: <8a1e97c9-bd5-7473-6da8-2aa75198fbe8@google.com>

Adding Stephen to Cc, for next-20221107 alert.
Adding Johannes to Cc, particularly for the lock_page_memcg discussion.

On Fri, 4 Nov 2022, Linus Torvalds wrote:
> On Thu, Nov 3, 2022 at 11:33 PM Alexander Gordeev wrote:
> >
> > I rather have a question to the generic part (had to master the code
> > quoting).
>
> Sure.
>
> Although now I think the series is in Andrew's -mm tree, or just about
> to get moved in there, so I'm not going to touch my actual branch any
> more.

Linus, we've been exchanging about my mm/rmap.c mods in private mail;
I need to raise some points about your mods here in public mail.
Four topics - munlock (good), anon_vma (bad), mm-unstable (bad),
lock_page_memcg (uncertain).

I'm asking Andrew here to back your patchset out of mm-unstable for now,
until we have its fixes in: otherwise I'm worried that next-20221107
will waste everyone's time.

munlock (good)
--------------

You've separated out the munlock_vma_page() call from the rest of the
PTE rmap-removal work. That worried me at first, but I think it's fine:
bundling it into page_remove_rmap() was mainly so that we wouldn't
forget to do it anywhere (and the other rmap funcs already took a vma
arg).

Certainly during development I had been using page mapcount somewhere
inside munlock_vma_page(), either for optimization or for sanity check,
I forget; but I gave up on that as unnecessary complication, and I think
it became impossible with the pagevec; so not an issue now.

If it were my change, I'd certainly do it by renaming the vma arg of
page_remove_rmap() from vma to munlock_vma, not doing the munlock when
that is NULL, and so avoiding the need to duplicate code: whereas you're
very pleased with cutting out the unnecessary stuff in the slimmed-down
page_zap_pte_rmap(). Differing tastes (or perhaps you'll say taste
versus no taste).
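For concreteness, the shape I have in mind is roughly the sketch below -
not against any particular tree, with all of the existing mapcount and
stats work elided, so take the details with a grain of salt:

	/*
	 * Sketch only: page_remove_rmap()'s vma arg renamed to munlock_vma,
	 * NULL meaning "no munlock wanted here", so that zap_pte_range()
	 * could pass NULL instead of needing a separate slimmed-down
	 * page_zap_pte_rmap().
	 */
	void page_remove_rmap(struct page *page,
			      struct vm_area_struct *munlock_vma, bool compound)
	{
		/* ... mapcount and stats handling exactly as before ... */

		if (munlock_vma)
			munlock_vma_page(page, munlock_vma, compound);
	}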
anon_vma (bad)
--------------

My first reaction to your patchset was that it wasn't obvious to me that
delaying the decrementation of mapcount is safe: I needed time to think
about it. But happily, testing saved me from needing much thought.

The first problem, which came immediately in the first iteration of my
huge tmpfs kbuild swapping loads, was

	BUG at mm/migrate.c:1105!
	VM_BUG_ON_FOLIO(folio_test_anon(src) && !folio_test_ksm(src) && !anon_vma, src);

Okay, that's interesting but not necessarily fatal: we can very easily
make it a recoverable condition. I didn't bother to think on that one,
just patched around it.

The second problem was more significant: it came after nearly five hours
of load - a BUG, NULL dereference (address 8), in down_read_trylock()
from rmap_walk_anon(), while kswapd was doing a folio_referenced().

That one goes right to the heart of the matter, and instinct had been
correct to worry about delaying the decrementation of mapcount, that is,
about extending the life of page_mapped() or folio_mapped(). See
folio_lock_anon_vma_read(): folio_mapped() plays a key role in
establishing the continued validity of an anon_vma. See the comments
above folio_get_anon_vma(), some by me but most by PeterZ IIRC.

I believe what has happened is that your patchset has, very
intentionally, kept the page "folio_mapped" until after free_pgtables()
does its unlink_anon_vmas(); but that is telling
folio_lock_anon_vma_read() that the anon_vma is safe to use when
actually it has been freed. (It looked like a page table when I peeped
at it.)

I'm not certain, but I think that you made page_zap_pte_rmap() handle
anon as well as file just for the righteous additional simplification;
but I'm afraid that (without opening a huge anon_vma refcounting can of
worms) that unification has to be reverted, and anon left to go the same
old way it did before. I didn't look into whether reverting one of your
patches would achieve that: I just adjusted the code in zap_pte_range()
to go the old way for PageAnon, and that has been running successfully,
hitting neither BUG, for 15 hours now.
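For what it's worth, that stopgap is roughly the fragment below, from
zap_pte_range()'s present-pte case. Hand-waving warning: "delay_rmap"
just stands in for however your series actually marks a batched page for
later rmap removal; I'm not quoting real code here:

	if (PageAnon(page)) {
		/* old way: rmap removed before the pte lock is dropped */
		page_remove_rmap(page, vma, false);
	} else {
		/* new way: rmap removal delayed until after the TLB flush */
		delay_rmap = true;
	}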
mm-unstable (bad)
-----------------

Aside from that PageAnon issue, mm-unstable is in an understandably bad
state, because you could not have foreseen my subpages_mapcount addition
to page_remove_rmap(): page_zap_pte_rmap() now needs to handle the
PageCompound (but not the "compound") case too. I rushed you and akpm an
emergency patch for that on Friday night, but you, let's say, had
reservations about it. So I haven't posted it; and while the PageAnon
issue remains, I think your patchset has to be removed from mm-unstable
and linux-next anyway.

What happens to mm-unstable with page_zap_pte_rmap() not handling
subpages_mapcount? In my CONFIG_TRANSPARENT_HUGEPAGE_ALWAYS=y case, I
get a "Bad page state" even before reaching first login after boot, with
"page dumped because: nonzero subpages_mapcount". Yes, I make it worse
for myself by having a BUG() in bad_page(); others will limp along with
more and more of those "Bad page" messages.

lock_page_memcg (uncertain)
---------------------------

Johannes, please help! Linus has got quite upset by lock_page_memcg(),
by its calls from mm/rmap.c anyway, and most particularly by the way in
which it is called at the start of page_remove_rmap(), before anyone's
critical atomic_add_negative(), yet its use is to guarantee the
stability of the page's memcg while doing the stats updates, which are
done only when atomic_add_negative() says so.

I do have one relevant insight on this. It (or its antecedents under
other names) dates from the days when we did "reparenting" of memcg
charges from an LRU: in those days the lock_page_memcg() before the
mapcount adjustment was vital, to pair with the uses of folio_mapped()
or page_mapped() in mem_cgroup_move_account() - those "mapped" checks
are precisely around the stats which the rmap functions affect.

But nowadays mem_cgroup_move_account() is only called, with page table
lock held, on matching pages found in a task's page table: so its
"mapped" checks are redundant. I've sometimes thought in the past of
removing them, but held back, because I always have the notion (not
hope!) that "reparenting" may one day be brought back from the grave.
I'm too out of touch with memcg to know where that notion stands today.

I've gone through a multiverse of opinions on those lock_page_memcg()s
in the last day: I currently believe that Linus is right, that the
lock_page_memcg()s could and should be moved just before the stats
updates (rough sketch in the P.S. below). But I am not 100% certain of
that - is there still some reason why it's important that the page's
memcg at the instant of the critical mapcount transition be kept
unchanged until the stats are updated? I've tried running scenarios
through my mind, but given up. (And note that the answer might be
different with Linus's changes than without them, since he delays the
mapcount decrementation until long after the pte was removed and the
page table lock dropped.)

And I do wish that we could settle this lock_page_memcg() question in an
entirely separate patch: as it stands, page_zap_pte_rmap() gets to
benefit from Linus's insight (or not), and all the other rmap functions
carry on with the (mis?)placed lock_page_memcg() as before.

Let's press Send,
Hugh
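P.S. To pin down where I understand the lock would move to, here is a
rough sketch of page_remove_rmap() with that reordering - simplified to
the non-compound case, ignoring the subpages_mapcount complications, and
certainly not a patch:

	void page_remove_rmap(struct page *page,
			      struct vm_area_struct *vma, bool compound)
	{
		/* only the final unmap of this pte-mapped page updates stats */
		if (atomic_add_negative(-1, &page->_mapcount)) {
			/* memcg lock taken only around the stats update */
			lock_page_memcg(page);
			__dec_lruvec_page_state(page, PageAnon(page) ?
					NR_ANON_MAPPED : NR_FILE_MAPPED);
			unlock_page_memcg(page);
		}

		munlock_vma_page(page, vma, compound);
	}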