Subject: Re: [PATCH] mm: migrate: record the mlocked page status to remove unnecessary lru drain
From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: "Yin, Fengwei", "Huang, Ying", Zi Yan, Yosry Ahmed
Cc: akpm@linux-foundation.org, mgorman@techsingularity.net, hughd@google.com, vbabka@suse.cz, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Date: Fri, 20 Oct 2023 11:27:24 +0800
Message-ID: <0aaf6bf4-a327-9582-569e-2a634ce74af4@linux.alibaba.com>
In-Reply-To: <93abbbfb-27fb-4f65-883c-a6aa38c61fa0@intel.com>
References: <64899ad0bb78cde88b52abed1a5a5abbc9919998.1697632761.git.baolin.wang@linux.alibaba.com> <1F80D8DA-8BB5-4C7E-BC2F-030BF52931F7@nvidia.com> <87il73uos1.fsf@yhuang6-desk2.ccr.corp.intel.com> <2ad721be-b81e-d279-0055-f995a8cfe180@linux.alibaba.com> <27f40fc2-806a-52a9-3697-4ed9cd7081d4@intel.com> <05d596f3-c59c-76c3-495e-09f8573cf438@linux.alibaba.com> <93abbbfb-27fb-4f65-883c-a6aa38c61fa0@intel.com>

On 10/20/2023 10:54 AM, Yin, Fengwei wrote:
>
> On 10/20/2023 10:45 AM, Baolin Wang wrote:
>>
>> On 10/20/2023 10:30 AM, Yin, Fengwei wrote:
>>>
>>> On 10/20/2023 10:09 AM, Baolin Wang wrote:
>>>>
>>>> On 10/19/2023 8:07 PM, Yin, Fengwei wrote:
>>>>>
>>>>> On 10/19/2023 4:51 PM, Baolin Wang wrote:
>>>>>>
>>>>>> On 10/19/2023 4:22 PM, Yin Fengwei wrote:
>>>>>>> Hi Baolin,
>>>>>>>
>>>>>>> On 10/19/23 15:25, Baolin Wang wrote:
>>>>>>>>
>>>>>>>> On 10/19/2023 2:09 PM, Huang, Ying wrote:
>>>>>>>>> Zi Yan writes:
>>>>>>>>>
>>>>>>>>>> On 18 Oct 2023, at 9:04, Baolin Wang wrote:
>>>>>>>>>>
>>>>>>>>>>> When doing compaction, I found that lru_add_drain() is an obvious hotspot
>>>>>>>>>>> when migrating pages. The distribution of this hotspot is as follows:
>>>>>>>>>>>    - 18.75% compact_zone
>>>>>>>>>>>       - 17.39% migrate_pages
>>>>>>>>>>>          - 13.79% migrate_pages_batch
>>>>>>>>>>>             - 11.66% migrate_folio_move
>>>>>>>>>>>                - 7.02% lru_add_drain
>>>>>>>>>>>                   + 7.02% lru_add_drain_cpu
>>>>>>>>>>>                + 3.00% move_to_new_folio
>>>>>>>>>>>                  1.23% rmap_walk
>>>>>>>>>>>             + 1.92% migrate_folio_unmap
>>>>>>>>>>>          + 3.20% migrate_pages_sync
>>>>>>>>>>>       + 0.90% isolate_migratepages
>>>>>>>>>>>
>>>>>>>>>>> The lru_add_drain() was added by commit c3096e6782b7 ("mm/migrate:
>>>>>>>>>>> __unmap_and_move() push good newpage to LRU") to drain the newpage to
>>>>>>>>>>> the LRU immediately, which helps to build up the correct
>>>>>>>>>>> newpage->mlock_count in remove_migration_ptes() for mlocked pages.
>>>>>>>>>>> However, if no mlocked pages are being migrated, then we can avoid this
>>>>>>>>>>> lru drain operation, especially in heavily concurrent scenarios.
>>>>>>>>>>
>>>>>>>>>> lru_add_drain() is also used to drain pages out of folio_batch. Pages in
>>>>>>>>>> folio_batch have an additional pin to prevent migration. See
>>>>>>>>>> folio_get(folio); in folio_add_lru().
>>>>>>>>>
>>>>>>>>> lru_add_drain() is called after the page reference count check in
>>>>>>>>> move_to_new_folio(), so I don't think this is an issue.
>>>>>>>>
>>>>>>>> Agree. The purpose of adding lru_add_drain() is to address the
>>>>>>>> 'mlock_count' issue for mlocked pages. Please see commit c3096e6782b7 and
>>>>>>>> related comments. Moreover, I haven't seen an increase in the number of
>>>>>>>> page migration failures due to the page reference count check after this
>>>>>>>> patch.
>>>>>>>
>>>>>>> I agree with you. My understanding also is that the lru_add_drain() is
>>>>>>> only needed for mlocked folios to correct mlock_count. I'd like to hear
>>>>>>> confirmation from Hugh.
>>>>>>>
>>>>>>> But I have a question: why do we need to use page_was_mlocked instead of
>>>>>>> checking folio_test_mlocked(src)? Does page migration clear the mlock
>>>>>>> flag? Thanks.
>>>>>>
>>>>>> Yes, please see the call trace: try_to_migrate_one() --->
>>>>>> page_remove_rmap() ---> munlock_vma_folio().
>>>>>
>>>>> Yes. This will clear the mlock bit.
>>>>>
>>>>> What about setting the dst folio mlocked if the source is, before
>>>>> try_to_migrate_one()? And then checking whether the dst folio is mlocked
>>>>> afterwards? We would need to clear mlocked if migration fails. I suppose
>>>>> the change is minor. Just a thought. Thanks.
>>>>
>>>> IMO, this would break the mlock-related statistics in mlock_folio() when
>>>> remove_migration_pte() rebuilds the mlock status and mlock count.
>>>>
>>>> Another concern I can see is that, during page migration, a concurrent
>>>> munlock() can be called to clear the VM_LOCKED flags from the VMAs, in which
>>>> case remove_migration_pte() should not rebuild the mlock status and mlock
>>>> count. But the dst folio's mlocked status would still remain, which is wrong.
>>>>
>>>> So your suggested approach seems not easy, and I think my patch is simple,
>>>> re-using the existing __migrate_folio_record() and __migrate_folio_extract() :)
>>>
>>> Can these concerns be addressed by clearing dst mlocked after lru_add_drain()
>>> but before remove_migration_pte()?
>>
>> IMHO, that seems too hacky to me. I still prefer to rely on the migration
>> process of the mlocked pages.
>
> BTW, Yosry tried to address the overlap of the lru and mlock_count fields:
> https://lore.kernel.org/lkml/20230618065719.1363271-1-yosryahmed@google.com/
> But lore doesn't group all the patches.

Thanks for the information. I'd like to review and test whether this work can continue.