Subject: Re: [PATCH v2 0/5] variable-order, large folios for anonymous memory
Date: Wed, 19 Jul 2023 16:49:10 +0100
From: Ryan Roberts
To: Zi Yan, David Hildenbrand
Cc: Matthew Wilcox, Andrew Morton, "Kirill A. Shutemov", Yin Fengwei,
 Yu Zhao, Catalin Marinas, Will Deacon, Anshuman Khandual, Yang Shi,
 linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org
References: <20230703135330.1865927-1-ryan.roberts@arm.com>
 <78159ed0-a233-9afb-712f-2df1a4858b22@redhat.com>
 <4d4c45a2-0037-71de-b182-f516fee07e67@arm.com>

On 10/07/2023 17:53, Zi Yan wrote:
> On 7 Jul 2023, at 9:24, David Hildenbrand wrote:
>
>> On 07.07.23 15:12, Matthew Wilcox wrote:
>>> On Fri, Jul 07, 2023 at 01:40:53PM +0200, David Hildenbrand wrote:
>>>> On 06.07.23 10:02, Ryan Roberts wrote:
>>>> But can you comment on the page migration part (IOW did you try it already)?
>>>>
>>>> For example, memory hotunplug, CMA, MCE handling and compaction all rely on
>>>> page migration of something that was allocated using GFP_MOVABLE to actually
>>>> work.
>>>>
>>>> Compaction seems to skip any higher-order folios, but the question is whether
>>>> the underlying migration itself works.
>>>>
>>>> If it already works: great! If not, this really has to be tackled early,
>>>> because otherwise we'll be breaking the GFP_MOVABLE semantics.
>>>
>>> I have looked at this a bit. _Migration_ should be fine. _Compaction_
>>> is not.
>>
>> Thanks! Very nice if at least ordinary migration works.
>>
>>> If you look at a function like folio_migrate_mapping(), it all seems
>>> appropriately folio-ised. There might be something in there that is
>>> slightly wrong, but that would just be a bug to fix, not a huge
>>> architectural problem.
>>>
>>> The problem comes in the callers of migrate_pages(). They pass a
>>> new_folio_t callback. alloc_migration_target() is the usual one passed,
>>> and as far as I can tell it is fine. I've seen no problems reported with it.
>>>
>>> compaction_alloc() is a disaster, and I don't know how to fix it.
>>> The compaction code has its own allocator, which is populated with order-0
>>> folios. How it populates that freelist is awful ... see split_map_pages().
>>
>> Yeah, all that code was written under the assumption that we're moving
>> order-0 pages (which is what the anon+pagecache pages part does).
>>
>> From what I recall, we're allocating order-0 pages from the high memory
>> addresses, so we can migrate from low memory addresses, effectively freeing
>> up low memory addresses and filling high memory addresses.
>>
>> Adjusting that will be ... interesting. Instead of allocating order-0 pages
>> from high addresses, we might want to allocate "as large as possible" ("grab
>> what we can") from high addresses and then have our own kind of buddy for
>> allocating a compaction destination page from that pool, depending on the
>> source page. Nasty.
>
> We probably do not need a pool, since before migration we have already
> isolated the folios to be migrated and can compute stats on how many folios
> there are at each order. Then we can isolate free pages based on those stats
> and avoid splitting free pages all the way down to order-0. We can sort the
> source folios by order and isolate free pages from largest order to smallest.
> That would avoid needing a free page pool.

Hi Zi, I just wanted to check: is this something you are working on or
planning to work on? I'm trying to maintain a list of all the items that need
to get sorted for large anon folios. It would be great to put your name
against it! ;-)

> --
> Best Regards,
> Yan, Zi
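
[Editor's sketch: the following is a rough, hypothetical illustration of the
order-stats scheme Zi describes above, not real kernel code and not any
patch from this thread. Apart from struct folio, folio_order(), min_t() and
list_for_each_entry(), every name here (the src_demand histogram and both
functions) is invented, and the "largest outstanding demand first" split
policy is just one plausible way to consume the stats.]

/*
 * Hypothetical sketch only. Pass 1: histogram the orders of the
 * already-isolated source folios (linked through folio->lru), so we
 * know how many destination blocks of each order migration will need.
 */
static unsigned int src_demand[MAX_ORDER + 1];

static void count_source_orders(struct list_head *src_folios)
{
	struct folio *folio;

	list_for_each_entry(folio, src_folios, lru)
		src_demand[folio_order(folio)]++;
}

/*
 * Pass 2: for each free block the free-page scanner finds, pick the
 * order it should be split down to: the largest order not exceeding
 * the block's own order that still has outstanding demand. This is
 * the part that would replace the unconditional split-to-order-0 in
 * split_map_pages(). A real implementation would also have to return
 * the buddy-style remainders of the split to the scanner rather than
 * discard them; that bookkeeping is omitted here.
 */
static unsigned int pick_dst_order(unsigned int free_order)
{
	int order;

	for (order = min_t(int, free_order, MAX_ORDER); order >= 0; order--) {
		if (src_demand[order]) {
			src_demand[order]--;
			return order;
		}
	}
	/* No outstanding demand fits this block; keep it whole. */
	return free_order;
}

[Where the histogram would live (compact_control or elsewhere) and how the
split remainders get recycled is the genuinely hard part; the sketch only
shows the accounting that avoids splitting every free block to order-0.]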