From: "Huang, Ying"
To: Johannes Weiner
Cc: Peter Zijlstra, Mel Gorman, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Feng Tang, Baolin Wang, Andrew Morton,
    Michal Hocko, Rik van Riel, Mel Gorman, Dave Hansen, Yang Shi,
    Zi Yan, Wei Xu, osalvador, Shakeel Butt, zhongjiang-ali
Subject: Re: [PATCH -V11 2/3] NUMA balancing: optimize page placement
    for memory tiering system
References: <20220128082751.593478-1-ying.huang@intel.com>
    <20220128082751.593478-3-ying.huang@intel.com>
    <87ee4cliia.fsf@yhuang6-desk2.ccr.corp.intel.com>
Date: Fri, 18 Feb 2022 10:15:46 +0800
In-Reply-To: (Johannes Weiner's message of "Thu, 17 Feb 2022 11:26:04 -0500")
Message-ID: <87h78w7wdp.fsf@yhuang6-desk2.ccr.corp.intel.com>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/27.1 (gnu/linux)
MIME-Version: 1.0
Content-Type: text/plain; charset=ascii

Johannes Weiner writes:

> Hi Huang,
>
> Sorry, I didn't see this reply until you sent out the new version
> already :( Apologies.

Never mind!

> On Wed, Feb 09, 2022 at 01:24:29PM +0800, Huang, Ying wrote:
>> > On Fri, Jan 28, 2022 at 04:27:50PM +0800, Huang Ying wrote:
>> >> @@ -615,6 +622,10 @@ faults may be controlled by the `numa_balancing_scan_period_min_ms,
>> >>  numa_balancing_scan_delay_ms, numa_balancing_scan_period_max_ms,
>> >>  numa_balancing_scan_size_mb`_, and numa_balancing_settle_count sysctls.
>> >>
>> >> +Or NUMA_BALANCING_MEMORY_TIERING to optimize page placement among
>> >> +different types of memory (represented as different NUMA nodes) to
>> >> +place the hot pages in the fast memory.  This is implemented based on
>> >> +unmapping and page fault too.
>> >
>> > NORMAL | TIERING appears to be a non-sensical combination.
>> >
>> > Would it be better to have a tristate (disabled, normal, tiering)
>> > rather than a mask?
>>
>> NORMAL is for balancing cross-socket memory accesses among DRAM nodes.
>> TIERING is for optimizing page placement between DRAM and PMEM within
>> one socket.  We think it's possible to do both.
>>
>> For example, with [3/3] of the patchset,
>>
>> - TIERING: because DRAM pages aren't made PROT_NONE, balancing among
>>   DRAM nodes is disabled.
>>
>> - NORMAL | TIERING: both cross-socket balancing among DRAM nodes and
>>   page placement optimization between DRAM and PMEM are enabled.
>
> Ok, I get it.  So NORMAL would enable PROT_NONE sampling on all nodes,
> and TIERING would additionally raise the watermarks on DRAM nodes.
>
> Thanks!
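(To make the combinations concrete: the knob is a bit mask, so, going
by the definitions this series adds,

	#define NUMA_BALANCING_DISABLED		0x0
	#define NUMA_BALANCING_NORMAL		0x1
	#define NUMA_BALANCING_MEMORY_TIERING	0x2

writing 1, 2, or 3 to /proc/sys/kernel/numa_balancing enables NORMAL,
TIERING, or NORMAL | TIERING, respectively.)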
>> >> @@ -2034,16 +2035,30 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
>> >>  {
>> >>  	int page_lru;
>> >>  	int nr_pages = thp_nr_pages(page);
>> >> +	int order = compound_order(page);
>> >>
>> >> -	VM_BUG_ON_PAGE(compound_order(page) && !PageTransHuge(page), page);
>> >> +	VM_BUG_ON_PAGE(order && !PageTransHuge(page), page);
>> >>
>> >>  	/* Do not migrate THP mapped by multiple processes */
>> >>  	if (PageTransHuge(page) && total_mapcount(page) > 1)
>> >>  		return 0;
>> >>
>> >>  	/* Avoid migrating to a node that is nearly full */
>> >> -	if (!migrate_balanced_pgdat(pgdat, nr_pages))
>> >> +	if (!migrate_balanced_pgdat(pgdat, nr_pages)) {
>> >> +		int z;
>> >> +
>> >> +		if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING) ||
>> >> +		    !numa_demotion_enabled)
>> >> +			return 0;
>> >> +		if (next_demotion_node(pgdat->node_id) == NUMA_NO_NODE)
>> >> +			return 0;
>> >
>> > The encoded behavior doesn't seem very user-friendly: unless the user
>> > enables numa demotion in a separate flag, enabling numa balancing in
>> > tiered mode will silently do nothing.
>>
>> In theory, TIERING still does something even with numa_demotion_enabled
>> == false; there it works more like the original NUMA balancing.  If
>> there's some free space in the DRAM node (for example, some programs
>> exit), some PMEM pages will be promoted to DRAM.  But as described in
>> the change log, this isn't good enough for page placement optimization.
>
> Right, so it's a behavior that likely isn't going to be useful.
>
>> > Would it make more sense to have a central flag for the operation of
>> > tiered memory systems that will enable both promotion and demotion?
>>
>> IMHO, it may be possible for people to enable demotion alone.  For
>> example, if some people want to use a user-space page placement
>> optimization solution based on PMU counters, they may disable TIERING
>> but still use demotion as a way to avoid swapping in some situations.
>> Do you think this makes sense?
>
> Yes, it does.
>
>> > Alternatively, it could also ignore the state of demotion and promote
>> > anyway if asked to, resulting in regular reclaim to make room.  It
>> > might not be the most popular combination, but it would be in line
>> > with the zone_reclaim_mode policy of preferring reclaim over remote
>> > accesses.  It would make the knobs behave more as expected, and it's
>> > less convoluted than having flags select other user-visible flags.
>>
>> Sorry, I don't get your idea here.  Do you suggest adding another knob
>> like zone_reclaim_mode?  Then we could define some bits there to
>> control demotion and promotion?  If so, I still don't know how to fit
>> this into the existing NUMA balancing framework.
>
> No, I'm just suggesting to remove the !numa_demotion_enabled check
> from the promotion path on unbalanced nodes.  Keep the switches
> independent from each other.
>
> Like you said, demotion without promotion can be a valid config with a
> userspace promoter.
>
> And I'm saying promotion without demotion can be a valid config in a
> zone_reclaim_mode type of setup.
>
> We also seem to agree that degraded promotion when demotion is
> disabled likely isn't very useful to anybody.  So maybe it should be
> removed?
>
> It just comes down to user expectations.  There is no master switch
> that says "do the right thing on tiered systems", so absent of that I
> think it would be best to keep the semantics of each of the two knobs
> simple and predictable, without tricky interdependencies - like
> quietly degrading promotion behavior when demotion is disabled.
>
> Does that make sense?

Yes, it does.  I will do that in the next version!

Best Regards,
Huang, Ying
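P.S. In code form, my understanding of the agreed change is roughly the
following untested sketch of the hunk quoted above, with the two
demotion-side tests dropped (the kswapd wakeup at the end corresponds
to the remainder of the posted hunk, which was trimmed from the quote):

	/* Avoid migrating to a node that is nearly full */
	if (!migrate_balanced_pgdat(pgdat, nr_pages)) {
		int z;

		if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING))
			return 0;
		for (z = pgdat->nr_zones - 1; z >= 0; z--) {
			if (managed_zone(pgdat->node_zones + z))
				break;
		}
		/*
		 * Promote anyway; kswapd makes room on the target node
		 * via regular reclaim, or via demotion if that is
		 * enabled independently.
		 */
		wakeup_kswapd(pgdat->node_zones + z, 0, order, ZONE_MOVABLE);
		return 0;
	}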