From: Yang Shi
Date: Tue, 1 Mar 2022 13:58:58 -0800
Subject: Re: [PATCH -V14 2/3] NUMA balancing: optimize page placement for memory tiering system
To: Huang Ying
Cc: Peter Zijlstra, Mel Gorman, Andrew Morton, Linux MM,
    Linux Kernel Mailing List, Feng Tang, Baolin Wang, Oscar Salvador,
    Johannes Weiner, Michal Hocko, Rik van Riel, Dave Hansen, Zi Yan,
    Wei Xu, Shakeel Butt, zhongjiang-ali, Randy Dunlap
In-Reply-To: <20220301085329.3210428-3-ying.huang@intel.com>
References: <20220301085329.3210428-1-ying.huang@intel.com> <20220301085329.3210428-3-ying.huang@intel.com>
Content-Type: text/plain; charset="UTF-8"

On Tue, Mar 1, 2022 at 12:54 AM Huang Ying wrote:
>
> With the advent of various new memory types, some machines will have
> multiple types of memory, e.g. DRAM and PMEM (persistent memory). The
> memory subsystem of these machines can be called a memory tiering
> system, because the performance of the different types of memory is
> usually different.
>
> In such a system, because the memory access pattern changes over
> time, some pages in the slow memory may become hot globally. So in
> this patch, the NUMA balancing mechanism is enhanced to optimize page
> placement among the different memory types according to hot/cold
> dynamically.
>
> In a typical memory tiering system, there are CPUs, fast memory and
> slow memory in each physical NUMA node. The CPUs and the fast memory
> will be put in one logical node (called fast memory node), while the
> slow memory will be put in another (faked) logical node (called slow
> memory node). That is, the fast memory is regarded as local while the
> slow memory is regarded as remote. So it's possible for the recently
> accessed pages in the slow memory node to be promoted to the fast
> memory node via the existing NUMA balancing mechanism.
>
> The original NUMA balancing mechanism will stop migrating pages if
> the free memory of the target node falls below the high watermark.
> This is a reasonable policy if there's only one memory type. But it
> makes the original NUMA balancing mechanism almost useless for
> optimizing page placement among different memory types. Details are
> as follows.
>
> It is common for the working-set size of the workload to be larger
> than the size of the fast memory nodes. Otherwise, it's unnecessary
> to use the slow memory at all. So, there are almost never enough free
> pages in the fast memory nodes, and the globally hot pages in the
> slow memory node cannot be promoted to the fast memory node. To solve
> the issue, we have two choices,
>
> a. Ignore the free pages watermark checking when promoting hot pages
>    from the slow memory node to the fast memory node. This will
>    create some memory pressure in the fast memory node and thus
>    trigger memory reclaim, so that the cold pages in the fast memory
>    node will be demoted to the slow memory node.
>
> b. Define a new watermark called wmark_promo which is higher than
>    wmark_high, and have kswapd reclaim pages until free pages reach
>    that watermark. The scenario is as follows: when we want to
>    promote hot pages from slow memory to fast memory, but the fast
>    memory's free pages would drop below the high watermark with such
>    a promotion, we wake up kswapd with the wmark_promo watermark in
>    order to demote cold pages and free up some space. So, next time
>    we want to promote hot pages we might have a chance of doing so.
>
> Choice "a" may create high memory pressure in the fast memory node.
> If the memory pressure of the workload is high, the memory pressure
> may become so high that the memory allocation latency of the workload
> is affected, e.g. direct reclaim may be triggered.
>
> Choice "b" works much better in this respect.
> If the memory pressure of the workload is high, hot page promotion
> will stop earlier because its allocation watermark is higher than
> that of the normal memory allocation. So in this patch, choice "b" is
> implemented. A new zone watermark (WMARK_PROMO) is added, which is
> larger than the high watermark and can be controlled via
> watermark_scale_factor.
>
> In addition to the original page placement optimization among
> sockets, the NUMA balancing mechanism is extended to optimize page
> placement according to hot/cold among different memory types. So the
> sysctl user space interface (numa_balancing) is extended in a
> backward compatible way as follows, so that users can enable/disable
> these functionalities individually.
>
> The sysctl is converted from a Boolean value to a bit field. The
> definition of the flags is,
>
> - 0: NUMA_BALANCING_DISABLED
> - 1: NUMA_BALANCING_NORMAL
> - 2: NUMA_BALANCING_MEMORY_TIERING
>
> We have tested the patch with the pmbench memory accessing benchmark
> with an 80:20 read/write ratio and a Gauss access address
> distribution on a 2-socket Intel server with Optane DC Persistent
> Memory. The test results show that the pmbench score can improve by
> up to 95.9%.
>
> Thanks to Andrew Morton for helping fix the document format error.
>
> Signed-off-by: "Huang, Ying"
> Tested-by: Baolin Wang
> Reviewed-by: Baolin Wang
> Reviewed-by: Oscar Salvador
> Acked-by: Johannes Weiner
> Cc: Andrew Morton
> Cc: Michal Hocko
> Cc: Rik van Riel
> Cc: Mel Gorman
> Cc: Peter Zijlstra
> Cc: Dave Hansen
> Cc: Yang Shi
> Cc: Zi Yan
> Cc: Wei Xu
> Cc: Shakeel Butt
> Cc: zhongjiang-ali
> Cc: Randy Dunlap
> Cc: linux-kernel@vger.kernel.org
> Cc: linux-mm@kvack.org
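
As a side note for readers, the bit-field interface above can be
exercised from user space by writing the ORed value to
/proc/sys/kernel/numa_balancing (writing requires CAP_SYS_ADMIN per the
handler below). A minimal sketch, not part of the patch; the flag
values are the ones defined in include/linux/sched/sysctl.h by this
series:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Flag values from include/linux/sched/sysctl.h in this patch. */
#define NUMA_BALANCING_DISABLED       0x0
#define NUMA_BALANCING_NORMAL         0x1
#define NUMA_BALANCING_MEMORY_TIERING 0x2

int main(void)
{
	/* Enable both classic NUMA balancing and tiering promotion (= 3). */
	int mode = NUMA_BALANCING_NORMAL | NUMA_BALANCING_MEMORY_TIERING;
	char buf[8];
	int fd, len;

	fd = open("/proc/sys/kernel/numa_balancing", O_WRONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	len = snprintf(buf, sizeof(buf), "%d\n", mode);
	if (write(fd, buf, len) != len) {
		perror("write");
		close(fd);
		return 1;
	}
	close(fd);
	return 0;
}
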
> ---
>  Documentation/admin-guide/sysctl/kernel.rst | 29 ++++++++++++++-------
>  include/linux/mmzone.h                      |  1 +
>  include/linux/sched/sysctl.h                | 10 +++++++
>  kernel/sched/core.c                         | 21 ++++++++++++---
>  kernel/sysctl.c                             |  2 +-
>  mm/migrate.c                                | 16 ++++++++++--
>  mm/page_alloc.c                             |  3 ++-
>  mm/vmscan.c                                 |  6 ++++-
>  8 files changed, 70 insertions(+), 18 deletions(-)
>
> diff --git a/Documentation/admin-guide/sysctl/kernel.rst b/Documentation/admin-guide/sysctl/kernel.rst
> index d359bcfadd39..fdfd2b684822 100644
> --- a/Documentation/admin-guide/sysctl/kernel.rst
> +++ b/Documentation/admin-guide/sysctl/kernel.rst
> @@ -595,16 +595,23 @@ Documentation/admin-guide/kernel-parameters.rst).
>  numa_balancing
>  ==============
>
> -Enables/disables automatic page fault based NUMA memory
> -balancing. Memory is moved automatically to nodes
> -that access it often.
> +Enables/disables and configures automatic page fault based NUMA memory
> +balancing. Memory is moved automatically to nodes that access it often.
> +The value to set can be the result of ORing the following:
>
> -Enables/disables automatic NUMA memory balancing. On NUMA machines, there
> -is a performance penalty if remote memory is accessed by a CPU. When this
> -feature is enabled the kernel samples what task thread is accessing memory
> -by periodically unmapping pages and later trapping a page fault. At the
> -time of the page fault, it is determined if the data being accessed should
> -be migrated to a local memory node.
> += =================================
> +0 NUMA_BALANCING_DISABLED
> +1 NUMA_BALANCING_NORMAL
> +2 NUMA_BALANCING_MEMORY_TIERING
> += =================================
> +
> +Or NUMA_BALANCING_NORMAL to optimize page placement among different
> +NUMA nodes to reduce remote accessing. On NUMA machines, there is a
> +performance penalty if remote memory is accessed by a CPU. When this
> +feature is enabled the kernel samples what task thread is accessing
> +memory by periodically unmapping pages and later trapping a page
> +fault. At the time of the page fault, it is determined if the data
> +being accessed should be migrated to a local memory node.
>
>  The unmapping of pages and trapping faults incur additional overhead that
>  ideally is offset by improved memory locality but there is no universal
> @@ -615,6 +622,10 @@ faults may be controlled by the `numa_balancing_scan_period_min_ms,
>  numa_balancing_scan_delay_ms, numa_balancing_scan_period_max_ms,
>  numa_balancing_scan_size_mb`_, and numa_balancing_settle_count sysctls.
>
> +Or NUMA_BALANCING_MEMORY_TIERING to optimize page placement among
> +different types of memory (represented as different NUMA nodes) to
> +place the hot pages in the fast memory. This is implemented based on
> +unmapping and page fault too.
>
>  numa_balancing_scan_period_min_ms, numa_balancing_scan_delay_ms, numa_balancing_scan_period_max_ms, numa_balancing_scan_size_mb
>  ===============================================================================================================================
>
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index 44bd054ca12b..06bc55db19bf 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -342,6 +342,7 @@ enum zone_watermarks {
>         WMARK_MIN,
>         WMARK_LOW,
>         WMARK_HIGH,
> +       WMARK_PROMO,

TBH I'm not a fan of yet another watermark since we already have quite
a few of them (the regular watermarks, the watermark boost, and now the
watermark promo). But it is not a big deal and not a gating problem for
now since it is not user visible. We definitely could try to
consolidate some of them later.

The patch looks fine to me.

Reviewed-by: Yang Shi
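
For readers keeping track of the watermark ladder, here is a rough
user-space model of where the new mark sits, following the
__setup_per_zone_wmarks() hunk further down. The zone size and min
watermark are made-up example numbers, and the kernel additionally
applies a floor based on the min watermark that this sketch ignores:

#include <stdio.h>

/*
 * Simplified model of the watermark ladder after this patch:
 * low = min + tmp, high = low + tmp, promo = high + tmp, where tmp is
 * roughly managed_pages * watermark_scale_factor / 10000.
 */
int main(void)
{
	unsigned long managed_pages = 4UL << 20;  /* ~16 GiB zone, 4K pages */
	unsigned long scale_factor = 10;          /* default watermark_scale_factor */
	unsigned long min = 11000;                /* example min watermark, pages */
	unsigned long tmp = managed_pages * scale_factor / 10000;

	unsigned long low   = min + tmp;
	unsigned long high  = low + tmp;
	unsigned long promo = high + tmp;

	printf("min=%lu low=%lu high=%lu promo=%lu (pages)\n",
	       min, low, high, promo);
	return 0;
}
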
>         NR_WMARK
>  };
>
> diff --git a/include/linux/sched/sysctl.h b/include/linux/sched/sysctl.h
> index c19dd5a2c05c..b5eec8854c5a 100644
> --- a/include/linux/sched/sysctl.h
> +++ b/include/linux/sched/sysctl.h
> @@ -23,6 +23,16 @@ enum sched_tunable_scaling {
>         SCHED_TUNABLESCALING_END,
>  };
>
> +#define NUMA_BALANCING_DISABLED         0x0
> +#define NUMA_BALANCING_NORMAL           0x1
> +#define NUMA_BALANCING_MEMORY_TIERING   0x2
> +
> +#ifdef CONFIG_NUMA_BALANCING
> +extern int sysctl_numa_balancing_mode;
> +#else
> +#define sysctl_numa_balancing_mode      0
> +#endif
> +
>  /*
>   * control realtime throttling:
>   *
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index fcf0c180617c..c25348e9ae3a 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -4280,7 +4280,9 @@ DEFINE_STATIC_KEY_FALSE(sched_numa_balancing);
>
>  #ifdef CONFIG_NUMA_BALANCING
>
> -void set_numabalancing_state(bool enabled)
> +int sysctl_numa_balancing_mode;
> +
> +static void __set_numabalancing_state(bool enabled)
>  {
>         if (enabled)
>                 static_branch_enable(&sched_numa_balancing);
> @@ -4288,13 +4290,22 @@ void set_numabalancing_state(bool enabled)
>                 static_branch_disable(&sched_numa_balancing);
>  }
>
> +void set_numabalancing_state(bool enabled)
> +{
> +       if (enabled)
> +               sysctl_numa_balancing_mode = NUMA_BALANCING_NORMAL;
> +       else
> +               sysctl_numa_balancing_mode = NUMA_BALANCING_DISABLED;
> +       __set_numabalancing_state(enabled);
> +}
> +
>  #ifdef CONFIG_PROC_SYSCTL
>  int sysctl_numa_balancing(struct ctl_table *table, int write,
>                           void *buffer, size_t *lenp, loff_t *ppos)
>  {
>         struct ctl_table t;
>         int err;
> -       int state = static_branch_likely(&sched_numa_balancing);
> +       int state = sysctl_numa_balancing_mode;
>
>         if (write && !capable(CAP_SYS_ADMIN))
>                 return -EPERM;
> @@ -4304,8 +4315,10 @@ int sysctl_numa_balancing(struct ctl_table *table, int write,
>         err = proc_dointvec_minmax(&t, write, buffer, lenp, ppos);
>         if (err < 0)
>                 return err;
> -       if (write)
> -               set_numabalancing_state(state);
> +       if (write) {
> +               sysctl_numa_balancing_mode = state;
> +               __set_numabalancing_state(state);
> +       }
>         return err;
>  }
>  #endif
> diff --git a/kernel/sysctl.c b/kernel/sysctl.c
> index 5ae443b2882e..c90a564af720 100644
> --- a/kernel/sysctl.c
> +++ b/kernel/sysctl.c
> @@ -1689,7 +1689,7 @@ static struct ctl_table kern_table[] = {
>                 .mode           = 0644,
>                 .proc_handler   = sysctl_numa_balancing,
>                 .extra1         = SYSCTL_ZERO,
> -               .extra2         = SYSCTL_ONE,
> +               .extra2         = SYSCTL_FOUR,
>         },
>  #endif /* CONFIG_NUMA_BALANCING */
>         {
> diff --git a/mm/migrate.c b/mm/migrate.c
> index cdeaf01e601a..08ca9b9b142e 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -51,6 +51,7 @@
>  #include
>  #include
>  #include
> +#include
>
>  #include
>
> @@ -2034,16 +2035,27 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
>  {
>         int page_lru;
>         int nr_pages = thp_nr_pages(page);
> +       int order = compound_order(page);
>
> -       VM_BUG_ON_PAGE(compound_order(page) && !PageTransHuge(page), page);
> +       VM_BUG_ON_PAGE(order && !PageTransHuge(page), page);
>
>         /* Do not migrate THP mapped by multiple processes */
>         if (PageTransHuge(page) && total_mapcount(page) > 1)
>                 return 0;
>
>         /* Avoid migrating to a node that is nearly full */
> -       if (!migrate_balanced_pgdat(pgdat, nr_pages))
> +       if (!migrate_balanced_pgdat(pgdat, nr_pages)) {
> +               int z;
> +
> +               if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING))
> +                       return 0;
> +               for (z = pgdat->nr_zones - 1; z >= 0; z--) {
> +                       if (populated_zone(pgdat->node_zones + z))
> +                               break;
> +               }
> +               wakeup_kswapd(pgdat->node_zones + z, 0, order, ZONE_MOVABLE);
>                 return 0;
> +       }
>
>         if (isolate_lru_page(page))
>                 return 0;
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 3589febc6d31..295b8f1fc31d 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -8474,7 +8474,8 @@ static void __setup_per_zone_wmarks(void)
>
>                 zone->watermark_boost = 0;
>                 zone->_watermark[WMARK_LOW]  = min_wmark_pages(zone) + tmp;
> -               zone->_watermark[WMARK_HIGH] = min_wmark_pages(zone) + tmp * 2;
> +               zone->_watermark[WMARK_HIGH] = low_wmark_pages(zone) + tmp;
> +               zone->_watermark[WMARK_PROMO] = high_wmark_pages(zone) + tmp;
>
>                 spin_unlock_irqrestore(&zone->lock, flags);
>         }
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 6dd8f455bb82..199b8aadbdd6 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -56,6 +56,7 @@
>
>  #include
>  #include
> +#include
>
>  #include "internal.h"
>
> @@ -3988,7 +3989,10 @@ static bool pgdat_balanced(pg_data_t *pgdat, int order, int highest_zoneidx)
>                 if (!managed_zone(zone))
>                         continue;
>
> -               mark = high_wmark_pages(zone);
> +               if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING)
> +                       mark = wmark_pages(zone, WMARK_PROMO);
> +               else
> +                       mark = high_wmark_pages(zone);
>                 if (zone_watermark_ok_safe(zone, order, mark, highest_zoneidx))
>                         return true;
>         }
> --
> 2.30.2
>
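
To summarize the behaviour change in one place, here is a toy
user-space model (illustrative names only, not kernel code) of the
promotion decision the mm/migrate.c and mm/vmscan.c hunks implement:
without NUMA_BALANCING_MEMORY_TIERING a failed watermark check simply
skips the promotion, while with it the check also wakes kswapd so
reclaim/demotion runs until free pages reach the higher WMARK_PROMO
target.

#include <stdbool.h>
#include <stdio.h>

/* Toy model only: made-up names and numbers, not kernel APIs. */
enum mode { DISABLED = 0x0, NORMAL = 0x1, MEMORY_TIERING = 0x2 };

struct zone_model {
	unsigned long free, high, promo;   /* pages */
};

/* Mirrors the shape of the numamigrate_isolate_page() change above:
 * returns true if the page may be promoted now; in tiering mode a
 * failed watermark check also requests background reclaim (demotion)
 * toward the promo watermark instead of just giving up. */
static bool try_promote(struct zone_model *z, int sysctl_mode,
			unsigned long nr_pages, bool *wake_kswapd)
{
	*wake_kswapd = false;
	if (z->free >= z->high + nr_pages)
		return true;                     /* enough headroom */
	if (!(sysctl_mode & MEMORY_TIERING))
		return false;                    /* old behaviour: skip */
	*wake_kswapd = true;                     /* reclaim toward promo mark */
	return false;
}

int main(void)
{
	struct zone_model fast = { .free = 1000, .high = 2000, .promo = 2500 };
	bool wake;
	bool ok = try_promote(&fast, NORMAL | MEMORY_TIERING, 1, &wake);

	printf("promote now: %d, wake kswapd to reclaim toward %lu free pages: %d\n",
	       ok, fast.promo, wake);
	return 0;
}
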