Subject: [RFC][PATCH 13/13] mm/migrate: new zone_reclaim_mode to enable reclaim migration
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, Dave Hansen, yang.shi@linux.alibaba.com, rientjes@google.com, ying.huang@intel.com, dan.j.williams@intel.com,
	david@redhat.com, osalvador@suse.de
From: Dave Hansen
Date: Mon, 25 Jan 2021 16:34:36 -0800
References: <20210126003411.2AC51464@viggo.jf.intel.com>
In-Reply-To: <20210126003411.2AC51464@viggo.jf.intel.com>
Message-Id: <20210126003436.80749D77@viggo.jf.intel.com>

From: Dave Hansen

Some method is obviously needed to enable reclaim-based migration.

Just like traditional autonuma, some workloads will benefit, such as
workloads with more "static" configurations where hot pages stay hot
and cold pages stay cold.  If pages come and go from the hot and cold
sets, the benefits of this approach will be more limited.

The benefits are truly workload-based and *not* hardware-based.  We do
not believe that there is a viable threshold where certain hardware
configurations should have this mechanism enabled while others do not.

To be conservative, earlier work defaulted to disabling reclaim-based
migration and did not include a mechanism to enable it.  This patch
proposes extending the existing "zone_reclaim_mode" (now really
node_reclaim_mode) as a method to enable it.  We are open to any
alternative that allows end users to enable this mechanism or disable
it if workload harm is detected (just like traditional autonuma).

Signed-off-by: Dave Hansen
Cc: Yang Shi
Cc: David Rientjes
Cc: Huang Ying
Cc: Dan Williams
Cc: David Hildenbrand
Cc: osalvador
---

 b/Documentation/admin-guide/sysctl/vm.rst |    9 +++++++++
 b/include/linux/swap.h                    |    3 ++-
 b/include/uapi/linux/mempolicy.h          |    1 +
 b/mm/vmscan.c                             |    6 ++++--
 4 files changed, 16 insertions(+), 3 deletions(-)

diff -puN Documentation/admin-guide/sysctl/vm.rst~RECLAIM_MIGRATE Documentation/admin-guide/sysctl/vm.rst
--- a/Documentation/admin-guide/sysctl/vm.rst~RECLAIM_MIGRATE	2021-01-25 16:23:43.721866624 -0800
+++ b/Documentation/admin-guide/sysctl/vm.rst	2021-01-25 16:23:43.732866624 -0800
@@ -971,6 +971,7 @@ This is value OR'ed together of
 1	Zone reclaim on
 2	Zone reclaim writes dirty pages out
 4	Zone reclaim swaps pages
+8	Zone reclaim migrates pages
 =	===================================
 
 zone_reclaim_mode is disabled by default. For file servers or workloads
@@ -995,3 +996,11 @@ of other processes running on other node
 Allowing regular swap effectively restricts allocations to the local
 node unless explicitly overridden by memory policies or cpuset
 configurations.
+
+Page migration during reclaim is intended for systems with tiered memory
+configurations.  These systems have multiple types of memory with varied
+performance characteristics instead of plain NUMA systems where the same
+kind of memory is found at varied distances.  Allowing page migration
+during reclaim enables these systems to migrate pages from fast tiers to
+slow tiers when the fast tier is under pressure.  This migration is
+performed before swap.

diff -puN include/linux/swap.h~RECLAIM_MIGRATE include/linux/swap.h
--- a/include/linux/swap.h~RECLAIM_MIGRATE	2021-01-25 16:23:43.723866624 -0800
+++ b/include/linux/swap.h	2021-01-25 16:23:43.732866624 -0800
@@ -384,7 +384,8 @@ extern int sysctl_min_slab_ratio;
 
 static inline bool node_reclaim_enabled(void)
 {
 	/* Is any node_reclaim_mode bit set? */
-	return node_reclaim_mode & (RECLAIM_ZONE|RECLAIM_WRITE|RECLAIM_UNMAP);
+	return node_reclaim_mode & (RECLAIM_ZONE|RECLAIM_WRITE|
+				    RECLAIM_UNMAP|RECLAIM_MIGRATE);
 }
 
 extern void check_move_unevictable_pages(struct pagevec *pvec);

diff -puN include/uapi/linux/mempolicy.h~RECLAIM_MIGRATE include/uapi/linux/mempolicy.h
--- a/include/uapi/linux/mempolicy.h~RECLAIM_MIGRATE	2021-01-25 16:23:43.725866624 -0800
+++ b/include/uapi/linux/mempolicy.h	2021-01-25 16:23:43.732866624 -0800
@@ -69,5 +69,6 @@ enum {
 #define RECLAIM_ZONE	(1<<0)	/* Run shrink_inactive_list on the zone */
 #define RECLAIM_WRITE	(1<<1)	/* Writeout pages during reclaim */
 #define RECLAIM_UNMAP	(1<<2)	/* Unmap pages during reclaim */
+#define RECLAIM_MIGRATE	(1<<3)	/* Migrate to other nodes during reclaim */
 
 #endif /* _UAPI_LINUX_MEMPOLICY_H */

diff -puN mm/vmscan.c~RECLAIM_MIGRATE mm/vmscan.c
--- a/mm/vmscan.c~RECLAIM_MIGRATE	2021-01-25 16:23:43.728866624 -0800
+++ b/mm/vmscan.c	2021-01-25 16:23:43.734866624 -0800
@@ -1075,6 +1075,9 @@ static bool migrate_demote_page_ok(struc
 	VM_BUG_ON_PAGE(PageHuge(page), page);
 	VM_BUG_ON_PAGE(PageLRU(page), page);
 
+	if (!(node_reclaim_mode & RECLAIM_MIGRATE))
+		return false;
+
 	/* It is pointless to do demotion in memcg reclaim */
 	if (cgroup_reclaim(sc))
 		return false;
@@ -1084,8 +1087,7 @@ static bool migrate_demote_page_ok(struc
 	if (PageTransHuge(page) && !thp_migration_supported())
 		return false;
 
-	// FIXME: actually enable this later in the series
-	return false;
+	return true;
 }
_
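
For reference (not part of the patch): a minimal userspace sketch of how
an administrator might inspect the mode and OR in the new bit.  It
assumes the standard /proc/sys/vm/zone_reclaim_mode sysctl file and
mirrors the bit values documented above (1, 2, 4, 8); the program name
and output format are made up for illustration.

/*
 * reclaim_mode.c - print the decoded zone_reclaim_mode; with a numeric
 * argument, OR those bits into the current value (requires root).
 */
#include <stdio.h>
#include <stdlib.h>

#define RECLAIM_ZONE	(1 << 0)
#define RECLAIM_WRITE	(1 << 1)
#define RECLAIM_UNMAP	(1 << 2)
#define RECLAIM_MIGRATE	(1 << 3)	/* new bit added by this patch */

static const char SYSCTL_PATH[] = "/proc/sys/vm/zone_reclaim_mode";

int main(int argc, char **argv)
{
	FILE *f = fopen(SYSCTL_PATH, "r");
	int mode = 0;

	if (!f || fscanf(f, "%d", &mode) != 1) {
		perror(SYSCTL_PATH);
		return 1;
	}
	fclose(f);

	printf("zone_reclaim_mode = %d\n", mode);
	printf("  reclaim: %s\n", (mode & RECLAIM_ZONE)    ? "on" : "off");
	printf("  write:   %s\n", (mode & RECLAIM_WRITE)   ? "on" : "off");
	printf("  unmap:   %s\n", (mode & RECLAIM_UNMAP)   ? "on" : "off");
	printf("  migrate: %s\n", (mode & RECLAIM_MIGRATE) ? "on" : "off");

	if (argc > 1) {
		/* OR the requested bits into the current mode. */
		int want = atoi(argv[1]);

		f = fopen(SYSCTL_PATH, "w");
		if (!f) {
			perror(SYSCTL_PATH);
			return 1;
		}
		fprintf(f, "%d\n", mode | want);
		fclose(f);
	}
	return 0;
}

Passing "8" ORs RECLAIM_MIGRATE into whatever bits are already set;
from a default of 0, writing 8 is enough for node_reclaim_enabled()
to return true with this patch applied, so demotion during reclaim
gets gated on that single bit.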