Subject: [PATCH 2/2] mm/migrate: add CPU hotplug to demotion #ifdef
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, Dave Hansen, ying.huang@intel.com, mhocko@suse.com,
    weixugc@google.com, osalvador@suse.de, rientjes@google.com,
    dan.j.williams@intel.com, david@redhat.com, gthelen@google.com,
    yang.shi@linux.alibaba.com, akpm@linux-foundation.org
From: Dave Hansen
Date: Fri, 24 Sep 2021 09:12:55 -0700
References: <20210924161251.093CCD06@davehans-spike.ostc.intel.com>
In-Reply-To: <20210924161251.093CCD06@davehans-spike.ostc.intel.com>
Message-Id: <20210924161255.E5FE8F7E@davehans-spike.ostc.intel.com>

From: Dave Hansen

Once upon a time, the node demotion updates were driven solely by
memory hotplug events.  But now, there are handlers for both CPU
and memory hotplug.

However, the #ifdef around the code checks only memory hotplug.
A system that has HOTPLUG_CPU=y but MEMORY_HOTPLUG=n would miss
CPU hotplug events.

Update the #ifdef around the common code.  Add memory and
CPU-specific #ifdefs for their handlers.  These memory/CPU
#ifdefs avoid unused function warnings when their Kconfig option
is off.

Fixes: 884a6e5d1f93 ("mm/migrate: update node demotion order on hotplug events")
Signed-off-by: Dave Hansen
Cc: "Huang, Ying"
Cc: Michal Hocko
Cc: Wei Xu
Cc: Oscar Salvador
Cc: David Rientjes
Cc: Dan Williams
Cc: David Hildenbrand
Cc: Greg Thelen
Cc: Yang Shi
Cc: Andrew Morton
---

 b/mm/migrate.c | 46 +++++++++++++++++++++++++---------------------
 1 file changed, 25 insertions(+), 21 deletions(-)

diff -puN mm/migrate.c~add-cpu-hotplug-config mm/migrate.c
--- a/mm/migrate.c~add-cpu-hotplug-config	2021-09-24 09:12:31.308377810 -0700
+++ b/mm/migrate.c	2021-09-24 09:12:31.308377810 -0700
@@ -3066,7 +3066,7 @@ void migrate_vma_finalize(struct migrate
 EXPORT_SYMBOL(migrate_vma_finalize);
 #endif /* CONFIG_DEVICE_PRIVATE */
 
-#if defined(CONFIG_MEMORY_HOTPLUG)
+#if defined(CONFIG_MEMORY_HOTPLUG) || defined(CONFIG_HOTPLUG_CPU)
 /* Disable reclaim-based migration. */
 static void __disable_all_migrate_targets(void)
 {
@@ -3208,25 +3208,7 @@ static void set_migration_target_nodes(v
 	put_online_mems();
 }
 
-/*
- * React to hotplug events that might affect the migration targets
- * like events that online or offline NUMA nodes.
- *
- * The ordering is also currently dependent on which nodes have
- * CPUs.  That means we need CPU on/offline notification too.
- */
-static int migration_online_cpu(unsigned int cpu)
-{
-	set_migration_target_nodes();
-	return 0;
-}
-
-static int migration_offline_cpu(unsigned int cpu)
-{
-	set_migration_target_nodes();
-	return 0;
-}
-
+#if defined(CONFIG_MEMORY_HOTPLUG)
 /*
  * This leaves migrate-on-reclaim transiently disabled between
  * the MEM_GOING_OFFLINE and MEM_OFFLINE events.  This runs
@@ -3283,6 +3265,27 @@ static int __meminit migrate_on_reclaim_
 
 	return notifier_from_errno(0);
 }
+#endif /* CONFIG_MEMORY_HOTPLUG */
+
+#ifdef CONFIG_HOTPLUG_CPU
+/*
+ * React to hotplug events that might affect the migration targets
+ * like events that online or offline NUMA nodes.
+ *
+ * The ordering is also currently dependent on which nodes have
+ * CPUs.  That means we need CPU on/offline notification too.
+ */
+static int migration_online_cpu(unsigned int cpu)
+{
+	set_migration_target_nodes();
+	return 0;
+}
+
+static int migration_offline_cpu(unsigned int cpu)
+{
+	set_migration_target_nodes();
+	return 0;
+}
 
 static int __init migrate_on_reclaim_init(void)
 {
@@ -3303,4 +3306,5 @@ static int __init migrate_on_reclaim_ini
 	return 0;
 }
 late_initcall(migrate_on_reclaim_init);
-#endif /* CONFIG_MEMORY_HOTPLUG */
+#endif /* CONFIG_HOTPLUG_CPU */
+#endif /* CONFIG_MEMORY_HOTPLUG || CONFIG_HOTPLUG_CPU */
_
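
Purely as a reading aid and not part of the patch: a rough sketch, inferred
from the hunks above, of how the #ifdef blocks in mm/migrate.c end up nested
once this applies.  Function bodies are elided and the comments only
summarize what each block contains.

#if defined(CONFIG_MEMORY_HOTPLUG) || defined(CONFIG_HOTPLUG_CPU)
/* Common demotion code shared by both handlers, e.g.
 * __disable_all_migrate_targets() and set_migration_target_nodes().
 */

#if defined(CONFIG_MEMORY_HOTPLUG)
/* Memory hotplug notifier (migrate_on_reclaim_*), covering the
 * MEM_GOING_OFFLINE and MEM_OFFLINE events.
 */
#endif /* CONFIG_MEMORY_HOTPLUG */

#ifdef CONFIG_HOTPLUG_CPU
/* CPU on/offline callbacks migration_online_cpu() and
 * migration_offline_cpu(), plus the migrate_on_reclaim_init()
 * late_initcall.
 */
#endif /* CONFIG_HOTPLUG_CPU */
#endif /* CONFIG_MEMORY_HOTPLUG || CONFIG_HOTPLUG_CPU */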