From: "Huang, Ying"
To: kernel test robot
Cc: Andrew Morton, 0day robot, Yang Shi, Zi Yan, Michal Hocko, Wei Xu,
    Oscar Salvador, David Rientjes, Dan Williams, David Hildenbrand,
    Greg Thelen, Keith Busch, Yang Shi, LKML
Subject: Re: [mm/migrate] 9eeb73028c: stress-ng.memhotplug.ops_per_sec -53.8% regression
References: <20210905135932.GE15026@xsang-OptiPlex-9020>
Date: Fri, 17 Sep 2021 11:14:59 +0800
In-Reply-To: <20210905135932.GE15026@xsang-OptiPlex-9020> (kernel test robot's message of "Sun, 5 Sep 2021 21:59:33 +0800")
Message-ID: <87a6kbq4ek.fsf@yhuang6-desk2.ccr.corp.intel.com>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/27.1 (gnu/linux)

Hi, Oliver,

kernel test robot writes:

> Greeting,
>
> FYI, we noticed a -53.8% regression of stress-ng.memhotplug.ops_per_sec due to commit:
>
>
> commit: 9eeb73028cfb54eb06efe87c50cc014d3f1ff43e ("[patch 174/212] mm/migrate: update node demotion order on hotplug events")
> url: https://github.com/0day-ci/linux/commits/Andrew-Morton/ia64-fix-typo-in-a-comment/20210903-065028
>
>
> in testcase: stress-ng
> on test machine: 96 threads 2 sockets Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 192G memory
> with following parameters:
>
>	nr_threads: 10%
>	disk: 1HDD
>	testtime: 60s
>	fs: ext4
>	class: os
>	test: memhotplug
>	cpufreq_governor: performance
>	ucode: 0x5003006
>

Could you help test whether the following patch recovers the regression?

Best Regards,
Huang, Ying

----------------------------8<--------------------------------------
From 5d3e18a9f083954584932a20233ef489d9398342 Mon Sep 17 00:00:00 2001
From: Huang Ying
Date: Thu, 16 Sep 2021 16:51:44 +0800
Subject: [PATCH] mm/migrate: recover hotplug performance regression

The 0-Day kernel test robot reported a -53.8% performance regression for
the stress-ng memhotplug test case.  This patch recovers the regression
by skipping the demotion order update when it is not needed, that is,
when the set of nodes with memory or with CPUs has not changed.

Refer: https://lore.kernel.org/lkml/20210905135932.GE15026@xsang-OptiPlex-9020/
Fixes: 884a6e5d1f93 ("mm/migrate: update node demotion order on hotplug events")
Signed-off-by: "Huang, Ying"
Suggested-by: Dave Hansen
Reported-by: kernel test robot
Cc: Yang Shi
Cc: Zi Yan
Cc: Michal Hocko
Cc: Wei Xu
Cc: Oscar Salvador
Cc: David Rientjes
Cc: Dan Williams
Cc: David Hildenbrand
Cc: Greg Thelen
Cc: Keith Busch
---
 mm/migrate.c | 26 ++++++++++++++++++++++----
 1 file changed, 22 insertions(+), 4 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 77d107a4577f..20d803707497 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1145,6 +1145,8 @@ static int __unmap_and_move(struct page *page, struct page *newpage,
 static int node_demotion[MAX_NUMNODES] __read_mostly =
 	{[0 ...  MAX_NUMNODES - 1] = NUMA_NO_NODE};
 
+static bool node_demotion_disabled __read_mostly;
+
 /**
  * next_demotion_node() - Get the next node in the demotion path
  * @node: The starting node to lookup the next node
@@ -1158,6 +1160,8 @@ int next_demotion_node(int node)
 {
 	int target;
 
+	if (node_demotion_disabled)
+		return NUMA_NO_NODE;
 	/*
 	 * node_demotion[] is updated without excluding this
 	 * function from running.  RCU doesn't provide any
@@ -3198,13 +3202,26 @@ static void __set_migration_target_nodes(void)
 		goto again;
 }
 
+static int nr_node_has_cpu;
+static int nr_node_has_mem;
+
+static void check_set_migration_target_nodes(void)
+{
+	if (num_node_state(N_MEMORY) != nr_node_has_mem ||
+	    num_node_state(N_CPU) != nr_node_has_cpu) {
+		__set_migration_target_nodes();
+		nr_node_has_mem = num_node_state(N_MEMORY);
+		nr_node_has_cpu = num_node_state(N_CPU);
+	}
+}
+
 /*
  * For callers that do not hold get_online_mems() already.
  */
 static void set_migration_target_nodes(void)
 {
 	get_online_mems();
-	__set_migration_target_nodes();
+	check_set_migration_target_nodes();
 	put_online_mems();
 }
 
@@ -3249,7 +3266,7 @@ static int __meminit migrate_on_reclaim_callback(struct notifier_block *self,
 		 * will leave migration disabled until the offline
 		 * completes and the MEM_OFFLINE case below runs.
 		 */
-		disable_all_migrate_targets();
+		node_demotion_disabled = true;
 		break;
 	case MEM_OFFLINE:
 	case MEM_ONLINE:
@@ -3257,14 +3274,15 @@
 		 * Recalculate the target nodes once the node
 		 * reaches its final state (online or offline).
 		 */
-		__set_migration_target_nodes();
+		check_set_migration_target_nodes();
+		node_demotion_disabled = false;
 		break;
 	case MEM_CANCEL_OFFLINE:
 		/*
 		 * MEM_GOING_OFFLINE disabled all the migration
 		 * targets.  Reenable them.
 		 */
-		__set_migration_target_nodes();
+		node_demotion_disabled = false;
 		break;
 	case MEM_GOING_ONLINE:
 	case MEM_CANCEL_ONLINE:
-- 
2.30.2
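
P.S. For anyone skimming rather than reading the diff: below is a minimal,
userspace-only model of the caching idea in the patch, not kernel code.
The names check_set_migration_target_nodes(), nr_node_has_mem,
nr_node_has_cpu and node_demotion_disabled mirror the patch;
num_node_state(), the fake node counts and main() are stand-ins invented
purely for illustration.

/*
 * Userspace model: rebuild the demotion order only when the number of
 * nodes with memory or with CPUs has changed since the last rebuild,
 * and gate demotion with a simple flag while an offline is in flight.
 */
#include <stdbool.h>
#include <stdio.h>

enum node_states { N_MEMORY, N_CPU };

/* Stand-in for the kernel's node state bookkeeping (illustrative only). */
static int fake_node_state[2] = { 4, 2 };

static int num_node_state(enum node_states state)
{
	return fake_node_state[state];
}

static bool node_demotion_disabled;
static int nr_node_has_cpu;
static int nr_node_has_mem;
static int rebuild_count;

/* In the kernel this is the expensive rebuild of node_demotion[]. */
static void __set_migration_target_nodes(void)
{
	rebuild_count++;
}

static void check_set_migration_target_nodes(void)
{
	/* Skip the rebuild when the node counts have not changed. */
	if (num_node_state(N_MEMORY) != nr_node_has_mem ||
	    num_node_state(N_CPU) != nr_node_has_cpu) {
		__set_migration_target_nodes();
		nr_node_has_mem = num_node_state(N_MEMORY);
		nr_node_has_cpu = num_node_state(N_CPU);
	}
}

int main(void)
{
	int i;

	/*
	 * Model repeated online/offline of the same memory block, as the
	 * stress-ng memhotplug stressor does: the node counts stay the
	 * same, so only the first call pays for a rebuild.  Before the
	 * patch, every hotplug event rebuilt the demotion order.
	 */
	for (i = 0; i < 1000; i++)
		check_set_migration_target_nodes();
	printf("rebuilds after 1000 events: %d\n", rebuild_count);

	/* While an offline is in flight, demotion is simply switched off. */
	node_demotion_disabled = true;
	printf("demotion disabled: %s\n", node_demotion_disabled ? "yes" : "no");

	return 0;
}

Compiled with a plain C compiler, the model reports a single rebuild after
1000 simulated hotplug events, which is the behavior the patch aims for.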