From: Wei Yang <richardw.yang@linux.intel.com>
To: akpm@linux-foundation.org, osalvador@suse.de, mhocko@suse.com
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Wei Yang <richardw.yang@linux.intel.com>
Subject: [PATCH] mm/hotplug: prevent memory leak when reusing pgdat
Date: Tue, 13 Aug 2019 10:06:08 +0800
Message-Id: <20190813020608.10194-1-richardw.yang@linux.intel.com>

When a node is offlined in try_offline_node(), its pgdat is not
released, so the pgdat can be reused by a later hotadd_new_pgdat().
However, hotadd_new_pgdat() re-allocates pgdat->per_cpu_nodestats
unconditionally, so the per-cpu allocation from the previous hot-add
is leaked whenever a pgdat is reused.

Prevent the leak by allocating per_cpu_nodestats only when the pgdat
is newly created, and by resetting the existing per-cpu counters when
the pgdat is reused.
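To make the intended flow easier to see, below is a minimal userspace
sketch of the allocate-on-new / reset-on-reuse logic. All names here
(fake_pgdat, fake_hotadd, nodestats) are illustrative stand-ins, not
the real kernel structures or APIs:

/*
 * Minimal userspace sketch of the intended allocation flow; all
 * names are illustrative, not the kernel's real structures.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct fake_pgdat {
	int nr_zones;
	long *nodestats;	/* stands in for pgdat->per_cpu_nodestats */
};

static struct fake_pgdat *node_data;	/* survives "offline", like pgdat */

static struct fake_pgdat *fake_hotadd(void)
{
	struct fake_pgdat *pgdat = node_data;

	if (!pgdat) {
		/* New pgdat: allocate the stats exactly once. */
		pgdat = calloc(1, sizeof(*pgdat));
		pgdat->nodestats = calloc(16, sizeof(long));
		node_data = pgdat;
	} else {
		/*
		 * Reused pgdat: the old allocation is still there, so
		 * reset it in place. Re-allocating here instead is
		 * exactly the leak this patch removes.
		 */
		pgdat->nr_zones = 0;
		memset(pgdat->nodestats, 0, 16 * sizeof(long));
	}
	return pgdat;
}

int main(void)
{
	fake_hotadd();	/* first hot-add: fresh allocation */
	fake_hotadd();	/* after offline + re-add: reset, no leak */
	puts("reused pgdat keeps its original stats allocation");
	return 0;
}

The patch below does the same on the real pgdat, using alloc_percpu()
on the new-pgdat path and per_cpu_ptr() + memset() on the reuse path.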
NOTE: this is not tested, since I did not manage to construct a case
that offlines a whole node. If my analysis is incorrect, please let me
know.

Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
---
 mm/memory_hotplug.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index c73f09913165..efaf9e6f580a 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -933,8 +933,11 @@ static pg_data_t __ref *hotadd_new_pgdat(int nid, u64 start)
 		if (!pgdat)
 			return NULL;
 
+		pgdat->per_cpu_nodestats =
+			alloc_percpu(struct per_cpu_nodestat);
 		arch_refresh_nodedata(nid, pgdat);
 	} else {
+		int cpu;
 		/*
 		 * Reset the nr_zones, order and classzone_idx before reuse.
 		 * Note that kswapd will init kswapd_classzone_idx properly
@@ -943,6 +946,12 @@ static pg_data_t __ref *hotadd_new_pgdat(int nid, u64 start)
 		pgdat->nr_zones = 0;
 		pgdat->kswapd_order = 0;
 		pgdat->kswapd_classzone_idx = 0;
+		for_each_online_cpu(cpu) {
+			struct per_cpu_nodestat *p;
+
+			p = per_cpu_ptr(pgdat->per_cpu_nodestats, cpu);
+			memset(p, 0, sizeof(*p));
+		}
 	}
 
 	/* we can use NODE_DATA(nid) from here */
@@ -952,7 +961,6 @@ static pg_data_t __ref *hotadd_new_pgdat(int nid, u64 start)
 
 	/* init node's zones as empty zones, we don't have any present pages.*/
 	free_area_init_core_hotplug(nid);
-	pgdat->per_cpu_nodestats = alloc_percpu(struct per_cpu_nodestat);
 
 	/*
 	 * The node we allocated has no zone fallback lists. For avoiding
-- 
2.17.1