From: Jiri Slaby
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Jiri Slaby, Johannes Weiner, Michal Hocko,
	Vladimir Davydov, cgroups@vger.kernel.org, Raghavendra K T
Subject: [PATCH v2] memcg: make it work on sparse non-0-node systems
Date: Fri, 17 May 2019 13:42:04 +0200
Message-Id: <20190517114204.6330-1-jslaby@suse.cz>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20190517080044.tnwhbeyxcccsymgf@esperanza>
References: <20190517080044.tnwhbeyxcccsymgf@esperanza>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

We have a single-node system with node 0 disabled:

  Scanning NUMA topology in Northbridge 24
  Number of physical nodes 2
  Skipping disabled node 0
  Node 1 MemBase 0000000000000000 Limit 00000000fbff0000
  NODE_DATA(1) allocated [mem 0xfbfda000-0xfbfeffff]

This causes crashes in memcg when the system boots:

  BUG: unable to handle kernel NULL pointer dereference at 0000000000000008
  #PF error: [normal kernel read fault]
  ...
  RIP: 0010:list_lru_add+0x94/0x170
  ...
  Call Trace:
   d_lru_add+0x44/0x50
   dput.part.34+0xfc/0x110
   __fput+0x108/0x230
   task_work_run+0x9f/0xc0
   exit_to_usermode_loop+0xf5/0x100

It is reproducible as far back as 4.12; I did not try older kernels. You
need a new enough systemd, e.g. 241 (the reason is unknown and was not
investigated); it cannot be reproduced with systemd 234.

The system crashes because the size of the lru array is never updated in
memcg_update_all_list_lrus, and the reads go past the zero-sized array,
causing dereferences of random memory.

The root cause is the list_lru_memcg_aware check in the list_lru code.
The test in list_lru_memcg_aware is broken: it assumes node 0 is always
present, which is not true on some systems, as can be seen above.
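
To illustrate the failure, here is a minimal user-space model of the broken
check (the names and simplified structures below are made up for the example
and are not the kernel API): when node 0 is not a possible node, its
memcg_lrus pointer is never initialized, so a check keyed on node[0] reports
a memcg-aware lru as unaware, whereas an explicit flag does not depend on
which node ids happen to exist.

  /* Illustrative user-space model only -- not kernel code. */
  #include <stdbool.h>
  #include <stdio.h>
  #include <stdlib.h>

  #define NR_NODES 2

  struct fake_node { void *memcg_lrus; };

  struct fake_lru {
          struct fake_node node[NR_NODES];
          bool memcg_aware;               /* the flag this patch adds */
  };

  /* Node 0 is disabled, as in the dmesg excerpt above. */
  static const bool node_possible[NR_NODES] = { false, true };

  static void init_lru(struct fake_lru *lru, bool memcg_aware)
  {
          lru->memcg_aware = memcg_aware;
          for (int i = 0; i < NR_NODES; i++)
                  if (memcg_aware && node_possible[i])
                          /* stand-in for the real per-node allocation */
                          lru->node[i].memcg_lrus = malloc(1);
  }

  static bool aware_old(struct fake_lru *lru) { return lru->node[0].memcg_lrus != NULL; }
  static bool aware_new(struct fake_lru *lru) { return lru->memcg_aware; }

  int main(void)
  {
          struct fake_lru lru = { 0 };

          init_lru(&lru, true);
          /* prints "old: 0 new: 1" -- the node-0 based check misreports */
          printf("old: %d new: %d\n", aware_old(&lru), aware_new(&lru));
          return 0;
  }
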
So fix this by avoiding the check on node 0 and instead remembering the
memcg-awareness in a bool flag in struct list_lru.

[v2] use the idea proposed by Vladimir -- the bool flag.

Signed-off-by: Jiri Slaby
Cc: Johannes Weiner
Cc: Michal Hocko
Suggested-by: Vladimir Davydov
Acked-by: Vladimir Davydov
Cc:
Cc:
Cc: Raghavendra K T
---
 include/linux/list_lru.h | 1 +
 mm/list_lru.c            | 8 +++-----
 2 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/include/linux/list_lru.h b/include/linux/list_lru.h
index aa5efd9351eb..d5ceb2839a2d 100644
--- a/include/linux/list_lru.h
+++ b/include/linux/list_lru.h
@@ -54,6 +54,7 @@ struct list_lru {
 #ifdef CONFIG_MEMCG_KMEM
 	struct list_head	list;
 	int			shrinker_id;
+	bool			memcg_aware;
 #endif
 };
 
diff --git a/mm/list_lru.c b/mm/list_lru.c
index 0730bf8ff39f..d3b538146efd 100644
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -37,11 +37,7 @@ static int lru_shrinker_id(struct list_lru *lru)
 
 static inline bool list_lru_memcg_aware(struct list_lru *lru)
 {
-	/*
-	 * This needs node 0 to be always present, even
-	 * in the systems supporting sparse numa ids.
-	 */
-	return !!lru->node[0].memcg_lrus;
+	return lru->memcg_aware;
 }
 
 static inline struct list_lru_one *
@@ -451,6 +447,8 @@ static int memcg_init_list_lru(struct list_lru *lru, bool memcg_aware)
 {
 	int i;
 
+	lru->memcg_aware = memcg_aware;
+
 	if (!memcg_aware)
 		return 0;
 
-- 
2.21.0