From: Jiri Slaby
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Jiri Slaby, Johannes Weiner,
 Michal Hocko, Vladimir Davydov, cgroups@vger.kernel.org,
 Raghavendra K T
Subject: [PATCH] memcg: make it work on sparse non-0-node systems
Date: Mon, 29 Apr 2019 12:59:39 +0200
Message-Id: <20190429105939.11962-1-jslaby@suse.cz>
In-Reply-To: <359d98e6-044a-7686-8522-bdd2489e9456@suse.cz>
References: <359d98e6-044a-7686-8522-bdd2489e9456@suse.cz>

We have a single-node system with node 0 disabled:

  Scanning NUMA topology in Northbridge 24
  Number of physical nodes 2
  Skipping disabled node 0
  Node 1 MemBase 0000000000000000 Limit 00000000fbff0000
  NODE_DATA(1) allocated [mem 0xfbfda000-0xfbfeffff]

This causes crashes in memcg when the system boots:

  BUG: unable to handle kernel NULL pointer dereference at 0000000000000008
  #PF error: [normal kernel read fault]
  ...
  RIP: 0010:list_lru_add+0x94/0x170
  ...
  Call Trace:
   d_lru_add+0x44/0x50
   dput.part.34+0xfc/0x110
   __fput+0x108/0x230
   task_work_run+0x9f/0xc0
   exit_to_usermode_loop+0xf5/0x100

It is reproducible as far back as 4.12; I did not try older kernels. A
new enough systemd is also needed, e.g. 241 -- the crash cannot be
reproduced with systemd 234 (the reason was not investigated).

The system crashes because the size of the lru array is never updated
in memcg_update_all_list_lrus, so reads run past the zero-sized array
and dereference random memory.

The root cause is the list_lru_memcg_aware check in the list_lru code:
it assumes node 0 is always present, which is not true on some systems,
as seen above. So fix this by checking the first online node instead of
node 0.

Signed-off-by: Jiri Slaby
Cc: Johannes Weiner
Cc: Michal Hocko
Cc: Vladimir Davydov
Cc:
Cc:
Cc: Raghavendra K T
---
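[Note for reviewers, not part of the commit message: below is a minimal
userspace sketch, NOT kernel code, of why probing node 0 misfires on
this topology. All mock_* names are hypothetical stand-ins for the real
kernel structures; in the kernel, first_online_node is provided by
<linux/nodemask.h>, and memcg initialization only fills in nodes that
actually exist, so node 0's memcg_lrus stays NULL here.]

#include <stdbool.h>
#include <stdio.h>

#define MOCK_MAX_NUMNODES 2	/* node 0 disabled, node 1 online */

struct mock_list_lru_node {
	void *memcg_lrus;		/* non-NULL once memcg-aware */
};

struct mock_list_lru {
	struct mock_list_lru_node node[MOCK_MAX_NUMNODES];
};

/* stand-in for first_online_node; 1 matches the boot log above */
static const int mock_first_online_node = 1;

/* old test: assumes node 0 always exists */
static bool memcg_aware_old(const struct mock_list_lru *lru)
{
	return !!lru->node[0].memcg_lrus;
}

/* fixed test: probe the first node that is actually online */
static bool memcg_aware_new(const struct mock_list_lru *lru)
{
	return !!lru->node[mock_first_online_node].memcg_lrus;
}

int main(void)
{
	static struct mock_list_lru lru;	/* zero-initialized */
	static int dummy;

	/* initialization covers only existing nodes: node 0 stays NULL */
	lru.node[1].memcg_lrus = &dummy;

	printf("old check: %d\n", memcg_aware_old(&lru));	/* 0 - wrong */
	printf("new check: %d\n", memcg_aware_new(&lru));	/* 1 - right */
	return 0;
}

[Because the old test returns false for such an lru,
memcg_update_all_list_lrus never resizes its per-memcg arrays, while
the per-node path on node 1 still indexes into them -- hence the
past-the-array reads in the oops above.]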
 mm/list_lru.c | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/mm/list_lru.c b/mm/list_lru.c
index 0730bf8ff39f..7689910f1a91 100644
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -37,11 +37,7 @@ static int lru_shrinker_id(struct list_lru *lru)
 
 static inline bool list_lru_memcg_aware(struct list_lru *lru)
 {
-	/*
-	 * This needs node 0 to be always present, even
-	 * in the systems supporting sparse numa ids.
-	 */
-	return !!lru->node[0].memcg_lrus;
+	return !!lru->node[first_online_node].memcg_lrus;
 }
 
 static inline struct list_lru_one *
-- 
2.21.0