Date: Mon, 29 Apr 2019 09:15:49 -0400
From: Michal Hocko
To: Jiri Slaby
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner,
	Vladimir Davydov, cgroups@vger.kernel.org, Raghavendra K T
Subject: Re: [PATCH] memcg: make it work on sparse non-0-node systems
Message-ID: <20190429131549.GL21837@dhcp22.suse.cz>
References: <359d98e6-044a-7686-8522-bdd2489e9456@suse.cz>
	<20190429105939.11962-1-jslaby@suse.cz>
	<20190429112916.GI21837@dhcp22.suse.cz>
	<465a4b50-490c-7978-ecb8-d122b655f868@suse.cz>
In-Reply-To: <465a4b50-490c-7978-ecb8-d122b655f868@suse.cz>
User-Agent: Mutt/1.10.1 (2018-07-13)
On Mon 29-04-19 13:55:26, Jiri Slaby wrote:
> On 29. 04. 19, 13:30, Michal Hocko wrote:
> > On Mon 29-04-19 12:59:39, Jiri Slaby wrote:
> > [...]
> >>  static inline bool list_lru_memcg_aware(struct list_lru *lru)
> >>  {
> >> -	/*
> >> -	 * This needs node 0 to be always present, even
> >> -	 * in the systems supporting sparse numa ids.
> >> -	 */
> >> -	return !!lru->node[0].memcg_lrus;
> >> +	return !!lru->node[first_online_node].memcg_lrus;
> >>  }
> >>
> >>  static inline struct list_lru_one *
> >
> > How come this doesn't blow up later - e.g. in memcg_destroy_list_lru
> > path which does iterate over all existing nodes thus including the
> > node 0.
>
> If the node is not disabled (i.e. is N_POSSIBLE), lru->node is allocated
> for that node too. It will also have memcg_lrus properly set.
>
> If it is disabled, it will never be iterated.
>
> Well, I could have used first_node. But I am not sure, if the first
> POSSIBLE node is also ONLINE during boot?

I dunno. I would have to think about this much more. The whole
expectation that node 0 is always around is simply broken. But also
list_lru_memcg_aware looks very suspicious. We should have a flag or
something rather than what we have now.

I am still not sure I have completely understood the problem though. I
will try to get to this during the week but Vladimir should be much
better fit to judge here.
-- 
Michal Hocko
SUSE Labs
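
A minimal sketch of the "flag" direction Michal alludes to above, assuming
the 5.1-era layout of include/linux/list_lru.h: record memcg-awareness
explicitly when the lru is initialized instead of probing node 0's (or any
particular node's) memcg_lrus pointer. The field name memcg_aware and the
exact spot where it is set are illustrative assumptions, not the patch
under discussion.

	/*
	 * Sketch only: carry an explicit flag rather than inferring
	 * memcg-awareness from a particular node's memcg_lrus pointer.
	 */
	struct list_lru {
		struct list_lru_node	*node;
	#ifdef CONFIG_MEMCG_KMEM
		struct list_head	list;
		int			shrinker_id;
		bool			memcg_aware;	/* assumed name, set once at init */
	#endif
	};

	int __list_lru_init(struct list_lru *lru, bool memcg_aware,
			    struct lock_class_key *key, struct shrinker *shrinker)
	{
		/* ... existing per-node allocation over all possible nodes ... */
	#ifdef CONFIG_MEMCG_KMEM
		lru->memcg_aware = memcg_aware;
	#endif
		return 0;
	}

	/* mm/list_lru.c, under CONFIG_MEMCG_KMEM */
	static inline bool list_lru_memcg_aware(struct list_lru *lru)
	{
		/* No dependency on node 0, or any other node, being online. */
		return lru->memcg_aware;
	}

With such a flag the answer to "is this lru memcg aware?" no longer depends
on which node IDs happen to exist, which would also sidestep Jiri's question
about whether the first possible node is guaranteed to be online during boot.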