Date: Wed, 30 Oct 2019 18:53:02 +0100
From: Michal Hocko
To: Johannes Weiner
Cc: Shakeel Butt, Greg Thelen, Roman Gushchin, Andrew Morton,
    linux-mm@kvack.org, cgroups@vger.kernel.org,
    linux-kernel@vger.kernel.org,
    syzbot+13f93c99c06988391efe@syzkaller.appspotmail.com
Subject: Re: [PATCH] mm: vmscan: memcontrol: remove mem_cgroup_select_victim_node()
Message-ID: <20191030175302.GM31513@dhcp22.suse.cz>
In-Reply-To: <20191030174455.GA45135@cmpxchg.org>
References: <20191029234753.224143-1-shakeelb@google.com>
 <20191030174455.GA45135@cmpxchg.org>

On Wed 30-10-19 13:44:55, Johannes Weiner wrote:
> On Tue, Oct 29, 2019 at 04:47:53PM -0700, Shakeel Butt wrote:
> > Since commit 1ba6fc9af35b ("mm: vmscan: do not share cgroup iteration
> > between reclaimers"), memcg reclaim no longer bails out early based
> > on sc->nr_reclaimed and will traverse all the nodes. All the
> > reclaimable pages of the memcg on all the nodes will be scanned
> > according to the reclaim priority, so there is no need to maintain
> > state about which node to start the memcg reclaim from. Also, KCSAN
> > complains about data races in the code maintaining that state.
> >
> > This patch effectively reverts commit 889976dbcb12 ("memcg: reclaim
> > memory from nodes in round-robin order") and commit 453a9bf347f1
> > ("memcg: fix numa scan information update to be triggered by memory
> > event").
> >
> > Signed-off-by: Shakeel Butt <shakeelb@google.com>
> > Reported-by: syzbot+13f93c99c06988391efe@syzkaller.appspotmail.com
> 
> Excellent, thanks Shakeel!
> Acked-by: Johannes Weiner <hannes@cmpxchg.org>
> 
> Just a request on this bit:
> 
> > @@ -3360,16 +3358,9 @@ unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *memcg,
> >  		.may_unmap = 1,
> >  		.may_swap = may_swap,
> >  	};
> > +	struct zonelist *zonelist = node_zonelist(numa_node_id(), sc.gfp_mask);
> >  
> >  	set_task_reclaim_state(current, &sc.reclaim_state);
> > -	/*
> > -	 * Unlike direct reclaim via alloc_pages(), memcg's reclaim doesn't
> > -	 * take care of from where we get pages. So the node where we start the
> > -	 * scan does not need to be the current node.
> > -	 */
> > -	nid = mem_cgroup_select_victim_node(memcg);
> > -
> > -	zonelist = &NODE_DATA(nid)->node_zonelists[ZONELIST_FALLBACK];
> 
> This works, but it *is* somewhat fragile if we decide to add bail-out
> conditions to reclaim again. And some NUMA nodes receiving slightly
> less pressure than others could be quite tricky to debug.
> 
> Can we add a comment here that points out the assumption that the
> zonelist walk is comprehensive, and that all nodes receive equal
> reclaim pressure?

Makes sense.

> Also, I think we should use sc.gfp_mask & ~__GFP_THISNODE, so that
> allocations with a physical node preference still do node-agnostic
> reclaim for the purpose of cgroup accounting.

Don't we already exclude that via GFP_RECLAIM_MASK?

-- 
Michal Hocko
SUSE Labs
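
[Context for the GFP_RECLAIM_MASK question above, sketched from memory
of the v5.4-era tree rather than quoted verbatim, so verify against the
actual sources: mm/internal.h defines GFP_RECLAIM_MASK without
__GFP_THISNODE, and try_to_free_mem_cgroup_pages() builds sc.gfp_mask
by masking the caller's flags with it, which would already strip
__GFP_THISNODE:

	/* mm/internal.h (~v5.4): note __GFP_THISNODE is not included */
	#define GFP_RECLAIM_MASK (__GFP_RECLAIM|__GFP_HIGH|__GFP_IO|__GFP_FS|\
				__GFP_NOWARN|__GFP_RETRY_MAYFAIL|__GFP_NOFAIL|\
				__GFP_NORETRY|__GFP_MEMALLOC|__GFP_NOMEMALLOC|\
				__GFP_ATOMIC)

	/* mm/vmscan.c, try_to_free_mem_cgroup_pages(): sc.gfp_mask setup */
	.gfp_mask = (current_gfp_context(gfp_mask) & GFP_RECLAIM_MASK) |
			(GFP_HIGHUSER_MOVABLE & ~GFP_RECLAIM_MASK),

If that holds, a __GFP_THISNODE set by the caller never reaches
sc.gfp_mask, and an explicit ~__GFP_THISNODE would be redundant.]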