Date: Tue, 29 Oct 2019 19:47:39 +0100
From: Michal Hocko
To: Shakeel Butt
Cc: Roman Gushchin, Johannes Weiner, Andrew Morton, Linux MM, Cgroups,
	LKML, Eric Dumazet, Greg Thelen,
	syzbot+13f93c99c06988391efe@syzkaller.appspotmail.com, elver@google.com
Subject: Re: [PATCH] mm: memcontrol: fix data race in mem_cgroup_select_victim_node
Message-ID: <20191029184739.GP31513@dhcp22.suse.cz>
References: <20191029005405.201986-1-shakeelb@google.com>
	<20191029090347.GG31513@dhcp22.suse.cz>
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue 29-10-19 11:09:29, Shakeel Butt wrote:
> +Marco
>
> On Tue, Oct 29, 2019 at 2:03 AM Michal Hocko wrote:
> >
> > On Mon 28-10-19 17:54:05, Shakeel Butt wrote:
> > > Syzbot reported the following bug:
> > >
> > > BUG: KCSAN: data-race in mem_cgroup_select_victim_node / mem_cgroup_select_victim_node
> > >
> > > write to 0xffff88809fade9b0 of 4 bytes by task 8603 on cpu 0:
> > >  mem_cgroup_select_victim_node+0xb5/0x3d0 mm/memcontrol.c:1686
> > >  try_to_free_mem_cgroup_pages+0x175/0x4c0 mm/vmscan.c:3376
> > >  reclaim_high.constprop.0+0xf7/0x140 mm/memcontrol.c:2349
> > >  mem_cgroup_handle_over_high+0x96/0x180 mm/memcontrol.c:2430
> > >  tracehook_notify_resume include/linux/tracehook.h:197 [inline]
> > >  exit_to_usermode_loop+0x20c/0x2c0 arch/x86/entry/common.c:163
> > >  prepare_exit_to_usermode+0x180/0x1a0 arch/x86/entry/common.c:194
> > >  swapgs_restore_regs_and_return_to_usermode+0x0/0x40
> > >
> > > read to 0xffff88809fade9b0 of 4 bytes by task 7290 on cpu 1:
> > >  mem_cgroup_select_victim_node+0x92/0x3d0 mm/memcontrol.c:1675
> > >  try_to_free_mem_cgroup_pages+0x175/0x4c0 mm/vmscan.c:3376
> > >  reclaim_high.constprop.0+0xf7/0x140 mm/memcontrol.c:2349
> > >  mem_cgroup_handle_over_high+0x96/0x180 mm/memcontrol.c:2430
> > >  tracehook_notify_resume include/linux/tracehook.h:197 [inline]
> > >  exit_to_usermode_loop+0x20c/0x2c0 arch/x86/entry/common.c:163
> > >  prepare_exit_to_usermode+0x180/0x1a0 arch/x86/entry/common.c:194
> > >  swapgs_restore_regs_and_return_to_usermode+0x0/0x40
> > >
> > > mem_cgroup_select_victim_node() can be called concurrently, and it
> > > reads and modifies memcg->last_scanned_node without any
> > > synchronization. So read and modify memcg->last_scanned_node with
> > > READ_ONCE()/WRITE_ONCE() to stop potential reordering.
> >
> > I am sorry but I do not understand the problem and the fix. Why does
> > the race happen and why does _ONCE fix it? There is still no
> > synchronization. Do you want to prevent memcg->last_scanned_node from
> > being reloaded?
>
> The problem is that memcg->last_scanned_node can be read and modified
> concurrently. Though to me it seems like a tolerable race and not
> worth adding an explicit lock for.

Agreed.

> My aim was to make KCSAN happy here so that it looks elsewhere for
> concurrency bugs. However I see that it might complain next about
> memcg->scan_nodes.

I would really refrain from adding whatever measure just to silence some
tool without a deeper understanding of why that is needed. $FOO_ONCE
will prevent the compiler from doing funky stuff, but this is an int and
I would be really surprised if $FOO_ONCE made any practical difference.

> Now taking a step back, I am questioning the whole motivation behind
> mem_cgroup_select_victim_node(). Since we pass the ZONELIST_FALLBACK
> zonelist to the reclaimer, shrink_node() will be called for all
> potential nodes. Also, we don't short-circuit the traversal of
> shrink_node() across the nodes on nr_reclaimed, and we scan
> (size_on_node >> priority) on each node, so I don't see the reason for
> having a round-robin order of node traversal.
>
> I am thinking of removing the whole mem_cgroup_select_victim_node()
> heuristic. Please let me know if there are any objections.

I would have to think more about this, but it surely sounds preferable
to adding $FOO_ONCE just to silence the tool.

Thanks!
-- 
Michal Hocko
SUSE Labs
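
For reference, the annotation the patch proposes looks roughly like the
sketch below. This is a simplified illustration, not the actual patch:
the real mem_cgroup_select_victim_node() in mm/memcontrol.c also
maintains a per-memcg nodemask (memcg->scan_nodes), and the
node_states[N_MEMORY] walk here merely stands in for that logic.

/* Simplified sketch of the proposed READ_ONCE()/WRITE_ONCE() annotation.
 * The real function consults memcg->scan_nodes to pick the next node
 * that actually has reclaimable memory; that bookkeeping is elided.
 */
int mem_cgroup_select_victim_node(struct mem_cgroup *memcg)
{
	/* This read races with the write below from concurrent callers;
	 * READ_ONCE() marks the race as intentional for the compiler
	 * and for KCSAN, and prevents the load from being torn or
	 * reloaded.
	 */
	int node = READ_ONCE(memcg->last_scanned_node);

	/* Advance round-robin to the next node with memory. */
	node = next_node_in(node, node_states[N_MEMORY]);

	/* Concurrent callers may clobber each other's store; the race is
	 * tolerated, WRITE_ONCE() only keeps the store a single intact
	 * write.
	 */
	WRITE_ONCE(memcg->last_scanned_node, node);
	return node;
}

Note that the annotations do not add synchronization: two racing tasks
can still pick the same victim node. They only document that the racy
accesses are deliberate, which is exactly the point of contention in
the thread above.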