Subject: Re: [PATCH v4 3/9] mm/lru: replace pgdat lru_lock with lruvec lock
To: Johannes Weiner
Cc: cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 akpm@linux-foundation.org, mgorman@techsingularity.net, tj@kernel.org,
 hughd@google.com, khlebnikov@yandex-team.ru, daniel.m.jordan@oracle.com,
 yang.shi@linux.alibaba.com, willy@infradead.org, shakeelb@google.com,
 Michal Hocko, Vladimir Davydov, Roman Gushchin, Chris Down, Thomas Gleixner,
 Vlastimil Babka, Qian Cai, Andrey Ryabinin, "Kirill A. Shutemov",
 Jérôme Glisse, Andrea Arcangeli, David Rientjes, "Aneesh Kumar K.V",
 swkhack, "Potyra, Stefan", Mike Rapoport, Stephen Rothwell, Colin Ian King,
 Jason Gunthorpe, Mauro Carvalho Chehab, Peng Fan, Nikolay Borisov,
 Ira Weiny, Kirill Tkhai, Yafang Shao
References: <1574166203-151975-1-git-send-email-alex.shi@linux.alibaba.com>
 <1574166203-151975-4-git-send-email-alex.shi@linux.alibaba.com>
 <20191119160456.GD382712@cmpxchg.org> <20191121220613.GB487872@cmpxchg.org>
From: Alex Shi <alex.shi@linux.alibaba.com>
Date: Fri, 22 Nov 2019 10:36:32 +0800
In-Reply-To: <20191121220613.GB487872@cmpxchg.org>

On 2019/11/22 6:06 AM, Johannes Weiner wrote:
>>
>> Forgive my ignorance, but I still don't see the details of the unsafe
>> lruvec here. From my limited understanding, spin_lock_irq() (which
>> embeds a preempt_disable()) blocks RCU grace periods and thus keeps
>> every memcg alive until preemption is re-enabled in spin_unlock_irq().
>> Is that right? If so, even if page->mem_cgroup is moved to another
>> cgroup, both the new and the old cgroup should still be alive here.
>
> You are right about the freeing part, I missed this. And I should have
> read this email here before sending out my "fix" to the current code;
> thankfully Hugh re-iterated my mistake on that thread. My apologies.
>

That's all right. You and Hugh have given me a lot of help! :)

> But I still don't understand how the moving part is safe. You look up
> the lruvec optimistically, lock it, then verify the lookup. What keeps
> page->mem_cgroup from changing after you verified it?
>
> lock_page_lruvec():                        mem_cgroup_move_account():
> again:
>     rcu_read_lock()
>     lruvec = page->mem_cgroup->lruvec
>                                                isolate_lru_page()
>     spin_lock_irq(&lruvec->lru_lock)
>     rcu_read_unlock()
>     if page->mem_cgroup->lruvec != lruvec:
>         spin_unlock_irq(&lruvec->lru_lock)
>         goto again;
>                                                page->mem_cgroup = new cgroup
>                                                putback_lru_page() // new lruvec
>                                                SetPageLRU()
>     return lruvec; // old lruvec
>
> The caller assumes page belongs to the returned lruvec and will then
> change the page's lru state with a mismatched page and lruvec.
>

Yes, that's the problem we have to deal with.
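To make your diagram concrete, here is a minimal sketch of that optimistic
lookup-and-recheck pattern, written against the per-lruvec lru_lock this
series introduces (the helper name and details are illustrative, not the
exact code in the patch). As you say, the recheck only catches a move that
completed before we took the lock; nothing pins page->mem_cgroup once the
recheck has passed:

static struct lruvec *lock_page_lruvec(struct page *page)
{
	struct pglist_data *pgdat = page_pgdat(page);
	struct lruvec *lruvec;

again:
	rcu_read_lock();
	/* optimistic lookup of the page's lruvec via page->mem_cgroup */
	lruvec = mem_cgroup_page_lruvec(page, pgdat);
	spin_lock_irq(&lruvec->lru_lock);
	rcu_read_unlock();

	/* retry if page->mem_cgroup moved before we got the lock */
	if (lruvec != mem_cgroup_page_lruvec(page, pgdat)) {
		spin_unlock_irq(&lruvec->lru_lock);
		goto again;
	}

	/* page->mem_cgroup can still change right after this point */
	return lruvec;
}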
> If we could restrict lock_page_lruvec() to working only on PageLRU
> pages, we could fix the problem with memory barriers. But this won't
> work for split_huge_page(), which is AFAICT the only user that needs
> to freeze the lru state of a page that could be isolated elsewhere.
>
> So AFAICS the only option is to lock out mem_cgroup_move_account()
> entirely when the lru_lock is held. Which I guess should be fine.

I guess we can start from lock_page_memcg(); is that a good start?

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 7e6387ad01f0..f4bbbf72c5b8 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1224,7 +1224,7 @@ struct lruvec *mem_cgroup_page_lruvec(struct page *page, struct pglist_data *pgd
 		goto out;
 	}

-	memcg = page->mem_cgroup;
+	memcg = lock_page_memcg(page);
 	/*
 	 * Swapcache readahead pages are added to the LRU - and
 	 * possibly migrated - before they are charged.

Thanks a lot!
Alex