Subject: Re: [PATCH v21 17/19] mm/lru: replace pgdat lru_lock with lruvec lock
To: Vlastimil Babka, akpm@linux-foundation.org, mgorman@techsingularity.net,
 tj@kernel.org, hughd@google.com, khlebnikov@yandex-team.ru,
 daniel.m.jordan@oracle.com, willy@infradead.org, hannes@cmpxchg.org,
 lkp@intel.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 cgroups@vger.kernel.org, shakeelb@google.com, iamjoonsoo.kim@lge.com,
 richard.weiyang@gmail.com, kirill@shutemov.name, alexander.duyck@gmail.com,
 rong.a.chen@intel.com, mhocko@suse.com, vdavydov.dev@gmail.com,
 shy828301@gmail.com
Cc: Michal Hocko, Yang Shi
References: <1604566549-62481-1-git-send-email-alex.shi@linux.alibaba.com>
 <1604566549-62481-18-git-send-email-alex.shi@linux.alibaba.com>
From: Alex Shi
Date: Thu, 12 Nov 2020 22:19:33 +0800

On 2020/11/12 8:19 PM, Vlastimil Babka wrote:
> On 11/5/20 9:55 AM, Alex Shi wrote:
>> This patch moves the per-node lru_lock into the lruvec, thus bringing
>> a lru_lock for each memcg on each node. So on a large machine, each
>> memcg no longer has to suffer from contention on the per-node
>> pgdat->lru_lock; they can go fast with their own lru_lock.
>>
>> After moving the memcg charge before LRU insertion, page isolation can
>> serialize the page's memcg, so the per-memcg lruvec lock is stable and
>> can replace the per-node lru lock.
>>
>> In isolate_migratepages_block(), compact_unlock_should_abort() and
>> lock_page_lruvec_irqsave() are open coded to work with
>> compact_control. Also add a debug function in the locking code which
>> may give some clues if something gets out of hand.
>>
>> Daniel Jordan's testing showed a 62% improvement on a modified
>> readtwice case on his 2P * 10 core * 2 HT Broadwell box:
>> https://lore.kernel.org/lkml/20200915165807.kpp7uhiw7l3loofu@ca-dmjordan1.us.oracle.com/
>>
>> On a large machine with memcg enabled but not used, looking up the
>> page's lruvec has to chase a few extra pointers, which may increase
>> lru_lock hold time and cause a slight regression.
>>
>> Hugh Dickins helped polish the patch, thanks!
>>
>> Signed-off-by: Alex Shi
>> Acked-by: Hugh Dickins
>> Cc: Rong Chen
>> Cc: Hugh Dickins
>> Cc: Andrew Morton
>> Cc: Johannes Weiner
>> Cc: Michal Hocko
>> Cc: Vladimir Davydov
>> Cc: Yang Shi
>> Cc: Matthew Wilcox
>> Cc: Konstantin Khlebnikov
>> Cc: Tejun Heo
>> Cc: linux-kernel@vger.kernel.org
>> Cc: linux-mm@kvack.org
>> Cc: cgroups@vger.kernel.org
>
> I think I need some explanation of the rcu_read_lock() usage in
> lock_page_lruvec*() (and in the places effectively open coding it).
> Preferably in the form of a code comment, but that can also be added
> as an additional patch later; I don't want to block the series.
>

Hi Vlastimil,

Thanks for the comments!

Yes, we did talk about this: the rcu_read_lock() is used to block memcg
destruction while we take the lock, and the spin_lock actually implies
an rcu_read_lock(). We can add such a comment later.

> mem_cgroup_page_lruvec()'s comment says
>
>  * This function relies on page->mem_cgroup being stable - see the
>  * access rules in commit_charge().
>
> commit_charge()'s comment:
>
>          * Any of the following ensures page->mem_cgroup stability:
>          *
>          * - the page lock
>          * - LRU isolation
>          * - lock_page_memcg()
>          * - exclusive reference
>
> "LRU isolation" used to be quite clear, but now is it after
> TestClearPageLRU(page), or after deleting from the lru list as well?
> Also it doesn't mention rcu_read_lock(); should it?

LRU isolation is still the same concept as before: the set of actions
that takes a page off an LRU list, and commit_charge() does need the
page isolated. But the conditions for page_memcg() stability could
change, since we no longer rely on LRU isolation for it. These comments
can be updated later.

> So what exactly are we protecting by rcu_read_lock() in e.g.
> lock_page_lruvec()?
>
>         rcu_read_lock();
>         lruvec = mem_cgroup_page_lruvec(page, pgdat);
>         spin_lock(&lruvec->lru_lock);
>         rcu_read_unlock();
>
> Looks like we are protecting the lruvec from going away, and it can't
> go away anymore after we take the lru_lock?
>
> But then e.g. in __munlock_pagevec() we are doing this without an
> rcu_read_lock():
>
>     new_lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page));

TestClearPageLRU could block the page from memcg migration/destruction
there.

Thanks
Alex

> where new_lruvec is potentially not the one that we have locked
>
> And the last thing mem_cgroup_page_lruvec() is doing is:
>
>         if (unlikely(lruvec->pgdat != pgdat))
>                 lruvec->pgdat = pgdat;
>         return lruvec;
>
> So without the rcu_read_lock(), is this potentially accessing the
> pgdat field of a lruvec that might have just gone away?
>
> Thanks,
> Vlastimil