Subject: Re: [PATCH v3 3/7] mm/lru: replace pgdat lru_lock with lruvec lock
To: Matthew Wilcox
Cc: cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 akpm@linux-foundation.org, mgorman@techsingularity.net, tj@kernel.org,
 hughd@google.com, khlebnikov@yandex-team.ru, daniel.m.jordan@oracle.com,
 yang.shi@linux.alibaba.com, Johannes Weiner, Michal Hocko,
 Vladimir Davydov, Roman Gushchin, Shakeel Butt, Chris Down,
 Thomas Gleixner,
 Vlastimil Babka, Qian Cai, Andrey Ryabinin, "Kirill A. Shutemov",
 Jérôme Glisse, Andrea Arcangeli, David Rientjes, "Aneesh Kumar K.V",
 swkhack, "Potyra, Stefan", Mike Rapoport, Stephen Rothwell,
 Colin Ian King, Jason Gunthorpe, Mauro Carvalho Chehab, Peng Fan,
 Nikolay Borisov, Ira Weiny, Kirill Tkhai, Yafang Shao
References: <1573874106-23802-1-git-send-email-alex.shi@linux.alibaba.com>
 <1573874106-23802-4-git-send-email-alex.shi@linux.alibaba.com>
 <20191116043806.GD20752@bombadil.infradead.org>
From: Alex Shi <alex.shi@linux.alibaba.com>
Message-ID: <0bfa9a03-b095-df83-9cfd-146da9aab89a@linux.alibaba.com>
Date: Mon, 18 Nov 2019 19:55:43 +0800
In-Reply-To: <20191116043806.GD20752@bombadil.infradead.org>

On 2019/11/16 12:38 PM, Matthew Wilcox wrote:
> On Sat, Nov 16, 2019 at 11:15:02AM +0800, Alex Shi wrote:
>> This is the main patch to replace per node lru_lock with per memcg
>> lruvec lock. It also folds the irqsave flags into lruvec.
>
> I have to say, I don't love the part where we fold the irqsave flags
> into the lruvec.  I know it saves us an argument, but it opens up the
> possibility of mismatched expectations.  eg we currently have:
>
> static void __split_huge_page(struct page *page, struct list_head *list,
> 		struct lruvec *lruvec, pgoff_t end)
> {
> ...
> 	spin_unlock_irqrestore(&lruvec->lru_lock, lruvec->irqflags);
>
> so if we introduce a new caller, we have to be certain that this caller
> is also using lock_page_lruvec_irqsave() and not lock_page_lruvec_irq().
> I can't think of a way to make the compiler enforce that, and if we don't,
> then we can get some odd crashes with interrupts being unexpectedly
> enabled or disabled, depending on how ->irqflags was used last.
>
> So it makes the code more subtle.  And that's not a good thing.

Hi Matthew,

Thanks for the comments!

Here the irqflags is bound to, and belongs with, the lruvec: merging them
lets us handle them as a whole and drops an unnecessary argument from the
call chain. The only possible downside is that it takes a bit more space
in pg_data_t.lruvec, but the existing padding there covers that concern.

As for your concern about a 'new' caller: __split_huge_page() is a static
helper here, so no other callers are disturbed. Do you agree?

>
>> +static inline struct lruvec *lock_page_lruvec_irq(struct page *page,
>> +						struct pglist_data *pgdat)
>> +{
>> +	struct lruvec *lruvec = mem_cgroup_page_lruvec(page, pgdat);
>> +
>> +	spin_lock_irq(&lruvec->lru_lock);
>> +
>> +	return lruvec;
>> +}
>
> ...
>
>> +static struct lruvec *lock_page_lru(struct page *page, int *isolated)
>>  {
>>  	pg_data_t *pgdat = page_pgdat(page);
>> +	struct lruvec *lruvec = lock_page_lruvec_irq(page, pgdat);
>>
>> -	spin_lock_irq(&pgdat->lru_lock);
>>  	if (PageLRU(page)) {
>> -		struct lruvec *lruvec;
>>
>> -		lruvec = mem_cgroup_page_lruvec(page, pgdat);
>>  		ClearPageLRU(page);
>>  		del_page_from_lru_list(page, lruvec, page_lru(page));
>>  		*isolated = 1;
>>  	} else
>>  		*isolated = 0;
>> +
>> +	return lruvec;
>>  }
>
> But what if the page is !PageLRU?  What lruvec did we just lock?

As with the original pgdat->lru_lock, we need to take the lock before
testing PageLRU, to close the race with a concurrent isolation. And the
lruvec we locked is the one the page belongs to.
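To make the ordering concrete, here is a minimal sketch of a caller. This
is hypothetical illustration, not code from the patch; 'private_list' is
made up for the example:

	struct lruvec *lruvec;
	int isolated;

	/*
	 * The lruvec lock is taken before PageLRU is tested, so two
	 * racing isolators serialize on lru_lock and exactly one of
	 * them sees PageLRU still set.
	 */
	lruvec = lock_page_lru(page, &isolated);
	if (isolated)
		/* We won the race: the page is off the LRU and ours. */
		list_add(&page->lru, &private_list);
	/* Win or lose, this is the lock covering the page's lruvec. */
	spin_unlock_irq(&lruvec->lru_lock);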
> According to the comments on mem_cgroup_page_lruvec(),
>
>  * This function is only safe when following the LRU page isolation
>  * and putback protocol: the LRU lock must be held, and the page must
>  * either be PageLRU() or the caller must have isolated/allocated it.
>
> and now it's being called in order to find out which LRU lock to take.
> So this comment needs to be updated, if it's wrong, or this patch has
> a race.

Yes, that comment reads as misleading with the new patch. How about the
following change:

- * This function is only safe when following the LRU page isolation
- * and putback protocol: the LRU lock must be held, and the page must
- * either be PageLRU() or the caller must have isolated/allocated it.
+ * The caller needs to guarantee that the page's mem_cgroup stays
+ * undisturbed while the page is in use. That can be done by
+ * lock_page_memcg() or lock_page_lruvec().
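For illustration, a minimal sketch of the rule the new comment states
(hypothetical caller; either lock is enough to pin the page's memcg):

	struct lruvec *lruvec;

	lock_page_memcg(page);	/* page's mem_cgroup is stable from here */
	lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page));
	/* ... read or update the lruvec ... */
	unlock_page_memcg(page);

Thanks
Alex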