From: Shakeel Butt
Date: Fri, 5 Feb 2021 10:59:06 -0800
Subject: Re: [External] Re: [PATCH] mm: memcontrol: remove rcu_read_lock from get_mem_cgroup_from_page
To: Johannes Weiner
Cc: Michal Hocko, Muchun Song, Vladimir Davydov, Andrew Morton, Cgroups, Linux Memory Management List, LKML

On Fri, Feb 5, 2021 at 10:31 AM Johannes Weiner wrote:
>
> On Fri, Feb 05, 2021 at 11:32:24AM +0100, Michal Hocko wrote:
> > On Fri 05-02-21 17:14:30, Muchun Song wrote:
> > > On Fri, Feb 5, 2021 at 4:36 PM Michal Hocko wrote:
> > > >
> > > > On Fri 05-02-21 14:27:19, Muchun Song wrote:
> > > > > get_mem_cgroup_from_page() is called under the page lock, so the
> > > > > page's memcg cannot change under us.
> > > >
> > > > Where is the page lock enforced?
> > >
> > > Because it is called from alloc_page_buffers(). This path is under
> > > the page lock.
> >
> > I do not see any page lock enforcement there. There is not even a
> > comment requiring it. Can we grow more users where this is not the
> > case? There is no actual relation between alloc_page_buffers() and
> > get_mem_cgroup_from_page() except that the former is the only _current_
> > user. I would be careful to dictate locking based solely on that.
>
> Since alloc_page_buffers() holds the page lock for the entire time it
> uses the memcg, there is no actual reason for it to use RCU or even
> acquire an additional reference on the css. We know it's pinned: the
> charge pins it, and the page lock pins the charge. It can neither move
> to a different cgroup nor be uncharged.
>
> So what do you say we switch alloc_page_buffers() to page_memcg()?
>
> And because that removes the last user of get_mem_cgroup_from_page(),
> we can kill it off and worry about a good interface once a consumer
> materializes for it.

This seems like an even better approach.

> diff --git a/fs/buffer.c b/fs/buffer.c
> index 96c7604f69b3..12a10f461b81 100644
> --- a/fs/buffer.c
> +++ b/fs/buffer.c
> @@ -847,7 +847,7 @@ struct buffer_head *alloc_page_buffers(struct page *page, unsigned long size,
> 	if (retry)
> 		gfp |= __GFP_NOFAIL;
>
> -	memcg = get_mem_cgroup_from_page(page);
> +	memcg = page_memcg(page);

With a comment explaining why page_memcg(page) is safe here, the patch
looks good to me.
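[For illustration only: a sketch of the kind of comment being asked
for. The wording below is hypothetical, not part of the posted patch.]

	/*
	 * The page is locked for the whole time we use the memcg. The
	 * charge pins the memcg and the page lock pins the charge, so
	 * the memcg can neither be moved to another cgroup nor be
	 * uncharged under us: no css reference is needed.
	 */
	memcg = page_memcg(page);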
> 	old_memcg = set_active_memcg(memcg);
>
> 	head = NULL;
> @@ -868,7 +868,6 @@ struct buffer_head *alloc_page_buffers(struct page *page, unsigned long size,
> 	}
> out:
> 	set_active_memcg(old_memcg);
> -	mem_cgroup_put(memcg);
> 	return head;
> 	/*
> 	 * In case anything failed, we just free everything we got.
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index a8c7a0ccc759..a44b2d51aecc 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -687,8 +687,6 @@ struct mem_cgroup *mem_cgroup_from_task(struct task_struct *p);
>
> struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm);
>
> -struct mem_cgroup *get_mem_cgroup_from_page(struct page *page);
> -
> struct lruvec *lock_page_lruvec(struct page *page);
> struct lruvec *lock_page_lruvec_irq(struct page *page);
> struct lruvec *lock_page_lruvec_irqsave(struct page *page,
> @@ -1169,11 +1167,6 @@ static inline struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm)
> 	return NULL;
> }
>
> -static inline struct mem_cgroup *get_mem_cgroup_from_page(struct page *page)
> -{
> -	return NULL;
> -}
> -
> static inline void mem_cgroup_put(struct mem_cgroup *memcg)
> {
> }
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 490357945f2c..ff52550d2f65 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -1048,29 +1048,6 @@ struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm)
> }
> EXPORT_SYMBOL(get_mem_cgroup_from_mm);
>
> -/**
> - * get_mem_cgroup_from_page: Obtain a reference on given page's memcg.
> - * @page: page from which memcg should be extracted.
> - *
> - * Obtain a reference on page->memcg and returns it if successful. Otherwise
> - * root_mem_cgroup is returned.
> - */
> -struct mem_cgroup *get_mem_cgroup_from_page(struct page *page)
> -{
> -	struct mem_cgroup *memcg = page_memcg(page);
> -
> -	if (mem_cgroup_disabled())
> -		return NULL;
> -
> -	rcu_read_lock();
> -	/* Page should not get uncharged and freed memcg under us. */
> -	if (!memcg || WARN_ON_ONCE(!css_tryget(&memcg->css)))
> -		memcg = root_mem_cgroup;
> -	rcu_read_unlock();
> -	return memcg;
> -}
> -EXPORT_SYMBOL(get_mem_cgroup_from_page);
> -
> static __always_inline struct mem_cgroup *active_memcg(void)
> {
> 	if (in_interrupt())
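[For context, a condensed sketch of the pattern the patch relies on,
simplified from alloc_page_buffers() in fs/buffer.c as it looks after
this change; error handling and the buffer-head loop are omitted.]

	struct mem_cgroup *memcg, *old_memcg;

	/* Caller already holds the page lock (lock_page(page)). */
	memcg = page_memcg(page);	/* stable: the lock pins the charge */
	old_memcg = set_active_memcg(memcg);

	/* ... allocations here are charged to the page's memcg ... */

	set_active_memcg(old_memcg);	/* no mem_cgroup_put() needed */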