From: Yang Shi
Date: Thu, 17 Dec 2020 16:56:48 -0800
Subject: Re: [v2 PATCH 7/9] mm: vmscan: don't need allocate shrinker->nr_deferred for memcg aware shrinkers
To: Dave Chinner
Cc: Roman Gushchin, Kirill Tkhai, Shakeel Butt, Johannes Weiner, Michal Hocko, Andrew Morton, Linux MM, Linux FS-devel Mailing List, Linux Kernel Mailing List
References: <20201214223722.232537-1-shy828301@gmail.com> <20201214223722.232537-8-shy828301@gmail.com> <20201215030528.GN3913616@dread.disaster.area>
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Dec 15, 2020 at 3:07 PM Yang Shi wrote:
>
> On Mon, Dec 14, 2020 at 7:05 PM Dave Chinner wrote:
> >
> > On Mon, Dec 14, 2020 at 02:37:20PM -0800, Yang Shi wrote:
> > > Now nr_deferred is available on per memcg level for memcg aware
> > > shrinkers, so don't need allocate shrinker->nr_deferred for such
> > > shrinkers anymore.
> > >
> > > Signed-off-by: Yang Shi
> > > ---
> > >  mm/vmscan.c | 28 ++++++++++++++--------------
> > >  1 file changed, 14 insertions(+), 14 deletions(-)
> > >
> > > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > > index bce8cf44eca2..8d5bfd818acd 100644
> > > --- a/mm/vmscan.c
> > > +++ b/mm/vmscan.c
> > > @@ -420,7 +420,15 @@ unsigned long lruvec_lru_size(struct lruvec *lruvec, enum lru_list lru, int zone
> > >   */
> > >  int prealloc_shrinker(struct shrinker *shrinker)
> > >  {
> > > -	unsigned int size = sizeof(*shrinker->nr_deferred);
> > > +	unsigned int size;
> > > +
> > > +	if (is_deferred_memcg_aware(shrinker)) {
> > > +		if (prealloc_memcg_shrinker(shrinker))
> > > +			return -ENOMEM;
> > > +		return 0;
> > > +	}
> > > +
> > > +	size = sizeof(*shrinker->nr_deferred);
> > >
> > >  	if (shrinker->flags & SHRINKER_NUMA_AWARE)
> > >  		size *= nr_node_ids;
> > > @@ -429,26 +437,18 @@ int prealloc_shrinker(struct shrinker *shrinker)
> > >  	if (!shrinker->nr_deferred)
> > >  		return -ENOMEM;
> > >
> > > -	if (shrinker->flags & SHRINKER_MEMCG_AWARE) {
> > > -		if (prealloc_memcg_shrinker(shrinker))
> > > -			goto free_deferred;
> > > -	}
> > > -
> > >  	return 0;
> > > -
> > > -free_deferred:
> > > -	kfree(shrinker->nr_deferred);
> > > -	shrinker->nr_deferred = NULL;
> > > -	return -ENOMEM;
> > >  }
> >
> > I'm trying to put my finger on it, but this seems wrong to me. If
> > memcgs are disabled, then prealloc_memcg_shrinker() needs to fail.
> > The preallocation code should not care about internal memcg details
> > like this.
> >
> > 	/*
> > 	 * If the shrinker is memcg aware and memcgs are not
> > 	 * enabled, clear the MEMCG flag and fall back to non-memcg
> > 	 * behaviour for the shrinker.
> > 	 */
> > 	if (shrinker->flags & SHRINKER_MEMCG_AWARE) {
> > 		error = prealloc_memcg_shrinker(shrinker);
> > 		if (!error)
> > 			return 0;
> > 		if (error != -ENOSYS)
> > 			return error;
> >
> > 		/* memcgs not enabled! */
> > 		shrinker->flags &= ~SHRINKER_MEMCG_AWARE;
> > 	}
> >
> > 	size = sizeof(*shrinker->nr_deferred);
> > 	....
> > 	return 0;
> > }
> >
> > This guarantees that only the shrinker instances that have a
> > correctly set up memcg attached to them will have the
> > SHRINKER_MEMCG_AWARE flag set. Hence in all the rest of the shrinker
> > code, we only ever need to check for SHRINKER_MEMCG_AWARE to
> > determine what we should do....
>
> Thanks. I see your point. We could move the memcg-specific details
> into prealloc_memcg_shrinker().
>
> It seems we have to acquire shrinker_rwsem before we check and modify
> the SHRINKER_MEMCG_AWARE bit if we may clear it.

Hi Dave,

Is it possible that shrinker register races with shrinker unregister?
It seems impossible to me from a quick visual code inspection, but I'm
not a VFS expert so I'm not quite sure. If it is impossible, the
implementation would be quite simple; otherwise we need to move the
shrinker_rwsem acquire/release into prealloc_shrinker,
free_prealloced_shrinker, and unregister_shrinker to protect the
SHRINKER_MEMCG_AWARE update.

> > >
> > >  void free_prealloced_shrinker(struct shrinker *shrinker)
> > >  {
> > > -	if (!shrinker->nr_deferred)
> > > +	if (is_deferred_memcg_aware(shrinker)) {
> > > +		unregister_memcg_shrinker(shrinker);
> > >  		return;
> > > +	}
> > >
> > > -	if (shrinker->flags & SHRINKER_MEMCG_AWARE)
> > > -		unregister_memcg_shrinker(shrinker);
> > > +	if (!shrinker->nr_deferred)
> > > +		return;
> > >
> > >  	kfree(shrinker->nr_deferred);
> > >  	shrinker->nr_deferred = NULL;
> >
> > e.g. then this function can simply do:
> >
> > {
> > 	if (shrinker->flags & SHRINKER_MEMCG_AWARE)
> > 		return unregister_memcg_shrinker(shrinker);
> > 	kfree(shrinker->nr_deferred);
> > 	shrinker->nr_deferred = NULL;
> > }
> >
> > Cheers,
> >
> > Dave.
> > --
> > Dave Chinner
> > david@fromorbit.com