Date: Mon, 8 Feb 2021 15:54:14 -0500
From: Johannes Weiner
To: Shakeel Butt
Cc: Andrew Morton, Tejun Heo, Michal Hocko, Roman Gushchin, Linux MM, Cgroups, LKML, Kernel Team
Subject: Re: [PATCH 7/8] mm: memcontrol: consolidate lruvec stat flushing
References: <20210205182806.17220-1-hannes@cmpxchg.org> <20210205182806.17220-8-hannes@cmpxchg.org>
X-Mailing-List: linux-kernel@vger.kernel.org

On Sun, Feb 07, 2021 at 06:28:37PM -0800, Shakeel Butt wrote:
> On Fri, Feb 5, 2021 at 10:28 AM Johannes Weiner wrote:
> >
> > There are two functions to flush the per-cpu data of an lruvec into
> > the rest of the cgroup tree: one for when the cgroup is being freed,
> > and one for when a CPU disappears during hotplug. The difference is
> > whether all CPUs or just one is being collected, but the rest of the
> > flushing code is the same. Merge them into one function and share
> > the common code.
> >
> > Signed-off-by: Johannes Weiner
>
> Reviewed-by: Shakeel Butt

Thanks!

> BTW, what about the lruvec stats? Why not convert them to rstat as well?

Great question.

I actually started this series with the lruvec stats included, but I'm
worried about the readers being too hot to use rstat (in its current
shape, at least).

For example, the refault code accesses the lruvec stats for every page
that is refaulting - at the root level in the case of global reclaim.
With an active workload, that would result in a very high rate of
whole-tree flushes.

We probably do need a better solution for the lruvecs as well, but in
this case it just started holding up the memory.stat fix for no reason,
so I tabled it for another patch series.
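[Editor's note: the following is a minimal, self-contained user-space
sketch of the consolidation described in the quoted patch text above.
The names (flush_lruvec_stats, flush_cpu, NR_CPUS, NR_STATS) and the
data layout are invented for illustration and are not the memcontrol.c
code; the point is only the shape of the merge: one helper that takes a
cpu index, where a negative value means "fold in every CPU".]

    /*
     * Illustrative sketch only, not kernel code. Two callers share one
     * flush routine: the hotplug path passes the dead CPU's index, the
     * teardown path passes -1 to collect all CPUs.
     */
    #include <stdio.h>

    #define NR_CPUS   4
    #define NR_STATS  3

    /* Per-cpu deltas not yet folded into the shared counters. */
    static long percpu_stat[NR_CPUS][NR_STATS];
    /* The shared counters the deltas are flushed into. */
    static long shared_stat[NR_STATS];

    static void flush_cpu(int cpu)
    {
            for (int i = 0; i < NR_STATS; i++) {
                    shared_stat[i] += percpu_stat[cpu][i];
                    percpu_stat[cpu][i] = 0;
            }
    }

    /* cpu >= 0: flush that CPU only; cpu < 0: flush every CPU. */
    static void flush_lruvec_stats(int cpu)
    {
            if (cpu >= 0) {
                    flush_cpu(cpu);         /* hotplug: just the dead CPU */
                    return;
            }
            for (int c = 0; c < NR_CPUS; c++)   /* teardown: all CPUs */
                    flush_cpu(c);
    }

    int main(void)
    {
            percpu_stat[0][0] = 5;
            percpu_stat[2][1] = 7;

            flush_lruvec_stats(2);          /* CPU 2 went offline */
            flush_lruvec_stats(-1);         /* cgroup is being freed */

            for (int i = 0; i < NR_STATS; i++)
                    printf("stat[%d] = %ld\n", i, shared_stat[i]);
            return 0;
    }

[This write-side consolidation is cheap; the concern raised in the reply
is about the read side of an rstat conversion, where reading an
up-to-date counter requires flushing the tree first, so a hot reader
like the refault path would trigger whole-tree flushes at a very high
rate.]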