From: Soheil Hassas Yeganeh
Date: Sun, 21 Aug 2022 20:24:04 -0400
Subject: Re: [PATCH 2/3] mm: page_counter: rearrange struct page_counter fields
To: Shakeel Butt
Cc: Johannes Weiner, Michal Hocko, Roman Gushchin, Muchun Song,
    Michal Koutný, Eric Dumazet, Feng Tang, Oliver Sang, Andrew Morton,
    lkp@lists.01.org, cgroups@vger.kernel.org, linux-mm, netdev, linux-kernel
In-Reply-To: <20220822001737.4120417-3-shakeelb@google.com>
References: <20220822001737.4120417-1-shakeelb@google.com>
    <20220822001737.4120417-3-shakeelb@google.com>

On Sun, Aug 21, 2022 at 8:18 PM Shakeel Butt wrote:
>
> With memcg v2 enabled, memcg->memory.usage is a very hot member for
> the workloads doing memcg charging on multiple CPUs concurrently.
> Particularly the network intensive workloads. In addition, there is a
> false cache sharing between memory.usage and memory.high on the charge
> path. This patch moves the usage into a separate cacheline and move all
> the read most fields into separate cacheline.
>
> To evaluate the impact of this optimization, on a 72 CPUs machine, we
> ran the following workload in a three level of cgroup hierarchy with top
> level having min and low setup appropriately. More specifically
> memory.min equal to size of netperf binary and memory.low double of
> that.
>
>  $ netserver -6
>  # 36 instances of netperf with following params
>  $ netperf -6 -H ::1 -l 60 -t TCP_SENDFILE -- -m 10K
>
> Results (average throughput of netperf):
> Without (6.0-rc1)    10482.7 Mbps
> With patch           12413.7 Mbps (18.4% improvement)
>
> With the patch, the throughput improved by 18.4%.

Shakeel, for my understanding: is this on top of the gains from the
previous patch?

> One side-effect of this patch is the increase in the size of struct
> mem_cgroup. However for the performance improvement, this additional
> size is worth it. In addition there are opportunities to reduce the size
> of struct mem_cgroup like deprecation of kmem and tcpmem page counters
> and better packing.
>
> Signed-off-by: Shakeel Butt
> Reported-by: kernel test robot
> ---
>  include/linux/page_counter.h | 34 +++++++++++++++++++++++-----------
>  1 file changed, 23 insertions(+), 11 deletions(-)
>
> diff --git a/include/linux/page_counter.h b/include/linux/page_counter.h
> index 679591301994..8ce99bde645f 100644
> --- a/include/linux/page_counter.h
> +++ b/include/linux/page_counter.h
> @@ -3,15 +3,27 @@
>  #define _LINUX_PAGE_COUNTER_H
>
>  #include <linux/atomic.h>
> +#include <linux/cache.h>
>  #include <linux/kernel.h>
>  #include <asm/page.h>
>
> +#if defined(CONFIG_SMP)
> +struct pc_padding {
> +	char x[0];
> +} ____cacheline_internodealigned_in_smp;
> +#define PC_PADDING(name)	struct pc_padding name
> +#else
> +#define PC_PADDING(name)
> +#endif
> +
>  struct page_counter {
> +	/*
> +	 * Make sure 'usage' does not share cacheline with any other field. The
> +	 * memcg->memory.usage is a hot member of struct mem_cgroup.
> +	 */
> +	PC_PADDING(_pad1_);
>  	atomic_long_t usage;
> -	unsigned long min;
> -	unsigned long low;
> -	unsigned long high;
> -	unsigned long max;
> +	PC_PADDING(_pad2_);
>
>  	/* effective memory.min and memory.min usage tracking */
>  	unsigned long emin;
> @@ -23,16 +35,16 @@ struct page_counter {
>  	atomic_long_t low_usage;
>  	atomic_long_t children_low_usage;
>
> -	/* legacy */
>  	unsigned long watermark;
>  	unsigned long failcnt;
>
> -	/*
> -	 * 'parent' is placed here to be far from 'usage' to reduce
> -	 * cache false sharing, as 'usage' is written mostly while
> -	 * parent is frequently read for cgroup's hierarchical
> -	 * counting nature.
> -	 */
> +	/* Keep all the read most fields in a separete cacheline. */
> +	PC_PADDING(_pad3_);
> +
> +	unsigned long min;
> +	unsigned long low;
> +	unsigned long high;
> +	unsigned long max;
>  	struct page_counter *parent;
>  };
>
> --
> 2.37.1.595.g718a3a8f04-goog
>
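
For readers unfamiliar with the trick the patch relies on: ____cacheline_internodealigned_in_smp (from <linux/cache.h>) forces the zero-size pc_padding struct onto a cacheline boundary, so the frequently written usage counter ends up alone on its cacheline while the read-mostly limits (min, low, high, max) land on a different one, avoiding false sharing between charging CPUs and readers of the limits. Below is a minimal userspace sketch of the same idiom; the struct, field names, and the 64-byte cacheline size are illustrative assumptions, not the kernel definitions.

/*
 * Userspace sketch of the padding idiom behind PC_PADDING: keep the
 * heavily written counter on its own cacheline so concurrent writers
 * do not bounce the cacheline that holds the read-mostly limits.
 * Names and the 64-byte cacheline size are illustrative assumptions.
 */
#include <stdatomic.h>
#include <stdio.h>

#define CACHELINE_SIZE 64

struct counter {
	/* Hot field, written on every charge/uncharge. */
	_Alignas(CACHELINE_SIZE) atomic_long usage;

	/* Read-mostly limits, pushed onto the next cacheline. */
	_Alignas(CACHELINE_SIZE) unsigned long min;
	unsigned long low;
	unsigned long high;
	unsigned long max;
};

int main(void)
{
	struct counter c = { .max = 1024 };

	atomic_fetch_add(&c.usage, 1);	/* writer touches only its own cacheline */
	printf("usage=%ld max=%lu sizeof=%zu\n",
	       atomic_load(&c.usage), c.max, sizeof(c));
	return 0;
}

In the patch above, PC_PADDING(_pad1_), (_pad2_), and (_pad3_) play the role of the _Alignas members in this sketch, and defining PC_PADDING to nothing on !CONFIG_SMP avoids paying the size cost on builds where false sharing cannot occur.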