From: Shakeel Butt
Date: Thu, 25 Aug 2022 08:25:38 -0700
Subject: Re: [PATCH v2 2/3] mm: page_counter: rearrange struct page_counter fields
To: Michal Hocko
Cc: Johannes Weiner, Roman Gushchin, Muchun Song, Michal Koutný,
    Eric Dumazet, Soheil Hassas Yeganeh, Feng Tang, Oliver Sang,
    Andrew Morton, lkp@lists.01.org, Cgroups, Linux MM, netdev, LKML
References: <20220825000506.239406-1-shakeelb@google.com>
    <20220825000506.239406-3-shakeelb@google.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Aug 24, 2022 at 11:47 PM Michal Hocko wrote:
>
> On Thu 25-08-22 00:05:05, Shakeel Butt wrote:
> > With memcg v2 enabled, memcg->memory.usage is a very hot member for
> > the workloads doing memcg charging on multiple CPUs concurrently,
> > particularly for network-intensive workloads. In addition, there is
> > false cacheline sharing between memory.usage and memory.high on the
> > charge path. This patch moves usage into a separate cacheline and
> > moves all the read-mostly fields into another separate cacheline.
> >
> > To evaluate the impact of this optimization, we ran the following
> > workload on a 72-CPU machine, in a three-level cgroup hierarchy:
> >
> >  $ netserver -6
> >  # 36 instances of netperf with the following params
> >  $ netperf -6 -H ::1 -l 60 -t TCP_SENDFILE -- -m 10K
> >
> > Results (average throughput of netperf):
> >  Without (6.0-rc1)   10482.7 Mbps
> >  With patch          12413.7 Mbps (18.4% improvement)
> >
> > With the patch, the throughput improved by 18.4%.
> >
> > One side-effect of this patch is the increase in the size of struct
> > mem_cgroup. For example, with this patch on a 64-bit build, the size
> > of struct mem_cgroup increased from 4032 bytes to 4416 bytes. However,
> > the additional size is worth the performance improvement. In addition,
> > there are opportunities to reduce the size of struct mem_cgroup, such
> > as deprecating the kmem and tcpmem page counters and better packing.
> >
> > Signed-off-by: Shakeel Butt
> > Reported-by: kernel test robot
> > Reviewed-by: Feng Tang
> > Acked-by: Soheil Hassas Yeganeh
> > Acked-by: Roman Gushchin
>
> Acked-by: Michal Hocko
>
> Thanks. One nit below.
>
> > ---
> > Changes since v1:
> > - Updated the commit message
> > - Made struct page_counter cacheline-aligned
> >
> >  include/linux/page_counter.h | 35 +++++++++++++++++++++++------------
> >  1 file changed, 23 insertions(+), 12 deletions(-)
> >
> > diff --git a/include/linux/page_counter.h b/include/linux/page_counter.h
> > index 679591301994..78a1c934e416 100644
> > --- a/include/linux/page_counter.h
> > +++ b/include/linux/page_counter.h
> > @@ -3,15 +3,26 @@
> >  #define _LINUX_PAGE_COUNTER_H
> >
> >  #include
> > +#include
> >  #include
> >  #include
> >
> > +#if defined(CONFIG_SMP)
> > +struct pc_padding {
> > +	char x[0];
> > +} ____cacheline_internodealigned_in_smp;
> > +#define PC_PADDING(name)	struct pc_padding name
> > +#else
> > +#define PC_PADDING(name)
> > +#endif
> > +
> >  struct page_counter {
> > +	/*
> > +	 * Make sure 'usage' does not share cacheline with any other field. The
> > +	 * memcg->memory.usage is a hot member of struct mem_cgroup.
> > +	 */
> >  	atomic_long_t usage;
> > -	unsigned long min;
> > -	unsigned long low;
> > -	unsigned long high;
> > -	unsigned long max;
> > +	PC_PADDING(_pad1_);
> >
> >  	/* effective memory.min and memory.min usage tracking */
> >  	unsigned long emin;
> > @@ -23,18 +34,18 @@ struct page_counter {
> >  	atomic_long_t low_usage;
> >  	atomic_long_t children_low_usage;
> >
> > -	/* legacy */
> >  	unsigned long watermark;
> >  	unsigned long failcnt;
>
> These two are also touched in the charging path, so we could squeeze
> them into the same cache line as usage.
>
> 0-day machinery was quite good at hitting noticeable regressions any
> time we changed layout, so let's see what they come up with after this
> patch ;)

I will try this locally first (after some cleanups) to see if there is
any positive or negative impact, and report here.

> --
> Michal Hocko
> SUSE Labs