Date: Fri, 20 Nov 2020 19:44:24 +0800
From: Feng Tang
To: Michal Hocko
Cc: Xing Zhengjun, Waiman Long, Linus Torvalds, Andrew Morton, Shakeel
    Butt, Chris Down, Johannes Weiner, Roman Gushchin, Tejun Heo,
    Vladimir Davydov, Yafang Shao, LKML, lkp@lists.01.org, lkp@intel.com,
    zhengjun.xing@intel.com, ying.huang@intel.com
Subject: Re: [LKP] Re: [mm/memcg] bd0b230fe1: will-it-scale.per_process_ops -22.7% regression
Message-ID: <20201120114424.GA103521@shbuild999.sh.intel.com>
References: <20201102091543.GM31092@shao2-debian>
    <20201102092754.GD22613@dhcp22.suse.cz>
    <82d73ebb-a31e-4766-35b8-82afa85aa047@intel.com>
    <20201102100247.GF22613@dhcp22.suse.cz>
    <20201104081546.GB10052@dhcp22.suse.cz>
    <20201112122844.GA11000@shbuild999.sh.intel.com>
    <20201112141654.GC12240@dhcp22.suse.cz>
    <20201113073436.GA113119@shbuild999.sh.intel.com>
In-Reply-To: <20201113073436.GA113119@shbuild999.sh.intel.com>
User-Agent: Mutt/1.5.24 (2015-08-30)

On Fri, Nov 13, 2020 at 03:34:36PM +0800, Feng Tang wrote:
> On Thu, Nov 12, 2020 at 03:16:54PM +0100, Michal Hocko wrote:
> > > > > I added one phony page_counter after the union and re-tested; the
> > > > > regression was reduced to -1.2%. It looks like the regression is
> > > > > caused by the data structure layout change.
> > > >
> > > > Thanks for double checking. Could you try to cache align the
> > > > page_counter struct? If that helps then we should figure out which
> > > > counters clash with each other by adding the alignment between the
> > > > respective counters.
> > >
> > > We tried the below patch to make 'page_counter' aligned.
> > >
> > > diff --git a/include/linux/page_counter.h b/include/linux/page_counter.h
> > > index bab7e57..9efa6f7 100644
> > > --- a/include/linux/page_counter.h
> > > +++ b/include/linux/page_counter.h
> > > @@ -26,7 +26,7 @@ struct page_counter {
> > >  	/* legacy */
> > >  	unsigned long watermark;
> > >  	unsigned long failcnt;
> > > -};
> > > +} ____cacheline_internodealigned_in_smp;
> > >
> > > and with it, the -22.7% performance change turns into a small -1.7%,
> > > which confirms the regression is caused by the change in data alignment.
> > >
> > > After the patch, the size of 'page_counter' increases from 104 bytes to
> > > 128 bytes, and the size of 'mem_cgroup' increases from 2880 bytes to
> > > 3008 bytes (with our kernel config). Another major data structure which
> > > contains 'page_counter' is 'hugetlb_cgroup', whose size will change
> > > from 912B to 1024B.
> > >
> > > Should we make these page_counters aligned to reduce cacheline conflict?
> >
> > I would rather focus on a more effective mem_cgroup layout. It is very
> > likely that we are just stumbling over two counters here.
> >
> > Could you try to add cache alignment of counters after memory and see
> > which one makes the difference? I do not expect memsw to be the one
> > because that one is used together with the main counter. But who knows,
> > maybe the way it crosses the cache line has the exact effect. Hard to
> > tell without other numbers.
>
> I added some alignment changes around 'memsw', but none of them could
> restore the -22.7%. The following log shows what the alignments look
> like:
>
> t1: memcg=0x7cd1000 memory=0x7cd10d0 memsw=0x7cd1140 kmem=0x7cd11b0 tcpmem=0x7cd1220
> t2: memcg=0x7cd0000 memory=0x7cd00d0 memsw=0x7cd0140 kmem=0x7cd01c0 tcpmem=0x7cd0230
>
> So both 'memsw' are aligned, but t2's 'kmem' is aligned while t1's is
> not.
>
> I will check more on the perf data about detailed hotspots.
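
(Side note, in case others want to eyeball the offsets above without
booting a kernel: below is a minimal userspace mock, NOT kernel code.
It assumes a 64-byte cache line, the pre-patch 104-byte 'page_counter',
and the 0xd0 offset of 'memory' from the t1/t2 log; all 'mock_*' names
are illustrative, and the numbers will not match the aligned runs.)

#include <stdio.h>
#include <stddef.h>

#define CACHELINE	64

struct mock_page_counter {
	char bytes[104];	/* sizeof(struct page_counter) before the patch */
};

struct mock_mem_cgroup {
	char head[0xd0];	/* fields before 'memory', per the t1/t2 log */
	struct mock_page_counter memory;
	struct mock_page_counter memsw;	/* stands in for union { swap; memsw; } */
	struct mock_page_counter kmem;
	struct mock_page_counter tcpmem;
};

static void report(const char *name, size_t off, size_t size)
{
	/* print which cache lines a counter spans, and whether it starts aligned */
	printf("%-7s offset=0x%-4zx lines %zu..%zu%s\n",
	       name, off, off / CACHELINE, (off + size - 1) / CACHELINE,
	       (off % CACHELINE) ? "" : " (cache aligned)");
}

int main(void)
{
	size_t sz = sizeof(struct mock_page_counter);

	report("memory", offsetof(struct mock_mem_cgroup, memory), sz);
	report("memsw",  offsetof(struct mock_mem_cgroup, memsw), sz);
	report("kmem",   offsetof(struct mock_mem_cgroup, kmem), sz);
	report("tcpmem", offsetof(struct mock_mem_cgroup, tcpmem), sz);
	return 0;
}
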
Some more updates from checking this:

Waiman's patch effectively removes one 'struct page_counter' between
'memory' and 'memsw'. And the mem_cgroup is:

struct mem_cgroup {
	...
	struct page_counter memory;		/* Both v1 & v2 */

	union {
		struct page_counter swap;	/* v2 only */
		struct page_counter memsw;	/* v1 only */
	};

	/* Legacy consumer-oriented counters */
	struct page_counter kmem;		/* v1 only */
	struct page_counter tcpmem;		/* v1 only */
	...
	...
	MEMCG_PADDING(_pad1_);

	atomic_t		moving_account;
	struct task_struct	*move_lock_task;
	...
};

I did experiments by inserting a 'page_counter' between 'memory' and
'MEMCG_PADDING(_pad1_)'. No matter where I put it, the benchmark result
recovers from 145K to 185K, which is really confusing, as adding a
'page_counter' right before '_pad1_' doesn't change the cache alignment
of any member.

Thanks,
Feng
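
PS: for reference, one placement of the experiment above looked roughly
like this (illustrative only: the hunk context is approximate, and
'_dummy_probe' is a made-up name for the phony counter, not anything in
the tree):

--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ ... @@ struct mem_cgroup {
 	/* Legacy consumer-oriented counters */
 	struct page_counter kmem;		/* v1 only */
 	struct page_counter tcpmem;		/* v1 only */
+	/* debug only: phony counter to probe the layout effect */
+	struct page_counter _dummy_probe;
 
 	MEMCG_PADDING(_pad1_);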