From: Shakeel Butt
To: Andrew Morton, Johannes Weiner, Michal Hocko, Roman Gushchin, Muchun Song, Yosry Ahmed
Cc: ying.huang@intel.com, feng.tang@intel.com, fengwei.yin@intel.com, oliver.sang@intel.com, kernel-team@meta.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2] memcg: rearrange fields of mem_cgroup_per_node
Date: Tue, 28 May 2024 09:40:50 -0700
Message-ID: <20240528164050.2625718-1-shakeel.butt@linux.dev>

The kernel test robot reported [1] a performance regression in the
will-it-scale test suite's page_fault2 test case for commit 70a64b7919cb
("memcg: dynamically allocate lruvec_stats"). On inspection, the commit
appears to have unintentionally introduced false cache sharing: the fields
of mem_cgroup_per_node which are read on the performance-critical path now
share a cacheline with the fields which are updated often on LRU page
allocations and deallocations. This causes contention on that cacheline,
and workloads which manipulate a lot of LRU pages regress, as the test
report shows.

The fix is to rearrange the fields of mem_cgroup_per_node so that the
false sharing is eliminated: move all the read-only pointers to the start
of the struct, followed by the memcg-v1-only fields, with the fields that
are updated often at the end.
Experiment setup: Ran fallocate1, fallocate2, page_fault1, page_fault2
and page_fault3 from the will-it-scale test suite inside a three level
memcg with /tmp mounted as tmpfs on two different machines, one a single
numa node and the other one, two node machine.

 $ ./[testcase]_processes -t $NR_CPUS -s 50

Results for single node, 52 CPU machine:

Testcase        base        with-patch

fallocate1       1031081     1431291  (38.80 %)
fallocate2       1029993     1421421  (38.00 %)
page_fault1      2269440     3405788  (50.07 %)
page_fault2      2375799     3572868  (50.30 %)
page_fault3     28641143    28673950  ( 0.11 %)

Results for dual node, 80 CPU machine:

Testcase        base        with-patch

fallocate1       2976288     3641185  (22.33 %)
fallocate2       2979366     3638181  (22.11 %)
page_fault1      6221790     7748245  (24.53 %)
page_fault2      6482854     7847698  (21.05 %)
page_fault3     28804324    28991870  ( 0.65 %)

Fixes: 70a64b7919cb ("memcg: dynamically allocate lruvec_stats")
Reported-by: kernel test robot
Reviewed-by: Yosry Ahmed
Reviewed-by: Roman Gushchin
Signed-off-by: Shakeel Butt
---
Changes since v1:
- Added comment as requested by Yosry.
- Removed the Closed tag to keep the regression open and keep improving.

 include/linux/memcontrol.h | 22 ++++++++++++++--------
 1 file changed, 14 insertions(+), 8 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 3d1599146afe..7403dd5926eb 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -96,23 +96,29 @@ struct mem_cgroup_reclaim_iter {
  * per-node information in memory controller.
  */
 struct mem_cgroup_per_node {
-	struct lruvec		lruvec;
+	/* Keep the read-only fields at the start */
+	struct mem_cgroup	*memcg;		/* Back pointer, we cannot */
+						/* use container_of	*/
 
 	struct lruvec_stats_percpu __percpu	*lruvec_stats_percpu;
 	struct lruvec_stats			*lruvec_stats;
-
-	unsigned long		lru_zone_size[MAX_NR_ZONES][NR_LRU_LISTS];
-
-	struct mem_cgroup_reclaim_iter	iter;
-
 	struct shrinker_info __rcu	*shrinker_info;
 
+	/*
+	 * Memcg-v1 only stuff in middle as buffer between read mostly fields
+	 * and update often fields to avoid false sharing. Once v1 stuff is
+	 * moved in a separate struct, an explicit padding is needed.
+	 */
+
 	struct rb_node		tree_node;	/* RB tree node */
 	unsigned long		usage_in_excess;/* Set to the value by which */
 						/* the soft limit is exceeded*/
 	bool			on_tree;
-	struct mem_cgroup	*memcg;		/* Back pointer, we cannot */
-						/* use container_of	*/
+
+	/* Fields which get updated often at the end. */
+	struct lruvec		lruvec;
+	unsigned long		lru_zone_size[MAX_NR_ZONES][NR_LRU_LISTS];
+	struct mem_cgroup_reclaim_iter	iter;
 };
 
 struct mem_cgroup_threshold {
-- 
2.43.0