Date: Wed, 24 Jan 2024 10:00:22 +0000
Message-ID: <20240124100023.660032-1-yosryahmed@google.com>
Subject: [PATCH] mm: memcg: optimize parent iteration in memcg_rstat_updated()
From: Yosry Ahmed
To: Andrew Morton
Cc: Johannes Weiner, Michal Hocko, Roman Gushchin, Shakeel Butt,
	Muchun Song, cgroups@vger.kernel.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, Yosry Ahmed, kernel test robot
Content-Type: text/plain; charset="UTF-8"

In memcg_rstat_updated(), we iterate the memcg being updated and its
parents to update memcg->vmstats_percpu->stats_updates in the fast path
(i.e. no atomic updates). According to my math, this is 3 memory loads
(and potentially 3 cache misses) per memcg:
- Load the address of memcg->vmstats_percpu.
- Load vmstats_percpu->stats_updates (based on some percpu calculation).
- Load the address of the parent memcg.

Avoid most of the cache misses by caching a pointer from each struct
memcg_vmstats_percpu to its parent on the corresponding CPU. In this
case, for the first memcg we have 2 memory loads (same as above):
- Load the address of memcg->vmstats_percpu.
- Load vmstats_percpu->stats_updates (based on some percpu calculation).

Then for each additional memcg, we need a single load to get the
parent's stats_updates directly. This reduces the number of loads from
O(3N) to O(2+N) -- where N is the number of memcgs we need to iterate.

Additionally, stash a pointer to memcg->vmstats in each struct
memcg_vmstats_percpu such that we can access the atomic counter that
all CPUs fold into, memcg->vmstats->stats_updates.
memcg_should_flush_stats() is changed to memcg_vmstats_needs_flush() to
accept a struct memcg_vmstats pointer accordingly.

In struct memcg_vmstats_percpu, make sure both pointers together with
stats_updates live on the same cacheline.
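To make the load arithmetic concrete, here is a simplified before/after
sketch of the fast-path loop (condensed from the diff below; the
flush-threshold check and the atomic fold are elided):

	/* Before: 3 loads per level (percpu ptr, counter, parent memcg). */
	for (; memcg; memcg = parent_mem_cgroup(memcg))
		__this_cpu_add(memcg->vmstats_percpu->stats_updates, abs(val));

	/*
	 * After: 2 loads for the first memcg (percpu ptr, counter), then a
	 * single cached-parent load per additional level, since the counter
	 * is reached directly through statc.
	 */
	statc = this_cpu_ptr(memcg->vmstats_percpu);
	for (; statc; statc = statc->parent)
		statc->stats_updates += abs(val);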
Finally, update mem_cgroup_alloc() to take in a parent pointer and
initialize the new cache pointers on each CPU. The percpu loop in
mem_cgroup_alloc() may look concerning, but there are multiple similar
loops in the cgroup creation path (e.g. cgroup_rstat_init()), most of
which are hidden within alloc_percpu().

According to Oliver's testing [1], this fixes multiple 30-38%
regressions in vm-scalability, will-it-scale-tlb_flush2, and
will-it-scale-fallocate1. This comes at a cost of 2 more pointers per
CPU (<2KB on a machine with 128 CPUs).

[1] https://lore.kernel.org/lkml/ZbDJsfsZt2ITyo61@xsang-OptiPlex-9020/

Fixes: 8d59d2214c23 ("mm: memcg: make stats flushing threshold per-memcg")
Tested-by: kernel test robot
Reported-by: kernel test robot
Closes: https://lore.kernel.org/oe-lkp/202401221624.cb53a8ca-oliver.sang@intel.com
Signed-off-by: Yosry Ahmed
---
The only noticeable change I made after Oliver's testing is adding the
cacheline padding in struct memcg_vmstats_percpu. In my config, the
added pointers happen to be on the same cacheline as stats_updates, and
I assume the same was true in Oliver's testing given the results.
However, this can change with different configs and as new stats are
added, so I added the cacheline padding to make sure they are always on
the same cacheline.
---
 mm/memcontrol.c | 49 ++++++++++++++++++++++++++++++++-----------------
 1 file changed, 32 insertions(+), 17 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index e4c8735e7c85c..868da91cceb48 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -633,8 +633,15 @@ struct memcg_vmstats_percpu {
 	unsigned long nr_page_events;
 	unsigned long targets[MEM_CGROUP_NTARGETS];
 
+	/* Fit members below in a single cacheline for memcg_rstat_updated() */
+	CACHELINE_PADDING(_pad1_);
+
 	/* Stats updates since the last flush */
 	unsigned int stats_updates;
+
+	/* Cached pointers for fast iteration in memcg_rstat_updated() */
+	struct memcg_vmstats_percpu *parent;
+	struct memcg_vmstats *vmstats;
 };
 
 struct memcg_vmstats {
@@ -698,36 +705,35 @@ static void memcg_stats_unlock(void)
 }
 
-static bool memcg_should_flush_stats(struct mem_cgroup *memcg)
+static bool memcg_vmstats_needs_flush(struct memcg_vmstats *vmstats)
 {
-	return atomic64_read(&memcg->vmstats->stats_updates) >
+	return atomic64_read(&vmstats->stats_updates) >
 		MEMCG_CHARGE_BATCH * num_online_cpus();
 }
 
 static inline void memcg_rstat_updated(struct mem_cgroup *memcg, int val)
 {
+	struct memcg_vmstats_percpu *statc;
 	int cpu = smp_processor_id();
-	unsigned int x;
 
 	if (!val)
 		return;
 
 	cgroup_rstat_updated(memcg->css.cgroup, cpu);
-
-	for (; memcg; memcg = parent_mem_cgroup(memcg)) {
-		x = __this_cpu_add_return(memcg->vmstats_percpu->stats_updates,
-					  abs(val));
-
-		if (x < MEMCG_CHARGE_BATCH)
+	statc = this_cpu_ptr(memcg->vmstats_percpu);
+	for (; statc; statc = statc->parent) {
+		statc->stats_updates += abs(val);
+		if (statc->stats_updates < MEMCG_CHARGE_BATCH)
 			continue;
 
 		/*
 		 * If @memcg is already flush-able, increasing stats_updates is
 		 * redundant. Avoid the overhead of the atomic update.
 		 */
-		if (!memcg_should_flush_stats(memcg))
-			atomic64_add(x, &memcg->vmstats->stats_updates);
-		__this_cpu_write(memcg->vmstats_percpu->stats_updates, 0);
+		if (!memcg_vmstats_needs_flush(statc->vmstats))
+			atomic64_add(statc->stats_updates,
+				     &statc->vmstats->stats_updates);
+		statc->stats_updates = 0;
 	}
 }
 
@@ -756,7 +762,7 @@ void mem_cgroup_flush_stats(struct mem_cgroup *memcg)
 	if (!memcg)
 		memcg = root_mem_cgroup;
 
-	if (memcg_should_flush_stats(memcg))
+	if (memcg_vmstats_needs_flush(memcg->vmstats))
 		do_flush_stats(memcg);
 }
 
@@ -770,7 +776,7 @@ void mem_cgroup_flush_stats_ratelimited(struct mem_cgroup *memcg)
 static void flush_memcg_stats_dwork(struct work_struct *w)
 {
 	/*
-	 * Deliberately ignore memcg_should_flush_stats() here so that flushing
+	 * Deliberately ignore memcg_vmstats_needs_flush() here so that flushing
 	 * in latency-sensitive paths is as cheap as possible.
 	 */
 	do_flush_stats(root_mem_cgroup);
@@ -5456,10 +5462,11 @@ static void mem_cgroup_free(struct mem_cgroup *memcg)
 	__mem_cgroup_free(memcg);
 }
 
-static struct mem_cgroup *mem_cgroup_alloc(void)
+static struct mem_cgroup *mem_cgroup_alloc(struct mem_cgroup *parent)
 {
+	struct memcg_vmstats_percpu *statc, *pstatc;
 	struct mem_cgroup *memcg;
-	int node;
+	int node, cpu;
 	int __maybe_unused i;
 	long error = -ENOMEM;
 
@@ -5483,6 +5490,14 @@ static struct mem_cgroup *mem_cgroup_alloc(void)
 	if (!memcg->vmstats_percpu)
 		goto fail;
 
+	for_each_possible_cpu(cpu) {
+		if (parent)
+			pstatc = per_cpu_ptr(parent->vmstats_percpu, cpu);
+		statc = per_cpu_ptr(memcg->vmstats_percpu, cpu);
+		statc->parent = parent ? pstatc : NULL;
+		statc->vmstats = memcg->vmstats;
+	}
+
 	for_each_node(node)
 		if (alloc_mem_cgroup_per_node_info(memcg, node))
 			goto fail;
@@ -5528,7 +5543,7 @@ mem_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
 	struct mem_cgroup *memcg, *old_memcg;
 
 	old_memcg = set_active_memcg(parent);
-	memcg = mem_cgroup_alloc();
+	memcg = mem_cgroup_alloc(parent);
 	set_active_memcg(old_memcg);
 	if (IS_ERR(memcg))
 		return ERR_CAST(memcg);
-- 
2.43.0.429.g432eaa2c6b-goog