Date: Fri, 1 Oct 2021 10:28:10 -0400
From: Johannes Weiner
To: Shakeel Butt
Cc: Michal Hocko, Michal Koutný, Andrew Morton,
    cgroups@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 1/2] memcg: flush stats only if updated
In-Reply-To: <20210930044711.2892660-1-shakeelb@google.com>

On Wed, Sep 29, 2021 at 09:47:10PM -0700, Shakeel Butt wrote:
> At the moment, the kernel flushes the memcg stats on every refault and
> also on every reclaim iteration. Although rstat maintains a per-cpu
> update tree, on every flush the kernel still has to walk each cpu's
> rstat update tree to check whether there is anything to flush. This
> patch adds tracking on the stats update side so the flush side can be
> smarter and skip the flush when there are no updates.
>
> The stats update codepath is very performance sensitive for many
> workloads and benchmarks. So we cannot do what commit aa48e47e3906
> ("memcg: infrastructure to flush memcg stats") did, which was to
> trigger an async flush through queue_work(); that caused a lot of
> performance regression reports and was reverted by commit 1f828223b799
> ("memcg: flush lruvec stats in the refault").
>
> In this patch we keep the stats update codepath very minimal and let
> the stats reader side flush the stats only when the updates are over a
> specific threshold. For now the threshold is (nr_cpus * CHARGE_BATCH).
>
> To evaluate the impact of this patch, an 8 GiB tmpfs file was created
> on a system with swap-on-zram and pushed to swap through the
> memory.force_empty interface. Reading the whole file back triggers the
> memcg stat flush in the refault code path. With this patch, we
> observed a 63% reduction in the read time of the 8 GiB file.
>
> Signed-off-by: Shakeel Butt

This is a great idea.

Acked-by: Johannes Weiner

One minor nit:

> @@ -107,6 +107,8 @@ static bool do_memsw_account(void)
>  static void flush_memcg_stats_dwork(struct work_struct *w);
>  static DECLARE_DEFERRABLE_WORK(stats_flush_dwork, flush_memcg_stats_dwork);
>  static DEFINE_SPINLOCK(stats_flush_lock);
> +static DEFINE_PER_CPU(unsigned int, stats_updates);
> +static atomic_t stats_flush_threshold = ATOMIC_INIT(0);
>
>  #define THRESHOLDS_EVENTS_TARGET 128
>  #define SOFTLIMIT_EVENTS_TARGET 1024
> @@ -635,6 +637,13 @@ mem_cgroup_largest_soft_limit_node(struct mem_cgroup_tree_per_node *mctz)
>  	return mz;
>  }
>
> +static inline void memcg_rstat_updated(struct mem_cgroup *memcg)
> +{
> +	cgroup_rstat_updated(memcg->css.cgroup, smp_processor_id());
> +	if (!(__this_cpu_inc_return(stats_updates) % MEMCG_CHARGE_BATCH))
> +		atomic_inc(&stats_flush_threshold);
> +}
> +
>  /**
>   * __mod_memcg_state - update cgroup memory statistics
>   * @memcg: the memory cgroup
> @@ -647,7 +656,7 @@ void __mod_memcg_state(struct mem_cgroup *memcg, int idx, int val)
>  		return;
>
>  	__this_cpu_add(memcg->vmstats_percpu->state[idx], val);
> -	cgroup_rstat_updated(memcg->css.cgroup, smp_processor_id());
> +	memcg_rstat_updated(memcg);
>  }
>
>  /* idx can be of type enum memcg_stat_item or node_stat_item. */
> @@ -675,10 +684,12 @@ void __mod_memcg_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
>  	memcg = pn->memcg;
>
>  	/* Update memcg */
> -	__mod_memcg_state(memcg, idx, val);
> +	__this_cpu_add(memcg->vmstats_percpu->state[idx], val);
>
>  	/* Update lruvec */
>  	__this_cpu_add(pn->lruvec_stats_percpu->state[idx], val);
> +
> +	memcg_rstat_updated(memcg);
>  }
>
>  /**
> @@ -780,7 +791,7 @@ void __count_memcg_events(struct mem_cgroup *memcg, enum vm_event_item idx,
>  		return;
>
>  	__this_cpu_add(memcg->vmstats_percpu->events[idx], count);
> -	cgroup_rstat_updated(memcg->css.cgroup, smp_processor_id());
> +	memcg_rstat_updated(memcg);
>  }
>
>  static unsigned long memcg_events(struct mem_cgroup *memcg, int event)
> @@ -5341,15 +5352,22 @@ static void mem_cgroup_css_reset(struct cgroup_subsys_state *css)
>  	memcg_wb_domain_size_changed(memcg);
>  }
>
> -void mem_cgroup_flush_stats(void)
> +static void __mem_cgroup_flush_stats(void)
>  {
>  	if (!spin_trylock(&stats_flush_lock))
>  		return;
>
>  	cgroup_rstat_flush_irqsafe(root_mem_cgroup->css.cgroup);
> +	atomic_set(&stats_flush_threshold, 0);
>  	spin_unlock(&stats_flush_lock);
>  }
>
> +void mem_cgroup_flush_stats(void)
> +{
> +	if (atomic_read(&stats_flush_threshold) > num_online_cpus())
> +		__mem_cgroup_flush_stats();
> +}

Because of the way the updates and the flush now interact through these
variables, it might be better to move the flush functions up next to
the variable definitions and memcg_rstat_updated().

It would also be good to have a small explanation of the optimization
in the code itself: that we accept (limited) percpu fuzz in exchange
for not having to check all percpus on every flush.
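Something like this, purely to illustrate the shape (completely
untested; the functions and variables are from your patch, only the
comment and the placement are new):

static DEFINE_PER_CPU(unsigned int, stats_updates);
static atomic_t stats_flush_threshold = ATOMIC_INIT(0);

/*
 * Flushing is deferred to readers. Updaters only bump a per-cpu
 * counter and raise stats_flush_threshold once a full batch of
 * updates has accumulated; readers flush only once the threshold
 * exceeds the number of online cpus. That means reads tolerate
 * stats that are up to roughly nr_cpus * MEMCG_CHARGE_BATCH update
 * events stale, in exchange for not having to check every percpu
 * tree on each flush.
 */
static inline void memcg_rstat_updated(struct mem_cgroup *memcg)
{
	cgroup_rstat_updated(memcg->css.cgroup, smp_processor_id());
	if (!(__this_cpu_inc_return(stats_updates) % MEMCG_CHARGE_BATCH))
		atomic_inc(&stats_flush_threshold);
}

static void __mem_cgroup_flush_stats(void)
{
	if (!spin_trylock(&stats_flush_lock))
		return;

	cgroup_rstat_flush_irqsafe(root_mem_cgroup->css.cgroup);
	atomic_set(&stats_flush_threshold, 0);
	spin_unlock(&stats_flush_lock);
}

void mem_cgroup_flush_stats(void)
{
	if (atomic_read(&stats_flush_threshold) > num_online_cpus())
		__mem_cgroup_flush_stats();
}

That keeps both halves of the batching scheme and the state they share
in one place, under one comment.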