From: Waiman Long <longman@redhat.com>
To: Johannes Weiner, Michal Hocko, Vladimir Davydov, Andrew Morton, Tejun Heo, Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Vlastimil Babka, Roman Gushchin
Cc: linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, linux-mm@kvack.org, Shakeel Butt, Muchun Song, Alex Shi, Chris Down, Yafang Shao, Wei Yang, Masayoshi Mizuma, Xing Zhengjun, Matthew Wilcox, Waiman Long
Subject: [PATCH v4 2/5] mm/memcg: Cache vmstat data in percpu memcg_stock_pcp
Date: Sun, 18 Apr 2021 20:00:29 -0400
Message-Id: <20210419000032.5432-3-longman@redhat.com>
In-Reply-To: <20210419000032.5432-1-longman@redhat.com>
References: <20210419000032.5432-1-longman@redhat.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Before the new slab memory controller with per-object byte charging, charging and vmstat data updates happened only when new slab pages were allocated or freed. Now they are done with every kmem_cache_alloc() and kmem_cache_free(), which adds overhead for workloads that generate a lot of alloc and free calls. The memcg_stock_pcp is already used to cache byte charges for a specific obj_cgroup to reduce that overhead. To reduce it further, this patch caches the vmstat data in the memcg_stock_pcp structure as well, until either a page's worth of updates has accumulated or the other cached data change. Caching the vmstat data in the per-cpu stock replaces two writes to non-hot cachelines (the memcg-specific and the memcg-lruvec-specific vmstat data) with a single write to a hot local stock cacheline.
On a 2-socket Cascade Lake server with instrumentation enabled and this patch applied, about 20% (634400 out of 3243830) of the calls to mod_objcg_state() after initial boot led to an actual call to __mod_objcg_state(). During a parallel kernel build, the figure was about 17% (24329265 out of 142512465). So caching the vmstat data reduces the number of calls to __mod_objcg_state() by more than 80%.

Signed-off-by: Waiman Long <longman@redhat.com>
Reviewed-by: Shakeel Butt
---
 mm/memcontrol.c | 64 ++++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 61 insertions(+), 3 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index dc9032f28f2e..693453f95d99 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2213,7 +2213,10 @@ struct memcg_stock_pcp {
 
 #ifdef CONFIG_MEMCG_KMEM
 	struct obj_cgroup *cached_objcg;
+	struct pglist_data *cached_pgdat;
 	unsigned int nr_bytes;
+	int vmstat_idx;
+	int vmstat_bytes;
 #endif
 
 	struct work_struct work;
@@ -3150,8 +3153,9 @@ void __memcg_kmem_uncharge_page(struct page *page, int order)
 	css_put(&memcg->css);
 }
 
-void mod_objcg_state(struct obj_cgroup *objcg, struct pglist_data *pgdat,
-		     enum node_stat_item idx, int nr)
+static inline void __mod_objcg_state(struct obj_cgroup *objcg,
+				     struct pglist_data *pgdat,
+				     enum node_stat_item idx, int nr)
 {
 	struct mem_cgroup *memcg;
 	struct lruvec *lruvec = NULL;
@@ -3159,10 +3163,53 @@ void mod_objcg_state(struct obj_cgroup *objcg, struct pglist_data *pgdat,
 	rcu_read_lock();
 	memcg = obj_cgroup_memcg(objcg);
 	lruvec = mem_cgroup_lruvec(memcg, pgdat);
-	mod_memcg_lruvec_state(lruvec, idx, nr);
+	__mod_memcg_lruvec_state(lruvec, idx, nr);
 	rcu_read_unlock();
 }
 
+void mod_objcg_state(struct obj_cgroup *objcg, struct pglist_data *pgdat,
+		     enum node_stat_item idx, int nr)
+{
+	struct memcg_stock_pcp *stock;
+	unsigned long flags;
+
+	local_irq_save(flags);
+	stock = this_cpu_ptr(&memcg_stock);
+
+	/*
+	 * Save vmstat data in stock and skip vmstat array update unless
+	 * accumulating over a page of vmstat data or when pgdat or idx
+	 * changes.
+	 */
+	if (stock->cached_objcg != objcg) {
+		/* Output the current data as is */
+	} else if (!stock->vmstat_bytes) {
+		/* Save the current data */
+		stock->vmstat_bytes = nr;
+		stock->vmstat_idx = idx;
+		stock->cached_pgdat = pgdat;
+		nr = 0;
+	} else if ((stock->cached_pgdat != pgdat) ||
+		   (stock->vmstat_idx != idx)) {
+		/* Output the cached data & save the current data */
+		swap(nr, stock->vmstat_bytes);
+		swap(idx, stock->vmstat_idx);
+		swap(pgdat, stock->cached_pgdat);
+	} else {
+		stock->vmstat_bytes += nr;
+		if (abs(stock->vmstat_bytes) > PAGE_SIZE) {
+			nr = stock->vmstat_bytes;
+			stock->vmstat_bytes = 0;
+		} else {
+			nr = 0;
+		}
+	}
+	if (nr)
+		__mod_objcg_state(objcg, pgdat, idx, nr);
+
+	local_irq_restore(flags);
+}
+
 static bool consume_obj_stock(struct obj_cgroup *objcg, unsigned int nr_bytes)
 {
 	struct memcg_stock_pcp *stock;
@@ -3213,6 +3260,17 @@ static void drain_obj_stock(struct memcg_stock_pcp *stock)
 		stock->nr_bytes = 0;
 	}
 
+	/*
+	 * Flush the vmstat data in current stock
+	 */
+	if (stock->vmstat_bytes) {
+		__mod_objcg_state(old, stock->cached_pgdat, stock->vmstat_idx,
+				  stock->vmstat_bytes);
+		stock->cached_pgdat = NULL;
+		stock->vmstat_bytes = 0;
+		stock->vmstat_idx = 0;
+	}
+
 	obj_cgroup_put(old);
 	stock->cached_objcg = NULL;
 }
-- 
2.18.1