Date: Wed, 23 Aug 2017 11:27:12 +0300
From: Vladimir Davydov
To: Kirill Tkhai
Cc: apolyakov@beget.ru, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 aryabinin@virtuozzo.com, akpm@linux-foundation.org
Subject: Re: [PATCH 3/3] mm: Count list_lru_one::nr_items lockless
Message-ID: <20170823082712.tw6qtyllctn25puq@esperanza>
References: <150340381428.3845.6099251634440472539.stgit@localhost.localdomain>
 <150340497499.3845.3045559119569209195.stgit@localhost.localdomain>
 <20170822194725.ik3xwxu67wcthisb@esperanza>

On Wed, Aug 23, 2017 at 11:00:56AM +0300, Kirill Tkhai wrote:
> On 22.08.2017 22:47, Vladimir Davydov wrote:
> > On Tue, Aug 22, 2017 at 03:29:35PM +0300, Kirill Tkhai wrote:
> >> During reclaim of a memcg's slab, shrink_slab() iterates over all
> >> shrinkers registered in the system and tries to count and consume
> >> the objects related to the cgroup. Under memory pressure this
> >> behaves badly: I observe high system time and time spent in
> >> list_lru_count_one() for many processes on a RHEL7 kernel
> >> (collected via $perf record --call-graph fp -j k -a):
> >>
> >> 0,50%  nixstatsagent  [kernel.vmlinux]  [k] _raw_spin_lock               [k] _raw_spin_lock
> >> 0,26%  nixstatsagent  [kernel.vmlinux]  [k] shrink_slab                  [k] shrink_slab
> >> 0,23%  nixstatsagent  [kernel.vmlinux]  [k] super_cache_count            [k] super_cache_count
> >> 0,15%  nixstatsagent  [kernel.vmlinux]  [k] __list_lru_count_one.isra.2  [k] _raw_spin_lock
> >> 0,15%  nixstatsagent  [kernel.vmlinux]  [k] list_lru_count_one           [k] __list_lru_count_one.isra.2
> >>
> >> 0,94%  mysqld         [kernel.vmlinux]  [k] _raw_spin_lock               [k] _raw_spin_lock
> >> 0,57%  mysqld         [kernel.vmlinux]  [k] shrink_slab                  [k] shrink_slab
> >> 0,51%  mysqld         [kernel.vmlinux]  [k] super_cache_count            [k] super_cache_count
> >> 0,32%  mysqld         [kernel.vmlinux]  [k] __list_lru_count_one.isra.2  [k] _raw_spin_lock
> >> 0,32%  mysqld         [kernel.vmlinux]  [k] list_lru_count_one           [k] __list_lru_count_one.isra.2
> >>
> >> 0,73%  sshd           [kernel.vmlinux]  [k] _raw_spin_lock               [k] _raw_spin_lock
> >> 0,35%  sshd           [kernel.vmlinux]  [k] shrink_slab                  [k] shrink_slab
> >> 0,32%  sshd           [kernel.vmlinux]  [k] super_cache_count            [k] super_cache_count
> >> 0,21%  sshd           [kernel.vmlinux]  [k] __list_lru_count_one.isra.2  [k] _raw_spin_lock
> >> 0,21%  sshd           [kernel.vmlinux]  [k] list_lru_count_one           [k] __list_lru_count_one.isra.2
> >
> > It would be nice to see how this is improved by this patch.
> > Can you try to record the traces on the vanilla kernel with
> > and without this patch?
>
> Sadly, this is a production node, so it's impossible to run a vanilla
> kernel there.

I see :-( Then maybe you could try to come up with a contrived test?
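
For context on the thread: the contention comes from list_lru_count_one()
taking the list_lru_node spinlock just to read a per-memcg item counter,
and shrink_slab() calls it once per shrinker per memcg on every reclaim
pass. Below is a minimal userspace sketch of the locked vs. lockless
counting pattern the patch subject refers to; the struct and function
names are simplified stand-ins, not the kernel's actual types, and the
kernel would use READ_ONCE() where the sketch uses a volatile cast.

/*
 * Minimal sketch (illustrative, not the actual kernel patch):
 * a shrinker-style "count" path reads a per-list item counter
 * without taking the list's spinlock.
 *
 * Build: cc -O2 sketch.c -lpthread
 */
#include <pthread.h>
#include <stdio.h>

struct list_lru_one {
	pthread_spinlock_t lock;	/* protects the list itself */
	long nr_items;			/* updated under lock */
};

/* Locked variant: what the pre-patch count path effectively does.
 * Every counting CPU bounces the lock's cache line around. */
static long count_locked(struct list_lru_one *l)
{
	long n;

	pthread_spin_lock(&l->lock);
	n = l->nr_items;
	pthread_spin_unlock(&l->lock);
	return n;
}

/* Lockless variant: a single racy read. The value may be slightly
 * stale, which is fine when the caller only needs an estimate. */
static long count_lockless(struct list_lru_one *l)
{
	return *(volatile long *)&l->nr_items;
}

int main(void)
{
	struct list_lru_one l;

	pthread_spin_init(&l.lock, PTHREAD_PROCESS_PRIVATE);
	l.nr_items = 42;
	printf("locked:   %ld\n", count_locked(&l));
	printf("lockless: %ld\n", count_lockless(&l));
	pthread_spin_destroy(&l.lock);
	return 0;
}

The stale read is tolerable here because shrink_slab() only uses the
count as a heuristic for how many objects to ask the shrinker to scan.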
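
As for a contrived test, one hypothetical (untested) way to reproduce the
pattern without a production box would be to inflate per-memcg dentry
LRUs with negative dentries, then apply memory pressure from another task
while profiling with perf. The sketch below only does the population
half; the memcg setup, memory limit, and perf invocation are assumed to
happen outside the program, and the path template and count are
placeholders.

/*
 * Hypothetical reproducer sketch: stat() many distinct nonexistent
 * paths so each lookup leaves a negative dentry on this memcg's
 * superblock list_lru. Run one instance per memory cgroup.
 */
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

#define NR_DENTRIES (1L << 20)	/* placeholder; tune to the machine */

int main(void)
{
	struct stat st;
	char path[64];
	long i;

	for (i = 0; i < NR_DENTRIES; i++) {
		snprintf(path, sizeof(path), "/tmp/neg-dentry-%ld", i);
		stat(path, &st);	/* fails with ENOENT on purpose */
	}
	/* Keep the memcg populated while another task triggers reclaim. */
	pause();
	return 0;
}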