Date: Wed, 5 Apr 2017 12:42:25 +0200
From: Michal Hocko
To: Andrey Ryabinin
Cc: Thomas Hellstrom, akpm@linux-foundation.org,
    penguin-kernel@I-love.SAKURA.ne.jp, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, hpa@zytor.com, chris@chris-wilson.co.uk,
    hch@lst.de, mingo@elte.hu, jszhang@marvell.com, joelaf@google.com,
    joaodias@google.com, willy@infradead.org, tglx@linutronix.de,
    stable@vger.kernel.org
Subject: Re: [PATCH 1/4] mm/vmalloc: allow to call vfree() in atomic context
Message-ID: <20170405104224.GH6035@dhcp22.suse.cz>
References: <20170330102719.13119-1-aryabinin@virtuozzo.com>
    <2cfc601e-3093-143e-b93d-402f330a748a@vmware.com>
    <20170404094148.GJ15132@dhcp22.suse.cz>
In-Reply-To: 
User-Agent: Mutt/1.5.23 (2014-03-12)

On Wed 05-04-17 13:31:23, Andrey Ryabinin wrote:
> On 04/04/2017 12:41 PM, Michal Hocko wrote:
> > On Thu 30-03-17 17:48:39, Andrey Ryabinin wrote:
> >> From: Andrey Ryabinin
> >> Subject: mm/vmalloc: allow to call vfree() in atomic context fix
> >>
> >> Don't spawn a worker if we are already purging.
> >>
> >> Signed-off-by: Andrey Ryabinin
> >
> > I would rather put this into a separate patch, ideally with some numbers,
> > as this is an optimization...
>
> It's quite a simple optimization and I don't think it deserves to
> be a separate patch.

I disagree. I am pretty sure nobody will remember it after a few years.
I do not want to push too hard on this, but I can tell you from my own
experience that we used to do way too many optimizations like that in the
past, and they tend to be real head scratchers these days. Moreover, people
tend to build on top of them without understanding them, and then chances
are quite high that they are no longer relevant anymore.

> But I did some measurements. With VMAP_STACK=y enabled and
> NR_CACHED_STACK changed to 0, running fork() 100000 times gives this:
>
> With the optimization:
>
> ~ # grep try_purge /proc/kallsyms
> ffffffff811d0dd0 t try_purge_vmap_area_lazy
> ~ # perf stat --repeat 10 -ae workqueue:workqueue_queue_work --filter 'function == 0xffffffff811d0dd0' ./fork
>
>  Performance counter stats for 'system wide' (10 runs):
>
>                 15      workqueue:workqueue_queue_work    ( +- 0.88% )
>
>        1.615368474 seconds time elapsed    ( +- 0.41% )
>
>
> Without the optimization:
>
> ~ # grep try_purge /proc/kallsyms
> ffffffff811d0dd0 t try_purge_vmap_area_lazy
> ~ # perf stat --repeat 10 -ae workqueue:workqueue_queue_work --filter 'function == 0xffffffff811d0dd0' ./fork
>
>  Performance counter stats for 'system wide' (10 runs):
>
>                 30      workqueue:workqueue_queue_work    ( +- 1.31% )
>
>        1.613231060 seconds time elapsed    ( +- 0.38% )
>
>
> So there is no measurable difference in the test itself, but we queue
> twice as many jobs without this optimization. It should decrease the load
> on the kworkers.

And this is really valuable for the changelog! Thanks!
-- 
Michal Hocko
SUSE Labs