Date: Sun, 9 Oct 2016 20:26:11 +0100
From: Chris Wilson
To: Joel Fernandes
Cc: Joel Fernandes, Jisheng Zhang, npiggin@kernel.dk,
	Linux Kernel Mailing List, linux-mm@kvack.org, rientjes@google.com,
	Andrew Morton, mgorman@techsingularity.net, iamjoonsoo.kim@lge.com,
	Linux ARM Kernel List
Subject: Re: [PATCH] mm/vmalloc: reduce the number of lazy_max_pages to reduce latency
Message-ID: <20161009192610.GB2718@nuc-i3427.alporthouse.com>
References: <20160929073411.3154-1-jszhang@marvell.com>
	<20160929081818.GE28107@nuc-i3427.alporthouse.com>
	<20161009124242.GA2718@nuc-i3427.alporthouse.com>

On Sun, Oct 09, 2016 at 12:00:31PM -0700, Joel Fernandes wrote:
> Ok. So I'll submit a patch with mutex for purge_lock and use
> cond_resched_lock for the vmap_area_lock as you suggested. I'll also
> drop the lazy_max_pages to 8MB as Andi suggested to reduce the lock
> hold time. Let me know if you have any objections.

The downside of using a mutex here, though, is that we may be called
from contexts that cannot sleep (alloc_vmap_area), or reschedule for
that matter! If we change the notion of purged, we can forgo the mutex
in favour of spinning on the direct reclaim path.
That just leaves the complication of whether to use cond_resched_lock()
or to drop and retake the lock around each individual
__free_vmap_area().
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre