Date: Mon, 6 Jan 2014 09:19:38 +0900
From: Joonsoo Kim
To: Davidlohr Bueso
Cc: Mel Gorman, Andrew Morton, Rik van Riel, Michal Hocko,
    "Aneesh Kumar K.V", KAMEZAWA Hiroyuki, Hugh Dickins, Davidlohr Bueso,
    David Gibson, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    Wanpeng Li, Naoya Horiguchi, Hillf Danton, aswin@hp.com
Subject: Re: [PATCH v3 13/14] mm, hugetlb: retry if failed to allocate and there is concurrent user
Message-ID: <20140106001938.GB696@lge.com>
In-Reply-To: <1388778945.2956.20.camel@buesod1.americas.hpqcorp.net>

On Fri, Jan 03, 2014 at 11:55:45AM -0800, Davidlohr Bueso wrote:
> Hi Joonsoo,
>
> Sorry about the delay...
>
> On Mon, 2013-12-23 at 11:11 +0900, Joonsoo Kim wrote:
> > On Mon, Dec 23, 2013 at 09:44:38AM +0900, Joonsoo Kim wrote:
> > > On Fri, Dec 20, 2013 at 10:48:17PM -0800, Davidlohr Bueso wrote:
> > > > On Fri, 2013-12-20 at 14:01 +0000, Mel Gorman wrote:
> > > > > On Thu, Dec 19, 2013 at 05:02:02PM -0800, Andrew Morton wrote:
> > > > > > On Wed, 18 Dec 2013 15:53:59 +0900 Joonsoo Kim wrote:
> > > > > >
> > > > > > > If parallel faults occur, we can fail to allocate a hugepage,
> > > > > > > because many threads dequeue a hugepage to handle a fault at the
> > > > > > > same address. This causes a reserved pool shortage for a little
> > > > > > > while, and a faulting thread that should be able to get a
> > > > > > > hugepage receives a SIGBUS signal instead.
> > > > > > >
> > > > > > > To solve this problem, we already have a nice solution, namely
> > > > > > > the hugetlb_instantiation_mutex. It blocks other threads from
> > > > > > > diving into the fault handler. This solves the problem cleanly,
> > > > > > > but it introduces performance degradation, because it serializes
> > > > > > > all fault handling.
> > > > > > >
> > > > > > > Now, I try to remove the hugetlb_instantiation_mutex to get rid
> > > > > > > of this performance degradation.
> > > > > >
> > > > > > So the whole point of the patch is to improve performance, but the
> > > > > > changelog doesn't include any performance measurements!
> > > > >
> > > > > I don't really deal with hugetlbfs any more and I have not examined
> > > > > this series, but I remember why I never really cared about this
> > > > > mutex. It wrecks fault scalability, but AFAIK fault scalability
> > > > > almost never mattered for workloads using hugetlbfs. The most common
> > > > > user of hugetlbfs by far is sysv shared memory. The memory is
> > > > > faulted early in the lifetime of the workload and after that it does
> > > > > not matter. At worst, it hurts application startup time, but that is
> > > > > still poor motivation for putting a lot of work into removing the
> > > > > mutex.
> > > > Yep, important hugepage workloads initially pound heavily on this
> > > > lock, then it naturally decreases.
> > > >
> > > > > Microbenchmarks will be able to trigger problems in this area, but
> > > > > it'd be important to check if any workload that matters is actually
> > > > > hitting that problem.
> > > >
> > > > I was thinking of writing one to actually get some numbers for this
> > > > patchset -- I don't know of any benchmark that might stress this lock.
> > > >
> > > > However, I first measured the number of cycles it costs to start an
> > > > Oracle DB, and things went south with these changes. A simple 'startup
> > > > immediate' calls hugetlb_fault() ~5000 times. For a vanilla kernel,
> > > > this costs ~7.5 billion cycles, and with this patchset it goes up to
> > > > ~27.1 billion. While there is naturally a fair amount of variation,
> > > > these changes do seem to do more harm than good, at least in
> > > > real-world scenarios.
> > >
> > > Hello,
> > >
> > > I think the number of cycles is not the proper metric for this
> > > patchset, because cycles are wasted by fault handling failures.
> > > Instead, it targeted improved elapsed time.
>
> Fair enough, however the fact of the matter is that this approach does
> end up hurting performance. Regarding total startup time, I hardly saw
> any difference; with both vanilla and this patchset it takes close to
> 33.5 seconds.
>
> > > Could you tell me how long it takes to fault all of its hugepages?
> > >
> > > Anyway, this order of magnitude still seems a problem. :/
> > >
> > > I guess that cycles are wasted by zeroing hugepages in the fault path,
> > > as Andrew pointed out.
> > >
> > > I will send more patches to fix this problem.
> >
> > Hello, Davidlohr.
> >
> > Here goes the fix on top of this series.
>
> ... and with this patch we go from 27 down to 11 billion cycles, so this
> approach still costs more than what we currently have.
> A perf stat shows that an entire 1GB-hugepage-aware DB startup costs
> around ~30 billion cycles on a vanilla kernel, so the impact of
> hugetlb_fault() is definitely non-trivial and IMO worth considering.

Thanks a lot for your help. :)

> Now, I took my old patchset (https://lkml.org/lkml/2013/7/26/299) for a
> ride and things do look quite a bit better, which is basically what
> Andrew was suggesting previously anyway. With the hash table approach
> the startup time did go down to ~25.1 seconds, which is a nice -24.7%
> time reduction, with hugetlb_fault() consuming roughly 5.3 billion
> cycles. This hash table was on an 80-core system, so since we do the
> power-of-two round-up we end up with 256 entries -- I think we can do
> better if we enlarge it further, maybe to a static 1024, or, probably
> better, 8-ish * nr cpus.
>
> Thoughts? Is there any reason why we cannot go with this instead? Yes,
> we still keep the mutex, but the approach is (1) proven better for
> performance on real-world workloads and (2) far less invasive.

I have no more ideas for improving my patches now, so I agree with your
approach. When I reviewed your approach last time, I found one race
condition, and at that time I couldn't think of a solution for it. If you
resend it, I will review it and think about it again.

Thanks.