Subject: Re: [PATCH 2/2] hugepage: allow parallelization of the hugepage fault path
From: Hillf Danton
To: Davidlohr Bueso
Cc: Andrew Morton, Rik van Riel, Michel Lespinasse, Mel Gorman, Michal Hocko,
    Aneesh Kumar K.V, KAMEZAWA Hiroyuki, Hugh Dickins, Joonsoo Kim,
    David Gibson, Eric B Munson, Anton Blanchard, Konstantin Khlebnikov,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Date: Sun, 28 Jul 2013 14:00:54 +0800
In-Reply-To: <1374848845-1429-3-git-send-email-davidlohr.bueso@hp.com>
References: <1374848845-1429-1-git-send-email-davidlohr.bueso@hp.com>
            <1374848845-1429-3-git-send-email-davidlohr.bueso@hp.com>

On Fri, Jul 26, 2013 at 10:27 PM, Davidlohr Bueso wrote:
> From: David Gibson
>
> At present, the page fault path for hugepages is serialized by a
> single mutex. This is used to avoid spurious out-of-memory conditions
> when the hugepage pool is fully utilized (two processes or threads can
> race to instantiate the same mapping with the last hugepage from the
> pool, the race loser returning VM_FAULT_OOM). This problem is
> specific to hugepages, because it is normal to want to use every
> single hugepage in the system - with normal pages we simply assume
> there will always be a few spare pages which can be used temporarily
> until the race is resolved.
>
> Unfortunately this serialization also means that clearing of hugepages
> cannot be parallelized across multiple CPUs, which can lead to very
> long process startup times when using large numbers of hugepages.
>
> This patch improves the situation by replacing the single mutex with a
> table of mutexes, selected based on a hash, which allows us to know
> which page in the file we're instantiating. For shared mappings, the
> hash key is selected based on the address space and file offset being
> faulted. Similarly, for private mappings, the mm and virtual address
> are used.
>
> From: Anton Blanchard
> [https://lkml.org/lkml/2011/7/15/31]
> Forward ported and made a few changes:
>
> - Use the Jenkins hash to scatter the hash, better than using just the
>   low bits.
>
> - Always round num_fault_mutexes to a power of two to avoid an
>   expensive modulus in the hash calculation.
>
> I also tested this patch on a large POWER7 box using a simple parallel
> fault testcase:
>
> http://ozlabs.org/~anton/junkcode/parallel_fault.c
>
> Command line options:
>
> parallel_fault
>
> First the time taken to fault 128GB of 16MB hugepages:
>
> 40.68 seconds
>
> Now the same test with 64 concurrent threads:
>
> 39.34 seconds
>
> Hardly any speedup. Finally the 64 concurrent threads test with
> this patch applied:
>
> 0.85 seconds
>
> We go from 40.68 seconds to 0.85 seconds, an improvement of 47.9x.
>
> This was tested with the libhugetlbfs test suite, and the PASS/FAIL
> count was the same before and after this patch.
>
> From: Davidlohr Bueso
> - Cleaned up and forward ported to Linus' latest.
> - Cache aligned mutexes.
> - Keep non-SMP systems using a single mutex.
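To make the hashing scheme described above concrete, here is a minimal
sketch of how a mutex could be picked from the table. This is not the
code from the patch itself; fault_mutex_hash is an illustrative name,
and the exact key construction may differ, but num_fault_mutexes and
htlb_fault_mutex_table follow the names used elsewhere in this thread.

#include <linux/hugetlb.h>
#include <linux/jhash.h>
#include <linux/mm.h>
#include <linux/mutex.h>

static struct mutex *htlb_fault_mutex_table;
static unsigned int num_fault_mutexes;	/* rounded up to a power of two */

/*
 * Pick a mutex based on what is being faulted: shared mappings hash on
 * (address_space, file offset), private mappings hash on (mm, virtual
 * address).  Since num_fault_mutexes is a power of two, the expensive
 * modulus reduces to a mask.  Pointer truncation to u32 is acceptable
 * for hashing purposes.
 */
static u32 fault_mutex_hash(struct vm_area_struct *vma,
			    struct address_space *mapping, pgoff_t idx,
			    struct mm_struct *mm, unsigned long address)
{
	u32 key[2];

	if (vma->vm_flags & VM_SHARED) {
		key[0] = (u32)(unsigned long)mapping;
		key[1] = (u32)idx;
	} else {
		key[0] = (u32)(unsigned long)mm;
		key[1] = (u32)(address >> HPAGE_SHIFT);
	}

	return jhash2(key, ARRAY_SIZE(key), 0) & (num_fault_mutexes - 1);
}

hugetlb_fault() would then take htlb_fault_mutex_table[hash] instead of
the single global hugetlb_instantiation_mutex, so faults on different
pages can proceed in parallel while racing faults on the same page are
still serialized.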
>
> It was found that this mutex can become quite contended during the
> early phases of large databases which make use of huge pages - for
> instance startup and initial runs. One clear example is a 1.5Gb Oracle
> database, where lockstat reports that this mutex can be one of the top
> 5 most contended locks in the kernel during the first few minutes:
>
>     hugetlb_instantiation_mutex:   10678    10678
>     ---------------------------
>     hugetlb_instantiation_mutex    10678  [] hugetlb_fault+0x9e/0x340
>     ---------------------------
>     hugetlb_instantiation_mutex    10678  [] hugetlb_fault+0x9e/0x340
>
>     contentions:          10678
>     acquisitions:         99476
>     waittime-total: 76888911.01 us
>
> With this patch we see much less contention and wait time:
>
>     &htlb_fault_mutex_table[i]:    383
>     --------------------------
>     &htlb_fault_mutex_table[i]     383  [] hugetlb_fault+0x1eb/0x440
>     --------------------------
>     &htlb_fault_mutex_table[i]     383  [] hugetlb_fault+0x1eb/0x440
>
>     contentions:          383
>     acquisitions:      120546
>     waittime-total:   1381.72 us
>

I see the same figures in the message of Jul 18,

    contentions:          10678
    acquisitions:         99476
    waittime-total: 76888911.01 us

and

    contentions:          383
    acquisitions:      120546
    waittime-total:   1381.72 us

if I copied and pasted correctly. Were they measured with the global
semaphore introduced in 1/8 for serializing changes in file regions?
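For anyone wanting to reproduce something like Anton's numbers without
the original testcase, a rough userspace sketch of that kind of
parallel-fault workload follows. This is not the parallel_fault.c
linked above; the thread count and mapping size here are made up, and
enough hugepages must be reserved via /proc/sys/vm/nr_hugepages first.

/* Build with: cc -O2 -pthread parallel_fault_sketch.c */
#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define NR_THREADS	64
#define MAP_SIZE	(1UL << 30)	/* 1GB of hugepages, illustrative */

static char *map;

/* Each thread touches its own chunk, forcing hugepage faults in parallel. */
static void *fault_chunk(void *arg)
{
	unsigned long id = (unsigned long)arg;
	unsigned long chunk = MAP_SIZE / NR_THREADS;

	memset(map + id * chunk, 0, chunk);
	return NULL;
}

int main(void)
{
	pthread_t threads[NR_THREADS];
	unsigned long i;

	map = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	if (map == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	for (i = 0; i < NR_THREADS; i++)
		pthread_create(&threads[i], NULL, fault_chunk, (void *)i);
	for (i = 0; i < NR_THREADS; i++)
		pthread_join(threads[i], NULL);

	munmap(map, MAP_SIZE);
	return 0;
}

Timing the run with one thread versus NR_THREADS threads shows whether
the fault path scales, which is what the before/after numbers quoted
above measure.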