From: "Huang, Ying"
To: Mike Kravetz
Cc: "Huang, Ying", Andrew Morton, Andrea Arcangeli,
 "Kirill A. Shutemov", Nadia Yvette Chambers, Michal Hocko, Jan Kara,
 Matthew Wilcox, Hugh Dickins, Minchan Kim, Shaohua Li
Subject: Re: [PATCH -mm] mm: Clear to access sub-page last when clearing huge page
References: <20170807072131.8343-1-ying.huang@intel.com>
Date: Tue, 08 Aug 2017 12:24:27 +0800
In-Reply-To: (Mike Kravetz's message of "Mon, 7 Aug 2017 21:07:27 -0700")
Message-ID: <87inhyd6v8.fsf@yhuang-mobile.sh.intel.com>

Mike Kravetz writes:

> On 08/07/2017 12:21 AM, Huang, Ying wrote:
>> From: Huang Ying
>>
>> A huge page helps to reduce the TLB miss rate, but it has a larger
>> cache footprint, which can sometimes cause problems.  For example,
>> when clearing a huge page on an x86_64 platform, the cache footprint
>> is 2M.  But on a Xeon E5-2699 v3 CPU there are 18 cores and 36
>> threads, with only 45M of LLC (last level cache); that is, on
>> average, 2.5M of LLC per core and 1.25M per thread.  If cache
>> pressure is heavy while clearing the huge page, and we clear it from
>> beginning to end, the beginning of the huge page may already have
>> been evicted from the cache by the time we finish clearing its end.
>> Yet the application may well access the beginning of the huge page
>> right after it has been cleared.
>>
>> To help with this situation, this patch changes the order in which
>> the sub-pages of a huge page are cleared.  In quite a few situations
>> we know the address that the application will access after the huge
>> page has been cleared, for example, in a page fault handler.  Instead
>> of clearing the huge page from beginning to end, we clear the
>> sub-pages farthest from the to-be-accessed sub-page first, and clear
>> the to-be-accessed sub-page last.  This keeps that sub-page the most
>> cache-hot, and the sub-pages around it more cache-hot too.  If we
>> cannot know the address the application will access, the beginning
>> of the huge page is assumed to be the address it will access.
>>
>> With this patch, throughput increases by ~28.3% in the vm-scalability
>> anon-w-seq test case with 72 processes on a 2-socket Xeon E5-2699 v3
>> system (36 cores, 72 threads).  The test case creates 72 processes,
>> each of which mmaps a big anonymous memory area and writes to it from
>> beginning to end.  For each process, the other processes act as a
>> background workload that generates heavy cache pressure.  At the same
>> time, the cache miss rate dropped from ~33.4% to ~31.7%, the
>> IPC (instructions per cycle) increased from 0.56 to 0.74, and the
>> time spent in user space was reduced by ~7.9%.
>>
>> Thanks to Andi Kleen for proposing to use the to-be-accessed address
>> to determine the order in which the sub-pages are cleared.
>>
>> The hugetlbfs access address could be improved; that will be done in
>> another patch.
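To make the intended clearing order concrete, here is a minimal sketch
(illustrative only, not the actual patch; clear_user_highpage() and
cond_resched() are existing kernel helpers, while clear_huge_page_sketch()
and its addr_hint parameter are names made up for this illustration):

#include <linux/mm.h>
#include <linux/highmem.h>	/* clear_user_highpage() */
#include <linux/sched.h>	/* cond_resched() */

static void clear_huge_page_sketch(struct page *page,
				   unsigned long addr_hint,
				   unsigned int pages_per_huge_page)
{
	/* Base user address of the huge page containing addr_hint. */
	unsigned long addr = addr_hint &
		~(((unsigned long)pages_per_huge_page << PAGE_SHIFT) - 1);
	/* Index of the sub-page the application will access. */
	int target = (addr_hint - addr) >> PAGE_SHIFT;
	int i;

	/* Clear the sub-pages after the target, farthest from it first. */
	for (i = pages_per_huge_page - 1; i > target; i--) {
		cond_resched();
		clear_user_highpage(page + i, addr + i * PAGE_SIZE);
	}
	/* Then the sub-pages before the target, again farthest first. */
	for (i = 0; i < target; i++) {
		cond_resched();
		clear_user_highpage(page + i, addr + i * PAGE_SIZE);
	}
	/* Finally the target sub-page itself, so it is the most cache-hot. */
	clear_user_highpage(page + target, addr + target * PAGE_SIZE);
}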
> hugetlb_fault masks off the actual faulting address with
>
>     address &= huge_page_mask(h);
>
> before calling hugetlb_no_page.
>
> But we could pass down the actual (unmasked) address to take advantage
> of this optimization for hugetlb faults as well.  hugetlb_fault is the
> only caller of hugetlb_no_page, so this should be pretty
> straightforward.
>
> Were you thinking of additional improvements?

No.  I was thinking of exactly that, along the lines of the sketch
below.  If the basic idea is accepted, I plan to add this kind of
better support for hugetlbfs in another patch.

Best Regards,
Huang, Ying
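A rough sketch of that plumbing (illustrative only: the real functions
in mm/hugetlb.c take more parameters, the _sketch names are made up,
and clear_huge_page() is assumed to take the access address hint as
proposed in the patch):

#include <linux/mm.h>
#include <linux/hugetlb.h>

static int hugetlb_no_page_sketch(struct vm_area_struct *vma,
				  unsigned long haddr,
				  unsigned long address)
{
	struct hstate *h = hstate_vma(vma);
	struct page *page = alloc_huge_page(vma, haddr, 0);

	if (IS_ERR(page))
		return VM_FAULT_OOM;
	/* Clear using the real access address, not the masked one. */
	clear_huge_page(page, address, pages_per_huge_page(h));
	/* ... install the mapping at haddr, handle races, etc. ... */
	return 0;
}

static int hugetlb_fault_sketch(struct vm_area_struct *vma,
				unsigned long address)
{
	struct hstate *h = hstate_vma(vma);
	/* Derive the huge page base instead of overwriting address. */
	unsigned long haddr = address & huge_page_mask(h);

	/* ... pte lookup and the other checks keep using haddr ... */
	return hugetlb_no_page_sketch(vma, haddr, address);
}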