Date: Wed, 26 Apr 2017 10:09:05 +0200
From: Ingo Molnar
To: Xunlei Pang
Cc: linux-kernel@vger.kernel.org, kexec@lists.infradead.org,
    akpm@linux-foundation.org, Eric Biederman, Dave Young, x86@kernel.org,
    Ingo Molnar, "H. Peter Anvin", Thomas Gleixner, Yinghai Lu,
    Borislav Petkov, Andy Lutomirski
Subject: Re: [PATCH v2 1/2] x86/mm/ident_map: Add PUD level 1GB page support
Message-ID: <20170426080905.gvfjwpoknsgcrsyd@gmail.com>
In-Reply-To: <1493192562-6669-1-git-send-email-xlpang@redhat.com>

* Xunlei Pang wrote:

> The current kernel_ident_mapping_init() creates the identity
> mapping using 2MB pages (PMD level); this patch adds 1GB
> page (PUD level) support.
>
> This is useful on large machines to save some reserved memory
> (as paging structures) in the kdump case, when kexec sets up
> identity mappings before booting into the new kernel.
>
> We will utilize this new support in the following patch.

Well, the primary advantage would be better TLB coverage/performance,
because we'd utilize 1GB TLBs instead of 2MB ones, right? Any kexec
fallout is secondary.
And I'd like to hear more about that primary advantage: what are the
effects of this change on a typical test system you have access to?

 - For example, what percentage of the identity mapping was 4K-mapped
   (if any) and what percentage was 2MB-mapped - and how did this change
   due to the patch, i.e. how many 2MB mappings remained and how many
   1GB mappings were added?

 - Is there anything else we could do to improve the in-RAM layout of
   kernel data structures? For example, IIRC the CPU breaks up all TLBs
   under 2MB physical into 4K TLBs. Is that still the case, and could we
   just reserve all that space and not use it for anything important?
   2MB of RAM wasted is a very small amount of space compared to the
   potential performance advantages.

>  	void *(*alloc_pgt_page)(void *); /* allocate buf for page table */
>  	void *context;			 /* context for alloc_pgt_page */
> -	unsigned long pmd_flag;		 /* page flag for PMD entry */
> +	unsigned long page_flag;	 /* page flag for PMD or PUD entry */
>  	unsigned long offset;		 /* ident mapping offset */
> +	bool direct_gbpages;		 /* PUD level 1GB page support */

Doesn't follow the existing alignment.

Thanks,

	Ingo