From: Sean Christopherson <sean.j.christopherson@intel.com>
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm@vger.kernel.org,
	linux-kernel@vger.kernel.org, Junaid Shahid
Subject: [PATCH 4/8] KVM: x86/mmu: Capture requested page level before NX huge page workaround
Date: Tue, 14 Jul 2020 21:27:21 -0700
Message-Id: <20200715042725.10961-5-sean.j.christopherson@intel.com>
X-Mailer: git-send-email 2.26.0
In-Reply-To: <20200715042725.10961-1-sean.j.christopherson@intel.com>
References: <20200715042725.10961-1-sean.j.christopherson@intel.com>

Apply the "huge page disallowed" adjustment of the max level only after
capturing the original requested level. The requested level will be used
in a future patch to skip adding pages to the list of disallowed huge
pages if a huge page wasn't possible anyway, e.g. if the page isn't
mapped as a huge page in the host.

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/mmu/mmu.c         | 22 +++++++++++++++-------
 arch/x86/kvm/mmu/paging_tmpl.h |  8 +++-----
 2 files changed, 18 insertions(+), 12 deletions(-)
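
For reference, a minimal sketch of how the captured req_level might be
consumed, e.g. when installing a shadow page in __direct_map()'s walk
loop. This is extrapolated from the changelog, not code from this
series: the "req_level >= it.level" check is an assumption, though
account_huge_nx_page() and the surrounding calls exist in the current
code:

	if (!is_shadow_present_pte(*it.sptep)) {
		sp = kvm_mmu_get_page(vcpu, base_gfn, it.addr,
				      it.level - 1, true, ACC_ALL);

		link_shadow_page(vcpu, it.sptep, sp);
		/*
		 * Hypothetical follow-up: account the page as an NX huge
		 * page disallowed only if a huge page spanning this level
		 * was actually requested; a page that could never have
		 * been huge in the first place needs no tracking.
		 */
		if (huge_page_disallowed && req_level >= it.level)
			account_huge_nx_page(vcpu->kvm, sp);
	}

If that is how it lands, pages that were never huge page candidates
would stay off the disallowed list, shrinking the set the NX recovery
worker has to scan.
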
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index bbd7e8be2b936..974c9a89c2454 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3238,7 +3238,8 @@ static int host_pfn_mapping_level(struct kvm_vcpu *vcpu, gfn_t gfn,
 }
 
 static int kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, gfn_t gfn,
-				   int max_level, kvm_pfn_t *pfnp)
+				   int max_level, kvm_pfn_t *pfnp,
+				   bool huge_page_disallowed, int *req_level)
 {
 	struct kvm_memory_slot *slot;
 	struct kvm_lpage_info *linfo;
@@ -3246,6 +3247,8 @@ static int kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, gfn_t gfn,
 	kvm_pfn_t mask;
 	int level;
 
+	*req_level = PG_LEVEL_4K;
+
 	if (unlikely(max_level == PG_LEVEL_4K))
 		return PG_LEVEL_4K;
 
@@ -3270,7 +3273,14 @@ static int kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, gfn_t gfn,
 	if (level == PG_LEVEL_4K)
 		return level;
 
-	level = min(level, max_level);
+	*req_level = level = min(level, max_level);
+
+	/*
+	 * Enforce the iTLB multihit workaround after capturing the requested
+	 * level, which will be used to do precise, accurate accounting.
+	 */
+	if (huge_page_disallowed)
+		return PG_LEVEL_4K;
 
 	/*
 	 * mmu_notifier_retry() was successful and mmu_lock is held, so
@@ -3316,17 +3326,15 @@ static int __direct_map(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
 	bool huge_page_disallowed = exec && nx_huge_page_workaround_enabled;
 	struct kvm_shadow_walk_iterator it;
 	struct kvm_mmu_page *sp;
-	int level, ret;
+	int level, req_level, ret;
 	gfn_t gfn = gpa >> PAGE_SHIFT;
 	gfn_t base_gfn = gfn;
 
 	if (WARN_ON(!VALID_PAGE(vcpu->arch.mmu->root_hpa)))
 		return RET_PF_RETRY;
 
-	if (huge_page_disallowed)
-		max_level = PG_LEVEL_4K;
-
-	level = kvm_mmu_hugepage_adjust(vcpu, gfn, max_level, &pfn);
+	level = kvm_mmu_hugepage_adjust(vcpu, gfn, max_level, &pfn,
+					huge_page_disallowed, &req_level);
 
 	trace_kvm_mmu_spte_requested(gpa, level, pfn);
 	for_each_shadow_entry(vcpu, gpa, it) {
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 5536b2004dac8..b92d936c0900d 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -636,7 +636,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gpa_t addr,
 	struct kvm_mmu_page *sp = NULL;
 	struct kvm_shadow_walk_iterator it;
 	unsigned direct_access, access = gw->pt_access;
-	int top_level, hlevel, ret;
+	int top_level, hlevel, req_level, ret;
 	gfn_t base_gfn = gw->gfn;
 
 	direct_access = gw->pte_access;
@@ -682,10 +682,8 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gpa_t addr,
 		link_shadow_page(vcpu, it.sptep, sp);
 	}
 
-	if (huge_page_disallowed)
-		max_level = PG_LEVEL_4K;
-
-	hlevel = kvm_mmu_hugepage_adjust(vcpu, gw->gfn, max_level, &pfn);
+	hlevel = kvm_mmu_hugepage_adjust(vcpu, gw->gfn, max_level, &pfn,
+					 huge_page_disallowed, &req_level);
 
 	trace_kvm_mmu_spte_requested(addr, gw->level, pfn);
 
-- 
2.26.0