From: Paolo Bonzini <pbonzini@redhat.com>
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: guangrong.xiao@linux.intel.com, stable@vger.kernel.org,
	Xiao Guangrong <guangrong.xiao@linux.intel.com>
Subject: [PATCH 2/2] KVM: MMU: fix reserved bit check for pte.u=0/pte.w=0/CR0.WP=0/CR4.SMEP=1/EFER.NX=0
Date: Tue, 8 Mar 2016 12:44:27 +0100
Message-Id: <1457437467-65707-3-git-send-email-pbonzini@redhat.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1457437467-65707-1-git-send-email-pbonzini@redhat.com>
References: <1457437467-65707-1-git-send-email-pbonzini@redhat.com>

KVM handles supervisor writes of a pte.u=0/pte.w=0/CR0.WP=0 page by
setting U=0 and W=1 in the shadow PTE.  This will cause a user write
to fault and a supervisor write to succeed (which is correct because
CR0.WP=0).  A user read instead will flip U=0 to 1 and W=1 back to 0.
This enables user reads; it also disables supervisor writes, the next
of which will then flip the bits again.

When SMEP is in effect, however, pte.u=0 will enable kernel execution
of this page.  To avoid this, KVM also sets pte.nx=1.  The reserved
bit check catches this, because it decides whether an SPTE may use the
NX bit by looking only at the guest's EFER.NX.  Teach it that
smep_andnot_wp shadow pages will also use the NX bit of SPTEs.

Cc: stable@vger.kernel.org
Cc: Xiao Guangrong <guangrong.xiao@linux.intel.com>
Fixes: c258b62b264fdc469b6d3610a907708068145e3b
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/mmu.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 95a955de5964..0cd4ee01de94 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -3721,13 +3721,15 @@ static void reset_rsvds_bits_mask_ept(struct kvm_vcpu *vcpu,
 void
 reset_shadow_zero_bits_mask(struct kvm_vcpu *vcpu, struct kvm_mmu *context)
 {
+	int uses_nx = context->nx || context->base_role.smep_andnot_wp;
+
	/*
	 * Passing "true" to the last argument is okay; it adds a check
	 * on bit 8 of the SPTEs which KVM doesn't use anyway.
	 */
	__reset_rsvds_bits_mask(vcpu, &context->shadow_zero_check,
				boot_cpu_data.x86_phys_bits,
-				context->shadow_root_level, context->nx,
+				context->shadow_root_level, uses_nx,
				guest_cpuid_has_gbpages(vcpu), is_pse(vcpu),
				true);
 }
-- 
1.8.3.1
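
[Editorial addendum: as an illustration of the reserved-bit decision the
patch changes, here is a minimal, self-contained sketch in plain C.  It
is not KVM code; struct shadow_ctx and the two helpers are hypothetical
stand-ins for context->nx, context->base_role.smep_andnot_wp and the nx
argument passed to __reset_rsvds_bits_mask().]

#include <stdbool.h>
#include <stdio.h>

struct shadow_ctx {
	bool guest_nx;		/* stand-in for context->nx (guest EFER.NX) */
	bool smep_andnot_wp;	/* stand-in for context->base_role.smep_andnot_wp */
};

/* Before the patch: the NX bit of an SPTE is treated as reserved
 * whenever the guest's EFER.NX is clear. */
static bool spte_nx_reserved_old(const struct shadow_ctx *c)
{
	return !c->guest_nx;
}

/* After the patch: NX is also legitimate when KVM itself sets it to
 * preserve SMEP semantics for CR0.WP=0 shadow pages (smep_andnot_wp). */
static bool spte_nx_reserved_new(const struct shadow_ctx *c)
{
	bool uses_nx = c->guest_nx || c->smep_andnot_wp;

	return !uses_nx;
}

int main(void)
{
	/* The problematic combination: EFER.NX=0, CR4.SMEP=1, CR0.WP=0. */
	struct shadow_ctx c = { .guest_nx = false, .smep_andnot_wp = true };

	printf("NX treated as reserved before the fix: %d (false positive)\n",
	       spte_nx_reserved_old(&c));
	printf("NX treated as reserved after the fix:  %d\n",
	       spte_nx_reserved_new(&c));
	return 0;
}

With EFER.NX=0 and smep_andnot_wp set, the old check reports NX as
reserved while the new one does not, which is exactly the behavioral
change made by the one-line uses_nx hunk above.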