From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Liran Alon, Mihai Carabas, Krish Sadhukhan, Leonid Shatz, Paolo Bonzini
Subject: [PATCH 4.19 065/139] KVM: nVMX/nSVM: Fix bug which sets vcpu->arch.tsc_offset to L1 tsc_offset
Date: Tue, 4 Dec 2018 11:49:06 +0100
Message-Id: <20181204103652.702958456@linuxfoundation.org>
X-Mailer: git-send-email 2.19.2
In-Reply-To: <20181204103649.950154335@linuxfoundation.org>
References: <20181204103649.950154335@linuxfoundation.org>
User-Agent: quilt/0.65
X-stable: review
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Sender: linux-kernel-owner@vger.kernel.org
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org

4.19-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Leonid Shatz

commit 326e742533bf0a23f0127d8ea62fb558ba665f08 upstream.

Since commit e79f245ddec1 ("X86/KVM: Properly update 'tsc_offset' to
represent the running guest"), the meaning of vcpu->arch.tsc_offset was
changed to always reflect the tsc_offset value set in the active VMCS,
regardless of whether the vCPU is currently running L1 or L2.

However, that commit failed to also change kvm_vcpu_write_tsc_offset()
to set vcpu->arch.tsc_offset correctly: vmx_write_tsc_offset() may set
the tsc_offset value in the active VMCS to the given offset parameter
*plus vmcs12->tsc_offset*, while kvm_vcpu_write_tsc_offset() just sets
vcpu->arch.tsc_offset to the given offset parameter, without taking the
possible addition of vmcs12->tsc_offset into account. (The same is true
for the SVM case.)

Fix this by changing kvm_x86_ops->write_tsc_offset() to return the
tsc_offset actually set in the active VMCS, and by modifying
kvm_vcpu_write_tsc_offset() to store the returned value in
vcpu->arch.tsc_offset. In addition, rename the write_tsc_offset()
callback to write_l1_tsc_offset() to make it clear that it is meant to
set the L1 TSC offset.
Fixes: e79f245ddec1 ("X86/KVM: Properly update 'tsc_offset' to represent the running guest")
Reviewed-by: Liran Alon
Reviewed-by: Mihai Carabas
Reviewed-by: Krish Sadhukhan
Signed-off-by: Leonid Shatz
Cc: stable@vger.kernel.org
Signed-off-by: Paolo Bonzini
Signed-off-by: Greg Kroah-Hartman
---
 arch/x86/include/asm/kvm_host.h |  3 ++-
 arch/x86/kvm/svm.c              |  5 +++--
 arch/x86/kvm/vmx.c              | 21 +++++++++------------
 arch/x86/kvm/x86.c              |  6 +++---
 4 files changed, 17 insertions(+), 18 deletions(-)

--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1046,7 +1046,8 @@ struct kvm_x86_ops {
 	bool (*has_wbinvd_exit)(void);
 
 	u64 (*read_l1_tsc_offset)(struct kvm_vcpu *vcpu);
-	void (*write_tsc_offset)(struct kvm_vcpu *vcpu, u64 offset);
+	/* Returns actual tsc_offset set in active VMCS */
+	u64 (*write_l1_tsc_offset)(struct kvm_vcpu *vcpu, u64 offset);
 
 	void (*get_exit_info)(struct kvm_vcpu *vcpu, u64 *info1, u64 *info2);
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -1444,7 +1444,7 @@ static u64 svm_read_l1_tsc_offset(struct
 	return vcpu->arch.tsc_offset;
 }
 
-static void svm_write_tsc_offset(struct kvm_vcpu *vcpu, u64 offset)
+static u64 svm_write_l1_tsc_offset(struct kvm_vcpu *vcpu, u64 offset)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 	u64 g_tsc_offset = 0;
@@ -1462,6 +1462,7 @@ static void svm_write_tsc_offset(struct
 	svm->vmcb->control.tsc_offset = offset + g_tsc_offset;
 
 	mark_dirty(svm->vmcb, VMCB_INTERCEPTS);
+	return svm->vmcb->control.tsc_offset;
 }
 
 static void avic_init_vmcb(struct vcpu_svm *svm)
@@ -7155,7 +7156,7 @@ static struct kvm_x86_ops svm_x86_ops __
 	.has_wbinvd_exit = svm_has_wbinvd_exit,
 
 	.read_l1_tsc_offset = svm_read_l1_tsc_offset,
-	.write_tsc_offset = svm_write_tsc_offset,
+	.write_l1_tsc_offset = svm_write_l1_tsc_offset,
 
 	.set_tdp_cr3 = set_tdp_cr3,
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -3433,11 +3433,9 @@ static u64 vmx_read_l1_tsc_offset(struct
 	return vcpu->arch.tsc_offset;
 }
 
-/*
- * writes 'offset' into guest's timestamp counter offset register
- */
-static void vmx_write_tsc_offset(struct kvm_vcpu *vcpu, u64 offset)
+static u64 vmx_write_l1_tsc_offset(struct kvm_vcpu *vcpu, u64 offset)
 {
+	u64 active_offset = offset;
 	if (is_guest_mode(vcpu)) {
 		/*
 		 * We're here if L1 chose not to trap WRMSR to TSC. According
@@ -3445,17 +3443,16 @@ static void vmx_write_tsc_offset(struct
 		 * set for L2 remains unchanged, and still needs to be added
 		 * to the newly set TSC to get L2's TSC.
 		 */
-		struct vmcs12 *vmcs12;
-		/* recalculate vmcs02.TSC_OFFSET: */
-		vmcs12 = get_vmcs12(vcpu);
-		vmcs_write64(TSC_OFFSET, offset +
-			(nested_cpu_has(vmcs12, CPU_BASED_USE_TSC_OFFSETING) ?
-			 vmcs12->tsc_offset : 0));
+		struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
+		if (nested_cpu_has(vmcs12, CPU_BASED_USE_TSC_OFFSETING))
+			active_offset += vmcs12->tsc_offset;
 	} else {
 		trace_kvm_write_tsc_offset(vcpu->vcpu_id,
 					   vmcs_read64(TSC_OFFSET), offset);
-		vmcs_write64(TSC_OFFSET, offset);
 	}
+
+	vmcs_write64(TSC_OFFSET, active_offset);
+	return active_offset;
 }
 
 /*
@@ -14203,7 +14200,7 @@ static struct kvm_x86_ops vmx_x86_ops __
 	.has_wbinvd_exit = cpu_has_vmx_wbinvd_exit,
 
 	.read_l1_tsc_offset = vmx_read_l1_tsc_offset,
-	.write_tsc_offset = vmx_write_tsc_offset,
+	.write_l1_tsc_offset = vmx_write_l1_tsc_offset,
 
 	.set_tdp_cr3 = vmx_set_cr3,
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1582,8 +1582,7 @@ EXPORT_SYMBOL_GPL(kvm_read_l1_tsc);
 
 static void kvm_vcpu_write_tsc_offset(struct kvm_vcpu *vcpu, u64 offset)
 {
-	kvm_x86_ops->write_tsc_offset(vcpu, offset);
-	vcpu->arch.tsc_offset = offset;
+	vcpu->arch.tsc_offset = kvm_x86_ops->write_l1_tsc_offset(vcpu, offset);
 }
 
 static inline bool kvm_check_tsc_unstable(void)
@@ -1711,7 +1710,8 @@ EXPORT_SYMBOL_GPL(kvm_write_tsc);
 static inline void adjust_tsc_offset_guest(struct kvm_vcpu *vcpu,
 					   s64 adjustment)
 {
-	kvm_vcpu_write_tsc_offset(vcpu, vcpu->arch.tsc_offset + adjustment);
+	u64 tsc_offset = kvm_x86_ops->read_l1_tsc_offset(vcpu);
+
+	kvm_vcpu_write_tsc_offset(vcpu, tsc_offset + adjustment);
 }
 
 static inline void adjust_tsc_offset_host(struct kvm_vcpu *vcpu, s64 adjustment)