From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Konrad Rzeszutek Wilk,
	Thomas Gleixner
Subject: [PATCH 4.18 40/79] x86/KVM/VMX: Split the VMX MSR LOAD structures to have an host/guest numbers
Date: Tue, 14 Aug 2018 19:16:59 +0200
Message-Id: <20180814171338.341329218@linuxfoundation.org>
In-Reply-To: <20180814171336.799314117@linuxfoundation.org>
References: <20180814171336.799314117@linuxfoundation.org>
X-Mailer: git-send-email 2.18.0
User-Agent: quilt/0.65
X-stable: review
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
X-Mailing-List: linux-kernel@vger.kernel.org

4.18-stable review patch.  If anyone has any objections, please let me know.
------------------

From: Konrad Rzeszutek Wilk

There is no semantic change, but this change allows an unbalanced number of
MSRs to be loaded on VMEXIT and VMENTER, i.e. the number of MSRs to save or
restore on VMEXIT or VMENTER may be different.

Signed-off-by: Konrad Rzeszutek Wilk
Signed-off-by: Thomas Gleixner
Signed-off-by: Greg Kroah-Hartman
---
 arch/x86/kvm/vmx.c |   65 ++++++++++++++++++++++++++++-------------------------
 1 file changed, 35 insertions(+), 30 deletions(-)

--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -808,6 +808,11 @@ static inline int pi_test_sn(struct pi_d
 			(unsigned long *)&pi_desc->control);
 }
 
+struct vmx_msrs {
+	unsigned int		nr;
+	struct vmx_msr_entry	val[NR_AUTOLOAD_MSRS];
+};
+
 struct vcpu_vmx {
 	struct kvm_vcpu       vcpu;
 	unsigned long         host_rsp;
@@ -841,9 +846,8 @@ struct vcpu_vmx {
 	struct loaded_vmcs   *loaded_vmcs;
 	bool                  __launched; /* temporary, used in vmx_vcpu_run */
 	struct msr_autoload {
-		unsigned nr;
-		struct vmx_msr_entry guest[NR_AUTOLOAD_MSRS];
-		struct vmx_msr_entry host[NR_AUTOLOAD_MSRS];
+		struct vmx_msrs guest;
+		struct vmx_msrs host;
 	} msr_autoload;
 	struct {
 		int           loaded;
@@ -2451,18 +2455,18 @@ static void clear_atomic_switch_msr(stru
 		}
 		break;
 	}
-
-	for (i = 0; i < m->nr; ++i)
-		if (m->guest[i].index == msr)
+	for (i = 0; i < m->guest.nr; ++i)
+		if (m->guest.val[i].index == msr)
 			break;
 
-	if (i == m->nr)
+	if (i == m->guest.nr)
 		return;
-	--m->nr;
-	m->guest[i] = m->guest[m->nr];
-	m->host[i] = m->host[m->nr];
-	vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, m->nr);
-	vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, m->nr);
+	--m->guest.nr;
+	--m->host.nr;
+	m->guest.val[i] = m->guest.val[m->guest.nr];
+	m->host.val[i] = m->host.val[m->host.nr];
+	vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, m->guest.nr);
+	vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, m->host.nr);
 }
 
 static void add_atomic_switch_msr_special(struct vcpu_vmx *vmx,
@@ -2514,24 +2518,25 @@ static void add_atomic_switch_msr(struct
 		wrmsrl(MSR_IA32_PEBS_ENABLE, 0);
 	}
 
-	for (i = 0; i < m->nr; ++i)
-		if (m->guest[i].index == msr)
+	for (i = 0; i < m->guest.nr; ++i)
+		if (m->guest.val[i].index == msr)
 			break;
 
 	if (i == NR_AUTOLOAD_MSRS) {
 		printk_once(KERN_WARNING "Not enough msr switch entries. "
" "Can't add msr %x\n", msr); return; - } else if (i == m->nr) { - ++m->nr; - vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, m->nr); - vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, m->nr); + } else if (i == m->guest.nr) { + ++m->guest.nr; + ++m->host.nr; + vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, m->guest.nr); + vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, m->host.nr); } - m->guest[i].index = msr; - m->guest[i].value = guest_val; - m->host[i].index = msr; - m->host[i].value = host_val; + m->guest.val[i].index = msr; + m->guest.val[i].value = guest_val; + m->host.val[i].index = msr; + m->host.val[i].value = host_val; } static bool update_transition_efer(struct vcpu_vmx *vmx, int efer_offset) @@ -6325,9 +6330,9 @@ static void vmx_vcpu_setup(struct vcpu_v vmcs_write32(VM_EXIT_MSR_STORE_COUNT, 0); vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, 0); - vmcs_write64(VM_EXIT_MSR_LOAD_ADDR, __pa(vmx->msr_autoload.host)); + vmcs_write64(VM_EXIT_MSR_LOAD_ADDR, __pa(vmx->msr_autoload.host.val)); vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, 0); - vmcs_write64(VM_ENTRY_MSR_LOAD_ADDR, __pa(vmx->msr_autoload.guest)); + vmcs_write64(VM_ENTRY_MSR_LOAD_ADDR, __pa(vmx->msr_autoload.guest.val)); if (vmcs_config.vmentry_ctrl & VM_ENTRY_LOAD_IA32_PAT) vmcs_write64(GUEST_IA32_PAT, vmx->vcpu.arch.pat); @@ -11383,10 +11388,10 @@ static void prepare_vmcs02_full(struct k * Set the MSR load/store lists to match L0's settings. */ vmcs_write32(VM_EXIT_MSR_STORE_COUNT, 0); - vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, vmx->msr_autoload.nr); - vmcs_write64(VM_EXIT_MSR_LOAD_ADDR, __pa(vmx->msr_autoload.host)); - vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, vmx->msr_autoload.nr); - vmcs_write64(VM_ENTRY_MSR_LOAD_ADDR, __pa(vmx->msr_autoload.guest)); + vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, vmx->msr_autoload.host.nr); + vmcs_write64(VM_EXIT_MSR_LOAD_ADDR, __pa(vmx->msr_autoload.host.val)); + vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, vmx->msr_autoload.guest.nr); + vmcs_write64(VM_ENTRY_MSR_LOAD_ADDR, __pa(vmx->msr_autoload.guest.val)); set_cr4_guest_host_mask(vmx); @@ -12545,8 +12550,8 @@ static void nested_vmx_vmexit(struct kvm vmx_segment_cache_clear(vmx); /* Update any VMCS fields that might have changed while L2 ran */ - vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, vmx->msr_autoload.nr); - vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, vmx->msr_autoload.nr); + vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, vmx->msr_autoload.host.nr); + vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, vmx->msr_autoload.guest.nr); vmcs_write64(TSC_OFFSET, vcpu->arch.tsc_offset); if (vmx->hv_deadline_tsc == -1) vmcs_clear_bits(PIN_BASED_VM_EXEC_CONTROL,