From: Ben Hutchings <ben@decadent.org.uk>
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: akpm@linux-foundation.org, Radim Krčmář, David Matlack,
 David Woodhouse, Greg Kroah-Hartman, Paolo Bonzini
Date: Thu, 07 Jun 2018 15:05:21 +0100
Subject: [PATCH 3.16 051/410] KVM: nVMX: mark vmcs12 pages dirty on L2 exit
3.16.57-rc1 review patch.  If anyone has any objections, please let me know.

------------------

From: David Matlack

commit c9f04407f2e0b3fc9ff7913c65fcfcb0a4b61570 upstream.

The host physical addresses of L1's Virtual APIC Page and Posted
Interrupt descriptor are loaded into the VMCS02.  The CPU may write
to these pages via their host physical address while L2 is running,
bypassing address-translation-based dirty tracking (e.g. EPT write
protection).  Mark them dirty on every exit from L2 to prevent them
from getting out of sync with dirty tracking.

Also mark the virtual APIC page and the posted interrupt descriptor
dirty when KVM is virtualizing posted interrupt processing.

Signed-off-by: David Matlack
Reviewed-by: Paolo Bonzini
Signed-off-by: Radim Krčmář
Signed-off-by: David Woodhouse
Signed-off-by: Greg Kroah-Hartman
[bwh: Backported to 3.16:
 - No nested posted interrupt support
 - No SMM support, so use mark_page_dirty() instead of
   kvm_vcpu_mark_page_dirty()]
Signed-off-by: Ben Hutchings
---
 arch/x86/kvm/vmx.c | 29 +++++++++++++++++++++++++++++
 1 file changed, 29 insertions(+)

--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -4197,6 +4197,23 @@ static int vmx_vm_has_apicv(struct kvm *
 	return enable_apicv && irqchip_in_kernel(kvm);
 }
 
+static void nested_mark_vmcs12_pages_dirty(struct kvm_vcpu *vcpu)
+{
+	struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
+	gfn_t gfn;
+
+	/*
+	 * Don't need to mark the APIC access page dirty; it is never
+	 * written to by the CPU during APIC virtualization.
+	 */
+
+	if (nested_cpu_has(vmcs12, CPU_BASED_TPR_SHADOW)) {
+		gfn = vmcs12->virtual_apic_page_addr >> PAGE_SHIFT;
+		mark_page_dirty(vcpu->kvm, gfn);
+	}
+}
+
+
 /*
  * Send interrupt to vcpu via posted interrupt way.
  * 1. If target vcpu is running(non-root mode), send posted interrupt
@@ -6902,6 +6919,18 @@ static bool nested_vmx_exit_handled(stru
 		vmcs_read32(VM_EXIT_INTR_ERROR_CODE),
 		KVM_ISA_VMX);
 
+	/*
+	 * The host physical addresses of some pages of guest memory
+	 * are loaded into VMCS02 (e.g. L1's Virtual APIC Page).  The CPU
+	 * may write to these pages via their host physical address while
+	 * L2 is running, bypassing any address-translation-based dirty
+	 * tracking (e.g. EPT write protection).
+	 *
+	 * Mark them dirty on every exit from L2 to prevent them from
+	 * getting out of sync with dirty tracking.
+	 */
+	nested_mark_vmcs12_pages_dirty(vcpu);
+
 	if (vmx->nested.nested_run_pending)
 		return 0;
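
For anyone checking the arithmetic in nested_mark_vmcs12_pages_dirty():
the helper turns the guest-physical address held in
vmcs12->virtual_apic_page_addr into a guest frame number by shifting
out the page-offset bits, then hands that frame to mark_page_dirty()
so the dirty log records CPU-initiated writes that EPT write
protection never intercepts.  A minimal standalone sketch of just that
conversion follows; it assumes 4 KiB pages (PAGE_SHIFT == 12), uses a
stand-in gfn_t typedef, and the address value is only an example, not
taken from the patch:

/* Illustrative only -- not part of the patch above. */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12            /* assumed 4 KiB page size */

typedef uint64_t gfn_t;          /* stand-in for the kernel's gfn_t */

int main(void)
{
	/* Example guest-physical address of the virtual APIC page,
	 * standing in for vmcs12->virtual_apic_page_addr. */
	uint64_t virtual_apic_page_addr = 0x00000000fee00000ULL;

	/* Same computation as in nested_mark_vmcs12_pages_dirty():
	 * drop the low 12 offset bits to get the frame number. */
	gfn_t gfn = virtual_apic_page_addr >> PAGE_SHIFT;

	/* In the patch, this frame is what mark_page_dirty() receives,
	 * keeping migration's dirty tracking in sync. */
	printf("gfn = 0x%llx\n", (unsigned long long)gfn);
	return 0;
}

For the example address the result is gfn = 0xfee00, i.e. the frame
containing the page, which is exactly what the dirty log keys on.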