From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Sean Christopherson, Paolo Bonzini
Subject: [PATCH 5.2 244/313] KVM: x86/mmu: Use fast invalidate mechanism to zap MMIO sptes
Date: Thu, 3 Oct 2019 17:53:42 +0200
Message-Id: <20191003154557.046977117@linuxfoundation.org>
In-Reply-To: <20191003154533.590915454@linuxfoundation.org>
References: <20191003154533.590915454@linuxfoundation.org>
User-Agent: quilt/0.66
X-Mailing-List: linux-kernel@vger.kernel.org

From: Sean Christopherson

commit 92f58b5c0181596d9f1e317b49ada2e728fb76eb upstream.

Use the fast invalidate mechanism to zap MMIO sptes on an MMIO
generation wrap.  The fast invalidate flow was reintroduced to fix a
livelock bug in kvm_mmu_zap_all() that can occur if kvm_mmu_zap_all()
is invoked when the guest has live vCPUs.  I.e. using
kvm_mmu_zap_all() to handle the MMIO generation wrap is theoretically
susceptible to the livelock bug.

This effectively reverts commit 4771450c345dc ("Revert "KVM: MMU: drop
kvm_mmu_zap_mmio_sptes""), i.e. restores the behavior of commit
a8eca9dcc656a ("KVM: MMU: drop kvm_mmu_zap_mmio_sptes").

Note, this actually fixes commit 571c5af06e303 ("KVM: x86/mmu:
Voluntarily reschedule as needed when zapping MMIO sptes"), but there
is no need to incrementally revert back to using fast invalidate,
e.g. doing so doesn't provide any bisection or stability benefits.
Fixes: 571c5af06e303 ("KVM: x86/mmu: Voluntarily reschedule as needed when zapping MMIO sptes")
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson
Signed-off-by: Paolo Bonzini
Signed-off-by: Greg Kroah-Hartman

---
 arch/x86/kvm/mmu.c | 17 +++--------------
 1 file changed, 3 insertions(+), 14 deletions(-)

--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -389,8 +389,6 @@ static void mark_mmio_spte(struct kvm_vc
 	mask |= (gpa & shadow_nonpresent_or_rsvd_mask) <<
 		shadow_nonpresent_or_rsvd_mask_len;
 
-	page_header(__pa(sptep))->mmio_cached = true;
-
 	trace_mark_mmio_spte(sptep, gfn, access, gen);
 	mmu_spte_set(sptep, mask);
 }
@@ -5952,7 +5950,7 @@ void kvm_mmu_slot_set_dirty(struct kvm *
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_slot_set_dirty);
 
-static void __kvm_mmu_zap_all(struct kvm *kvm, bool mmio_only)
+void kvm_mmu_zap_all(struct kvm *kvm)
 {
 	struct kvm_mmu_page *sp, *node;
 	LIST_HEAD(invalid_list);
@@ -5961,14 +5959,10 @@ static void __kvm_mmu_zap_all(struct kvm
 	spin_lock(&kvm->mmu_lock);
 restart:
 	list_for_each_entry_safe(sp, node, &kvm->arch.active_mmu_pages, link) {
-		if (mmio_only && !sp->mmio_cached)
-			continue;
 		if (sp->role.invalid && sp->root_count)
 			continue;
-		if (__kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list, &ign)) {
-			WARN_ON_ONCE(mmio_only);
+		if (__kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list, &ign))
 			goto restart;
-		}
 		if (cond_resched_lock(&kvm->mmu_lock))
 			goto restart;
 	}
@@ -5977,11 +5971,6 @@ restart:
 	spin_unlock(&kvm->mmu_lock);
 }
 
-void kvm_mmu_zap_all(struct kvm *kvm)
-{
-	return __kvm_mmu_zap_all(kvm, false);
-}
-
 void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, u64 gen)
 {
 	WARN_ON(gen & KVM_MEMSLOT_GEN_UPDATE_IN_PROGRESS);
@@ -6003,7 +5992,7 @@ void kvm_mmu_invalidate_mmio_sptes(struc
 	 */
 	if (unlikely(gen == 0)) {
 		kvm_debug_ratelimited("kvm: zapping shadow pages for mmio generation wraparound\n");
-		__kvm_mmu_zap_all(kvm, true);
+		kvm_mmu_zap_all_fast(kvm);
 	}
 }