From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Ben Gardon, Sean Christopherson, Paolo Bonzini, Sasha Levin
Subject: [PATCH 5.10 091/126] KVM: x86/mmu: Ensure TLBs are flushed when yielding during GFN range zap
Date: Mon, 5 Apr 2021 10:54:13 +0200
Message-Id: <20210405085034.075409532@linuxfoundation.org>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210405085031.040238881@linuxfoundation.org>
References: <20210405085031.040238881@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org

From: Sean Christopherson

[ Upstream commit a835429cda91621fca915d80672a157b47738afb ]

When flushing a range of GFNs across multiple roots, ensure any pending
flush from a previous root is honored before yielding while walking the
tables of the current root.

Note, kvm_tdp_mmu_zap_gfn_range() now intentionally overwrites its local
"flush" with the result to avoid redundant flushes. zap_gfn_range()
preserves and returns the incoming "flush", unless of course the flush
was performed prior to yielding and no new flush was triggered.
Fixes: 1af4a96025b3 ("KVM: x86/mmu: Yield in TDU MMU iter even if no SPTES changed")
Cc: stable@vger.kernel.org
Reviewed-by: Ben Gardon
Signed-off-by: Sean Christopherson
Message-Id: <20210325200119.1359384-2-seanjc@google.com>
Signed-off-by: Paolo Bonzini
Signed-off-by: Sasha Levin
---
 arch/x86/kvm/mmu/tdp_mmu.c | 23 ++++++++++++-----------
 1 file changed, 12 insertions(+), 11 deletions(-)

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index a54a9ed979d1..34ef3e1a0f84 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -111,7 +111,7 @@ bool is_tdp_mmu_root(struct kvm *kvm, hpa_t hpa)
 }
 
 static bool zap_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
-			  gfn_t start, gfn_t end, bool can_yield);
+			  gfn_t start, gfn_t end, bool can_yield, bool flush);
 
 void kvm_tdp_mmu_free_root(struct kvm *kvm, struct kvm_mmu_page *root)
 {
@@ -124,7 +124,7 @@ void kvm_tdp_mmu_free_root(struct kvm *kvm, struct kvm_mmu_page *root)
 
 	list_del(&root->link);
 
-	zap_gfn_range(kvm, root, 0, max_gfn, false);
+	zap_gfn_range(kvm, root, 0, max_gfn, false, false);
 
 	free_page((unsigned long)root->spt);
 	kmem_cache_free(mmu_page_header_cache, root);
@@ -504,20 +504,21 @@ static inline bool tdp_mmu_iter_cond_resched(struct kvm *kvm,
  * scheduler needs the CPU or there is contention on the MMU lock. If this
  * function cannot yield, it will not release the MMU lock or reschedule and
  * the caller must ensure it does not supply too large a GFN range, or the
- * operation can cause a soft lockup.
+ * operation can cause a soft lockup. Note, in some use cases a flush may be
+ * required by prior actions. Ensure the pending flush is performed prior to
+ * yielding.
  */
 static bool zap_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
-			  gfn_t start, gfn_t end, bool can_yield)
+			  gfn_t start, gfn_t end, bool can_yield, bool flush)
 {
 	struct tdp_iter iter;
-	bool flush_needed = false;
 
 	rcu_read_lock();
 
 	tdp_root_for_each_pte(iter, root, start, end) {
 		if (can_yield &&
-		    tdp_mmu_iter_cond_resched(kvm, &iter, flush_needed)) {
-			flush_needed = false;
+		    tdp_mmu_iter_cond_resched(kvm, &iter, flush)) {
+			flush = false;
 			continue;
 		}
 
@@ -535,11 +536,11 @@ static bool zap_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
 			continue;
 
 		tdp_mmu_set_spte(kvm, &iter, 0);
-		flush_needed = true;
+		flush = true;
 	}
 
 	rcu_read_unlock();
-	return flush_needed;
+	return flush;
 }
 
 /*
@@ -554,7 +555,7 @@ bool kvm_tdp_mmu_zap_gfn_range(struct kvm *kvm, gfn_t start, gfn_t end)
 	bool flush = false;
 
 	for_each_tdp_mmu_root_yield_safe(kvm, root)
-		flush |= zap_gfn_range(kvm, root, start, end, true);
+		flush = zap_gfn_range(kvm, root, start, end, true, flush);
 
 	return flush;
 }
@@ -757,7 +758,7 @@ static int zap_gfn_range_hva_wrapper(struct kvm *kvm,
 				     struct kvm_mmu_page *root, gfn_t start,
 				     gfn_t end, unsigned long unused)
 {
-	return zap_gfn_range(kvm, root, start, end, false);
+	return zap_gfn_range(kvm, root, start, end, false, false);
 }
 
 int kvm_tdp_mmu_zap_hva_range(struct kvm *kvm, unsigned long start,
-- 
2.30.1
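
For illustration only, here is a minimal userspace sketch of the
flush-threading pattern the diff above implements. It is not kernel code:
fake_root, fake_zap_gfn_range(), fake_flush_tlbs() and need_yield are
made-up stand-ins for kvm_mmu_page roots, zap_gfn_range(), the remote TLB
flush and the cond_resched check; only the control flow mirrors the patch.

/*
 * Simplified model of the patch: the caller's pending "flush" is threaded
 * into the per-root zap so a flush owed from a previous root is honored
 * before yielding, instead of being silently dropped.
 */
#include <stdbool.h>
#include <stdio.h>

struct fake_root { int id; };

/* Stand-in for the remote TLB flush issued before yielding. */
static void fake_flush_tlbs(void)
{
	printf("TLB flush\n");
}

/*
 * Mirrors the reworked zap_gfn_range(): if we yield mid-walk, perform the
 * pending flush first and only then clear it.  Returning "flush" preserves
 * the incoming value unless it was consumed by a yield.
 */
static bool fake_zap_gfn_range(struct fake_root *root, bool need_yield,
			       bool flush)
{
	if (need_yield) {
		if (flush)
			fake_flush_tlbs();	/* honor the pending flush */
		flush = false;			/* no longer pending */
	}

	/* Pretend this root had at least one SPTE zapped. */
	printf("zapped root %d\n", root->id);
	return true;				/* a new flush is now pending */
}

int main(void)
{
	struct fake_root roots[] = { { 0 }, { 1 }, { 2 } };
	bool flush = false;

	/*
	 * Mirrors kvm_tdp_mmu_zap_gfn_range(): the result overwrites "flush"
	 * rather than OR-ing into it, because the callee already folds the
	 * incoming value into its return.
	 */
	for (int i = 0; i < 3; i++)
		flush = fake_zap_gfn_range(&roots[i], /*need_yield=*/i == 1,
					   flush);

	if (flush)
		fake_flush_tlbs();		/* flush owed by the last root */
	return 0;
}

The plain assignment in the loop is safe precisely because the callee
clears the incoming flush only after actually performing it; OR-ing the
result back in would re-report a flush that was already done and cause a
redundant flush at the end.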