From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Ben Gardon, Paolo Bonzini, Sasha Levin
Subject: [PATCH 5.11 106/152] KVM: x86/mmu: Merge flush and non-flush tdp_mmu_iter_cond_resched
Date: Mon, 5 Apr 2021 10:54:15 +0200
Message-Id: <20210405085037.681910678@linuxfoundation.org>
In-Reply-To: <20210405085034.233917714@linuxfoundation.org>
References: <20210405085034.233917714@linuxfoundation.org>
User-Agent: quilt/0.66

From: Ben Gardon

[ Upstream commit e139a34ef9d5627a41e1c02210229082140d1f92 ]

The flushing and non-flushing variants of tdp_mmu_iter_cond_resched have
almost identical implementations. Merge the two functions and add a
flush parameter.

Signed-off-by: Ben Gardon
Message-Id: <20210202185734.1680553-12-bgardon@google.com>
Signed-off-by: Paolo Bonzini
Signed-off-by: Sasha Levin
---
 arch/x86/kvm/mmu/tdp_mmu.c | 42 ++++++++++++--------------------------
 1 file changed, 13 insertions(+), 29 deletions(-)

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index abdd89771b9b..0dd27767c770 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -412,33 +412,13 @@ static inline void tdp_mmu_set_spte_no_dirty_log(struct kvm *kvm,
 	for_each_tdp_pte(_iter, __va(_mmu->root_hpa),		\
 			 _mmu->shadow_root_level, _start, _end)
 
-/*
- * Flush the TLB and yield if the MMU lock is contended or this thread needs to
- * return control to the scheduler.
- *
- * If this function yields, it will also reset the tdp_iter's walk over the
- * paging structure and the calling function should allow the iterator to
- * continue its traversal from the paging structure root.
- *
- * Return true if this function yielded, the TLBs were flushed, and the
- * iterator's traversal was reset. Return false if a yield was not needed.
- */
-static bool tdp_mmu_iter_flush_cond_resched(struct kvm *kvm, struct tdp_iter *iter)
-{
-	if (need_resched() || spin_needbreak(&kvm->mmu_lock)) {
-		kvm_flush_remote_tlbs(kvm);
-		cond_resched_lock(&kvm->mmu_lock);
-		tdp_iter_refresh_walk(iter);
-		return true;
-	}
-
-	return false;
-}
-
 /*
  * Yield if the MMU lock is contended or this thread needs to return control
  * to the scheduler.
  *
+ * If this function should yield and flush is set, it will perform a remote
+ * TLB flush before yielding.
+ *
  * If this function yields, it will also reset the tdp_iter's walk over the
  * paging structure and the calling function should allow the iterator to
  * continue its traversal from the paging structure root.
@@ -446,9 +426,13 @@ static bool tdp_mmu_iter_flush_cond_resched(struct kvm *kvm, struct tdp_iter *it
  * Return true if this function yielded and the iterator's traversal was reset.
  * Return false if a yield was not needed.
  */
-static bool tdp_mmu_iter_cond_resched(struct kvm *kvm, struct tdp_iter *iter)
+static inline bool tdp_mmu_iter_cond_resched(struct kvm *kvm,
+					     struct tdp_iter *iter, bool flush)
 {
 	if (need_resched() || spin_needbreak(&kvm->mmu_lock)) {
+		if (flush)
+			kvm_flush_remote_tlbs(kvm);
+
 		cond_resched_lock(&kvm->mmu_lock);
 		tdp_iter_refresh_walk(iter);
 		return true;
@@ -491,7 +475,7 @@ static bool zap_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
 		tdp_mmu_set_spte(kvm, &iter, 0);
 
 		flush_needed = !can_yield ||
-			       !tdp_mmu_iter_flush_cond_resched(kvm, &iter);
+			       !tdp_mmu_iter_cond_resched(kvm, &iter, true);
 	}
 	return flush_needed;
 }
@@ -864,7 +848,7 @@ static bool wrprot_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
 		tdp_mmu_set_spte_no_dirty_log(kvm, &iter, new_spte);
 		spte_set = true;
 
-		tdp_mmu_iter_cond_resched(kvm, &iter);
+		tdp_mmu_iter_cond_resched(kvm, &iter, false);
 	}
 	return spte_set;
 }
@@ -923,7 +907,7 @@ static bool clear_dirty_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
 		tdp_mmu_set_spte_no_dirty_log(kvm, &iter, new_spte);
 		spte_set = true;
 
-		tdp_mmu_iter_cond_resched(kvm, &iter);
+		tdp_mmu_iter_cond_resched(kvm, &iter, false);
 	}
 	return spte_set;
 }
@@ -1039,7 +1023,7 @@ static bool set_dirty_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
 		tdp_mmu_set_spte(kvm, &iter, new_spte);
 		spte_set = true;
 
-		tdp_mmu_iter_cond_resched(kvm, &iter);
+		tdp_mmu_iter_cond_resched(kvm, &iter, false);
 	}
 
 	return spte_set;
@@ -1092,7 +1076,7 @@ static void zap_collapsible_spte_range(struct kvm *kvm,
 
 		tdp_mmu_set_spte(kvm, &iter, 0);
 
-		spte_set = !tdp_mmu_iter_flush_cond_resched(kvm, &iter);
+		spte_set = !tdp_mmu_iter_cond_resched(kvm, &iter, true);
 	}
 
 	if (spte_set)
-- 
2.30.1
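
P.S. (not part of the patch): the change above is an instance of a common cleanup, collapsing two helpers that differ only in one side effect into a single function that takes a boolean flag, with each call site passing the value matching the variant it used to call. Below is a minimal, self-contained C sketch of that pattern; the function names are hypothetical stand-ins, not the kernel's tdp_mmu_iter_cond_resched(), kvm_flush_remote_tlbs(), or cond_resched_lock().

#include <stdbool.h>
#include <stdio.h>

/* Stand-ins for the flush and yield side effects in the real code. */
static void flush_remote_tlbs(void)     { puts("flush TLBs"); }
static void yield_to_scheduler(void)    { puts("yield"); }

/*
 * Merged helper: before the cleanup there would be two copies of this
 * function, one with the flush call and one without.  After the merge a
 * single helper takes a 'flush' flag and does the flush conditionally.
 */
static bool iter_cond_resched(bool need_yield, bool flush)
{
	if (need_yield) {
		if (flush)
			flush_remote_tlbs();

		yield_to_scheduler();
		return true;
	}

	return false;
}

int main(void)
{
	/* Call sites that used the flushing variant now pass true ... */
	iter_cond_resched(true, true);
	/* ... and the non-flushing call sites pass false. */
	iter_cond_resched(true, false);
	return 0;
}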