From: Ben Gardon <bgardon@google.com>
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, Peter Shier,
    Yulei Zhang, Wanpeng Li, Xiao Guangrong, Kai Huang, Keqian Zhu,
    Ben Gardon
Subject: [PATCH v3 6/8] KVM: x86/mmu: Skip rmap operations if rmaps not allocated
Date: Thu, 6 May 2021 11:42:39 -0700
Message-Id: <20210506184241.618958-7-bgardon@google.com>
In-Reply-To: <20210506184241.618958-1-bgardon@google.com>
References: <20210506184241.618958-1-bgardon@google.com>

If only the TDP MMU is being used to manage the memory mappings for a VM,
then many rmap operations can be skipped as they are guaranteed to be
no-ops. This saves the time that would otherwise be spent on those rmap
operations. It also avoids acquiring the MMU lock in write mode for many
operations.

This makes it safe to run the VM without rmaps allocated when only the TDP
MMU is in use, and sets the stage for deferring rmap allocation until it is
actually needed.
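The shape each modified handler takes is simple enough to sketch outside the
kernel. The following is a minimal, illustrative C sketch of that pattern
only, not kernel code: struct vm, legacy_rmap_unmap(), tdp_mmu_unmap() and
unmap_gfn_range() are hypothetical stand-ins for the memslot rmap flag,
the rmap-based handler, the TDP MMU handler and a patched arch handler.

  #include <stdbool.h>
  #include <stdio.h>

  struct vm {
      bool rmaps_allocated;   /* plays the role of kvm_memslots_have_rmaps() */
      bool tdp_mmu_enabled;   /* plays the role of is_tdp_mmu_enabled() */
  };

  /* Stand-in for the legacy rmap walk; only meaningful when rmaps exist. */
  static bool legacy_rmap_unmap(struct vm *vm)
  {
      (void)vm;
      return true;
  }

  /* Stand-in for the TDP MMU handler; runs regardless of rmaps. */
  static bool tdp_mmu_unmap(struct vm *vm)
  {
      (void)vm;
      return true;
  }

  /* Shape of a patched handler: the rmap walk is skipped when no rmaps exist. */
  static bool unmap_gfn_range(struct vm *vm)
  {
      bool flush = false;

      if (vm->rmaps_allocated)
          flush = legacy_rmap_unmap(vm);

      if (vm->tdp_mmu_enabled)
          flush |= tdp_mmu_unmap(vm);

      return flush;
  }

  int main(void)
  {
      struct vm vm = { .rmaps_allocated = false, .tdp_mmu_enabled = true };

      printf("flush = %d\n", unmap_gfn_range(&vm));
      return 0;
  }

The point of the guard is that when rmaps were never allocated the legacy
walk is not merely cheap, it is skipped entirely, along with the write-mode
mmu_lock acquisition that surrounds it in several of the callers below; the
TDP MMU path still runs unconditionally.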
Signed-off-by: Ben Gardon <bgardon@google.com>
---
 arch/x86/kvm/mmu/mmu.c | 128 +++++++++++++++++++++++++----------------
 1 file changed, 77 insertions(+), 51 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 8761b4925755..730ea84bf7e7 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1189,6 +1189,10 @@ static void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
 	if (is_tdp_mmu_enabled(kvm))
 		kvm_tdp_mmu_clear_dirty_pt_masked(kvm, slot,
 				slot->base_gfn + gfn_offset, mask, true);
+
+	if (!kvm_memslots_have_rmaps(kvm))
+		return;
+
 	while (mask) {
 		rmap_head = __gfn_to_rmap(slot->base_gfn + gfn_offset + __ffs(mask),
 					  PG_LEVEL_4K, slot);
@@ -1218,6 +1222,10 @@ static void kvm_mmu_clear_dirty_pt_masked(struct kvm *kvm,
 	if (is_tdp_mmu_enabled(kvm))
 		kvm_tdp_mmu_clear_dirty_pt_masked(kvm, slot,
 				slot->base_gfn + gfn_offset, mask, false);
+
+	if (!kvm_memslots_have_rmaps(kvm))
+		return;
+
 	while (mask) {
 		rmap_head = __gfn_to_rmap(slot->base_gfn + gfn_offset + __ffs(mask),
 					  PG_LEVEL_4K, slot);
@@ -1260,9 +1268,12 @@ bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm,
 	int i;
 	bool write_protected = false;
 
-	for (i = PG_LEVEL_4K; i <= KVM_MAX_HUGEPAGE_LEVEL; ++i) {
-		rmap_head = __gfn_to_rmap(gfn, i, slot);
-		write_protected |= __rmap_write_protect(kvm, rmap_head, true);
+	if (kvm_memslots_have_rmaps(kvm)) {
+		for (i = PG_LEVEL_4K; i <= KVM_MAX_HUGEPAGE_LEVEL; ++i) {
+			rmap_head = __gfn_to_rmap(gfn, i, slot);
+			write_protected |= __rmap_write_protect(kvm, rmap_head,
+								true);
+		}
 	}
 
 	if (is_tdp_mmu_enabled(kvm))
@@ -1433,9 +1444,10 @@ static __always_inline bool kvm_handle_gfn_range(struct kvm *kvm,
 
 bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
 {
-	bool flush;
+	bool flush = false;
 
-	flush = kvm_handle_gfn_range(kvm, range, kvm_unmap_rmapp);
+	if (kvm_memslots_have_rmaps(kvm))
+		flush = kvm_handle_gfn_range(kvm, range, kvm_unmap_rmapp);
 
 	if (is_tdp_mmu_enabled(kvm))
 		flush |= kvm_tdp_mmu_unmap_gfn_range(kvm, range, flush);
@@ -1445,9 +1457,10 @@ bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
 
 bool kvm_set_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
-	bool flush;
+	bool flush = false;
 
-	flush = kvm_handle_gfn_range(kvm, range, kvm_set_pte_rmapp);
+	if (kvm_memslots_have_rmaps(kvm))
+		flush = kvm_handle_gfn_range(kvm, range, kvm_set_pte_rmapp);
 
 	if (is_tdp_mmu_enabled(kvm))
 		flush |= kvm_tdp_mmu_set_spte_gfn(kvm, range);
@@ -1500,9 +1513,10 @@ static void rmap_recycle(struct kvm_vcpu *vcpu, u64 *spte, gfn_t gfn)
 
 bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
-	bool young;
+	bool young = false;
 
-	young = kvm_handle_gfn_range(kvm, range, kvm_age_rmapp);
+	if (kvm_memslots_have_rmaps(kvm))
+		young = kvm_handle_gfn_range(kvm, range, kvm_age_rmapp);
 
 	if (is_tdp_mmu_enabled(kvm))
 		young |= kvm_tdp_mmu_age_gfn_range(kvm, range);
@@ -1512,9 +1526,10 @@ bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 
 bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
-	bool young;
+	bool young = false;
 
-	young = kvm_handle_gfn_range(kvm, range, kvm_test_age_rmapp);
+	if (kvm_memslots_have_rmaps(kvm))
+		young = kvm_handle_gfn_range(kvm, range, kvm_test_age_rmapp);
 
 	if (is_tdp_mmu_enabled(kvm))
 		young |= kvm_tdp_mmu_test_age_gfn(kvm, range);
@@ -5440,7 +5455,8 @@ static void kvm_mmu_zap_all_fast(struct kvm *kvm)
 	 */
 	kvm_reload_remote_mmus(kvm);
 
-	kvm_zap_obsolete_pages(kvm);
+	if (kvm_memslots_have_rmaps(kvm))
+		kvm_zap_obsolete_pages(kvm);
 
 	write_unlock(&kvm->mmu_lock);
 
@@ -5492,29 +5508,29 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
 	int i;
 	bool flush = false;
 
-	write_lock(&kvm->mmu_lock);
-	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
-		slots = __kvm_memslots(kvm, i);
-		kvm_for_each_memslot(memslot, slots) {
-			gfn_t start, end;
-
-			start = max(gfn_start, memslot->base_gfn);
-			end = min(gfn_end, memslot->base_gfn + memslot->npages);
-			if (start >= end)
-				continue;
+	if (kvm_memslots_have_rmaps(kvm)) {
+		write_lock(&kvm->mmu_lock);
+		for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
+			slots = __kvm_memslots(kvm, i);
+			kvm_for_each_memslot(memslot, slots) {
+				gfn_t start, end;
+
+				start = max(gfn_start, memslot->base_gfn);
+				end = min(gfn_end, memslot->base_gfn + memslot->npages);
+				if (start >= end)
+					continue;
 
-			flush = slot_handle_level_range(kvm, memslot, kvm_zap_rmapp,
-							PG_LEVEL_4K,
-							KVM_MAX_HUGEPAGE_LEVEL,
-							start, end - 1, true, flush);
+				flush = slot_handle_level_range(kvm, memslot,
+						kvm_zap_rmapp, PG_LEVEL_4K,
+						KVM_MAX_HUGEPAGE_LEVEL, start,
+						end - 1, true, flush);
+			}
 		}
+		if (flush)
+			kvm_flush_remote_tlbs_with_address(kvm, gfn_start, gfn_end);
+		write_unlock(&kvm->mmu_lock);
 	}
 
-	if (flush)
-		kvm_flush_remote_tlbs_with_address(kvm, gfn_start, gfn_end);
-
-	write_unlock(&kvm->mmu_lock);
-
 	if (is_tdp_mmu_enabled(kvm)) {
 		flush = false;
 
@@ -5541,12 +5557,15 @@ void kvm_mmu_slot_remove_write_access(struct kvm *kvm,
 				      struct kvm_memory_slot *memslot,
 				      int start_level)
 {
-	bool flush;
+	bool flush = false;
 
-	write_lock(&kvm->mmu_lock);
-	flush = slot_handle_level(kvm, memslot, slot_rmap_write_protect,
-				  start_level, KVM_MAX_HUGEPAGE_LEVEL, false);
-	write_unlock(&kvm->mmu_lock);
+	if (kvm_memslots_have_rmaps(kvm)) {
+		write_lock(&kvm->mmu_lock);
+		flush = slot_handle_level(kvm, memslot, slot_rmap_write_protect,
+					  start_level, KVM_MAX_HUGEPAGE_LEVEL,
+					  false);
+		write_unlock(&kvm->mmu_lock);
+	}
 
 	if (is_tdp_mmu_enabled(kvm)) {
 		read_lock(&kvm->mmu_lock);
@@ -5616,16 +5635,15 @@ void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm,
 	struct kvm_memory_slot *slot = (struct kvm_memory_slot *)memslot;
 	bool flush;
 
-	write_lock(&kvm->mmu_lock);
-	flush = slot_handle_leaf(kvm, slot, kvm_mmu_zap_collapsible_spte, true);
-
-	if (flush)
-		kvm_arch_flush_remote_tlbs_memslot(kvm, slot);
-	write_unlock(&kvm->mmu_lock);
+	if (kvm_memslots_have_rmaps(kvm)) {
+		write_lock(&kvm->mmu_lock);
+		flush = slot_handle_leaf(kvm, slot, kvm_mmu_zap_collapsible_spte, true);
+		if (flush)
+			kvm_arch_flush_remote_tlbs_memslot(kvm, slot);
+		write_unlock(&kvm->mmu_lock);
+	}
 
 	if (is_tdp_mmu_enabled(kvm)) {
-		flush = false;
-
 		read_lock(&kvm->mmu_lock);
 		flush = kvm_tdp_mmu_zap_collapsible_sptes(kvm, slot, flush);
 		if (flush)
@@ -5652,11 +5670,14 @@ void kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm,
 void kvm_mmu_slot_leaf_clear_dirty(struct kvm *kvm,
 				   struct kvm_memory_slot *memslot)
 {
-	bool flush;
+	bool flush = false;
 
-	write_lock(&kvm->mmu_lock);
-	flush = slot_handle_leaf(kvm, memslot, __rmap_clear_dirty, false);
-	write_unlock(&kvm->mmu_lock);
+	if (kvm_memslots_have_rmaps(kvm)) {
+		write_lock(&kvm->mmu_lock);
+		flush = slot_handle_leaf(kvm, memslot, __rmap_clear_dirty,
+					 false);
+		write_unlock(&kvm->mmu_lock);
+	}
 
 	if (is_tdp_mmu_enabled(kvm)) {
 		read_lock(&kvm->mmu_lock);
@@ -5681,6 +5702,14 @@ void kvm_mmu_zap_all(struct kvm *kvm)
 	int ign;
 
 	write_lock(&kvm->mmu_lock);
+	if (is_tdp_mmu_enabled(kvm))
+		kvm_tdp_mmu_zap_all(kvm);
+
+	if (!kvm_memslots_have_rmaps(kvm)) {
+		write_unlock(&kvm->mmu_lock);
+		return;
+	}
+
 restart:
 	list_for_each_entry_safe(sp, node, &kvm->arch.active_mmu_pages, link) {
 		if (WARN_ON(sp->role.invalid))
@@ -5693,9 +5722,6 @@ void kvm_mmu_zap_all(struct kvm *kvm)
 
 	kvm_mmu_commit_zap_page(kvm, &invalid_list);
 
-	if (is_tdp_mmu_enabled(kvm))
-		kvm_tdp_mmu_zap_all(kvm);
-
 	write_unlock(&kvm->mmu_lock);
 }
 
-- 
2.31.1.607.g51e8a6a459-goog