Date: Tue, 11 May 2021 10:16:09 -0700
In-Reply-To: <20210511171610.170160-1-bgardon@google.com>
Message-Id: <20210511171610.170160-7-bgardon@google.com>
References: <20210511171610.170160-1-bgardon@google.com>
X-Mailer: git-send-email 2.31.1.607.g51e8a6a459-goog
Subject: [PATCH v4 6/7] KVM: x86/mmu: Skip rmap operations if rmaps not allocated
From: Ben Gardon <bgardon@google.com>
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, Peter Shier, Yulei Zhang,
    Wanpeng Li, Xiao Guangrong, Kai Huang, Keqian Zhu, David Hildenbrand,
    Ben Gardon

If only the TDP MMU is being used to manage the memory mappings for a VM,
then many rmap operations can be skipped, as they are guaranteed to be
no-ops. This saves the time that would otherwise be spent on those rmap
operations and avoids acquiring the MMU lock in write mode for many of them.

This makes it safe to run the VM without rmaps allocated when only the TDP
MMU is in use, and sets the stage for deferring rmap allocation until the
rmaps are actually needed.
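For readers following the change, here is a minimal, stand-alone C sketch
(not part of the patch) of the guard pattern the diff below applies: the
legacy rmap walk is attempted only when memslots_have_rmaps is set, while
the TDP MMU path runs either way. The struct layout and the helper names
(handle_rmaps, handle_tdp_mmu, unmap_gfn_range) are simplified stand-ins
invented for illustration; only the memslots_have_rmaps check and the
flush accumulation mirror the real code.

/*
 * Illustrative sketch only -- not part of the patch.  Models the guard
 * pattern the diff below applies: the rmap walk is skipped unless
 * memslots_have_rmaps is set, while the TDP MMU path runs regardless.
 */
#include <stdbool.h>
#include <stdio.h>

struct kvm {
	bool memslots_have_rmaps;	/* true only once rmaps are allocated */
	bool tdp_mmu_enabled;
};

/* Placeholder for the rmap walk normally done under the MMU write lock. */
static bool handle_rmaps(void)
{
	printf("walking rmaps\n");
	return true;			/* pretend a TLB flush is now needed */
}

/* Placeholder for the TDP MMU handler. */
static bool handle_tdp_mmu(void)
{
	printf("handling TDP MMU range\n");
	return true;
}

/* Shaped like kvm_unmap_gfn_range() after the patch. */
static bool unmap_gfn_range(struct kvm *kvm)
{
	bool flush = false;

	/* Skip the rmap operation entirely when no rmaps exist: it is a no-op. */
	if (kvm->memslots_have_rmaps)
		flush = handle_rmaps();

	if (kvm->tdp_mmu_enabled)
		flush |= handle_tdp_mmu();

	return flush;
}

int main(void)
{
	struct kvm vm = {
		.memslots_have_rmaps = false,	/* TDP-MMU-only VM */
		.tdp_mmu_enabled = true,
	};

	/* Only the TDP MMU handler runs; the rmap no-op is skipped. */
	if (unmap_gfn_range(&vm))
		printf("flush remote TLBs\n");
	return 0;
}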
Signed-off-by: Ben Gardon <bgardon@google.com>
---
 arch/x86/kvm/mmu/mmu.c | 128 +++++++++++++++++++++++++----------------
 1 file changed, 77 insertions(+), 51 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index f059f2e8c6fe..b0bdb924d519 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1189,6 +1189,10 @@ static void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
 	if (is_tdp_mmu_enabled(kvm))
 		kvm_tdp_mmu_clear_dirty_pt_masked(kvm, slot,
 				slot->base_gfn + gfn_offset, mask, true);
+
+	if (!kvm->arch.memslots_have_rmaps)
+		return;
+
 	while (mask) {
 		rmap_head = __gfn_to_rmap(slot->base_gfn + gfn_offset + __ffs(mask),
 					  PG_LEVEL_4K, slot);
@@ -1218,6 +1222,10 @@ static void kvm_mmu_clear_dirty_pt_masked(struct kvm *kvm,
 	if (is_tdp_mmu_enabled(kvm))
 		kvm_tdp_mmu_clear_dirty_pt_masked(kvm, slot,
 				slot->base_gfn + gfn_offset, mask, false);
+
+	if (!kvm->arch.memslots_have_rmaps)
+		return;
+
 	while (mask) {
 		rmap_head = __gfn_to_rmap(slot->base_gfn + gfn_offset + __ffs(mask),
 					  PG_LEVEL_4K, slot);
@@ -1260,9 +1268,12 @@ bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm,
 	int i;
 	bool write_protected = false;
 
-	for (i = PG_LEVEL_4K; i <= KVM_MAX_HUGEPAGE_LEVEL; ++i) {
-		rmap_head = __gfn_to_rmap(gfn, i, slot);
-		write_protected |= __rmap_write_protect(kvm, rmap_head, true);
+	if (kvm->arch.memslots_have_rmaps) {
+		for (i = PG_LEVEL_4K; i <= KVM_MAX_HUGEPAGE_LEVEL; ++i) {
+			rmap_head = __gfn_to_rmap(gfn, i, slot);
+			write_protected |= __rmap_write_protect(kvm, rmap_head,
+								true);
+		}
 	}
 
 	if (is_tdp_mmu_enabled(kvm))
@@ -1433,9 +1444,10 @@ static __always_inline bool kvm_handle_gfn_range(struct kvm *kvm,
 
 bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
 {
-	bool flush;
+	bool flush = false;
 
-	flush = kvm_handle_gfn_range(kvm, range, kvm_unmap_rmapp);
+	if (kvm->arch.memslots_have_rmaps)
+		flush = kvm_handle_gfn_range(kvm, range, kvm_unmap_rmapp);
 
 	if (is_tdp_mmu_enabled(kvm))
 		flush |= kvm_tdp_mmu_unmap_gfn_range(kvm, range, flush);
@@ -1445,9 +1457,10 @@ bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
 
 bool kvm_set_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
-	bool flush;
+	bool flush = false;
 
-	flush = kvm_handle_gfn_range(kvm, range, kvm_set_pte_rmapp);
+	if (kvm->arch.memslots_have_rmaps)
+		flush = kvm_handle_gfn_range(kvm, range, kvm_set_pte_rmapp);
 
 	if (is_tdp_mmu_enabled(kvm))
 		flush |= kvm_tdp_mmu_set_spte_gfn(kvm, range);
@@ -1500,9 +1513,10 @@ static void rmap_recycle(struct kvm_vcpu *vcpu, u64 *spte, gfn_t gfn)
 
 bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
-	bool young;
+	bool young = false;
 
-	young = kvm_handle_gfn_range(kvm, range, kvm_age_rmapp);
+	if (kvm->arch.memslots_have_rmaps)
+		young = kvm_handle_gfn_range(kvm, range, kvm_age_rmapp);
 
 	if (is_tdp_mmu_enabled(kvm))
 		young |= kvm_tdp_mmu_age_gfn_range(kvm, range);
@@ -1512,9 +1526,10 @@ bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 
 bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
-	bool young;
+	bool young = false;
 
-	young = kvm_handle_gfn_range(kvm, range, kvm_test_age_rmapp);
+	if (kvm->arch.memslots_have_rmaps)
+		young = kvm_handle_gfn_range(kvm, range, kvm_test_age_rmapp);
 
 	if (is_tdp_mmu_enabled(kvm))
 		young |= kvm_tdp_mmu_test_age_gfn(kvm, range);
@@ -5440,7 +5455,8 @@ static void kvm_mmu_zap_all_fast(struct kvm *kvm)
 	 */
 	kvm_reload_remote_mmus(kvm);
 
-	kvm_zap_obsolete_pages(kvm);
+	if (kvm->arch.memslots_have_rmaps)
+		kvm_zap_obsolete_pages(kvm);
 
 	write_unlock(&kvm->mmu_lock);
 
@@ -5492,29 +5508,29 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
 	int i;
 	bool flush = false;
 
-	write_lock(&kvm->mmu_lock);
-	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
-		slots = __kvm_memslots(kvm, i);
-		kvm_for_each_memslot(memslot, slots) {
-			gfn_t start, end;
-
-			start = max(gfn_start, memslot->base_gfn);
-			end = min(gfn_end, memslot->base_gfn + memslot->npages);
-			if (start >= end)
-				continue;
+	if (kvm->arch.memslots_have_rmaps) {
+		write_lock(&kvm->mmu_lock);
+		for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
+			slots = __kvm_memslots(kvm, i);
+			kvm_for_each_memslot(memslot, slots) {
+				gfn_t start, end;
+
+				start = max(gfn_start, memslot->base_gfn);
+				end = min(gfn_end, memslot->base_gfn + memslot->npages);
+				if (start >= end)
+					continue;
 
-			flush = slot_handle_level_range(kvm, memslot, kvm_zap_rmapp,
-							PG_LEVEL_4K,
-							KVM_MAX_HUGEPAGE_LEVEL,
-							start, end - 1, true, flush);
+				flush = slot_handle_level_range(kvm, memslot,
+						kvm_zap_rmapp, PG_LEVEL_4K,
+						KVM_MAX_HUGEPAGE_LEVEL, start,
+						end - 1, true, flush);
+			}
 		}
+		if (flush)
+			kvm_flush_remote_tlbs_with_address(kvm, gfn_start, gfn_end);
+		write_unlock(&kvm->mmu_lock);
 	}
 
-	if (flush)
-		kvm_flush_remote_tlbs_with_address(kvm, gfn_start, gfn_end);
-
-	write_unlock(&kvm->mmu_lock);
-
 	if (is_tdp_mmu_enabled(kvm)) {
 		flush = false;
 
@@ -5541,12 +5557,15 @@ void kvm_mmu_slot_remove_write_access(struct kvm *kvm,
 				      struct kvm_memory_slot *memslot,
 				      int start_level)
 {
-	bool flush;
+	bool flush = false;
 
-	write_lock(&kvm->mmu_lock);
-	flush = slot_handle_level(kvm, memslot, slot_rmap_write_protect,
-				  start_level, KVM_MAX_HUGEPAGE_LEVEL, false);
-	write_unlock(&kvm->mmu_lock);
+	if (kvm->arch.memslots_have_rmaps) {
+		write_lock(&kvm->mmu_lock);
+		flush = slot_handle_level(kvm, memslot, slot_rmap_write_protect,
+					  start_level, KVM_MAX_HUGEPAGE_LEVEL,
+					  false);
+		write_unlock(&kvm->mmu_lock);
+	}
 
 	if (is_tdp_mmu_enabled(kvm)) {
 		read_lock(&kvm->mmu_lock);
@@ -5616,16 +5635,15 @@ void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm,
 	struct kvm_memory_slot *slot = (struct kvm_memory_slot *)memslot;
 	bool flush;
 
-	write_lock(&kvm->mmu_lock);
-	flush = slot_handle_leaf(kvm, slot, kvm_mmu_zap_collapsible_spte, true);
-
-	if (flush)
-		kvm_arch_flush_remote_tlbs_memslot(kvm, slot);
-	write_unlock(&kvm->mmu_lock);
+	if (kvm->arch.memslots_have_rmaps) {
+		write_lock(&kvm->mmu_lock);
+		flush = slot_handle_leaf(kvm, slot, kvm_mmu_zap_collapsible_spte, true);
+		if (flush)
+			kvm_arch_flush_remote_tlbs_memslot(kvm, slot);
+		write_unlock(&kvm->mmu_lock);
+	}
 
 	if (is_tdp_mmu_enabled(kvm)) {
-		flush = false;
-
 		read_lock(&kvm->mmu_lock);
 		flush = kvm_tdp_mmu_zap_collapsible_sptes(kvm, slot, flush);
 		if (flush)
@@ -5652,11 +5670,14 @@ void kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm,
 void kvm_mmu_slot_leaf_clear_dirty(struct kvm *kvm,
 				   struct kvm_memory_slot *memslot)
 {
-	bool flush;
+	bool flush = false;
 
-	write_lock(&kvm->mmu_lock);
-	flush = slot_handle_leaf(kvm, memslot, __rmap_clear_dirty, false);
-	write_unlock(&kvm->mmu_lock);
+	if (kvm->arch.memslots_have_rmaps) {
+		write_lock(&kvm->mmu_lock);
+		flush = slot_handle_leaf(kvm, memslot, __rmap_clear_dirty,
+					 false);
+		write_unlock(&kvm->mmu_lock);
+	}
 
 	if (is_tdp_mmu_enabled(kvm)) {
 		read_lock(&kvm->mmu_lock);
@@ -5681,6 +5702,14 @@ void kvm_mmu_zap_all(struct kvm *kvm)
 	int ign;
 
 	write_lock(&kvm->mmu_lock);
+	if (is_tdp_mmu_enabled(kvm))
+		kvm_tdp_mmu_zap_all(kvm);
+
+	if (!kvm->arch.memslots_have_rmaps) {
+		write_unlock(&kvm->mmu_lock);
+		return;
+	}
+
 restart:
 	list_for_each_entry_safe(sp, node, &kvm->arch.active_mmu_pages, link) {
 		if (WARN_ON(sp->role.invalid))
@@ -5693,9 +5722,6 @@ void kvm_mmu_zap_all(struct kvm *kvm)
 
 	kvm_mmu_commit_zap_page(kvm, &invalid_list);
 
-	if (is_tdp_mmu_enabled(kvm))
-		kvm_tdp_mmu_zap_all(kvm);
-
 	write_unlock(&kvm->mmu_lock);
 }
 
-- 
2.31.1.607.g51e8a6a459-goog