From: "Maciej S. Szmigiero"
To: Paolo Bonzini, Vitaly Kuznetsov
Cc: Sean Christopherson, Wanpeng Li, Jim Mattson, Igor Mammedov, Marc Zyngier, James Morse, Julien Thierry, Suzuki K Poulose, Huacai Chen, Aleksandar Markovic, Paul Mackerras, Christian Borntraeger, Janosch Frank, David Hildenbrand, Cornelia Huck, Claudio Imbrenda, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 12/13] KVM: Optimize gfn lookup in kvm_zap_gfn_range()
Date: Fri, 13 Aug 2021 21:33:25 +0200

Introduce a memslots gfn upper bound operation and use it to optimize
kvm_zap_gfn_range().
This way this handler can do a quick lookup for intersecting gfns and won't
have to do a linear scan of the whole memslot set.

Signed-off-by: Maciej S. Szmigiero
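[Editorial aside, not part of the patch: the "upper bound" operation the commit message refers to finds the first memslot whose base gfn is strictly greater than the query, so the caller can start scanning at that slot's predecessor instead of at the beginning. The following is a minimal user-space sketch of the same search over a sorted array rather than KVM's rb-tree; `toy_slot` and `upper_bound` are illustrative names, not KVM API.]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef uint64_t gfn_t;

/* Toy memslot covering guest frames [base_gfn, base_gfn + npages). */
struct toy_slot {
	gfn_t base_gfn;
	gfn_t npages;
};

/*
 * Return the index of the first slot with base_gfn > gfn (the "upper
 * bound"), or n if no such slot exists.  This mirrors the descend-and-
 * remember walk in kvm_memslots_gfn_upper_bound(), just over a sorted
 * array instead of a red-black tree.
 */
static size_t upper_bound(const struct toy_slot *slots, size_t n, gfn_t gfn)
{
	size_t lo = 0, hi = n, result = n;

	while (lo < hi) {
		size_t mid = lo + (hi - lo) / 2;

		if (gfn < slots[mid].base_gfn) {
			result = mid;	/* candidate; try to find a lower one */
			hi = mid;
		} else {
			lo = mid + 1;
		}
	}
	return result;
}
```

As in the patch, the useful starting point for a range scan is the predecessor of this result: the last slot that begins at or below the range start.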
---
 arch/x86/kvm/mmu/mmu.c   | 11 +++++--
 include/linux/kvm_host.h | 69 ++++++++++++++++++++++++++++++++++++++++
 2 files changed, 78 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 2a4ac5100ff2..fd068efbe462 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5637,18 +5637,25 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
 	int i;
 	bool flush = false;
 
+	if (WARN_ON_ONCE(gfn_end <= gfn_start))
+		return;
+
 	write_lock(&kvm->mmu_lock);
 
 	kvm_inc_notifier_count(kvm, gfn_start, gfn_end);
 
 	if (kvm_memslots_have_rmaps(kvm)) {
 		for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
-			int bkt;
+			int idx;
+			struct rb_node *node;
 
 			slots = __kvm_memslots(kvm, i);
-			kvm_for_each_memslot(memslot, bkt, slots) {
+			idx = slots->node_idx;
+
+			kvm_for_each_memslot_in_gfn_range(node, slots, gfn_start, gfn_end) {
 				gfn_t start, end;
 
+				memslot = container_of(node, struct kvm_memory_slot, gfn_node[idx]);
 				start = max(gfn_start, memslot->base_gfn);
 				end = min(gfn_end, memslot->base_gfn + memslot->npages);
 				if (start >= end)
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 71a9e13dab2d..c58f5a9e18f7 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -833,6 +833,75 @@ struct kvm_memory_slot *id_to_memslot(struct kvm_memslots *slots, int id)
 	return NULL;
 }
 
+static inline
+struct rb_node *kvm_memslots_gfn_upper_bound(struct kvm_memslots *slots, gfn_t gfn)
+{
+	int idx = slots->node_idx;
+	struct rb_node *node, *result = NULL;
+
+	for (node = slots->gfn_tree.rb_node; node; ) {
+		struct kvm_memory_slot *slot;
+
+		slot = container_of(node, struct kvm_memory_slot, gfn_node[idx]);
+		if (gfn < slot->base_gfn) {
+			result = node;
+			node = node->rb_left;
+		} else
+			node = node->rb_right;
+	}
+
+	return result;
+}
+
+static inline
+struct rb_node *kvm_for_each_in_gfn_first(struct kvm_memslots *slots, gfn_t start)
+{
+	struct rb_node *node;
+
+	/*
+	 * Find the slot with the lowest gfn that can possibly intersect with
+	 * the range, so we'll ideally have slot start <= range start
+	 */
+	node = kvm_memslots_gfn_upper_bound(slots, start);
+	if (node) {
+		struct rb_node *pnode;
+
+		/*
+		 * A NULL previous node means that the very first slot
+		 * already has a higher start gfn.
+		 * In this case slot start > range start.
+		 */
+		pnode = rb_prev(node);
+		if (pnode)
+			node = pnode;
+	} else {
+		/* a NULL node below means no slots */
+		node = rb_last(&slots->gfn_tree);
+	}
+
+	return node;
+}
+
+static inline
+bool kvm_for_each_in_gfn_no_more(struct kvm_memslots *slots, struct rb_node *node, gfn_t end)
+{
+	struct kvm_memory_slot *memslot;
+
+	memslot = container_of(node, struct kvm_memory_slot, gfn_node[slots->node_idx]);
+
+	/*
+	 * If this slot starts beyond or at the end of the range so does
+	 * every next one
+	 */
+	return memslot->base_gfn >= end;
+}
+
+/* Iterate over each memslot *possibly* intersecting [start, end) range */
+#define kvm_for_each_memslot_in_gfn_range(node, slots, start, end)	\
+	for (node = kvm_for_each_in_gfn_first(slots, start);		\
+	     node && !kvm_for_each_in_gfn_no_more(slots, node, end);	\
+	     node = rb_next(node))					\
+
 /*
  * KVM_SET_USER_MEMORY_REGION ioctl allows the following operations:
  * - create a new memory slot
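[Editorial aside, not part of the patch: the iteration pattern above, start at the predecessor of the upper bound of `start`, stop at the first slot whose base gfn is at or beyond `end`, can be sketched in user space over a sorted array. `first_possible` and `count_intersecting` are hypothetical helper names for illustration only, assuming slots are sorted by `base_gfn` and non-overlapping like KVM's gfn tree.]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef uint64_t gfn_t;

/* Toy memslot covering guest frames [base_gfn, base_gfn + npages). */
struct toy_slot {
	gfn_t base_gfn;
	gfn_t npages;
};

/*
 * Index of the first slot that could intersect [start, ...): the last slot
 * with base_gfn <= start, or slot 0 if every slot starts above start.  This
 * is the array analogue of kvm_for_each_in_gfn_first().
 */
static size_t first_possible(const struct toy_slot *slots, size_t n, gfn_t start)
{
	size_t i = 0;

	while (i < n && slots[i].base_gfn <= start)
		i++;
	/* i is now the upper bound; step back to its predecessor if any */
	return i > 0 ? i - 1 : 0;
}

/*
 * Count slots actually intersecting [start, end), visiting only the
 * candidate range, like kvm_for_each_memslot_in_gfn_range() does.
 */
static size_t count_intersecting(const struct toy_slot *slots, size_t n,
				 gfn_t start, gfn_t end)
{
	size_t count = 0;

	for (size_t i = first_possible(slots, n, start); i < n; i++) {
		const struct toy_slot *s = &slots[i];

		if (s->base_gfn >= end)		/* every later slot starts even higher */
			break;
		if (s->base_gfn + s->npages > start)	/* genuine overlap */
			count++;
	}
	return count;
}
```

Note that, as in the patch, the first candidate may turn out not to intersect the range at all (its end can fall below `start`), which is why the loop body still checks for genuine overlap; the lookup only guarantees a tight starting point, not a non-empty intersection.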