From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Peter Feiner, Jon Cargille
Subject: [PATCH 1/2] KVM: x86/mmu: Avoid multiple hash lookups in kvm_get_mmu_page()
Date: Tue, 23 Jun 2020 12:40:26 -0700
Message-Id: <20200623194027.23135-2-sean.j.christopherson@intel.com>
In-Reply-To: <20200623194027.23135-1-sean.j.christopherson@intel.com>
References: <20200623194027.23135-1-sean.j.christopherson@intel.com>

Refactor for_each_valid_sp() to take the list of shadow pages instead of
retrieving it from a gfn to avoid doing the gfn->list hash and lookup
multiple times during kvm_get_mmu_page().

Cc: Peter Feiner
Cc: Jon Cargille
Cc: Jim Mattson
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 17 +++++++++--------
 1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 3dd0af7e7515..67f8f82e9783 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2258,15 +2258,14 @@ static bool kvm_mmu_prepare_zap_page(struct kvm *kvm, struct kvm_mmu_page *sp,
 static void kvm_mmu_commit_zap_page(struct kvm *kvm,
 				    struct list_head *invalid_list);
 
-
-#define for_each_valid_sp(_kvm, _sp, _gfn)				\
-	hlist_for_each_entry(_sp,					\
-	  &(_kvm)->arch.mmu_page_hash[kvm_page_table_hashfn(_gfn)], hash_link) \
+#define for_each_valid_sp(_kvm, _sp, _list)				\
+	hlist_for_each_entry(_sp, _list, hash_link)			\
 		if (is_obsolete_sp((_kvm), (_sp))) {			\
 		} else
 
 #define for_each_gfn_indirect_valid_sp(_kvm, _sp, _gfn)		\
-	for_each_valid_sp(_kvm, _sp, _gfn)				\
+	for_each_valid_sp(_kvm, _sp,					\
+	  &(_kvm)->arch.mmu_page_hash[kvm_page_table_hashfn(_gfn)])	\
 		if ((_sp)->gfn != (_gfn) || (_sp)->role.direct) {} else
 
 static inline bool is_ept_sp(struct kvm_mmu_page *sp)
@@ -2477,6 +2476,7 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 					     unsigned int access)
 {
 	union kvm_mmu_page_role role;
+	struct hlist_head *sp_list;
 	unsigned quadrant;
 	struct kvm_mmu_page *sp;
 	bool need_sync = false;
@@ -2496,7 +2496,9 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 		quadrant &= (1 << ((PT32_PT_BITS - PT64_PT_BITS) * level)) - 1;
 		role.quadrant = quadrant;
 	}
-	for_each_valid_sp(vcpu->kvm, sp, gfn) {
+
+	sp_list = &vcpu->kvm->arch.mmu_page_hash[kvm_page_table_hashfn(gfn)];
+	for_each_valid_sp(vcpu->kvm, sp, sp_list) {
 		if (sp->gfn != gfn) {
 			collisions++;
 			continue;
@@ -2533,8 +2535,7 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 
 	sp->gfn = gfn;
 	sp->role = role;
-	hlist_add_head(&sp->hash_link,
-		&vcpu->kvm->arch.mmu_page_hash[kvm_page_table_hashfn(gfn)]);
+	hlist_add_head(&sp->hash_link, sp_list);
 	if (!direct) {
 		/*
 		 * we should do write protection before syncing pages
-- 
2.26.0