From: "Kirill A. Shutemov"
Shutemov" To: Dave Hansen , Andy Lutomirski , Peter Zijlstra , Sean Christopherson , Jim Mattson Cc: David Rientjes , "Edgecombe, Rick P" , "Kleen, Andi" , "Yamahata, Isaku" , Erdem Aktas , Steve Rutherford , Peter Gonda , David Hildenbrand , x86@kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, "Kirill A. Shutemov" Subject: [RFCv2 12/13] KVM: passdown struct kvm to hva_to_pfn_slow() Date: Fri, 16 Apr 2021 18:41:05 +0300 Message-Id: <20210416154106.23721-13-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20210416154106.23721-1-kirill.shutemov@linux.intel.com> References: <20210416154106.23721-1-kirill.shutemov@linux.intel.com> MIME-Version: 1.0 Content-Transfer-Encoding: 8bit Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Make struct kvm pointer available within hva_to_pfn_slow(). It is prepartion for the next patch. Signed-off-by: Kirill A. Shutemov --- arch/powerpc/kvm/book3s_64_mmu_hv.c | 2 +- arch/powerpc/kvm/book3s_64_mmu_radix.c | 2 +- arch/x86/kvm/mmu/mmu.c | 8 +++-- include/linux/kvm_host.h | 12 ++++--- virt/kvm/kvm_main.c | 49 ++++++++++++++------------ 5 files changed, 41 insertions(+), 32 deletions(-) diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c index 38ea396a23d6..86781ff76fcb 100644 --- a/arch/powerpc/kvm/book3s_64_mmu_hv.c +++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c @@ -589,7 +589,7 @@ int kvmppc_book3s_hv_page_fault(struct kvm_vcpu *vcpu, write_ok = true; } else { /* Call KVM generic code to do the slow-path check */ - pfn = __gfn_to_pfn_memslot(memslot, gfn, false, NULL, + pfn = __gfn_to_pfn_memslot(kvm, memslot, gfn, false, NULL, writing, &write_ok); if (is_error_noslot_pfn(pfn)) return -EFAULT; diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c index bb35490400e9..319a1a99153f 100644 --- a/arch/powerpc/kvm/book3s_64_mmu_radix.c +++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c @@ -821,7 +821,7 @@ int kvmppc_book3s_instantiate_page(struct kvm_vcpu *vcpu, unsigned long pfn; /* Call KVM generic code to do the slow-path check */ - pfn = __gfn_to_pfn_memslot(memslot, gfn, false, NULL, + pfn = __gfn_to_pfn_memslot(kvm, memslot, gfn, false, NULL, writing, upgrade_p); if (is_error_noslot_pfn(pfn)) return -EFAULT; diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 6d16481aa29d..3b97342af026 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -2687,7 +2687,7 @@ static kvm_pfn_t pte_prefetch_gfn_to_pfn(struct kvm_vcpu *vcpu, gfn_t gfn, if (!slot) return KVM_PFN_ERR_FAULT; - return gfn_to_pfn_memslot_atomic(slot, gfn); + return gfn_to_pfn_memslot_atomic(vcpu->kvm, slot, gfn); } static int direct_pte_prefetch_many(struct kvm_vcpu *vcpu, @@ -3672,7 +3672,8 @@ static bool try_async_pf(struct kvm_vcpu *vcpu, bool prefault, gfn_t gfn, } async = false; - *pfn = __gfn_to_pfn_memslot(slot, gfn, false, &async, write, writable); + *pfn = __gfn_to_pfn_memslot(vcpu->kvm, slot, gfn, false, + &async, write, writable); if (!async) return false; /* *pfn has correct page already */ @@ -3686,7 +3687,8 @@ static bool try_async_pf(struct kvm_vcpu *vcpu, bool prefault, gfn_t gfn, return true; } - *pfn = __gfn_to_pfn_memslot(slot, gfn, false, NULL, write, writable); + *pfn = __gfn_to_pfn_memslot(vcpu->kvm, slot, gfn, false, + NULL, write, writable); return false; } diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index f3b1013fb22c..fadaccb95a4c 100644 --- a/include/linux/kvm_host.h +++ 
+++ b/include/linux/kvm_host.h
@@ -725,11 +725,13 @@ void kvm_set_page_accessed(struct page *page);
 kvm_pfn_t gfn_to_pfn(struct kvm *kvm, gfn_t gfn);
 kvm_pfn_t gfn_to_pfn_prot(struct kvm *kvm, gfn_t gfn, bool write_fault,
		      bool *writable);
-kvm_pfn_t gfn_to_pfn_memslot(struct kvm_memory_slot *slot, gfn_t gfn);
-kvm_pfn_t gfn_to_pfn_memslot_atomic(struct kvm_memory_slot *slot, gfn_t gfn);
-kvm_pfn_t __gfn_to_pfn_memslot(struct kvm_memory_slot *slot, gfn_t gfn,
-			       bool atomic, bool *async, bool write_fault,
-			       bool *writable);
+kvm_pfn_t gfn_to_pfn_memslot(struct kvm *kvm,
+			     struct kvm_memory_slot *slot, gfn_t gfn);
+kvm_pfn_t gfn_to_pfn_memslot_atomic(struct kvm *kvm,
+				    struct kvm_memory_slot *slot, gfn_t gfn);
+kvm_pfn_t __gfn_to_pfn_memslot(struct kvm *kvm, struct kvm_memory_slot *slot,
+			       gfn_t gfn, bool atomic, bool *async,
+			       bool write_fault, bool *writable);
 
 void kvm_release_pfn_clean(kvm_pfn_t pfn);
 void kvm_release_pfn_dirty(kvm_pfn_t pfn);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 8367d88ce39b..3e2dbaef7979 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2009,9 +2009,9 @@ static kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool *async,
 	return pfn;
 }
 
-kvm_pfn_t __gfn_to_pfn_memslot(struct kvm_memory_slot *slot, gfn_t gfn,
-			       bool atomic, bool *async, bool write_fault,
-			       bool *writable)
+kvm_pfn_t __gfn_to_pfn_memslot(struct kvm *kvm, struct kvm_memory_slot *slot,
+			       gfn_t gfn, bool atomic, bool *async,
+			       bool write_fault, bool *writable)
 {
 	unsigned long addr = __gfn_to_hva_many(slot, gfn, NULL, write_fault);
 
@@ -2041,38 +2041,43 @@ EXPORT_SYMBOL_GPL(__gfn_to_pfn_memslot);
 kvm_pfn_t gfn_to_pfn_prot(struct kvm *kvm, gfn_t gfn, bool write_fault,
			  bool *writable)
 {
-	return __gfn_to_pfn_memslot(gfn_to_memslot(kvm, gfn), gfn, false, NULL,
-				    write_fault, writable);
+	return __gfn_to_pfn_memslot(kvm, gfn_to_memslot(kvm, gfn), gfn, false,
+				    NULL, write_fault, writable);
 }
 EXPORT_SYMBOL_GPL(gfn_to_pfn_prot);
 
-kvm_pfn_t gfn_to_pfn_memslot(struct kvm_memory_slot *slot, gfn_t gfn)
+kvm_pfn_t gfn_to_pfn_memslot(struct kvm *kvm,
+			     struct kvm_memory_slot *slot, gfn_t gfn)
 {
-	return __gfn_to_pfn_memslot(slot, gfn, false, NULL, true, NULL);
+	return __gfn_to_pfn_memslot(kvm, slot, gfn, false, NULL, true, NULL);
 }
 EXPORT_SYMBOL_GPL(gfn_to_pfn_memslot);
 
-kvm_pfn_t gfn_to_pfn_memslot_atomic(struct kvm_memory_slot *slot, gfn_t gfn)
+kvm_pfn_t gfn_to_pfn_memslot_atomic(struct kvm *kvm,
+				    struct kvm_memory_slot *slot, gfn_t gfn)
 {
-	return __gfn_to_pfn_memslot(slot, gfn, true, NULL, true, NULL);
+	return __gfn_to_pfn_memslot(kvm, slot, gfn, true, NULL, true, NULL);
 }
 EXPORT_SYMBOL_GPL(gfn_to_pfn_memslot_atomic);
 
 kvm_pfn_t kvm_vcpu_gfn_to_pfn_atomic(struct kvm_vcpu *vcpu, gfn_t gfn)
 {
-	return gfn_to_pfn_memslot_atomic(kvm_vcpu_gfn_to_memslot(vcpu, gfn), gfn);
+	return gfn_to_pfn_memslot_atomic(vcpu->kvm,
+					 kvm_vcpu_gfn_to_memslot(vcpu, gfn),
+					 gfn);
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_gfn_to_pfn_atomic);
 
 kvm_pfn_t gfn_to_pfn(struct kvm *kvm, gfn_t gfn)
 {
-	return gfn_to_pfn_memslot(gfn_to_memslot(kvm, gfn), gfn);
+	return gfn_to_pfn_memslot(kvm, gfn_to_memslot(kvm, gfn), gfn);
 }
 EXPORT_SYMBOL_GPL(gfn_to_pfn);
 
 kvm_pfn_t kvm_vcpu_gfn_to_pfn(struct kvm_vcpu *vcpu, gfn_t gfn)
 {
-	return gfn_to_pfn_memslot(kvm_vcpu_gfn_to_memslot(vcpu, gfn), gfn);
+	return gfn_to_pfn_memslot(vcpu->kvm,
+				  kvm_vcpu_gfn_to_memslot(vcpu, gfn), gfn);
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_gfn_to_pfn);
 
@@ -2130,21 +2135,21 @@ void kvm_release_pfn(kvm_pfn_t pfn, bool dirty, struct gfn_to_pfn_cache *cache)
 		kvm_release_pfn_clean(pfn);
 }
 
-static void kvm_cache_gfn_to_pfn(struct kvm_memory_slot *slot, gfn_t gfn,
+static void kvm_cache_gfn_to_pfn(struct kvm *kvm,
+				 struct kvm_memory_slot *slot, gfn_t gfn,
 				 struct gfn_to_pfn_cache *cache, u64 gen)
 {
 	kvm_release_pfn(cache->pfn, cache->dirty, cache);
 
-	cache->pfn = gfn_to_pfn_memslot(slot, gfn);
+	cache->pfn = gfn_to_pfn_memslot(kvm, slot, gfn);
 	cache->gfn = gfn;
 	cache->dirty = false;
 	cache->generation = gen;
 }
 
-static int __kvm_map_gfn(struct kvm_memslots *slots, gfn_t gfn,
-			 struct kvm_host_map *map,
-			 struct gfn_to_pfn_cache *cache,
-			 bool atomic)
+static int __kvm_map_gfn(struct kvm *kvm, struct kvm_memslots *slots,
+			 gfn_t gfn, struct kvm_host_map *map,
+			 struct gfn_to_pfn_cache *cache, bool atomic)
 {
 	kvm_pfn_t pfn;
 	void *hva = NULL;
@@ -2160,13 +2165,13 @@ static int __kvm_map_gfn(struct kvm_memslots *slots, gfn_t gfn,
 			cache->generation != gen) {
 			if (atomic)
 				return -EAGAIN;
-			kvm_cache_gfn_to_pfn(slot, gfn, cache, gen);
+			kvm_cache_gfn_to_pfn(kvm, slot, gfn, cache, gen);
 		}
 		pfn = cache->pfn;
 	} else {
 		if (atomic)
 			return -EAGAIN;
-		pfn = gfn_to_pfn_memslot(slot, gfn);
+		pfn = gfn_to_pfn_memslot(kvm, slot, gfn);
 	}
 	if (is_error_noslot_pfn(pfn))
 		return -EINVAL;
@@ -2199,14 +2204,14 @@ static int __kvm_map_gfn(struct kvm_memslots *slots, gfn_t gfn,
 int kvm_map_gfn(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map,
 		struct gfn_to_pfn_cache *cache, bool atomic)
 {
-	return __kvm_map_gfn(kvm_memslots(vcpu->kvm), gfn, map,
+	return __kvm_map_gfn(vcpu->kvm, kvm_memslots(vcpu->kvm), gfn, map,
 			     cache, atomic);
 }
 EXPORT_SYMBOL_GPL(kvm_map_gfn);
 
 int kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map)
 {
-	return __kvm_map_gfn(kvm_vcpu_memslots(vcpu), gfn, map,
+	return __kvm_map_gfn(vcpu->kvm, kvm_vcpu_memslots(vcpu), gfn, map,
 			     NULL, false);
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_map);
-- 
2.26.3
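
For reference, below is a minimal sketch (not part of the patch) of how a caller
holding a struct kvm_vcpu would use the helpers after this change. The wrapper
kvm_example_fetch_pfn() and its locals are hypothetical names used only for
illustration; only the __gfn_to_pfn_memslot() signature comes from the patch.

#include <linux/kvm_host.h>

/* Hypothetical caller, assuming the signatures introduced by this patch. */
static kvm_pfn_t kvm_example_fetch_pfn(struct kvm_vcpu *vcpu, gfn_t gfn)
{
	struct kvm_memory_slot *slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
	bool async = false;
	bool writable = false;

	if (!slot)
		return KVM_PFN_ERR_FAULT;

	/* The VM handle is now passed down explicitly as the first argument. */
	return __gfn_to_pfn_memslot(vcpu->kvm, slot, gfn, false, &async,
				    true, &writable);
}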