From: Yan Zhao <yan.y.zhao@intel.com>
To: iommu@lists.linux.dev, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: alex.williamson@redhat.com, jgg@nvidia.com, pbonzini@redhat.com,
	seanjc@google.com, joro@8bytes.org, will@kernel.org,
	robin.murphy@arm.com, kevin.tian@intel.com, baolu.lu@linux.intel.com,
	dwmw2@infradead.org, yi.l.liu@intel.com, Yan Zhao <yan.y.zhao@intel.com>
Subject: [RFC PATCH 31/42] KVM: x86/mmu: add extra param "kvm" to kvm_faultin_pfn()
Date: Sat, 2 Dec 2023 17:30:49 +0800
Message-Id: <20231202093049.15341-1-yan.y.zhao@intel.com>
In-Reply-To: <20231202091211.13376-1-yan.y.zhao@intel.com>
References: <20231202091211.13376-1-yan.y.zhao@intel.com>

Add an extra param "kvm" to kvm_faultin_pfn() to allow param "vcpu" to be
NULL in the future, permitting page faults in non-vCPU context.
This is a preparation for the later export of TDP by the KVM MMU.
No-slot mapping (for the emulated MMIO cache), async page faults, and
sig-pending PFNs are not compatible with page faults in non-vCPU context.

Signed-off-by: Yan Zhao <yan.y.zhao@intel.com>
---
 arch/x86/kvm/mmu/mmu.c         | 35 +++++++++++++++++++---------------
 arch/x86/kvm/mmu/paging_tmpl.h |  2 +-
 2 files changed, 21 insertions(+), 16 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index bcf17aef29119..df5651ea99139 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3266,9 +3266,10 @@ static void kvm_send_hwpoison_signal(struct kvm_memory_slot *slot, gfn_t gfn)
 	send_sig_mceerr(BUS_MCEERR_AR, (void __user *)hva, PAGE_SHIFT, current);
 }
 
-static int kvm_handle_error_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
+static int kvm_handle_error_pfn(struct kvm *kvm, struct kvm_vcpu *vcpu,
+				struct kvm_page_fault *fault)
 {
-	if (is_sigpending_pfn(fault->pfn)) {
+	if (is_sigpending_pfn(fault->pfn) && vcpu) {
 		kvm_handle_signal_exit(vcpu);
 		return -EINTR;
 	}
@@ -3289,12 +3290,15 @@ static int kvm_handle_error_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fa
 	return -EFAULT;
 }
 
-static int kvm_handle_noslot_fault(struct kvm_vcpu *vcpu,
+static int kvm_handle_noslot_fault(struct kvm *kvm, struct kvm_vcpu *vcpu,
 				   struct kvm_page_fault *fault,
 				   unsigned int access)
 {
 	gva_t gva = fault->is_tdp ? 0 : fault->addr;
 
+	if (!vcpu)
+		return -EFAULT;
+
 	vcpu_cache_mmio_info(vcpu, gva, fault->gfn,
 			     access & shadow_mmio_access_mask);
 
@@ -4260,7 +4264,8 @@ void kvm_arch_async_page_ready(struct kvm_vcpu *vcpu, struct kvm_async_pf *work)
 	kvm_mmu_do_page_fault(vcpu, work->cr2_or_gpa, 0, true, NULL);
 }
 
-static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
+static int __kvm_faultin_pfn(struct kvm *kvm, struct kvm_vcpu *vcpu,
+			     struct kvm_page_fault *fault)
 {
 	struct kvm_memory_slot *slot = fault->slot;
 	bool async;
@@ -4275,7 +4280,7 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	if (!kvm_is_visible_memslot(slot)) {
 		/* Don't expose private memslots to L2. */
-		if (is_guest_mode(vcpu)) {
+		if (vcpu && is_guest_mode(vcpu)) {
 			fault->slot = NULL;
 			fault->pfn = KVM_PFN_NOSLOT;
 			fault->map_writable = false;
@@ -4288,7 +4293,7 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 		 * when the AVIC is re-enabled.
 		 */
 		if (slot && slot->id == APIC_ACCESS_PAGE_PRIVATE_MEMSLOT &&
-		    !kvm_apicv_activated(vcpu->kvm))
+		    !kvm_apicv_activated(kvm))
 			return RET_PF_EMULATE;
 	}
@@ -4299,7 +4304,7 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	if (!async)
 		return RET_PF_CONTINUE; /* *pfn has correct page already */
 
-	if (!fault->prefetch && kvm_can_do_async_pf(vcpu)) {
+	if (!fault->prefetch && vcpu && kvm_can_do_async_pf(vcpu)) {
 		trace_kvm_try_async_get_page(fault->addr, fault->gfn);
 		if (kvm_find_async_pf_gfn(vcpu, fault->gfn)) {
 			trace_kvm_async_pf_repeated_fault(fault->addr, fault->gfn);
@@ -4321,23 +4326,23 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	return RET_PF_CONTINUE;
 }
 
-static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
-			   unsigned int access)
+static int kvm_faultin_pfn(struct kvm *kvm, struct kvm_vcpu *vcpu,
+			   struct kvm_page_fault *fault, unsigned int access)
 {
 	int ret;
 
-	fault->mmu_seq = vcpu->kvm->mmu_invalidate_seq;
+	fault->mmu_seq = kvm->mmu_invalidate_seq;
 	smp_rmb();
 
-	ret = __kvm_faultin_pfn(vcpu, fault);
+	ret = __kvm_faultin_pfn(kvm, vcpu, fault);
 	if (ret != RET_PF_CONTINUE)
 		return ret;
 
 	if (unlikely(is_error_pfn(fault->pfn)))
-		return kvm_handle_error_pfn(vcpu, fault);
+		return kvm_handle_error_pfn(kvm, vcpu, fault);
 
 	if (unlikely(!fault->slot))
-		return kvm_handle_noslot_fault(vcpu, fault, access);
+		return kvm_handle_noslot_fault(kvm, vcpu, fault, access);
 
 	return RET_PF_CONTINUE;
 }
@@ -4389,7 +4394,7 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	if (r)
 		return r;
 
-	r = kvm_faultin_pfn(vcpu, fault, ACC_ALL);
+	r = kvm_faultin_pfn(vcpu->kvm, vcpu, fault, ACC_ALL);
 	if (r != RET_PF_CONTINUE)
 		return r;
@@ -4469,7 +4474,7 @@ static int kvm_tdp_mmu_page_fault(struct kvm_vcpu *vcpu,
 	if (r)
 		return r;
 
-	r = kvm_faultin_pfn(vcpu, fault, ACC_ALL);
+	r = kvm_faultin_pfn(vcpu->kvm, vcpu, fault, ACC_ALL);
 	if (r != RET_PF_CONTINUE)
 		return r;

diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index f685b036f6637..054d1a203f0ca 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -812,7 +812,7 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	if (r)
 		return r;
 
-	r = kvm_faultin_pfn(vcpu, fault, walker.pte_access);
+	r = kvm_faultin_pfn(vcpu->kvm, vcpu, fault, walker.pte_access);
 	if (r != RET_PF_CONTINUE)
 		return r;
-- 
2.17.1