From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini, erdemaktas@google.com, Sean Christopherson, Sagi Shahar
Subject: [PATCH v9 036/105] KVM: x86/mmu: Disallow fast page fault on private GPA
Date: Fri, 30 Sep 2022 03:17:30 -0700

From: Isaku Yamahata <isaku.yamahata@intel.com>

TDX requires a TDX SEAMCALL to operate on the Secure EPT instead of direct memory access, and a TDX SEAMCALL is a heavyweight operation.  The fast page fault path therefore makes no sense for a private GPA.  Disallow fast page fault on private GPAs.
Signed-off-by: Isaku Yamahata
Reviewed-by: Paolo Bonzini
---
 arch/x86/kvm/mmu/mmu.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index f4d7432cd9fc..2fd70876d346 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3225,8 +3225,16 @@ static int handle_abnormal_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fau
 	return RET_PF_CONTINUE;
 }
 
-static bool page_fault_can_be_fast(struct kvm_page_fault *fault)
+static bool page_fault_can_be_fast(struct kvm *kvm, struct kvm_page_fault *fault)
 {
+	/*
+	 * TDX private mapping doesn't support fast page fault because the EPT
+	 * entry is read/written with TDX SEAMCALLs instead of direct memory
+	 * access.
+	 */
+	if (kvm_is_private_gpa(kvm, fault->addr))
+		return false;
+
 	/*
 	 * Page faults with reserved bits set, i.e. faults on MMIO SPTEs, only
 	 * reach the common page fault handler if the SPTE has an invalid MMIO
@@ -3336,7 +3344,7 @@ static int fast_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 	u64 *sptep = NULL;
 	uint retry_count = 0;
 
-	if (!page_fault_can_be_fast(fault))
+	if (!page_fault_can_be_fast(vcpu->kvm, fault))
 		return ret;
 
 	walk_shadow_page_lockless_begin(vcpu);
-- 
2.25.1