From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini, erdemaktas@google.com, Sean Christopherson, Sagi Shahar, David Matlack, Kai Huang, Zhi Wang, chen.bo@intel.com
Subject: [PATCH v14 056/113] KVM: TDX: MTRR: implement get_mt_mask() for TDX
Date: Sun, 28 May 2023 21:19:38 -0700
Message-Id: <74627bd942ce05e9dfe1713346300d4139a077b5.1685333728.git.isaku.yamahata@intel.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Isaku Yamahata

Because TDX virtualizes cpuid[0x1].EDX[MTRR: bit 12] to a fixed 1, the guest TD thinks MTRR is supported. Although TDX supports only WB for private GPAs, it is desirable to support MTRR for shared GPAs. Since guest access to the MTRR MSRs causes a #VE and KVM/x86 tracks the values of the MTRR MSRs, the remaining part is to implement the get_mt_mask method for TDX for shared GPAs.

Pass the shared bit from the KVM fault handler to the get_mt_mask method so that it can determine whether the gfn is shared or private.
Implement get_mt_mask() following the VMX case for shared GPAs, and return WB for private GPAs. The existing vmx_get_mt_mask() can't be used directly because the relevant CPU state (CR0.CD) is protected. The GFN passed to kvm_mtrr_check_gfn_range_consistency() should include the shared bit.

Suggested-by: Kai Huang
Signed-off-by: Isaku Yamahata
---
Changes from v11 to v12:
- Make a common function for VMX and TDX
- Pass the shared bit from the KVM fault handler to the get_mt_mask method
- Updated commit message
---
 arch/x86/kvm/vmx/main.c    | 10 +++++++++-
 arch/x86/kvm/vmx/tdx.c     | 23 +++++++++++++++++++++++
 arch/x86/kvm/vmx/x86_ops.h |  2 ++
 3 files changed, 34 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index e8edb3b1b23e..6a06b74bf448 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -228,6 +228,14 @@ static void vt_load_mmu_pgd(struct kvm_vcpu *vcpu, hpa_t root_hpa,
 	vmx_load_mmu_pgd(vcpu, root_hpa, pgd_level);
 }

+static u8 vt_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
+{
+	if (is_td_vcpu(vcpu))
+		return tdx_get_mt_mask(vcpu, gfn, is_mmio);
+
+	return vmx_get_mt_mask(vcpu, gfn, is_mmio);
+}
+
 static int vt_mem_enc_ioctl(struct kvm *kvm, void __user *argp)
 {
 	if (!is_td(kvm))
@@ -348,7 +356,7 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 	.set_tss_addr = vmx_set_tss_addr,
 	.set_identity_map_addr = vmx_set_identity_map_addr,

-	.get_mt_mask = vmx_get_mt_mask,
+	.get_mt_mask = vt_get_mt_mask,

 	.get_exit_info = vmx_get_exit_info,

diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 52820fa17708..aaf3a232c940 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -344,6 +344,29 @@ int tdx_vm_init(struct kvm *kvm)
 	return 0;
 }

+u8 tdx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
+{
+	if (is_mmio)
+		return MTRR_TYPE_UNCACHABLE << VMX_EPT_MT_EPTE_SHIFT;
+
+	if (!kvm_arch_has_noncoherent_dma(vcpu->kvm))
+		return (MTRR_TYPE_WRBACK << VMX_EPT_MT_EPTE_SHIFT) | VMX_EPT_IPAT_BIT;
+
+	/*
+	 * TDX enforces CR0.CD = 0 and KVM MTRR emulation enforces writeback.
+	 * TODO: implement MTRR MSR emulation so that
+	 * MTRRCap: SMRR=0: SMRR interface unsupported
+	 *          WC=0: write combining unsupported
+	 *          FIX=0: fixed range registers unsupported
+	 *          VCNT=0: number of variable range registers = 0
+	 * MTRRDefType: E=1, FE=0, type=writeback only. Don't allow other values.
+	 *          E=1: enable MTRR
+	 *          FE=0: disable fixed range MTRRs
+	 *          type: default memory type = writeback
+	 */
+	return MTRR_TYPE_WRBACK << VMX_EPT_MT_EPTE_SHIFT;
+}
+
 int tdx_vcpu_create(struct kvm_vcpu *vcpu)
 {
 	/*

diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
index bff797ba2623..dad23ea98052 100644
--- a/arch/x86/kvm/vmx/x86_ops.h
+++ b/arch/x86/kvm/vmx/x86_ops.h
@@ -151,6 +151,7 @@ int tdx_vm_ioctl(struct kvm *kvm, void __user *argp);
 int tdx_vcpu_create(struct kvm_vcpu *vcpu);
 void tdx_vcpu_free(struct kvm_vcpu *vcpu);
 void tdx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event);
+u8 tdx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio);
 int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp);

@@ -173,6 +174,7 @@ static inline int tdx_vm_ioctl(struct kvm *kvm, void __user *argp) { return -EOPNOTSUPP; }
 static inline int tdx_vcpu_create(struct kvm_vcpu *vcpu) { return -EOPNOTSUPP; }
 static inline void tdx_vcpu_free(struct kvm_vcpu *vcpu) {}
 static inline void tdx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event) {}
+static inline u8 tdx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio) { return 0; }
 static inline int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp) { return -EOPNOTSUPP; }
--
2.25.1
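For readers following the memory-type logic outside the kernel tree, the decision tree in tdx_get_mt_mask() can be sketched as a standalone user-space function. The constant values below are copied from their kernel definitions (VMX_EPT_MT_EPTE_SHIFT and VMX_EPT_IPAT_BIT from arch/x86/include/asm/vmx.h, the MTRR types from the MTRR headers) so the sketch compiles on its own; this is an illustration of the EPT memory-type encoding, not the kernel code itself.

```c
#include <stdbool.h>
#include <stdint.h>

/* Values as defined in the kernel headers, reproduced here so this
 * sketch is self-contained. */
#define VMX_EPT_MT_EPTE_SHIFT	3
#define VMX_EPT_IPAT_BIT	(1ULL << 6)
#define MTRR_TYPE_UNCACHABLE	0
#define MTRR_TYPE_WRBACK	6

/* Mirrors the patch's shared-GPA policy: MMIO is mapped uncacheable,
 * guests without noncoherent DMA get writeback with IPAT set (guest
 * PAT ignored), and guests with noncoherent DMA get writeback while
 * honoring guest PAT. */
static uint64_t shared_gpa_mt_mask(bool is_mmio, bool has_noncoherent_dma)
{
	if (is_mmio)
		return MTRR_TYPE_UNCACHABLE << VMX_EPT_MT_EPTE_SHIFT;

	if (!has_noncoherent_dma)
		return ((uint64_t)MTRR_TYPE_WRBACK << VMX_EPT_MT_EPTE_SHIFT) |
		       VMX_EPT_IPAT_BIT;

	return (uint64_t)MTRR_TYPE_WRBACK << VMX_EPT_MT_EPTE_SHIFT;
}
```

Bits 5:3 of an EPT leaf entry hold the memory type and bit 6 is the "ignore PAT" flag, which is why the MTRR type is shifted left by 3 and IPAT is OR-ed in separately.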