From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini, erdemaktas@google.com, Sean Christopherson, Sagi Shahar, David Matlack, Kai Huang, Zhi Wang, chen.bo@intel.com, hang.yuan@intel.com, tina.zhang@intel.com
Subject: [PATCH v16 056/116] KVM: TDX: MTRR: implement get_mt_mask() for TDX
Date: Mon, 16 Oct 2023 09:14:08 -0700
Message-Id: <65d13fa6ce67b025ce609aeeaa7f0624c9870c13.1697471314.git.isaku.yamahata@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Isaku Yamahata

Because TDX virtualizes
cpuid[0x1].EDX[MTRR: bit 12] as fixed 1, the guest TD thinks MTRR is
supported. Although TDX supports only WB for private GPAs, it is
desirable to support MTRR for shared GPAs. Because guest accesses to
the MTRR MSRs cause #VE and KVM/x86 already tracks the values of the
MTRR MSRs, the remaining part is to implement the get_mt_mask method
for TDX for shared GPAs.

Pass the shared bit from the KVM fault handler to the get_mt_mask
method so that it can determine whether the gfn is shared or private.
Implement get_mt_mask() following the VMX case for shared GPAs and
return WB for private GPAs. The existing vmx_get_mt_mask() can't be
used directly because the CPU state (CR0.CD) is protected. The GFN
passed to kvm_mtrr_check_gfn_range_consistency() should include the
shared bit.

Suggested-by: Kai Huang
Signed-off-by: Isaku Yamahata
---
 arch/x86/kvm/vmx/main.c    | 10 +++++++++-
 arch/x86/kvm/vmx/tdx.c     | 23 +++++++++++++++++++++++
 arch/x86/kvm/vmx/x86_ops.h |  2 ++
 3 files changed, 34 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index 077eee12e7b6..22b9d44ee29f 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -228,6 +228,14 @@ static void vt_load_mmu_pgd(struct kvm_vcpu *vcpu, hpa_t root_hpa,
 	vmx_load_mmu_pgd(vcpu, root_hpa, pgd_level);
 }
 
+static u8 vt_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
+{
+	if (is_td_vcpu(vcpu))
+		return tdx_get_mt_mask(vcpu, gfn, is_mmio);
+
+	return vmx_get_mt_mask(vcpu, gfn, is_mmio);
+}
+
 static int vt_mem_enc_ioctl(struct kvm *kvm, void __user *argp)
 {
 	if (!is_td(kvm))
@@ -347,7 +355,7 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 	.set_tss_addr = vmx_set_tss_addr,
 	.set_identity_map_addr = vmx_set_identity_map_addr,
-	.get_mt_mask = vmx_get_mt_mask,
+	.get_mt_mask = vt_get_mt_mask,
 
 	.get_exit_info = vmx_get_exit_info,
 
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index cad6f9beda50..a5f1b3e75764 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -363,6 +363,29 @@ int tdx_vm_init(struct kvm *kvm)
 	return 0;
 }
 
+u8 tdx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
+{
+	if (is_mmio)
+		return MTRR_TYPE_UNCACHABLE << VMX_EPT_MT_EPTE_SHIFT;
+
+	if (!kvm_arch_has_noncoherent_dma(vcpu->kvm))
+		return (MTRR_TYPE_WRBACK << VMX_EPT_MT_EPTE_SHIFT) | VMX_EPT_IPAT_BIT;
+
+	/*
+	 * TDX enforces CR0.CD = 0 and KVM MTRR emulation enforces writeback.
+	 * TODO: implement MTRR MSR emulation so that
+	 * MTRRCap: SMRR=0: SMRR interface unsupported
+	 *          WC=0: write combining unsupported
+	 *          FIX=0: Fixed range registers unsupported
+	 *          VCNT=0: number of variable range registers = 0
+	 * MTRRDefType: E=1, FE=0, type=writeback only. Don't allow other values.
+	 *              E=1: enable MTRR
+	 *              FE=0: disable fixed range MTRRs
+	 *              type: default memory type = writeback
+	 */
+	return MTRR_TYPE_WRBACK << VMX_EPT_MT_EPTE_SHIFT;
+}
+
 int tdx_vcpu_create(struct kvm_vcpu *vcpu)
 {
 	struct kvm_tdx *kvm_tdx = to_kvm_tdx(vcpu->kvm);
 
diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
index 562e567953c0..e2a8b59adf2d 100644
--- a/arch/x86/kvm/vmx/x86_ops.h
+++ b/arch/x86/kvm/vmx/x86_ops.h
@@ -150,6 +150,7 @@ int tdx_vm_ioctl(struct kvm *kvm, void __user *argp);
 int tdx_vcpu_create(struct kvm_vcpu *vcpu);
 void tdx_vcpu_free(struct kvm_vcpu *vcpu);
 void tdx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event);
+u8 tdx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio);
 
 int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp);
 
@@ -176,6 +177,7 @@ static inline int tdx_vm_ioctl(struct kvm *kvm, void __user *argp) { return -EOP
 static inline int tdx_vcpu_create(struct kvm_vcpu *vcpu) { return -EOPNOTSUPP; }
 static inline void tdx_vcpu_free(struct kvm_vcpu *vcpu) {}
 static inline void tdx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event) {}
+static inline u8 tdx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio) { return 0; }
 
 static inline int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp) { return -EOPNOTSUPP; }
 
-- 
2.25.1