From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini,
    erdemaktas@google.com, Sean Christopherson, Sagi Shahar, David Matlack,
    Kai Huang, Zhi Wang, chen.bo@intel.com, hang.yuan@intel.com,
    tina.zhang@intel.com
Subject: [PATCH v15 059/115] KVM: TDX: Create initial guest memory
Date: Tue, 25 Jul 2023 15:14:10 -0700
Message-Id: <19a589ab40b01c10c3b9addc5c38f3fe64b15ad0.1690322424.git.isaku.yamahata@intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To:
References:
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Isaku Yamahata <isaku.yamahata@intel.com>

Because the guest memory is protected in TDX, the creation of the initial
guest memory requires a dedicated TDX module API, tdh_mem_page_add(),
instead of directly copying the memory contents into the guest memory as
is done for the default VM type.  The KVM MMU page fault handler callback,
tdx_sept_set_private_spte(), handles it.

Define a new subcommand, KVM_TDX_INIT_MEM_REGION, of the VM-scoped
KVM_MEMORY_ENCRYPT_OP ioctl.  It assigns the guest page, copies the
initial memory contents into the guest memory, and encrypts the guest
memory.  Optionally, it also extends the memory measurement of the TDX
guest.  It calls the KVM MMU page fault (EPT violation) handler to
trigger those callbacks.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>

---
v14 -> v15:
- add a check if TD is finalized or not to tdx_init_mem_region()
- return -EAGAIN when partial population

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/include/uapi/asm/kvm.h       |   9 ++
 arch/x86/kvm/mmu/mmu.c                |   1 +
 arch/x86/kvm/vmx/tdx.c                | 158 +++++++++++++++++++++++++-
 arch/x86/kvm/vmx/tdx.h                |   2 +
 tools/arch/x86/include/uapi/asm/kvm.h |   9 ++
 5 files changed, 174 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h
index 311a7894b712..a1815fcbb0be 100644
--- a/arch/x86/include/uapi/asm/kvm.h
+++ b/arch/x86/include/uapi/asm/kvm.h
@@ -572,6 +572,7 @@ enum kvm_tdx_cmd_id {
 	KVM_TDX_CAPABILITIES = 0,
 	KVM_TDX_INIT_VM,
 	KVM_TDX_INIT_VCPU,
+	KVM_TDX_INIT_MEM_REGION,

 	KVM_TDX_CMD_NR_MAX,
 };
@@ -645,4 +646,12 @@ struct kvm_tdx_init_vm {
 	struct kvm_cpuid2 cpuid;
 };

+#define KVM_TDX_MEASURE_MEMORY_REGION	(1UL << 0)
+
+struct kvm_tdx_init_mem_region {
+	__u64 source_addr;
+	__u64 gpa;
+	__u64 nr_pages;
+};
+
 #endif /* _ASM_X86_KVM_H */
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 7ef66d8a785b..0d218d930d0a 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5704,6 +5704,7 @@ int kvm_mmu_load(struct kvm_vcpu *vcpu)
 out:
 	return r;
 }
+EXPORT_SYMBOL(kvm_mmu_load);

 void kvm_mmu_unload(struct kvm_vcpu *vcpu)
 {
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index e367351f8d71..32e84c29d35e 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -446,6 +446,21 @@ void tdx_load_mmu_pgd(struct kvm_vcpu *vcpu, hpa_t root_hpa, int pgd_level)
 	td_vmcs_write64(to_tdx(vcpu), SHARED_EPT_POINTER, root_hpa & PAGE_MASK);
 }

+static void tdx_measure_page(struct kvm_tdx *kvm_tdx, hpa_t gpa)
+{
+	struct tdx_module_output out;
+	u64 err;
+	int i;
+
+	for (i = 0; i < PAGE_SIZE; i += TDX_EXTENDMR_CHUNKSIZE) {
+		err = tdh_mr_extend(kvm_tdx->tdr_pa, gpa + i, &out);
+		if (KVM_BUG_ON(err, &kvm_tdx->kvm)) {
+			pr_tdx_error(TDH_MR_EXTEND, err, &out);
+			break;
+		}
+	}
+}
+
 static void tdx_unpin(struct kvm *kvm, kvm_pfn_t pfn)
 {
 	struct page *page = pfn_to_page(pfn);
@@ -460,12 +475,10 @@ static int tdx_sept_set_private_spte(struct kvm *kvm, gfn_t gfn,
 	hpa_t hpa = pfn_to_hpa(pfn);
 	gpa_t gpa = gfn_to_gpa(gfn);
 	struct tdx_module_output out;
+	hpa_t source_pa;
+	bool measure;
 	u64 err;

-	/* TODO: handle large pages. */
-	if (KVM_BUG_ON(level != PG_LEVEL_4K, kvm))
-		return -EINVAL;
-
 	/*
 	 * Because restricted mem doesn't support page migration with
 	 * a_ops->migrate_page (yet), no callback isn't triggered for KVM on
@@ -476,7 +489,12 @@ static int tdx_sept_set_private_spte(struct kvm *kvm, gfn_t gfn,
 	 */
 	get_page(pfn_to_page(pfn));

+	/* Build-time faults are induced and handled via TDH_MEM_PAGE_ADD. */
 	if (likely(is_td_finalized(kvm_tdx))) {
+		/* TODO: handle large pages. */
+		if (KVM_BUG_ON(level != PG_LEVEL_4K, kvm))
+			return -EINVAL;
+
 		err = tdh_mem_page_aug(kvm_tdx->tdr_pa, gpa, hpa, &out);
 		if (unlikely(err == TDX_ERROR_SEPT_BUSY)) {
 			tdx_unpin(kvm, pfn);
@@ -490,7 +508,45 @@ static int tdx_sept_set_private_spte(struct kvm *kvm, gfn_t gfn,
 		return 0;
 	}

-	/* TODO: tdh_mem_page_add() comes here for the initial memory. */
+	/*
+	 * KVM_INIT_MEM_REGION, tdx_init_mem_region(), supports only 4K page
+	 * because tdh_mem_page_add() supports only 4K page.
+	 */
+	if (KVM_BUG_ON(level != PG_LEVEL_4K, kvm))
+		return -EINVAL;
+
+	/*
+	 * In case of TDP MMU, fault handler can run concurrently.  Note
+	 * 'source_pa' is a TD scope variable, meaning if there are multiple
+	 * threads reaching here with all needing to access 'source_pa', it
+	 * will break.  However fortunately this won't happen, because below
+	 * TDH_MEM_PAGE_ADD code path is only used when VM is being created
+	 * before it is running, using KVM_TDX_INIT_MEM_REGION ioctl (which
+	 * always uses vcpu 0's page table and protected by vcpu->mutex).
+	 */
+	if (KVM_BUG_ON(kvm_tdx->source_pa == INVALID_PAGE, kvm)) {
+		tdx_unpin(kvm, pfn);
+		return -EINVAL;
+	}
+
+	source_pa = kvm_tdx->source_pa & ~KVM_TDX_MEASURE_MEMORY_REGION;
+	measure = kvm_tdx->source_pa & KVM_TDX_MEASURE_MEMORY_REGION;
+	kvm_tdx->source_pa = INVALID_PAGE;
+
+	do {
+		err = tdh_mem_page_add(kvm_tdx->tdr_pa, gpa, hpa, source_pa,
+				       &out);
+		/*
+		 * This path is executed during populating initial guest memory
+		 * image. i.e. before running any vcpu.  Race is rare.
+		 */
+	} while (unlikely(err == TDX_ERROR_SEPT_BUSY));
+	if (KVM_BUG_ON(err, kvm)) {
+		pr_tdx_error(TDH_MEM_PAGE_ADD, err, &out);
+		tdx_unpin(kvm, pfn);
+		return -EIO;
+	} else if (measure)
+		tdx_measure_page(kvm_tdx, gpa);

 	return 0;
 }
@@ -1215,6 +1271,95 @@ void tdx_flush_tlb_current(struct kvm_vcpu *vcpu)
 	tdx_track(to_kvm_tdx(vcpu->kvm));
 }

+#define TDX_SEPT_PFERR	(PFERR_WRITE_MASK | PFERR_GUEST_ENC_MASK)
+
+static int tdx_init_mem_region(struct kvm *kvm, struct kvm_tdx_cmd *cmd)
+{
+	struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
+	struct kvm_tdx_init_mem_region region;
+	struct kvm_vcpu *vcpu;
+	struct page *page;
+	int idx, ret = 0;
+	bool added = false;
+
+	/* Once TD is finalized, the initial guest memory is fixed. */
+	if (is_td_finalized(kvm_tdx))
+		return -EINVAL;
+
+	/* The BSP vCPU must be created before initializing memory regions. */
+	if (!atomic_read(&kvm->online_vcpus))
+		return -EINVAL;
+
+	if (cmd->flags & ~KVM_TDX_MEASURE_MEMORY_REGION)
+		return -EINVAL;
+
+	if (copy_from_user(&region, (void __user *)cmd->data, sizeof(region)))
+		return -EFAULT;
+
+	/* Sanity check */
+	if (!IS_ALIGNED(region.source_addr, PAGE_SIZE) ||
+	    !IS_ALIGNED(region.gpa, PAGE_SIZE) ||
+	    !region.nr_pages ||
+	    region.gpa + (region.nr_pages << PAGE_SHIFT) <= region.gpa ||
+	    !kvm_is_private_gpa(kvm, region.gpa) ||
+	    !kvm_is_private_gpa(kvm, region.gpa + (region.nr_pages << PAGE_SHIFT)))
+		return -EINVAL;
+
+	vcpu = kvm_get_vcpu(kvm, 0);
+	if (mutex_lock_killable(&vcpu->mutex))
+		return -EINTR;
+
+	vcpu_load(vcpu);
+	idx = srcu_read_lock(&kvm->srcu);
+
+	kvm_mmu_reload(vcpu);
+
+	while (region.nr_pages) {
+		if (signal_pending(current)) {
+			ret = -ERESTARTSYS;
+			break;
+		}
+
+		if (need_resched())
+			cond_resched();
+
+		/* Pin the source page. */
+		ret = get_user_pages_fast(region.source_addr, 1, 0, &page);
+		if (ret < 0)
+			break;
+		if (ret != 1) {
+			ret = -ENOMEM;
+			break;
+		}
+
+		kvm_tdx->source_pa = pfn_to_hpa(page_to_pfn(page)) |
+				     (cmd->flags & KVM_TDX_MEASURE_MEMORY_REGION);
+
+		ret = kvm_mmu_map_tdp_page(vcpu, region.gpa, TDX_SEPT_PFERR,
+					   PG_LEVEL_4K);
+		put_page(page);
+		if (ret)
+			break;
+
+		region.source_addr += PAGE_SIZE;
+		region.gpa += PAGE_SIZE;
+		region.nr_pages--;
+		added = true;
+	}
+
+	srcu_read_unlock(&kvm->srcu, idx);
+	vcpu_put(vcpu);
+
+	mutex_unlock(&vcpu->mutex);
+
+	if (added && region.nr_pages > 0)
+		ret = -EAGAIN;
+	if (copy_to_user((void __user *)cmd->data, &region, sizeof(region)))
+		ret = -EFAULT;
+
+	return ret;
+}
+
 int tdx_vm_ioctl(struct kvm *kvm, void __user *argp)
 {
 	struct kvm_tdx_cmd tdx_cmd;
@@ -1234,6 +1379,9 @@ int tdx_vm_ioctl(struct kvm *kvm, void __user *argp)
 	case KVM_TDX_INIT_VM:
 		r = tdx_td_init(kvm, &tdx_cmd);
 		break;
+	case KVM_TDX_INIT_MEM_REGION:
+		r = tdx_init_mem_region(kvm, &tdx_cmd);
+		break;
 	default:
 		r = -EINVAL;
 		goto out;
diff --git a/arch/x86/kvm/vmx/tdx.h b/arch/x86/kvm/vmx/tdx.h
index 6603da8708ad..24ee0bc3285c 100644
--- a/arch/x86/kvm/vmx/tdx.h
+++ b/arch/x86/kvm/vmx/tdx.h
@@ -17,6 +17,8 @@ struct kvm_tdx {
 	u64 xfam;
 	int hkid;

+	hpa_t source_pa;
+
 	bool finalized;
 	atomic_t tdh_mem_track;

diff --git a/tools/arch/x86/include/uapi/asm/kvm.h b/tools/arch/x86/include/uapi/asm/kvm.h
index 83bd9e3118d1..a3408f6e1124 100644
--- a/tools/arch/x86/include/uapi/asm/kvm.h
+++ b/tools/arch/x86/include/uapi/asm/kvm.h
@@ -567,6 +567,7 @@ enum kvm_tdx_cmd_id {
 	KVM_TDX_CAPABILITIES = 0,
 	KVM_TDX_INIT_VM,
 	KVM_TDX_INIT_VCPU,
+	KVM_TDX_INIT_MEM_REGION,

 	KVM_TDX_CMD_NR_MAX,
 };
@@ -648,4 +649,12 @@ struct kvm_tdx_init_vm {
 	};
 };

+#define KVM_TDX_MEASURE_MEMORY_REGION	(1UL << 0)
+
+struct kvm_tdx_init_mem_region {
+	__u64 source_addr;
+	__u64 gpa;
+	__u64 nr_pages;
+};
+
 #endif /* _ASM_X86_KVM_H */
-- 
2.25.1
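
[Editor's note: a hypothetical userspace sketch, not part of this patch, showing how
a VMM might drive the new subcommand: it issues the VM-scoped KVM_MEMORY_ENCRYPT_OP
ioctl with id KVM_TDX_INIT_MEM_REGION after creating vCPU 0 and before the TD is
finalized.  It assumes the UAPI headers from this series are installed, so that
struct kvm_tdx_cmd (defined by an earlier patch in the series), struct
kvm_tdx_init_mem_region and the KVM_TDX_* constants come from <linux/kvm.h>.
'vm_fd', 'hva' and 'gpa' are caller-supplied placeholders; 'hva' must be page
aligned and 'gpa' must be a private GPA.]

	#include <errno.h>
	#include <stdbool.h>
	#include <stdint.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	static int td_init_mem_region(int vm_fd, void *hva, uint64_t gpa,
				      uint64_t nr_pages, bool measure)
	{
		struct kvm_tdx_init_mem_region region = {
			.source_addr = (uintptr_t)hva,
			.gpa = gpa,
			.nr_pages = nr_pages,
		};
		struct kvm_tdx_cmd cmd = {
			.id = KVM_TDX_INIT_MEM_REGION,
			.flags = measure ? KVM_TDX_MEASURE_MEMORY_REGION : 0,
			.data = (uintptr_t)&region,
		};
		int ret;

		/*
		 * On partial population the kernel copies the updated 'region'
		 * back to cmd.data and fails with -EAGAIN, so retrying the same
		 * ioctl resumes where the previous call stopped.
		 */
		do {
			ret = ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd);
		} while (ret < 0 && errno == EAGAIN);

		return ret;
	}

[The KVM_TDX_MEASURE_MEMORY_REGION flag requests that tdx_measure_page() run
TDH.MR.EXTEND on each added page so its contents are included in the TD
measurement; tdx_init_mem_region() maps the pages through vCPU 0's page tables
under vcpu->mutex, which is why the vCPU must already exist.]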