From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini,
	erdemaktas@google.com, Sean Christopherson, Sagi Shahar
Subject: [RFC PATCH v6 073/104] KVM: TDX: handle vcpu migration over logical processor
Date: Thu, 5 May 2022 11:15:07 -0700
Message-Id: <96148d16e5c3b394bec4d7f51130b2d35e33352e.1651774250.git.isaku.yamahata@intel.com>
X-Mailer: git-send-email 2.25.1

From: Isaku Yamahata <isaku.yamahata@intel.com>

For vcpu migration, in the case of VMX, the VMCS is flushed on the source
pcpu and loaded on the target pcpu.  There are corresponding TDX SEAMCALL
APIs; call them on vcpu migration.  The logic is mostly the same as VMX,
except that the TDX SEAMCALLs are used.

When shutting down the machine, (VMX or TDX) vcpus need to be shut down
on each pcpu.  Do the same for TDX with the TDX SEAMCALL APIs.
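Illustration only, not part of the patch: condensed to kernel-style
pseudocode (locking, barriers and error handling omitted;
tdx_vcpu_load_sketch is an invented name for a stripped-down version of
the tdx_vcpu_load() added below), the migration flow is roughly:

	static void vt_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
	{
		if (is_td_vcpu(vcpu))
			return tdx_vcpu_load(vcpu, cpu);	/* TDX: TDH.VP.FLUSH path */

		return vmx_vcpu_load(vcpu, cpu);		/* VMX: vmcs_clear()/vmcs_load() path */
	}

	static void tdx_vcpu_load_sketch(struct kvm_vcpu *vcpu, int cpu)
	{
		if (vcpu->cpu == cpu)
			return;

		/* IPI the source pcpu to TDH.VP.FLUSH this vCPU's state. */
		tdx_flush_vp_on_cpu(vcpu);

		/* Track the vCPU on the target pcpu for offlining/teardown. */
		list_add(&to_tdx(vcpu)->cpu_list, &per_cpu(associated_tdvcpus, cpu));
	}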
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/kvm/vmx/main.c    |  43 +++++++++++--
 arch/x86/kvm/vmx/tdx.c     | 121 +++++++++++++++++++++++++++++++++++++
 arch/x86/kvm/vmx/tdx.h     |   2 +
 arch/x86/kvm/vmx/x86_ops.h |   6 ++
 4 files changed, 168 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index f101f358d90c..ad09988c4faa 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -17,6 +17,25 @@ static bool vt_is_vm_type_supported(unsigned long type)
 		(enable_tdx && tdx_is_vm_type_supported(type));
 }
 
+static int vt_hardware_enable(void)
+{
+	int ret;
+
+	ret = vmx_hardware_enable();
+	if (ret)
+		return ret;
+
+	tdx_hardware_enable();
+	return 0;
+}
+
+static void vt_hardware_disable(void)
+{
+	/* Note, TDX *and* VMX need to be disabled if TDX is enabled. */
+	tdx_hardware_disable();
+	vmx_hardware_disable();
+}
+
 static __init int vt_hardware_setup(void)
 {
 	int ret;
@@ -151,6 +170,14 @@ static fastpath_t vt_vcpu_run(struct kvm_vcpu *vcpu)
 	return vmx_vcpu_run(vcpu);
 }
 
+static void vt_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
+{
+	if (is_td_vcpu(vcpu))
+		return tdx_vcpu_load(vcpu, cpu);
+
+	return vmx_vcpu_load(vcpu, cpu);
+}
+
 static void vt_flush_tlb_all(struct kvm_vcpu *vcpu)
 {
 	if (is_td_vcpu(vcpu))
@@ -192,6 +219,14 @@ static void vt_load_mmu_pgd(struct kvm_vcpu *vcpu, hpa_t root_hpa,
 	vmx_load_mmu_pgd(vcpu, root_hpa, pgd_level);
 }
 
+static void vt_sched_in(struct kvm_vcpu *vcpu, int cpu)
+{
+	if (is_td_vcpu(vcpu))
+		return;
+
+	vmx_sched_in(vcpu, cpu);
+}
+
 static int vt_mem_enc_ioctl(struct kvm *kvm, void __user *argp)
 {
 	if (!is_td(kvm))
@@ -214,8 +249,8 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 	.hardware_unsetup = vt_hardware_unsetup,
 	.check_processor_compatibility = vmx_check_processor_compatibility,
 
-	.hardware_enable = vmx_hardware_enable,
-	.hardware_disable = vmx_hardware_disable,
+	.hardware_enable = vt_hardware_enable,
+	.hardware_disable = vt_hardware_disable,
 	.has_emulated_msr = vmx_has_emulated_msr,
 
 	.is_vm_type_supported = vt_is_vm_type_supported,
@@ -231,7 +266,7 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 	.vcpu_reset = vt_vcpu_reset,
 
 	.prepare_switch_to_guest = vt_prepare_switch_to_guest,
-	.vcpu_load = vmx_vcpu_load,
+	.vcpu_load = vt_vcpu_load,
 	.vcpu_put = vt_vcpu_put,
 
 	.update_exception_bitmap = vmx_update_exception_bitmap,
@@ -317,7 +352,7 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 
 	.request_immediate_exit = vmx_request_immediate_exit,
 
-	.sched_in = vmx_sched_in,
+	.sched_in = vt_sched_in,
 
 	.cpu_dirty_log_size = PML_ENTITY_NUM,
 	.update_cpu_dirty_logging = vmx_update_cpu_dirty_logging,
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index f92941586b42..6233c65b2a48 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -61,6 +61,14 @@ static struct tdx_capabilities tdx_caps;
 static DEFINE_MUTEX(tdx_lock);
 static struct mutex *tdx_mng_key_config_lock;
 
+/*
+ * A per-CPU list of TD vCPUs associated with a given CPU.  Used when a CPU
+ * is brought down to invoke TDH_VP_FLUSH on the appropriate TD vCPUs.
+ * Protected by interrupt mask.  This list is manipulated in the process
+ * context of the vcpu and in the IPI callback.  See tdx_flush_vp_on_cpu().
+ */
+static DEFINE_PER_CPU(struct list_head, associated_tdvcpus);
+
 static __always_inline hpa_t set_hkid_to_hpa(hpa_t pa, u16 hkid)
 {
 	pa &= ~hkid_mask;
@@ -95,6 +103,36 @@ static inline bool is_td_finalized(struct kvm_tdx *kvm_tdx)
 	return kvm_tdx->finalized;
 }
 
+static inline void tdx_disassociate_vp(struct kvm_vcpu *vcpu)
+{
+	list_del(&to_tdx(vcpu)->cpu_list);
+
+	/*
+	 * Ensure tdx->cpu_list is updated before setting vcpu->cpu to -1;
+	 * otherwise, a different CPU can see vcpu->cpu == -1 and add the vCPU
+	 * to its list before it's deleted from this CPU's list.
+	 */
+	smp_wmb();
+
+	vcpu->cpu = -1;
+}
+
+void tdx_hardware_enable(void)
+{
+	INIT_LIST_HEAD(&per_cpu(associated_tdvcpus, raw_smp_processor_id()));
+}
+
+void tdx_hardware_disable(void)
+{
+	int cpu = raw_smp_processor_id();
+	struct list_head *tdvcpus = &per_cpu(associated_tdvcpus, cpu);
+	struct vcpu_tdx *tdx, *tmp;
+
+	/* Safe variant needed as tdx_disassociate_vp() deletes the entry. */
+	list_for_each_entry_safe(tdx, tmp, tdvcpus, cpu_list)
+		tdx_disassociate_vp(&tdx->vcpu);
+}
+
 static void tdx_clear_page(unsigned long page)
 {
 	const void *zero_page = (const void *) __va(page_to_phys(ZERO_PAGE(0)));
@@ -171,6 +209,41 @@ static void tdx_reclaim_td_page(struct tdx_td_page *page)
 	free_page(page->va);
 }
 
+static void tdx_flush_vp(void *arg)
+{
+	struct kvm_vcpu *vcpu = arg;
+	u64 err;
+
+	lockdep_assert_irqs_disabled();
+
+	/* Task migration can race with CPU offlining. */
+	if (vcpu->cpu != raw_smp_processor_id())
+		return;
+
+	/*
+	 * No need to do TDH_VP_FLUSH if the vCPU hasn't been initialized.  The
+	 * list tracking still needs to be updated so that it's correct if/when
+	 * the vCPU does get initialized.
+	 */
+	if (is_td_vcpu_created(to_tdx(vcpu))) {
+		err = tdh_vp_flush(to_tdx(vcpu)->tdvpr.pa);
+		if (unlikely(err && err != TDX_VCPU_NOT_ASSOCIATED)) {
+			if (WARN_ON_ONCE(err))
+				pr_tdx_error(TDH_VP_FLUSH, err, NULL);
+		}
+	}
+
+	tdx_disassociate_vp(vcpu);
+}
+
+static void tdx_flush_vp_on_cpu(struct kvm_vcpu *vcpu)
+{
+	if (unlikely(vcpu->cpu == -1))
+		return;
+
+	smp_call_function_single(vcpu->cpu, tdx_flush_vp, vcpu, 1);
+}
+
 static int tdx_do_tdh_phymem_cache_wb(void *param)
 {
 	u64 err = 0;
@@ -195,9 +268,11 @@ void tdx_mmu_release_hkid(struct kvm *kvm)
 	struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
 	cpumask_var_t packages;
 	bool cpumask_allocated;
+	struct kvm_vcpu *vcpu;
 	u64 err;
 	int ret;
 	int i;
+	unsigned long j;
 
 	if (!is_hkid_assigned(kvm_tdx))
 		return;
@@ -205,6 +280,19 @@ void tdx_mmu_release_hkid(struct kvm *kvm)
 
 	if (!is_td_created(kvm_tdx))
 		goto free_hkid;
 
+	kvm_for_each_vcpu(j, vcpu, kvm)
+		tdx_flush_vp_on_cpu(vcpu);
+
+	mutex_lock(&tdx_lock);
+	err = tdh_mng_vpflushdone(kvm_tdx->tdr.pa);
+	mutex_unlock(&tdx_lock);
+	if (WARN_ON_ONCE(err)) {
+		pr_tdx_error(TDH_MNG_VPFLUSHDONE, err, NULL);
+		pr_err("tdh_mng_vpflushdone failed. HKID %d is leaked.\n",
+		       kvm_tdx->hkid);
+		return;
+	}
+
 	cpumask_allocated = zalloc_cpumask_var(&packages, GFP_KERNEL);
 	cpus_read_lock();
 	for_each_online_cpu(i) {
@@ -479,6 +567,26 @@ int tdx_vcpu_create(struct kvm_vcpu *vcpu)
 	return ret;
 }
 
+void tdx_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
+{
+	struct vcpu_tdx *tdx = to_tdx(vcpu);
+
+	if (vcpu->cpu == cpu)
+		return;
+
+	tdx_flush_vp_on_cpu(vcpu);
+
+	local_irq_disable();
+	/*
+	 * Pairs with the smp_wmb() in tdx_disassociate_vp() to ensure
+	 * vcpu->cpu is read before tdx->cpu_list.
+	 */
+	smp_rmb();
+
+	list_add(&tdx->cpu_list, &per_cpu(associated_tdvcpus, cpu));
+	local_irq_enable();
+}
+
 void tdx_prepare_switch_to_guest(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_tdx *tdx = to_tdx(vcpu);
@@ -525,6 +633,19 @@ void tdx_vcpu_free(struct kvm_vcpu *vcpu)
 		tdx_reclaim_td_page(&tdx->tdvpx[i]);
 	kfree(tdx->tdvpx);
 	tdx_reclaim_td_page(&tdx->tdvpr);
+
+	/*
+	 * kvm_free_vcpus()
+	 *   -> kvm_unload_vcpu_mmu()
+	 *
+	 * does vcpu_load() for every vcpu after they were already
+	 * disassociated from the per-CPU list by tdx_vm_teardown().  So they
+	 * need to be disassociated again; otherwise the freed vcpu data
+	 * would be accessed by a later list_{del,add}() on the
+	 * associated_tdvcpus list.
+	 */
+	tdx_flush_vp_on_cpu(vcpu);
+	WARN_ON(vcpu->cpu != -1);
 }
 
 void tdx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
diff --git a/arch/x86/kvm/vmx/tdx.h b/arch/x86/kvm/vmx/tdx.h
index 414c15235ed0..32e05efa70f9 100644
--- a/arch/x86/kvm/vmx/tdx.h
+++ b/arch/x86/kvm/vmx/tdx.h
@@ -85,6 +85,8 @@ struct vcpu_tdx {
 	struct tdx_td_page tdvpr;
 	struct tdx_td_page *tdvpx;
 
+	struct list_head cpu_list;
+
 	union tdx_exit_reason exit_reason;
 
 	bool initialized;
diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
index a2deb42794c0..6594bcb717cd 100644
--- a/arch/x86/kvm/vmx/x86_ops.h
+++ b/arch/x86/kvm/vmx/x86_ops.h
@@ -132,6 +132,8 @@ void vmx_setup_mce(struct kvm_vcpu *vcpu);
 int __init tdx_hardware_setup(struct kvm_x86_ops *x86_ops);
 bool tdx_is_vm_type_supported(unsigned long type);
 void tdx_hardware_unsetup(void);
+void tdx_hardware_enable(void);
+void tdx_hardware_disable(void);
 
 int tdx_dev_ioctl(void __user *argp);
 int tdx_vm_init(struct kvm *kvm);
@@ -144,6 +146,7 @@ void tdx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event);
 fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu);
 void tdx_prepare_switch_to_guest(struct kvm_vcpu *vcpu);
 void tdx_vcpu_put(struct kvm_vcpu *vcpu);
+void tdx_vcpu_load(struct kvm_vcpu *vcpu, int cpu);
 
 int tdx_vm_ioctl(struct kvm *kvm, void __user *argp);
 int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp);
@@ -154,6 +157,8 @@ void tdx_load_mmu_pgd(struct kvm_vcpu *vcpu, hpa_t root_hpa, int root_level);
 static inline int tdx_hardware_setup(struct kvm_x86_ops *x86_ops) { return 0; }
 static inline bool tdx_is_vm_type_supported(unsigned long type) { return false; }
 static inline void tdx_hardware_unsetup(void) {}
+static inline void tdx_hardware_enable(void) {}
+static inline void tdx_hardware_disable(void) {}
 
 static inline int tdx_dev_ioctl(void __user *argp) { return -EOPNOTSUPP; };
 static inline int tdx_vm_init(struct kvm *kvm) { return -EOPNOTSUPP; }
@@ -167,6 +172,7 @@ static inline void tdx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event) {}
 static inline fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu) { return EXIT_FASTPATH_NONE; }
 static inline void tdx_prepare_switch_to_guest(struct kvm_vcpu *vcpu) {}
 static inline void tdx_vcpu_put(struct kvm_vcpu *vcpu) {}
+static inline void tdx_vcpu_load(struct kvm_vcpu *vcpu, int cpu) {}
 
 static inline int tdx_vm_ioctl(struct kvm *kvm, void __user *argp) { return -EOPNOTSUPP; }
 static inline int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp) { return -EOPNOTSUPP; }
 
-- 
2.25.1
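
Aside for reviewers, illustration only: the smp_wmb() in
tdx_disassociate_vp() and the smp_rmb() in tdx_vcpu_load() form a
publish/observe pair on vcpu->cpu versus the cpu_list linkage.  A minimal
userspace C11 analogue of that contract, with release/acquire standing in
for the kernel barriers and all names invented for the sketch (compile
with "cc -std=c11 barrier_sketch.c"):

	#include <stdatomic.h>
	#include <stdio.h>

	struct sketch_vcpu {
		atomic_int cpu;     /* stands in for vcpu->cpu */
		int on_list;        /* stands in for the tdx->cpu_list linkage */
	};

	/* Writer, as in tdx_disassociate_vp(): unlink first, then publish
	 * cpu = -1.  The release store stands in for smp_wmb(). */
	static void sketch_disassociate(struct sketch_vcpu *v)
	{
		v->on_list = 0;                          /* list_del() */
		atomic_store_explicit(&v->cpu, -1, memory_order_release);
	}

	/* Reader, as in tdx_vcpu_load(): only after observing cpu == -1
	 * (acquire load, standing in for smp_rmb()) is it safe to re-add
	 * the vCPU to another CPU's list. */
	static void sketch_load(struct sketch_vcpu *v, int cpu)
	{
		if (atomic_load_explicit(&v->cpu, memory_order_acquire) == -1) {
			v->on_list = 1;                  /* list_add() */
			atomic_store_explicit(&v->cpu, cpu, memory_order_relaxed);
		}
	}

	int main(void)
	{
		struct sketch_vcpu v = { .cpu = 0, .on_list = 1 };

		sketch_disassociate(&v);
		sketch_load(&v, 2);
		printf("cpu=%d on_list=%d\n",
		       atomic_load_explicit(&v.cpu, memory_order_relaxed),
		       v.on_list);
		return 0;
	}

The point is the ordering: the writer unlinks before publishing cpu = -1,
so a reader that observes cpu == -1 is guaranteed the unlink is visible
and can safely re-add the entry to its own per-CPU list.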