Message-ID: <48951fc1-4e98-b32a-af4f-343b7ea2d44d@intel.com>
Date: Wed, 12 Jul 2023 14:08:15 +0800
Subject: Re: [PATCH v14 072/113] KVM: TDX: handle vcpu migration over logical processor
From: "Wen, Qian"
To: isaku.yamahata@intel.com, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@gmail.com, Paolo Bonzini, erdemaktas@google.com, Sean Christopherson, Sagi Shahar, David Matlack, Kai Huang, Zhi Wang, chen.bo@intel.com
References: <7a57603a0668ec51a7ac324ab3d1a8acb9863e7b.1685333728.git.isaku.yamahata@intel.com>
In-Reply-To: <7a57603a0668ec51a7ac324ab3d1a8acb9863e7b.1685333728.git.isaku.yamahata@intel.com>

On 5/29/2023 12:19 PM, isaku.yamahata@intel.com wrote:
> From: Isaku Yamahata
>
> For vcpu migration, in the case of VMX, the VMCS is flushed on the source
> pcpu and loaded on the target pcpu. There are corresponding TDX SEAMCALL
> APIs; call them on vcpu migration. The logic is mostly the same as for
> VMX, except that the TDX SEAMCALLs are used.
>
> When shutting down the machine, (VMX or TDX) vcpus need to be shut down
> on each pcpu. Do the same for TDX with the TDX SEAMCALL APIs.
>
> Signed-off-by: Isaku Yamahata
> ---
>  arch/x86/kvm/vmx/main.c    |  32 ++++++-
>  arch/x86/kvm/vmx/tdx.c     | 168 +++++++++++++++++++++++++++++++++++++
>  arch/x86/kvm/vmx/tdx.h     |   2 +
>  arch/x86/kvm/vmx/x86_ops.h |   4 +
>  4 files changed, 203 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
> index 17fb1515e56a..29ebd171dbe3 100644

...

> @@ -455,6 +606,19 @@ void tdx_vcpu_free(struct kvm_vcpu *vcpu)
>  		return;
>  	}
>
> +	/*
> +	 * kvm_free_vcpus()
> +	 * -> kvm_unload_vcpu_mmu()
> +	 *
> +	 * does vcpu_load() for every vcpu after they already disassociated
> +	 * from the per cpu list when tdx_vm_teardown(). So we need to
> +	 * disassociate them again, otherwise the freed vcpu data will be
> +	 * accessed when do list_{del,add}() on associated_tdvcpus list
> +	 * later.
> +	 */

Nit: are kvm_free_vcpus() and tdx_vm_teardown() typos? I can't find these
functions in the tree.

> +	tdx_disassociate_vp_on_cpu(vcpu);
> +	WARN_ON_ONCE(vcpu->cpu != -1);
> +
>  	if (tdx->tdvpx_pa) {
>  		for (i = 0; i < tdx_info.nr_tdvpx_pages; i++) {
>  			if (tdx->tdvpx_pa[i])
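
For anyone following the thread, the migration flow the changelog describes
(flush on the source pcpu, then associate with the target pcpu) would look
roughly like the sketch below. This is reconstructed from the changelog and
the quoted comment only: TDH.VP.FLUSH is the TDX module's SEAMCALL for
flushing vcpu state, but tdx_flush_vp_on_cpu(), the cpu_list field, and the
per-cpu associated_tdvcpus list are assumed names, not necessarily the
patch's exact code.

/*
 * Sketch only: migrate a TDX vcpu between pcpus. The vcpu context must
 * be flushed (TDH.VP.FLUSH) on the pcpu it is currently associated with
 * before it can run elsewhere; afterwards the vcpu is added to the new
 * pcpu's associated_tdvcpus list.
 */
void tdx_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
{
	struct vcpu_tdx *tdx = to_tdx(vcpu);

	if (vcpu->cpu == cpu)
		return;

	/* Assumed helper: issues TDH.VP.FLUSH on the old pcpu via IPI. */
	tdx_flush_vp_on_cpu(vcpu);

	local_irq_disable();
	/*
	 * Pairs with the smp_wmb() in the disassociation path, so that
	 * observing vcpu->cpu == -1 guarantees the old list entry has
	 * already been removed.
	 */
	smp_rmb();
	list_add(&tdx->cpu_list, &per_cpu(associated_tdvcpus, cpu));
	local_irq_enable();
}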
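
And the disassociation side that the quoted comment and
tdx_disassociate_vp_on_cpu() refer to, under the same assumptions; it also
shows why vcpu->cpu ends up as -1, which the WARN_ON_ONCE() above checks:

static void tdx_disassociate_vp(void *arg)
{
	struct kvm_vcpu *vcpu = arg;
	struct vcpu_tdx *tdx = to_tdx(vcpu);

	/* Runs on vcpu->cpu via IPI, so the per-cpu list is stable here. */
	list_del(&tdx->cpu_list);

	/*
	 * Publish the list removal before resetting vcpu->cpu; the load
	 * path pairs this with an smp_rmb() after checking vcpu->cpu.
	 */
	smp_wmb();
	vcpu->cpu = -1;
}

static void tdx_disassociate_vp_on_cpu(struct kvm_vcpu *vcpu)
{
	int cpu = vcpu->cpu;

	if (cpu == -1)
		return;

	/* Detach on the pcpu the vcpu is associated with, and wait. */
	smp_call_function_single(cpu, tdx_disassociate_vp, vcpu, 1);
}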