Subject: Re: [PATCH v19 070/130] KVM: TDX: TDP MMU TDX support
Date: Tue, 2 Apr 2024 17:13:23 +0800
From: Binbin Wu
To: isaku.yamahata@intel.com, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@gmail.com, Paolo Bonzini, erdemaktas@google.com,
    Sean Christopherson, Sagi Shahar, Kai Huang, chen.bo@intel.com,
    hang.yuan@intel.com, tina.zhang@intel.com
In-Reply-To: <56cdb0da8bbf17dc293a2a6b4ff74f6e3e034bbd.1708933498.git.isaku.yamahata@intel.com>

On 2/26/2024 4:26 PM, isaku.yamahata@intel.com wrote:
> From: Isaku Yamahata
>
> Implement the TDP MMU hooks for the TDX backend: TLB flush, TLB shootdown,
> propagating private EPT entry changes to the Secure EPT, and freeing a
> Secure EPT page. The TLB flush handles both shared EPT and private EPT: it
> flushes the shared EPT the same way as VMX and also waits for the TDX TLB
> shootdown. The hook to free a Secure EPT page unlinks the page from the
> Secure EPT so that it can be freed back to the OS.
>
> Propagate the entry change to the Secure EPT. The possible entry changes
> are present -> non-present (zapping) and non-present -> present
> (population). On population, just link the Secure EPT page or the private
> guest page into the Secure EPT by TDX SEAMCALL. Because the TDP MMU allows
> concurrent zapping/population, zapping requires a synchronous TLB shootdown
> with the frozen EPT entry: zap the secure entry, increment the TLB counter,
> send an IPI to remote vcpus to trigger a TLB flush, and then unlink the
> private guest page from the Secure EPT. For simplicity, batched zapping
> under the exclusive lock is handled as concurrent zapping. Although this is
> inefficient, it can be optimized in the future.
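For reference, the zapping sequence described above corresponds roughly to
the outline below. This is illustrative only: the function name is made up,
the real ordering is driven by the TDP MMU callers, and it merely reuses the
two helpers quoted later in this mail.

static int zap_private_spte_outline(struct kvm *kvm, gfn_t gfn,
                                    enum pg_level level)
{
        int ret;

        /* 1. Block the range so no new TLB translations can be created. */
        ret = tdx_sept_zap_private_spte(kvm, gfn, level);
        if (ret)
                return ret;

        /* 2. Advance the TDX epoch and IPI remote vcpus to flush stale TLBs. */
        tdx_track(kvm);

        /*
         * 3. Only now is it safe to unlink the private guest page from the
         *    Secure EPT (done by the page-removal hook, not shown in the
         *    quoted hunks).
         */
        return 0;
}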
>
> For MMIO SPTE, the SPTE value changes as follows.
>   initial value (suppress-VE bit is set)
>   -> Guest issues MMIO and triggers an EPT violation
>   -> KVM updates the SPTE value to the MMIO value (suppress-VE bit is cleared)
>   -> Guest MMIO resumes. It triggers a VE exception in the guest TD
>   -> Guest VE handler issues TDG.VP.VMCALL
>   -> KVM handles MMIO
>   -> Guest VE handler resumes its execution after the MMIO instruction
>
> Signed-off-by: Isaku Yamahata
>
> ---
> v19:
> - Compile fix when CONFIG_HYPERV != y.
>   It's due to the following patch. Catch it up.
>   https://lore.kernel.org/all/20231018192325.1893896-1-seanjc@google.com/
> - Add comments on TLB shootdown to explain the sequence.
> - Use the gmem_max_level callback, delete tdp_max_page_level.
>
> v18:
> - rename tdx_sept_page_aug() -> tdx_mem_page_aug()
> - checkpatch: space => tab
>
> v15 -> v16:
> - Add the handling of the TD_ATTR_SEPT_VE_DISABLE case.
>
> v14 -> v15:
> - Implemented tdx_flush_tlb_current()
> - Removed unnecessary invept in tdx_flush_tlb(). It was carried over
>   from the very old code base.
>
> Signed-off-by: Isaku Yamahata
> ---
>  arch/x86/kvm/mmu/spte.c    |   3 +-
>  arch/x86/kvm/vmx/main.c    |  91 ++++++++-
>  arch/x86/kvm/vmx/tdx.c     | 372 +++++++++++++++++++++++++++++++++++++
>  arch/x86/kvm/vmx/tdx.h     |   2 +-
>  arch/x86/kvm/vmx/tdx_ops.h |   6 +
>  arch/x86/kvm/vmx/x86_ops.h |  13 ++
>  6 files changed, 481 insertions(+), 6 deletions(-)
>
[...]
> +static int tdx_sept_zap_private_spte(struct kvm *kvm, gfn_t gfn,
> +                                    enum pg_level level)
> +{
> +        int tdx_level = pg_level_to_tdx_sept_level(level);
> +        struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
> +        gpa_t gpa = gfn_to_gpa(gfn) & KVM_HPAGE_MASK(level);
> +        struct tdx_module_args out;
> +        u64 err;
> +
> +        /* This can be called when destructing guest TD after freeing HKID. */
> +        if (unlikely(!is_hkid_assigned(kvm_tdx)))
> +                return 0;
> +
> +        /* For now large page isn't supported yet. */
> +        WARN_ON_ONCE(level != PG_LEVEL_4K);
> +        err = tdh_mem_range_block(kvm_tdx->tdr_pa, gpa, tdx_level, &out);
> +        if (unlikely(err == TDX_ERROR_SEPT_BUSY))
> +                return -EAGAIN;
> +        if (KVM_BUG_ON(err, kvm)) {
> +                pr_tdx_error(TDH_MEM_RANGE_BLOCK, err, &out);
> +                return -EIO;
> +        }
> +        return 0;
> +}
> +
> +/*
> + * TLB shoot down procedure:
> + * There is a global epoch counter and each vcpu has a local epoch counter.
> + * - TDH.MEM.RANGE.BLOCK(TDR, level, range) on one vcpu
> + *   This blocks the subsequent creation of TLB translations on that range.
> + *   This corresponds to clearing the present bit (all RWX) in the EPT entry.
> + * - TDH.MEM.TRACK(TDR): advances the epoch counter, which is global.
> + * - IPI to remote vcpus
> + * - TDExit and re-entry with TDH.VP.ENTER on remote vcpus
> + * - On re-entry, the TDX module compares the local epoch counter with the
> + *   global epoch counter. If the local epoch counter is older than the
> + *   global epoch counter, it updates the local epoch counter and flushes
> + *   the TLB.
> + */
> +static void tdx_track(struct kvm *kvm)
> +{
> +        struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
> +        u64 err;
> +
> +        KVM_BUG_ON(!is_hkid_assigned(kvm_tdx), kvm);
> +        /* If TD isn't finalized, it's before any vcpu running. */
> +        if (unlikely(!is_td_finalized(kvm_tdx)))
> +                return;
> +
> +        /*
> +         * tdx_flush_tlb() waits for this function to issue TDH.MEM.TRACK() by
> +         * the counter. The counter is used instead of a bool because multiple
> +         * TDH_MEM_TRACK() can be issued concurrently by multiple vcpus.

In which case can TDH_MEM_TRACK() be issued concurrently by multiple vcpus?
For now, zapping is done while holding the write lock. Promotion/demotion may
issue TDH_MEM_TRACK() concurrently, but that isn't supported yet.
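For reference, the remote side implied by this counter presumably looks
something like the sketch below. This is not the actual tdx_flush_tlb() from
the patch (that hunk is not quoted here, and the real function also flushes
the shared EPT); it only illustrates the spin-wait that pairs with the
atomic_inc()/atomic_dec() in tdx_track().

static void tdx_flush_tlb_sketch(struct kvm_vcpu *vcpu)
{
        struct kvm_tdx *kvm_tdx = to_kvm_tdx(vcpu->kvm);

        /*
         * Don't let this vcpu re-enter the TD until the zapping side has
         * issued TDH.MEM.TRACK(); the next TDH.VP.ENTER then flushes this
         * vcpu's TLB against the new epoch.
         */
        while (atomic_read(&kvm_tdx->tdh_mem_track))
                cpu_relax();
}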
> +         *
> +         * optimization: The TLB shoot down procedure described in the TDX
> +         * specification is: TDH.MEM.TRACK(), send IPI to remote vcpus,
> +         * confirm all remote vcpus exit to the VMM, and execute the vcpu,
> +         * both local and remote. Twist the sequence to reduce IPI overhead
> +         * as follows.
> +         *
> +         *    local                             remote
> +         *    -----                             ------
> +         *    increment tdh_mem_track
> +         *
> +         *    request KVM_REQ_TLB_FLUSH
> +         *    send IPI
> +         *
> +         *                                      TDEXIT to KVM due to IPI
> +         *
> +         *                                      IPI handler calls tdx_flush_tlb()
> +         *                                      to process KVM_REQ_TLB_FLUSH.
> +         *                                      spin wait for tdh_mem_track == 0
> +         *
> +         *    TDH.MEM.TRACK()
> +         *
> +         *    decrement tdh_mem_track
> +         *
> +         *                                      complete KVM_REQ_TLB_FLUSH
> +         *
> +         *    TDH.VP.ENTER to flush tlbs        TDH.VP.ENTER to flush tlbs
> +         */
> +        atomic_inc(&kvm_tdx->tdh_mem_track);
> +        /*
> +         * KVM_REQ_TLB_FLUSH waits for the empty IPI handler, ack_flush(),
> +         * with KVM_REQUEST_WAIT.
> +         */
> +        kvm_make_all_cpus_request(kvm, KVM_REQ_TLB_FLUSH);
> +
> +        do {
> +                err = tdh_mem_track(kvm_tdx->tdr_pa);
> +        } while (unlikely((err & TDX_SEAMCALL_STATUS_MASK) == TDX_OPERAND_BUSY));
> +
> +        /* Release remote vcpus waiting for TDH.MEM.TRACK in tdx_flush_tlb(). */
> +        atomic_dec(&kvm_tdx->tdh_mem_track);
> +
> +        if (KVM_BUG_ON(err, kvm))
> +                pr_tdx_error(TDH_MEM_TRACK, err, NULL);
> +
> +}
> +
> +static int tdx_sept_free_private_spt(struct kvm *kvm, gfn_t gfn,
> +                                     enum pg_level level, void *private_spt)
> +{
> +        struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
> +
> +        /*
> +         * The HKID assigned to this TD was already freed and the cache was
> +         * already flushed. We don't have to flush again.
> +         */
> +        if (!is_hkid_assigned(kvm_tdx))
> +                return tdx_reclaim_page(__pa(private_spt));
> +
> +        /*
> +         * free_private_spt() is (obviously) called when a shadow page is
> +         * being zapped. KVM doesn't (yet) zap private SPs while the TD is
> +         * active.
> +         * Note: This function is for private shadow pages, not for private
> +         * guest pages. A private guest page can be zapped while the TD is
> +         * active: shared <-> private conversion and slot move/deletion.
> +         */
> +        KVM_BUG_ON(is_hkid_assigned(kvm_tdx), kvm);

At this point, is_hkid_assigned(kvm_tdx) is always true, since the
!is_hkid_assigned() case has already returned above.
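To illustrate the point, with that early return the function effectively
reduces to the condensed form below. This is a paraphrase for illustration
only, not a suggested change: the function name is made up, and the tail
after the KVM_BUG_ON() is assumed, since the quoted hunk stops there.

static int free_private_spt_condensed(struct kvm *kvm, gfn_t gfn,
                                      enum pg_level level, void *private_spt)
{
        struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);

        /* After TD destruction has freed the HKID, just reclaim the page. */
        if (!is_hkid_assigned(kvm_tdx))
                return tdx_reclaim_page(__pa(private_spt));

        /*
         * Reaching here means the HKID is still assigned, so the condition of
         * the KVM_BUG_ON() above can only be true: hitting this path at all
         * is the bug, because KVM does not (yet) zap private shadow pages
         * while the TD is active.
         */
        KVM_BUG_ON(true, kvm);
        return -EINVAL;         /* assumed; not visible in the quoted hunk */
}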