Date: Tue, 5 Dec 2023 13:54:07 +0300
From: "Kirill A. Shutemov"
To: Jeremi Piotrowski
Cc: "Reshetova, Elena", "linux-kernel@vger.kernel.org", Borislav Petkov,
    Dave Hansen, "H. Peter Anvin", Ingo Molnar, Michael Kelley,
    Nikolay Borisov, Peter Zijlstra, Thomas Gleixner, Tom Lendacky,
    "x86@kernel.org", "Cui, Dexuan", "linux-hyperv@vger.kernel.org",
    "stefan.bader@canonical.com", "tim.gardner@canonical.com",
    "roxana.nicolescu@canonical.com", "cascardo@canonical.com",
    "kys@microsoft.com", "haiyangz@microsoft.com", "wei.liu@kernel.org",
    "sashal@kernel.org", "stable@vger.kernel.org"
Subject: Re: [PATCH v1 1/3] x86/tdx: Check for TDX partitioning during early TDX init
Message-ID: <20231205105407.vp2rejqb5avoj7mx@box.shutemov.name>
References: <20231122170106.270266-1-jpiotrowski@linux.microsoft.com>
 <9ab71fee-be9f-4afc-8098-ad9d6b667d46@linux.microsoft.com>
In-Reply-To: <9ab71fee-be9f-4afc-8098-ad9d6b667d46@linux.microsoft.com>

On Mon, Dec 04, 2023 at 08:07:38PM +0100, Jeremi Piotrowski wrote:
> On 04/12/2023 10:17, Reshetova, Elena wrote:
> >> Check for additional CPUID bits to identify TDX guests running with Trust
> >> Domain (TD) partitioning enabled. TD partitioning is like nested virtualization
> >> inside the Trust Domain, so there is an L1 TD VM(M) and there can be L2 TD VM(s).
> >>
> >> In this arrangement we are not guaranteed that the TDX_CPUID_LEAF_ID is visible
> >> to Linux running as an L2 TD VM.
> >> This is because a majority of TDX facilities
> >> are controlled by the L1 VMM and the L2 TDX guest needs to use TD-partitioning-
> >> aware mechanisms for what's left. So currently such guests do not have
> >> X86_FEATURE_TDX_GUEST set.
> >
> > Back to this concrete patch. Why cannot the L1 VMM emulate the correct value of
> > the TDX_CPUID_LEAF_ID to the L2 VM? It can do this per the TDX partitioning arch.
> > How do you handle this and other CPUID calls currently in L1? Per spec,
> > all CPUID calls from L2 will cause an L2 --> L1 exit, so what do you do in L1?
>
> The disclaimer here is that I don't have access to the paravisor (L1) code. But
> to the best of my knowledge the L1 handles CPUID calls by calling into the TDX
> module, or by synthesizing a response itself. TDX_CPUID_LEAF_ID is not provided to
> the L2 guest in order to discriminate a guest that is solely responsible for every
> TDX mechanism (running at L1) from one running at L2 that has to cooperate with L1.
> More below.
>
> > Given that you do that simple emulation, you already end up with TDX guest
> > code being activated. Next you can check what features you won't be able to
> > provide in L1 and create simple emulation calls for the TDG calls that must be
> > supported and cannot return an error. The biggest TDG call (TDVMCALL) is already
> > a direct call into the L0 VMM, so this part doesn't require L1 VMM support.
>
> I don't see anything in the TD-partitioning spec that gives the TDX guest a way
> to detect whether it's running at L2 or L1, or to check whether TDVMCALLs go to
> L0 or L1. So in any case this requires an extra CPUID call to establish the
> environment. Given that, exposing TDX_CPUID_LEAF_ID to the guest doesn't help.
>
> I'll give some examples of where the idea of emulating a TDX environment
> without attempting L1-L2 cooperation breaks down.
>
> hlt: if the guest issues a hlt TDVMCALL it goes to L0, but if it issues a classic hlt
> The hlt should definitely go to L1 so that L1 has a chance to do
> housekeeping.

Why would L2 issue a HLT TDVMCALL? It only happens in response to a #VE, but
with partitioning enabled, #VEs are routed to L1 anyway.

> map gpa: say the guest uses the MAP_GPA TDVMCALL. This goes to L0, not to L1,
> which is the actual entity that needs to have a say in performing the conversion.
> L1 can't act on the request if L0 would forward it, because of the CoCo threat
> model. So L1 and L2 get out of sync. The only safe approach is for L2 to use a
> different mechanism to trap to L1 explicitly.

Hm? L1 is always in the loop on share<->private conversion. I don't know why
you need MAP_GPA for that.

You can't rely on MAP_GPA anyway. It is optional (unfortunately). Conversion
doesn't require a MAP_GPA call.

> Having a paravisor is required to support a TPM, and having TDVMCALLs go to L0
> is required to make performance viable for real workloads.
>
> > Until we really see what breaks with this approach, I don't think it is worth
> > it to take on the complexity of supporting different L1 hypervisors' views on
> > partitioning.
>
> I'm not asking to support different L1 hypervisors' views on partitioning. I
> want to clean up the code (by fixing assumptions that no longer hold) for the
> model that I'm describing: one that the kernel already supports, that has a
> working implementation, and that has actual users. This is also a model that
> Intel intentionally created the TD-partitioning spec to support.
>
> So let's work together to make X86_FEATURE_TDX_GUEST match reality.

I think the right direction is to make the TDX architecture good enough without
that. If we need more hooks in the TDX module that give the required control to
L1, let's do that. (I don't see it so far.)

-- 
Kiryl Shutsemau / Kirill A. Shutemov