Subject: Re: [PATCH v5 07/16] x86/kvm: Use bounce buffers for TD guest
To: Tom Lendacky, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
 Peter Zijlstra, Andy Lutomirski, Bjorn Helgaas, Richard Henderson,
 Thomas Bogendoerfer, James E J Bottomley, Helge Deller, David S. Miller,
 Arnd Bergmann, Jonathan Corbet,
 Michael S. Tsirkin, Paolo Bonzini, David Hildenbrand, Andrea Arcangeli,
 Josh Poimboeuf
Cc: Peter H Anvin, Dave Hansen, Tony Luck, Dan Williams, Andi Kleen,
 Kirill Shutemov, Sean Christopherson, Kuppuswamy Sathyanarayanan,
 x86@kernel.org, linux-kernel@vger.kernel.org, linux-pci@vger.kernel.org,
 linux-alpha@vger.kernel.org, linux-mips@vger.kernel.org,
 linux-parisc@vger.kernel.org, sparclinux@vger.kernel.org,
 linux-arch@vger.kernel.org, linux-doc@vger.kernel.org,
 virtualization@lists.linux-foundation.org
References: <20211009003711.1390019-1-sathyanarayanan.kuppuswamy@linux.intel.com>
 <20211009003711.1390019-8-sathyanarayanan.kuppuswamy@linux.intel.com>
 <42f17b60-9bd4-a8bc-5164-d960e54cd30b@amd.com>
From: Sathyanarayanan Kuppuswamy
Message-ID: <0a9c6485-74d8-e0fc-d261-097380272e07@linux.intel.com>
Date: Wed, 20 Oct 2021 09:50:48 -0700
In-Reply-To: <42f17b60-9bd4-a8bc-5164-d960e54cd30b@amd.com>

On 10/20/21 9:39 AM, Tom Lendacky wrote:
> On 10/8/21 7:37 PM, Kuppuswamy Sathyanarayanan wrote:
>> From: "Kirill A. Shutemov"
>>
>> Intel TDX doesn't allow the VMM to directly access guest private memory.
>> Any memory that is required for communication with the VMM must be
>> shared explicitly. The same rule applies to any DMA to and from a TDX
>> guest. All DMA pages have to be marked as shared pages. A generic way
>> to achieve this without any changes to device drivers is to use the
>> SWIOTLB framework.
>>
>> This method of handling is similar to AMD SEV, so extend that support
>> to TDX guests as well. Since there is some common code between AMD SEV
>> and TDX guests in mem_encrypt_init(), move it to mem_encrypt_common.c
>> and call the AMD-specific init function from it.
>>
>> Signed-off-by: Kirill A. Shutemov
>> Reviewed-by: Andi Kleen
>> Reviewed-by: Tony Luck
>> Signed-off-by: Kuppuswamy Sathyanarayanan
>>
>> ---
>>
>> Changes since v4:
>>   * Replaced prot_guest_has() with cc_guest_has().
>>
>> Changes since v3:
>>   * Rebased on top of Tom Lendacky's protected guest
>>     changes (https://lore.kernel.org/patchwork/cover/1468760/)
>>
>> Changes since v1:
>>   * Removed sme_me_mask check for amd_mem_encrypt_init() in
>>     mem_encrypt_init().
>>
>>   arch/x86/include/asm/mem_encrypt_common.h |  3 +++
>>   arch/x86/kernel/tdx.c                     |  2 ++
>>   arch/x86/mm/mem_encrypt.c                 |  5 +----
>>   arch/x86/mm/mem_encrypt_common.c          | 14 ++++++++++++++
>>   4 files changed, 20 insertions(+), 4 deletions(-)
>>
>> diff --git a/arch/x86/include/asm/mem_encrypt_common.h b/arch/x86/include/asm/mem_encrypt_common.h
>> index 697bc40a4e3d..bc90e565bce4 100644
>> --- a/arch/x86/include/asm/mem_encrypt_common.h
>> +++ b/arch/x86/include/asm/mem_encrypt_common.h
>> @@ -8,11 +8,14 @@
>>   #ifdef CONFIG_AMD_MEM_ENCRYPT
>>   bool amd_force_dma_unencrypted(struct device *dev);
>> +void __init amd_mem_encrypt_init(void);
>>   #else /* CONFIG_AMD_MEM_ENCRYPT */
>>   static inline bool amd_force_dma_unencrypted(struct device *dev)
>>   {
>>       return false;
>>   }
>> +
>> +static inline void amd_mem_encrypt_init(void) {}
>>   #endif /* CONFIG_AMD_MEM_ENCRYPT */
>>   #endif
>>
>> diff --git a/arch/x86/kernel/tdx.c b/arch/x86/kernel/tdx.c
>> index 433f366ca25c..ce8e3019b812 100644
>> --- a/arch/x86/kernel/tdx.c
>> +++ b/arch/x86/kernel/tdx.c
>> @@ -12,6 +12,7 @@
>>   #include
>>   #include
>>   #include  /* force_sig_fault() */
>> +#include
>>
>>   /* TDX Module call Leaf IDs */
>>   #define TDX_GET_INFO            1
>> @@ -577,6 +578,7 @@ void __init tdx_early_init(void)
>>       pv_ops.irq.halt = tdx_halt;
>>
>>       legacy_pic = &null_legacy_pic;
>> +    swiotlb_force = SWIOTLB_FORCE;
>>
>>       cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "tdx:cpu_hotplug",
>>                 NULL, tdx_cpu_offline_prepare);
>>
>> diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
>> index 5d7fbed73949..8385bc4565e9 100644
>> --- a/arch/x86/mm/mem_encrypt.c
>> +++ b/arch/x86/mm/mem_encrypt.c
>> @@ -438,14 +438,11 @@ static void print_mem_encrypt_feature_info(void)
>>   }
>>
>>   /* Architecture __weak replacement functions */
>> -void __init mem_encrypt_init(void)
>> +void __init amd_mem_encrypt_init(void)
>>   {
>>       if (!sme_me_mask)
>>           return;
>>
>> -    /* Call into SWIOTLB to update the SWIOTLB DMA buffers */
>> -    swiotlb_update_mem_attributes();
>> -
>>       /*
>>        * With SEV, we need to unroll the rep string I/O instructions,
>>        * but SEV-ES supports them through the #VC handler.
>>
>> diff --git a/arch/x86/mm/mem_encrypt_common.c b/arch/x86/mm/mem_encrypt_common.c
>> index 119a9056efbb..6fe44c6cb753 100644
>> --- a/arch/x86/mm/mem_encrypt_common.c
>> +++ b/arch/x86/mm/mem_encrypt_common.c
>> @@ -10,6 +10,7 @@
>>   #include
>>   #include
>>   #include
>> +#include
>>
>>   /* Override for DMA direct allocation check - ARCH_HAS_FORCE_DMA_UNENCRYPTED */
>>   bool force_dma_unencrypted(struct device *dev)
>> @@ -24,3 +25,16 @@ bool force_dma_unencrypted(struct device *dev)
>>       return false;
>>   }
>> +
>> +/* Architecture __weak replacement functions */
>> +void __init mem_encrypt_init(void)
>> +{
>> +    /*
>> +     * For TDX guest or SEV/SME, call into SWIOTLB to update
>> +     * the SWIOTLB DMA buffers
>> +     */
>> +    if (sme_me_mask || cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT))
>
> Can't you just make this:
>
>     if (cc_platform_has(CC_ATTR_MEM_ENCRYPT))
>
> SEV will return true if sme_me_mask is not zero, and TDX should only
> return true if it is a TDX guest, right?

Yes. It can be simplified. But where shall we keep this function,
in cc_platform.c or here?
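Just to make sure I follow, the common function would then reduce to
something like the sketch below (sketch only, untested; it uses just the
helpers already in this patch plus the CC_ATTR_MEM_ENCRYPT attribute you
mention, assuming it is reported for both the SME/SEV and TDX cases):

/* Architecture __weak replacement functions */
void __init mem_encrypt_init(void)
{
        /*
         * Call into SWIOTLB to update the SWIOTLB DMA buffers for any
         * guest with memory encryption (SME/SEV or TDX), assuming
         * CC_ATTR_MEM_ENCRYPT covers both cases.
         */
        if (cc_platform_has(CC_ATTR_MEM_ENCRYPT))
                swiotlb_update_mem_attributes();

        /* No-op stub when CONFIG_AMD_MEM_ENCRYPT is not enabled. */
        amd_mem_encrypt_init();
}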
>
> Thanks,
> Tom
>
>> +        swiotlb_update_mem_attributes();
>> +
>> +    amd_mem_encrypt_init();
>> +}
>>

-- 
Sathyanarayanan Kuppuswamy
Linux Kernel Developer