From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
To: tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@intel.com,
	luto@kernel.org, peterz@infradead.org
Cc: sathyanarayanan.kuppuswamy@linux.intel.com, aarcange@redhat.com,
	ak@linux.intel.com, dan.j.williams@intel.com, david@redhat.com,
	hpa@zytor.com, jgross@suse.com, jmattson@google.com, joro@8bytes.org,
	jpoimboe@redhat.com, knsathya@kernel.org, pbonzini@redhat.com,
	sdeep@vmware.com, seanjc@google.com, tony.luck@intel.com,
	vkuznets@redhat.com, wanpengli@tencent.com, thomas.lendacky@amd.com,
	brijesh.singh@amd.com, x86@kernel.org, linux-kernel@vger.kernel.org,
	"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Shutemov" Subject: [PATCHv5 27/30] x86/kvm: Use bounce buffers for TD guest Date: Wed, 2 Mar 2022 17:28:03 +0300 Message-Id: <20220302142806.51844-28-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20220302142806.51844-1-kirill.shutemov@linux.intel.com> References: <20220302142806.51844-1-kirill.shutemov@linux.intel.com> MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Spam-Status: No, score=-2.3 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,HEADER_FROM_DIFFERENT_DOMAINS, MAILING_LIST_MULTI,RDNS_NONE,SPF_HELO_NONE,T_SCC_BODY_TEXT_LINE autolearn=no autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Intel TDX doesn't allow VMM to directly access guest private memory. Any memory that is required for communication with the VMM must be shared explicitly. The same rule applies for any DMA to and from the TDX guest. All DMA pages have to be marked as shared pages. A generic way to achieve this without any changes to device drivers is to use the SWIOTLB framework. Force SWIOTLB on TD guest and make SWIOTLB buffer shared by generalizing mem_encrypt_init() to cover TDX. Co-developed-by: Kuppuswamy Sathyanarayanan Signed-off-by: Kuppuswamy Sathyanarayanan Reviewed-by: Andi Kleen Reviewed-by: Tony Luck Signed-off-by: Kirill A. Shutemov --- arch/x86/Kconfig | 2 +- arch/x86/coco/core.c | 1 + arch/x86/coco/tdx.c | 3 +++ arch/x86/include/asm/mem_encrypt.h | 6 +++--- arch/x86/mm/mem_encrypt.c | 9 ++++++++- 5 files changed, 16 insertions(+), 5 deletions(-) diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index 98efb35ed7b1..1312cefb927d 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -885,7 +885,7 @@ config INTEL_TDX_GUEST depends on X86_64 && CPU_SUP_INTEL depends on X86_X2APIC select ARCH_HAS_CC_PLATFORM - select DYNAMIC_PHYSICAL_MASK + select X86_MEM_ENCRYPT select X86_MCE help Support running as a guest under Intel TDX. 
diff --git a/arch/x86/coco/core.c b/arch/x86/coco/core.c
index 9778cf4c6901..b10326f91d4f 100644
--- a/arch/x86/coco/core.c
+++ b/arch/x86/coco/core.c
@@ -22,6 +22,7 @@ static bool intel_cc_platform_has(enum cc_attr attr)
 	case CC_ATTR_GUEST_UNROLL_STRING_IO:
 	case CC_ATTR_HOTPLUG_DISABLED:
 	case CC_ATTR_GUEST_MEM_ENCRYPT:
+	case CC_ATTR_MEM_ENCRYPT:
 		return true;
 	default:
 		return false;
diff --git a/arch/x86/coco/tdx.c b/arch/x86/coco/tdx.c
index 2168ee25a52c..429a1ba42667 100644
--- a/arch/x86/coco/tdx.c
+++ b/arch/x86/coco/tdx.c
@@ -5,6 +5,7 @@
 #define pr_fmt(fmt) "tdx: " fmt
 
 #include <linux/cpufeature.h>
+#include <linux/swiotlb.h>
 #include <asm/coco.h>
 #include <asm/tdx.h>
 #include <asm/vmx.h>
@@ -627,5 +628,7 @@ void __init tdx_early_init(void)
 	x86_platform.guest.enc_tlb_flush_required = tdx_tlb_flush_required;
 	x86_platform.guest.enc_status_change_finish = tdx_enc_status_changed;
 
+	swiotlb_force = SWIOTLB_FORCE;
+
 	pr_info("Guest detected\n");
 }
diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
index e2c6f433ed10..88ceaf3648b3 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -49,9 +49,6 @@ void __init early_set_mem_enc_dec_hypercall(unsigned long vaddr, int npages,
 
 void __init mem_encrypt_free_decrypted_mem(void);
 
-/* Architecture __weak replacement functions */
-void __init mem_encrypt_init(void);
-
 void __init sev_es_init_vc_handling(void);
 
 #define __bss_decrypted __section(".bss..decrypted")
@@ -89,6 +86,9 @@ static inline void mem_encrypt_free_decrypted_mem(void) { }
 
 #endif /* CONFIG_AMD_MEM_ENCRYPT */
 
+/* Architecture __weak replacement functions */
+void __init mem_encrypt_init(void);
+
 /*
  * The __sme_pa() and __sme_pa_nodebug() macros are meant for use when
  * writing to or comparing values from the cr3 register. Having the
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 50d209939c66..10ee40b5204b 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -42,7 +42,14 @@ bool force_dma_unencrypted(struct device *dev)
 
 static void print_mem_encrypt_feature_info(void)
 {
-	pr_info("AMD Memory Encryption Features active:");
+	pr_info("Memory Encryption Features active:");
+
+	if (cpu_feature_enabled(X86_FEATURE_TDX_GUEST)) {
+		pr_cont(" Intel TDX\n");
+		return;
+	}
+
+	pr_cont(" AMD");
 
 	/* Secure Memory Encryption */
 	if (cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT)) {
-- 
2.34.1
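
For context on how the pieces fit together: advertising CC_ATTR_MEM_ENCRYPT
from intel_cc_platform_has() is what lets the existing generic init path
convert the forced SWIOTLB pool to shared memory. A rough sketch of that
path, paraphrased from the v5.17-era arch/x86/mm/mem_encrypt.c rather than
taken from this diff:

	void __init mem_encrypt_init(void)
	{
		if (!cc_platform_has(CC_ATTR_MEM_ENCRYPT))
			return;

		/* Convert the SWIOTLB bounce buffers to decrypted/shared memory */
		swiotlb_update_mem_attributes();

		print_mem_encrypt_feature_info();
	}

swiotlb_update_mem_attributes() ends up calling set_memory_decrypted() on the
bounce buffer, which for a TD guest goes through the enc_status_change_finish
hook (tdx_enc_status_changed) installed in tdx_early_init() above, so the
pages used for DMA end up mapped shared with the VMM.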