From: Kuppuswamy Sathyanarayanan
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Peter Zijlstra,
	Andy Lutomirski
Cc: Peter H Anvin, Dave Hansen, Tony Luck, Dan Williams, Andi Kleen,
	Kirill Shutemov, Sean Christopherson, Kuppuswamy Sathyanarayanan,
	x86@kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v1 6/7] x86/kvm: Use bounce buffers for TD guest
Date: Wed, 9 Jun 2021 14:55:36 -0700
Message-Id: <20210609215537.1956150-7-sathyanarayanan.kuppuswamy@linux.intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210609215537.1956150-1-sathyanarayanan.kuppuswamy@linux.intel.com>
References: <20210609215537.1956150-1-sathyanarayanan.kuppuswamy@linux.intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: "Kirill A. Shutemov"

Intel TDX doesn't allow the VMM to directly access guest private memory.
Any memory that is required for communication with the VMM must be
shared explicitly. The same rule applies to any DMA to and from the TDX
guest. All DMA pages have to be marked as shared pages.

A generic way to achieve this without any changes to device drivers is
to use the SWIOTLB framework. This is the same approach already taken
for AMD SEV, so extend that support to TDX guests as well. Also, since
there is some common code between AMD SEV and TDX guests in
mem_encrypt_init(), move it to mem_encrypt_common.c and call the
AMD-specific init function from there.

Signed-off-by: Kirill A. Shutemov
Reviewed-by: Andi Kleen
Reviewed-by: Tony Luck
Signed-off-by: Kuppuswamy Sathyanarayanan
---
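For illustration only (not part of the diff below): a minimal driver-side
sketch of how an unmodified driver ends up using the shared bounce buffer
once swiotlb_force is set to SWIOTLB_FORCE. "dev" and "len" are
hypothetical placeholders, not names from this patch. The ordinary
dma_map_single() call is transparently bounced through the SWIOTLB pool
that mem_encrypt_init() has already marked as shared, so the device never
sees the private kmalloc() buffer directly.

/*
 * Illustrative sketch, not part of this patch; "dev" and "len" are
 * placeholders for a real device and transfer size.
 */
#include <linux/dma-mapping.h>
#include <linux/slab.h>

static int example_dma_to_device(struct device *dev, size_t len)
{
	dma_addr_t dma;
	void *buf;

	/* Private (guest-encrypted) memory, allocated as usual. */
	buf = kmalloc(len, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;

	/*
	 * Under SWIOTLB_FORCE this mapping is bounced into the shared
	 * SWIOTLB buffer, so only shared pages are exposed to the device.
	 */
	dma = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, dma)) {
		kfree(buf);
		return -ENOMEM;
	}

	/* ... program the device with "dma" ... */

	dma_unmap_single(dev, dma, len, DMA_TO_DEVICE);
	kfree(buf);
	return 0;
}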
 arch/x86/include/asm/mem_encrypt_common.h |  2 ++
 arch/x86/kernel/tdx.c                     |  3 +++
 arch/x86/mm/mem_encrypt.c                 |  5 +----
 arch/x86/mm/mem_encrypt_common.c          | 16 ++++++++++++++++
 4 files changed, 22 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/mem_encrypt_common.h b/arch/x86/include/asm/mem_encrypt_common.h
index 697bc40a4e3d..48d98a3d64fd 100644
--- a/arch/x86/include/asm/mem_encrypt_common.h
+++ b/arch/x86/include/asm/mem_encrypt_common.h
@@ -8,11 +8,13 @@
 
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 bool amd_force_dma_unencrypted(struct device *dev);
+void __init amd_mem_encrypt_init(void);
 #else /* CONFIG_AMD_MEM_ENCRYPT */
 static inline bool amd_force_dma_unencrypted(struct device *dev)
 {
 	return false;
 }
+static inline void amd_mem_encrypt_init(void) {}
 #endif /* CONFIG_AMD_MEM_ENCRYPT */
 
 #endif
diff --git a/arch/x86/kernel/tdx.c b/arch/x86/kernel/tdx.c
index c90871a10443..1caf9fa5bb30 100644
--- a/arch/x86/kernel/tdx.c
+++ b/arch/x86/kernel/tdx.c
@@ -9,6 +9,7 @@
 #include
 #include
 #include		/* force_sig_fault() */
+#include <linux/swiotlb.h>
 
 #include
 #include
@@ -535,6 +536,8 @@ void __init tdx_early_init(void)
 
 	legacy_pic = &null_legacy_pic;
 
+	swiotlb_force = SWIOTLB_FORCE;
+
 	cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "tdg:cpu_hotplug",
 			  NULL, tdg_cpu_offline_prepare);
 
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 9c55a3209c88..84ee14446139 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -467,14 +467,11 @@ static void print_mem_encrypt_feature_info(void)
 }
 
 /* Architecture __weak replacement functions */
-void __init mem_encrypt_init(void)
+void __init amd_mem_encrypt_init(void)
 {
 	if (!sme_me_mask)
 		return;
 
-	/* Call into SWIOTLB to update the SWIOTLB DMA buffers */
-	swiotlb_update_mem_attributes();
-
 	/*
 	 * With SEV, we need to unroll the rep string I/O instructions,
 	 * but SEV-ES supports them through the #VC handler.
diff --git a/arch/x86/mm/mem_encrypt_common.c b/arch/x86/mm/mem_encrypt_common.c
index 8053b43298ff..2da70f58b208 100644
--- a/arch/x86/mm/mem_encrypt_common.c
+++ b/arch/x86/mm/mem_encrypt_common.c
@@ -9,6 +9,7 @@
 #include
 #include
+#include <linux/swiotlb.h>
 
 /* Override for DMA direct allocation check - ARCH_HAS_FORCE_DMA_UNENCRYPTED */
 bool force_dma_unencrypted(struct device *dev)
@@ -21,3 +22,18 @@ bool force_dma_unencrypted(struct device *dev)
 
 	return false;
 }
+
+/* Architecture __weak replacement functions */
+void __init mem_encrypt_init(void)
+{
+	/*
+	 * For TDX guest or SEV/SME, call into SWIOTLB to update
+	 * the SWIOTLB DMA buffers.
+	 */
+	if (sme_me_mask || prot_guest_has(PR_GUEST_MEM_ENCRYPT))
+		swiotlb_update_mem_attributes();
+
+	if (sme_me_mask)
+		amd_mem_encrypt_init();
+}
+
-- 
2.25.1