From: Kuppuswamy Sathyanarayanan
To: Peter Zijlstra, Andy Lutomirski, Dave Hansen, Dan Williams, Tony Luck
Cc: Andi Kleen, Kirill Shutemov,
    Kuppuswamy Sathyanarayanan, Raj Ashok, Sean Christopherson,
    linux-kernel@vger.kernel.org, Kuppuswamy Sathyanarayanan
Subject: [RFC v2 26/32] x86/mm: Move force_dma_unencrypted() to common code
Date: Mon, 26 Apr 2021 11:01:53 -0700
Message-Id: <7c5adf75d69ea327b22b404b7c37b29712d73640.1619458733.git.sathyanarayanan.kuppuswamy@linux.intel.com>

From: "Kirill A. Shutemov"

Intel TDX does not allow the VMM to access guest memory. Any memory that is
required for communication with the VMM must be shared explicitly by setting
the shared bit in the page table entry. After setting the shared bit, the
conversion must be completed with a MapGPA TDVMCALL. The call informs the VMM
about the conversion and makes it remove the GPA from the S-EPT mapping.

The shared memory is similar to unencrypted memory in AMD SME/SEV terminology,
but the underlying process of sharing/un-sharing the memory is different for
an Intel TDX guest.

SEV assumes that I/O devices can only do DMA to "decrypted" physical addresses
without the C-bit set. In order for the CPU to interact with this memory, the
CPU needs a decrypted mapping. To support this, the AMD SME code makes
force_dma_unencrypted() return true on platforms with the SEV feature, and the
DMA memory allocation API uses the hook to trigger set_memory_decrypted() on
those platforms.

TDX is similar. TDX architecturally prevents access to private guest memory by
anything other than the guest itself. This means that any DMA buffers must be
shared.

So move force_dma_unencrypted() out of the AMD-specific code. It will later be
modified to also return true for Intel TDX guests, similar to AMD SEV.

Introduce a new config option, X86_MEM_ENCRYPT_COMMON, that has to be selected
by all x86 memory encryption features. It will be selected by both the AMD SEV
and the Intel TDX guest config options.

This is preparation for TDX changes in the DMA code and has no functional
change.
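For context, the hook is consumed by the generic DMA-direct allocator. Below
is a simplified sketch of that flow (not the exact kernel/dma/direct.c code;
the hypothetical example_dma_alloc() stands in for the real allocation path,
which also handles remapping, atomic pools and more error cases):

#include <linux/dma-direct.h>
#include <linux/set_memory.h>
#include <linux/gfp.h>
#include <linux/mm.h>

static void *example_dma_alloc(struct device *dev, size_t size,
			       dma_addr_t *dma_handle, gfp_t gfp)
{
	unsigned int order = get_order(size);
	struct page *page;
	void *ret;

	page = alloc_pages(gfp, order);
	if (!page)
		return NULL;

	ret = page_address(page);
	*dma_handle = phys_to_dma(dev, page_to_phys(page));

	/*
	 * On SEV (and, once the rest of this series lands, on TDX) the
	 * buffer must be converted to an unencrypted/shared mapping
	 * before the device is allowed to touch it.
	 */
	if (force_dma_unencrypted(dev)) {
		if (set_memory_decrypted((unsigned long)ret, 1 << order)) {
			__free_pages(page, order);
			return NULL;
		}
	}

	return ret;
}

Keeping force_dma_unencrypted() in common code leaves this generic path
unchanged while letting each memory-encryption feature decide when the
conversion is needed.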
Signed-off-by: Kirill A. Shutemov
Reviewed-by: Andi Kleen
Reviewed-by: Tony Luck
Signed-off-by: Kuppuswamy Sathyanarayanan
---
 arch/x86/Kconfig                 |  8 +++++--
 arch/x86/mm/Makefile             |  2 ++
 arch/x86/mm/mem_encrypt.c        | 30 -------------------------
 arch/x86/mm/mem_encrypt_common.c | 38 ++++++++++++++++++++++++++++++++
 4 files changed, 46 insertions(+), 32 deletions(-)
 create mode 100644 arch/x86/mm/mem_encrypt_common.c

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 932e6d759ba7..67f99bf27729 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1529,14 +1529,18 @@ config X86_CPA_STATISTICS
 	  helps to determine the effectiveness of preserving large and huge
 	  page mappings when mapping protections are changed.
 
+config X86_MEM_ENCRYPT_COMMON
+	select ARCH_HAS_FORCE_DMA_UNENCRYPTED
+	select DYNAMIC_PHYSICAL_MASK
+	def_bool n
+
 config AMD_MEM_ENCRYPT
 	bool "AMD Secure Memory Encryption (SME) support"
 	depends on X86_64 && CPU_SUP_AMD
 	select DMA_COHERENT_POOL
-	select DYNAMIC_PHYSICAL_MASK
 	select ARCH_USE_MEMREMAP_PROT
-	select ARCH_HAS_FORCE_DMA_UNENCRYPTED
 	select INSTRUCTION_DECODER
+	select X86_MEM_ENCRYPT_COMMON
 	help
 	  Say yes to enable support for the encryption of system memory.
 	  This requires an AMD processor that supports Secure Memory
diff --git a/arch/x86/mm/Makefile b/arch/x86/mm/Makefile
index 5864219221ca..b31cb52bf1bd 100644
--- a/arch/x86/mm/Makefile
+++ b/arch/x86/mm/Makefile
@@ -52,6 +52,8 @@ obj-$(CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS) += pkeys.o
 obj-$(CONFIG_RANDOMIZE_MEMORY)		+= kaslr.o
 obj-$(CONFIG_PAGE_TABLE_ISOLATION)	+= pti.o
 
+obj-$(CONFIG_X86_MEM_ENCRYPT_COMMON)	+= mem_encrypt_common.o
+
 obj-$(CONFIG_AMD_MEM_ENCRYPT)	+= mem_encrypt.o
 obj-$(CONFIG_AMD_MEM_ENCRYPT)	+= mem_encrypt_identity.o
 obj-$(CONFIG_AMD_MEM_ENCRYPT)	+= mem_encrypt_boot.o
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index ae78cef79980..6f713c6a32b2 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -15,10 +15,6 @@
 #include
 #include
 #include
-#include
-#include
-#include
-#include
 #include
 #include
 
@@ -390,32 +386,6 @@ bool noinstr sev_es_active(void)
 	return sev_status & MSR_AMD64_SEV_ES_ENABLED;
 }
 
-/* Override for DMA direct allocation check - ARCH_HAS_FORCE_DMA_UNENCRYPTED */
-bool force_dma_unencrypted(struct device *dev)
-{
-	/*
-	 * For SEV, all DMA must be to unencrypted addresses.
-	 */
-	if (sev_active())
-		return true;
-
-	/*
-	 * For SME, all DMA must be to unencrypted addresses if the
-	 * device does not support DMA to addresses that include the
-	 * encryption mask.
-	 */
-	if (sme_active()) {
-		u64 dma_enc_mask = DMA_BIT_MASK(__ffs64(sme_me_mask));
-		u64 dma_dev_mask = min_not_zero(dev->coherent_dma_mask,
-						dev->bus_dma_limit);
-
-		if (dma_dev_mask <= dma_enc_mask)
-			return true;
-	}
-
-	return false;
-}
-
 void __init mem_encrypt_free_decrypted_mem(void)
 {
 	unsigned long vaddr, vaddr_end, npages;
diff --git a/arch/x86/mm/mem_encrypt_common.c b/arch/x86/mm/mem_encrypt_common.c
new file mode 100644
index 000000000000..964e04152417
--- /dev/null
+++ b/arch/x86/mm/mem_encrypt_common.c
@@ -0,0 +1,38 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * AMD Memory Encryption Support
+ *
+ * Copyright (C) 2016 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky
+ */
+
+#include
+#include
+#include
+
+/* Override for DMA direct allocation check - ARCH_HAS_FORCE_DMA_UNENCRYPTED */
+bool force_dma_unencrypted(struct device *dev)
+{
+	/*
+	 * For SEV, all DMA must be to unencrypted/shared addresses.
+	 */
+	if (sev_active())
+		return true;
+
+	/*
+	 * For SME, all DMA must be to unencrypted addresses if the
+	 * device does not support DMA to addresses that include the
+	 * encryption mask.
+	 */
+	if (sme_active()) {
+		u64 dma_enc_mask = DMA_BIT_MASK(__ffs64(sme_me_mask));
+		u64 dma_dev_mask = min_not_zero(dev->coherent_dma_mask,
+						dev->bus_dma_limit);
+
+		if (dma_dev_mask <= dma_enc_mask)
+			return true;
+	}
+
+	return false;
+}
-- 
2.25.1
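For illustration of where the series is headed: once the helper lives in
common code, a later patch can make it return true for TDX guests as well.
A minimal sketch of that follow-up, assuming a hypothetical is_tdx_guest()
helper (the actual check used later in the series may differ):

/* Hypothetical follow-up change, not part of this patch. */
bool force_dma_unencrypted(struct device *dev)
{
	/* TDX: every DMA buffer must be converted to a shared GPA. */
	if (is_tdx_guest())
		return true;

	/* SEV: all DMA must be to unencrypted/shared addresses. */
	if (sev_active())
		return true;

	/*
	 * SME: DMA must be unencrypted when the device cannot address
	 * memory with the encryption mask set.
	 */
	if (sme_active()) {
		u64 dma_enc_mask = DMA_BIT_MASK(__ffs64(sme_me_mask));
		u64 dma_dev_mask = min_not_zero(dev->coherent_dma_mask,
						dev->bus_dma_limit);

		if (dma_dev_mask <= dma_enc_mask)
			return true;
	}

	return false;
}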