From: Kuppuswamy Sathyanarayanan
To: Peter Zijlstra, Andy Lutomirski, Dave Hansen, Dan Williams, Tony Luck
Cc: Andi Kleen, Kirill Shutemov, Kuppuswamy Sathyanarayanan,
    Raj Ashok, Sean Christopherson, linux-kernel@vger.kernel.org,
    Kuppuswamy Sathyanarayanan
Subject: [RFC v2 31/32] x86/kvm: Use bounce buffers for TD guest
Date: Mon, 26 Apr 2021 11:01:58 -0700

From: "Kirill A. Shutemov"

TDX doesn't allow DMA access to guest private memory. In order for DMA
to work properly in a TD guest, use SWIOTLB bounce buffers.

Move the AMD SEV initialization into common code and adapt it for TDX.

Signed-off-by: Kirill A. Shutemov
Reviewed-by: Andi Kleen
Signed-off-by: Kuppuswamy Sathyanarayanan
---
 arch/x86/include/asm/io.h        |  3 +-
 arch/x86/kernel/pci-swiotlb.c    |  2 +-
 arch/x86/kernel/tdx.c            |  3 ++
 arch/x86/mm/mem_encrypt.c        | 45 ------------------------------
 arch/x86/mm/mem_encrypt_common.c | 47 ++++++++++++++++++++++++++++++++
 5 files changed, 53 insertions(+), 47 deletions(-)

diff --git a/arch/x86/include/asm/io.h b/arch/x86/include/asm/io.h
index 30a3b30395ad..658d9c2c2a9a 100644
--- a/arch/x86/include/asm/io.h
+++ b/arch/x86/include/asm/io.h
@@ -257,10 +257,11 @@ static inline void slow_down_io(void)
 
 #endif
 
+extern struct static_key_false sev_enable_key;
+
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 #include <linux/jump_label.h>
 
-extern struct static_key_false sev_enable_key;
 static inline bool sev_key_active(void)
 {
 	return static_branch_unlikely(&sev_enable_key);
diff --git a/arch/x86/kernel/pci-swiotlb.c b/arch/x86/kernel/pci-swiotlb.c
index c2cfa5e7c152..020e13749758 100644
--- a/arch/x86/kernel/pci-swiotlb.c
+++ b/arch/x86/kernel/pci-swiotlb.c
@@ -49,7 +49,7 @@ int __init pci_swiotlb_detect_4gb(void)
 	 * buffers are allocated and used for devices that do not support
 	 * the addressing range required for the encryption mask.
 	 */
-	if (sme_active())
+	if (sme_active() || is_tdx_guest())
 		swiotlb = 1;
 
 	return swiotlb;
diff --git a/arch/x86/kernel/tdx.c b/arch/x86/kernel/tdx.c
index 44dd12c693d0..6b07e7b4a69c 100644
--- a/arch/x86/kernel/tdx.c
+++ b/arch/x86/kernel/tdx.c
@@ -8,6 +8,7 @@
 #include
 #include
 #include		/* force_sig_fault() */
+#include <linux/swiotlb.h>
 
 #include
 
@@ -470,6 +471,8 @@ void __init tdx_early_init(void)
 
 	legacy_pic = &null_legacy_pic;
 
+	swiotlb_force = SWIOTLB_FORCE;
+
 	cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "tdg:cpu_hotplug",
 			  NULL, tdg_cpu_offline_prepare);
 
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 6f713c6a32b2..761a98904aa2 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -409,48 +409,3 @@ void __init mem_encrypt_free_decrypted_mem(void)
 
 	free_init_pages("unused decrypted", vaddr, vaddr_end);
 }
-
-static void print_mem_encrypt_feature_info(void)
-{
-	pr_info("AMD Memory Encryption Features active:");
-
-	/* Secure Memory Encryption */
-	if (sme_active()) {
-		/*
-		 * SME is mutually exclusive with any of the SEV
-		 * features below.
-		 */
-		pr_cont(" SME\n");
-		return;
-	}
-
-	/* Secure Encrypted Virtualization */
-	if (sev_active())
-		pr_cont(" SEV");
-
-	/* Encrypted Register State */
-	if (sev_es_active())
-		pr_cont(" SEV-ES");
-
-	pr_cont("\n");
-}
-
-/* Architecture __weak replacement functions */
-void __init mem_encrypt_init(void)
-{
-	if (!sme_me_mask)
-		return;
-
-	/* Call into SWIOTLB to update the SWIOTLB DMA buffers */
-	swiotlb_update_mem_attributes();
-
-	/*
-	 * With SEV, we need to unroll the rep string I/O instructions,
-	 * but SEV-ES supports them through the #VC handler.
-	 */
-	if (sev_active() && !sev_es_active())
-		static_branch_enable(&sev_enable_key);
-
-	print_mem_encrypt_feature_info();
-}
-
diff --git a/arch/x86/mm/mem_encrypt_common.c b/arch/x86/mm/mem_encrypt_common.c
index b6d93b0c5dcf..625c15fa92f9 100644
--- a/arch/x86/mm/mem_encrypt_common.c
+++ b/arch/x86/mm/mem_encrypt_common.c
@@ -10,6 +10,7 @@
 #include
 #include
 #include
+#include <linux/swiotlb.h>
 
 /* Override for DMA direct allocation check - ARCH_HAS_FORCE_DMA_UNENCRYPTED */
 bool force_dma_unencrypted(struct device *dev)
@@ -36,3 +37,49 @@
 
 	return false;
 }
+
+static void print_amd_mem_encrypt_feature_info(void)
+{
+	pr_info("AMD Memory Encryption Features active:");
+
+	/* Secure Memory Encryption */
+	if (sme_active()) {
+		/*
+		 * SME is mutually exclusive with any of the SEV
+		 * features below.
+		 */
+		pr_cont(" SME\n");
+		return;
+	}
+
+	/* Secure Encrypted Virtualization */
+	if (sev_active())
+		pr_cont(" SEV");
+
+	/* Encrypted Register State */
+	if (sev_es_active())
+		pr_cont(" SEV-ES");
+
+	pr_cont("\n");
+}
+
+/* Architecture __weak replacement functions */
+void __init mem_encrypt_init(void)
+{
+	if (!sme_me_mask && !is_tdx_guest())
+		return;
+
+	/* Call into SWIOTLB to update the SWIOTLB DMA buffers */
+	swiotlb_update_mem_attributes();
+
+	/*
+	 * With SEV, we need to unroll the rep string I/O instructions,
+	 * but SEV-ES supports them through the #VC handler.
+	 */
+	if (sev_active() && !sev_es_active())
+		static_branch_enable(&sev_enable_key);
+
+	/* sme_me_mask !=0 means SME or SEV */
+	if (sme_me_mask)
+		print_amd_mem_encrypt_feature_info();
+}
-- 
2.25.1
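
Illustrative only, not part of the patch: from a driver's point of view, the effect of forcing SWIOTLB in a TD guest is that every streaming DMA mapping is transparently bounced through the shared (decrypted) SWIOTLB pool instead of handing a private guest page to the device. A minimal sketch of that flow follows; the helper name and the 4 KiB buffer size are made up for illustration, and the DMA API calls are the standard ones from <linux/dma-mapping.h>.

#include <linux/dma-mapping.h>
#include <linux/errno.h>
#include <linux/slab.h>

static int example_rx_one_buffer(struct device *dev)
{
	void *buf;
	dma_addr_t dma;

	/* Ordinary kernel allocation: in a TD guest this is private memory. */
	buf = kmalloc(4096, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;

	/*
	 * With swiotlb_force = SWIOTLB_FORCE, the mapping is redirected to a
	 * slot in the shared bounce-buffer pool; the returned DMA address is
	 * what the device actually writes into, never the private page.
	 */
	dma = dma_map_single(dev, buf, 4096, DMA_FROM_DEVICE);
	if (dma_mapping_error(dev, dma)) {
		kfree(buf);
		return -EIO;
	}

	/* ... device DMAs its data into the bounce slot at 'dma' ... */

	/*
	 * Unmapping copies the data from the shared bounce slot back into the
	 * private buffer, so the driver code does not change for TDX.
	 */
	dma_unmap_single(dev, dma, 4096, DMA_FROM_DEVICE);

	kfree(buf);
	return 0;
}

The only TDX-specific work is converting the SWIOTLB pool itself to shared memory, which is what the swiotlb_update_mem_attributes() call in the common mem_encrypt_init() above takes care of.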