Date: Tue, 30 May 2023 11:55:14 -0700
From: Alison Schofield
To: Alexander Shishkin
Cc: linux-kernel@vger.kernel.org, x86@kernel.org, Thomas Gleixner,
    Ingo Molnar, Borislav Petkov, Dave Hansen, "H. Peter Anvin",
    Andy Lutomirski, Peter Zijlstra
Subject: Re: [PATCH] x86/sev: Move sev_setup_arch() to mem_encrypt.c
References: <20230530121728.28854-1-alexander.shishkin@linux.intel.com>
In-Reply-To: <20230530121728.28854-1-alexander.shishkin@linux.intel.com>

On Tue, May 30, 2023 at 03:17:28PM +0300, Alexander Shishkin wrote:
> Since commit 4d96f9109109b ("x86/sev: Replace occurrences of
> sev_active() with cc_platform_has()"), the SWIOTLB bounce buffer size
> adjustment and restricted virtio memory setting also inadvertently apply
> to TDX, which just happens to be what we want.

Hi Alexander,

Can you offer more context on how this inadvertently applies?
One bit below...

>
> To reflect this, move the corresponding code to generic mem_encrypt.c.
> No functional changes intended.
>
> Signed-off-by: Alexander Shishkin
> ---
>  arch/x86/include/asm/mem_encrypt.h | 11 ++++++++--
>  arch/x86/kernel/setup.c            |  2 +-
>  arch/x86/mm/mem_encrypt.c          | 34 ++++++++++++++++++++++++++++++
>  arch/x86/mm/mem_encrypt_amd.c      | 34 ------------------------------
>  4 files changed, 44 insertions(+), 37 deletions(-)
>
> diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
> index b7126701574c..4283063c1e1c 100644
> --- a/arch/x86/include/asm/mem_encrypt.h
> +++ b/arch/x86/include/asm/mem_encrypt.h
> @@ -37,7 +37,6 @@ void __init sme_map_bootdata(char *real_mode_data);
>  void __init sme_unmap_bootdata(char *real_mode_data);
>
>  void __init sme_early_init(void);
> -void __init sev_setup_arch(void);
>
>  void __init sme_encrypt_kernel(struct boot_params *bp);
>  void __init sme_enable(struct boot_params *bp);
> @@ -67,7 +66,6 @@ static inline void __init sme_map_bootdata(char *real_mode_data) { }
>  static inline void __init sme_unmap_bootdata(char *real_mode_data) { }
>
>  static inline void __init sme_early_init(void) { }
> -static inline void __init sev_setup_arch(void) { }
>
>  static inline void __init sme_encrypt_kernel(struct boot_params *bp) { }
>  static inline void __init sme_enable(struct boot_params *bp) { }
> @@ -92,6 +90,15 @@ void __init mem_encrypt_init(void);
>
>  void add_encrypt_protection_map(void);
>
> +#ifdef CONFIG_X86_MEM_ENCRYPT
> +
> +void __init mem_encrypt_setup_arch(void);
> +
> +#else /* !CONFIG_X86_MEM_ENCRYPT */
> +
> +static inline void __init mem_encrypt_setup_arch(void) { }
> +
> +#endif /* CONFIG_X86_MEM_ENCRYPT */
>  /*
>   * The __sme_pa() and __sme_pa_nodebug() macros are meant for use when
>   * writing to or comparing values from the cr3 register.
Having the

> diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
> index 16babff771bd..e2aa1d5b37a9 100644
> --- a/arch/x86/kernel/setup.c
> +++ b/arch/x86/kernel/setup.c
> @@ -1121,7 +1121,7 @@ void __init setup_arch(char **cmdline_p)
>  	 * Needs to run after memblock setup because it needs the physical
>  	 * memory size.
>  	 */
> -	sev_setup_arch();
> +	mem_encrypt_setup_arch();
>
>  	efi_fake_memmap();
>  	efi_find_mirror();
> diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
> index 9f27e14e185f..c290c55b632b 100644
> --- a/arch/x86/mm/mem_encrypt.c
> +++ b/arch/x86/mm/mem_encrypt.c
> @@ -12,6 +12,7 @@
>  #include
>  #include
>  #include
> +#include

It looks like this #include can be removed from mem_encrypt_amd.c

Alison

>
>  /* Override for DMA direct allocation check - ARCH_HAS_FORCE_DMA_UNENCRYPTED */
>  bool force_dma_unencrypted(struct device *dev)
> @@ -86,3 +87,36 @@ void __init mem_encrypt_init(void)
>
>  	print_mem_encrypt_feature_info();
>  }
> +
> +void __init mem_encrypt_setup_arch(void)
> +{
> +	phys_addr_t total_mem = memblock_phys_mem_size();
> +	unsigned long size;
> +
> +	if (!cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT))
> +		return;
> +
> +	/*
> +	 * For SEV and TDX, all DMA has to occur via shared/unencrypted pages.
> +	 * Kernel uses SWIOTLB to make this happen without changing device
> +	 * drivers. However, depending on the workload being run, the
> +	 * default 64MB of SWIOTLB may not be enough and SWIOTLB may
> +	 * run out of buffers for DMA, resulting in I/O errors and/or
> +	 * performance degradation especially with high I/O workloads.
> +	 *
> +	 * Adjust the default size of SWIOTLB using a percentage of guest
> +	 * memory for SWIOTLB buffers. Also, as the SWIOTLB bounce buffer
> +	 * memory is allocated from low memory, ensure that the adjusted size
> +	 * is within the limits of low available memory.
> +	 *
> +	 * The percentage of guest memory used here for SWIOTLB buffers
> +	 * is more of an approximation of the static adjustment which
> +	 * 64MB for <1G, and ~128M to 256M for 1G-to-4G, i.e., the 6%
> +	 */
> +	size = total_mem * 6 / 100;
> +	size = clamp_val(size, IO_TLB_DEFAULT_SIZE, SZ_1G);
> +	swiotlb_adjust_size(size);
> +
> +	/* Set restricted memory access for virtio. */
> +	virtio_set_mem_acc_cb(virtio_require_restricted_mem_acc);
> +}
> diff --git a/arch/x86/mm/mem_encrypt_amd.c b/arch/x86/mm/mem_encrypt_amd.c
> index e0b51c09109f..3b95e6fdf160 100644
> --- a/arch/x86/mm/mem_encrypt_amd.c
> +++ b/arch/x86/mm/mem_encrypt_amd.c
> @@ -215,40 +215,6 @@ void __init sme_map_bootdata(char *real_mode_data)
>  	__sme_early_map_unmap_mem(__va(cmdline_paddr), COMMAND_LINE_SIZE, true);
>  }
>
> -void __init sev_setup_arch(void)
> -{
> -	phys_addr_t total_mem = memblock_phys_mem_size();
> -	unsigned long size;
> -
> -	if (!cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT))
> -		return;
> -
> -	/*
> -	 * For SEV, all DMA has to occur via shared/unencrypted pages.
> -	 * SEV uses SWIOTLB to make this happen without changing device
> -	 * drivers. However, depending on the workload being run, the
> -	 * default 64MB of SWIOTLB may not be enough and SWIOTLB may
> -	 * run out of buffers for DMA, resulting in I/O errors and/or
> -	 * performance degradation especially with high I/O workloads.
> -	 *
> -	 * Adjust the default size of SWIOTLB for SEV guests using
> -	 * a percentage of guest memory for SWIOTLB buffers.
> -	 * Also, as the SWIOTLB bounce buffer memory is allocated
> -	 * from low memory, ensure that the adjusted size is within
> -	 * the limits of low available memory.
> -	 *
> -	 * The percentage of guest memory used here for SWIOTLB buffers
> -	 * is more of an approximation of the static adjustment which
> -	 * 64MB for <1G, and ~128M to 256M for 1G-to-4G, i.e., the 6%
> -	 */
> -	size = total_mem * 6 / 100;
> -	size = clamp_val(size, IO_TLB_DEFAULT_SIZE, SZ_1G);
> -	swiotlb_adjust_size(size);
> -
> -	/* Set restricted memory access for virtio. */
> -	virtio_set_mem_acc_cb(virtio_require_restricted_mem_acc);
> -}
> -
>  static unsigned long pg_level_to_pfn(int level, pte_t *kpte, pgprot_t *ret_prot)
>  {
>  	unsigned long pfn = 0;
> --
> 2.39.2
>