From: Kuppuswamy Sathyanarayanan
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org
Cc: "H. Peter Anvin", Kuppuswamy Sathyanarayanan, "Kirill A. Shutemov",
	Tony Luck, Andi Kleen, Kai Huang, Wander Lairson Costa, Isaku Yamahata,
	marcelo.cerri@canonical.com, tim.gardner@canonical.com,
	khalid.elmously@canonical.com, philip.cox@canonical.com,
	linux-kernel@vger.kernel.org
Subject: [PATCH v6 4/5] x86/mm: Add noalias variants of set_memory_*crypted() functions
Date: Thu, 12 May 2022 15:19:51 -0700
Message-Id: <20220512221952.3647598-5-sathyanarayanan.kuppuswamy@linux.intel.com>
In-Reply-To: <20220512221952.3647598-1-sathyanarayanan.kuppuswamy@linux.intel.com>
References: <20220512221952.3647598-1-sathyanarayanan.kuppuswamy@linux.intel.com>

In a TDX guest, when creating a buffer that is shared with the VMM, vmap()
can be used to remap the memory and create the shared mapping from that
alias, which avoids breaking the direct mapping. However, both
set_memory_encrypted() and set_memory_decrypted() currently modify the page
attributes of all aliased mappings (which includes the direct mapping), so
the direct mapping would still be affected. To handle use cases like the one
above, add noalias variants of the set_memory_*crypted() functions that
change only the given mapping.

Signed-off-by: Kuppuswamy Sathyanarayanan

---
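A brief usage sketch of the intended caller pattern, for illustration only
(not part of this patch; alloc_shared_buffer() is a made-up helper). A TDX
guest driver can vmap() the pages and then convert only that alias to
shared, leaving the direct mapping of the underlying pages untouched:

	/*
	 * Illustrative sketch only. Assumes <linux/vmalloc.h> and
	 * <asm/set_memory.h>; alloc_shared_buffer() is hypothetical.
	 */
	static void *alloc_shared_buffer(struct page **pages, int numpages)
	{
		/* Create a separate kernel alias instead of touching the direct map. */
		void *vaddr = vmap(pages, numpages, VM_MAP, PAGE_KERNEL);

		if (!vaddr)
			return NULL;

		/*
		 * Flip only this vmap() alias to shared/decrypted; the direct
		 * mapping of the underlying pages is left as-is.
		 */
		if (set_memory_decrypted_noalias((unsigned long)vaddr, numpages)) {
			vunmap(vaddr);
			return NULL;
		}

		return vaddr;
	}
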
 arch/x86/include/asm/set_memory.h |  2 ++
 arch/x86/mm/pat/set_memory.c      | 26 ++++++++++++++++++++------
 2 files changed, 22 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/set_memory.h b/arch/x86/include/asm/set_memory.h
index 78ca53512486..0e5fc2b818be 100644
--- a/arch/x86/include/asm/set_memory.h
+++ b/arch/x86/include/asm/set_memory.h
@@ -46,7 +46,9 @@ int set_memory_wb(unsigned long addr, int numpages);
 int set_memory_np(unsigned long addr, int numpages);
 int set_memory_4k(unsigned long addr, int numpages);
 int set_memory_encrypted(unsigned long addr, int numpages);
+int set_memory_encrypted_noalias(unsigned long addr, int numpages);
 int set_memory_decrypted(unsigned long addr, int numpages);
+int set_memory_decrypted_noalias(unsigned long addr, int numpages);
 int set_memory_np_noalias(unsigned long addr, int numpages);
 int set_memory_nonglobal(unsigned long addr, int numpages);
 int set_memory_global(unsigned long addr, int numpages);
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index abf5ed76e4b7..ef54178855a1 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -1987,7 +1987,8 @@ int set_memory_global(unsigned long addr, int numpages)
  * __set_memory_enc_pgtable() is used for the hypervisors that get
  * informed about "encryption" status via page tables.
  */
-static int __set_memory_enc_pgtable(unsigned long addr, int numpages, bool enc)
+static int __set_memory_enc_pgtable(unsigned long addr, int numpages,
+				    bool enc, int checkalias)
 {
 	pgprot_t empty = __pgprot(0);
 	struct cpa_data cpa;
@@ -2015,7 +2016,7 @@ static int __set_memory_enc_pgtable(unsigned long addr, int numpages, bool enc)
 	/* Notify hypervisor that we are about to set/clr encryption attribute. */
 	x86_platform.guest.enc_status_change_prepare(addr, numpages, enc);
 
-	ret = __change_page_attr_set_clr(&cpa, 1);
+	ret = __change_page_attr_set_clr(&cpa, checkalias);
 
 	/*
 	 * After changing the encryption attribute, we need to flush TLBs again
@@ -2035,29 +2036,42 @@ static int __set_memory_enc_pgtable(unsigned long addr, int numpages, bool enc)
 	return ret;
 }
 
-static int __set_memory_enc_dec(unsigned long addr, int numpages, bool enc)
+static int __set_memory_enc_dec(unsigned long addr, int numpages, bool enc,
+				int checkalias)
 {
 	if (hv_is_isolation_supported())
 		return hv_set_mem_host_visibility(addr, numpages, !enc);
 
 	if (cc_platform_has(CC_ATTR_MEM_ENCRYPT))
-		return __set_memory_enc_pgtable(addr, numpages, enc);
+		return __set_memory_enc_pgtable(addr, numpages, enc, checkalias);
 
 	return 0;
 }
 
 int set_memory_encrypted(unsigned long addr, int numpages)
 {
-	return __set_memory_enc_dec(addr, numpages, true);
+	return __set_memory_enc_dec(addr, numpages, true, 1);
 }
 EXPORT_SYMBOL_GPL(set_memory_encrypted);
 
 int set_memory_decrypted(unsigned long addr, int numpages)
 {
-	return __set_memory_enc_dec(addr, numpages, false);
+	return __set_memory_enc_dec(addr, numpages, false, 1);
 }
 EXPORT_SYMBOL_GPL(set_memory_decrypted);
 
+int set_memory_encrypted_noalias(unsigned long addr, int numpages)
+{
+	return __set_memory_enc_dec(addr, numpages, true, 0);
+}
+EXPORT_SYMBOL_GPL(set_memory_encrypted_noalias);
+
+int set_memory_decrypted_noalias(unsigned long addr, int numpages)
+{
+	return __set_memory_enc_dec(addr, numpages, false, 0);
+}
+EXPORT_SYMBOL_GPL(set_memory_decrypted_noalias);
+
 int set_pages_uc(struct page *page, int numpages)
 {
 	unsigned long addr = (unsigned long)page_address(page);
-- 
2.25.1