From: Steven Price
To: kvm@vger.kernel.org, kvmarm@lists.linux.dev
Cc: Suzuki K Poulose, Catalin Marinas, Marc Zyngier, Will Deacon,
 James Morse, Oliver Upton, Zenghui Yu, linux-arm-kernel@lists.infradead.org,
 linux-kernel@vger.kernel.org, Joey Gouly, Alexandru Elisei,
 Christoffer Dall, Fuad Tabba, linux-coco@lists.linux.dev,
 Ganapatrao Kulkarni, Steven Price
Subject: [PATCH v2 09/14] arm64: Enable memory encrypt for Realms
Date: Fri, 12 Apr 2024 09:42:08 +0100
Message-Id: <20240412084213.1733764-10-steven.price@arm.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20240412084213.1733764-1-steven.price@arm.com>
References: <20240412084056.1733704-1-steven.price@arm.com>
 <20240412084213.1733764-1-steven.price@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Suzuki K Poulose

Use the memory encryption APIs to trigger an RSI call that requests a
transition between protected memory and shared memory (or vice versa),
and update the kernel's linear map of the modified pages to flip the
top bit of the IPA. This requires that block mappings are not used in
the direct map for realm guests.

Signed-off-by: Suzuki K Poulose
Co-developed-by: Steven Price
Signed-off-by: Steven Price
---
 arch/arm64/Kconfig                   |  3 ++
 arch/arm64/include/asm/mem_encrypt.h | 19 +++++++++++
 arch/arm64/kernel/rsi.c              | 12 +++++++
 arch/arm64/mm/pageattr.c             | 48 ++++++++++++++++++++++++++--
 4 files changed, 79 insertions(+), 3 deletions(-)
 create mode 100644 arch/arm64/include/asm/mem_encrypt.h

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 7b11c98b3e84..ffd4685a3029 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -20,6 +20,7 @@ config ARM64
 	select ARCH_ENABLE_SPLIT_PMD_PTLOCK if PGTABLE_LEVELS > 2
 	select ARCH_ENABLE_THP_MIGRATION if TRANSPARENT_HUGEPAGE
 	select ARCH_HAS_CACHE_LINE_SIZE
+	select ARCH_HAS_CC_PLATFORM
 	select ARCH_HAS_CURRENT_STACK_POINTER
 	select ARCH_HAS_DEBUG_VIRTUAL
 	select ARCH_HAS_DEBUG_VM_PGTABLE
@@ -40,6 +41,8 @@ config ARM64
 	select ARCH_HAS_SETUP_DMA_OPS
 	select ARCH_HAS_SET_DIRECT_MAP
 	select ARCH_HAS_SET_MEMORY
+	select ARCH_HAS_MEM_ENCRYPT
+	select ARCH_HAS_FORCE_DMA_UNENCRYPTED
 	select ARCH_STACKWALK
 	select ARCH_HAS_STRICT_KERNEL_RWX
 	select ARCH_HAS_STRICT_MODULE_RWX
diff --git a/arch/arm64/include/asm/mem_encrypt.h b/arch/arm64/include/asm/mem_encrypt.h
new file mode 100644
index 000000000000..7381f9585321
--- /dev/null
+++ b/arch/arm64/include/asm/mem_encrypt.h
@@ -0,0 +1,19 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2023 ARM Ltd.
+ */
+
+#ifndef __ASM_MEM_ENCRYPT_H
+#define __ASM_MEM_ENCRYPT_H
+
+#include
+
+/* All DMA must be to non-secure memory for now */
+static inline bool force_dma_unencrypted(struct device *dev)
+{
+	return is_realm_world();
+}
+
+int set_memory_encrypted(unsigned long addr, int numpages);
+int set_memory_decrypted(unsigned long addr, int numpages);
+#endif
diff --git a/arch/arm64/kernel/rsi.c b/arch/arm64/kernel/rsi.c
index 5c8ed3aaa35f..ba3f346e7a91 100644
--- a/arch/arm64/kernel/rsi.c
+++ b/arch/arm64/kernel/rsi.c
@@ -6,6 +6,7 @@
 #include
 #include
 #include
+#include
 #include
@@ -19,6 +20,17 @@ unsigned int phys_mask_shift = CONFIG_ARM64_PA_BITS;
 DEFINE_STATIC_KEY_FALSE_RO(rsi_present);
 EXPORT_SYMBOL(rsi_present);
 
+bool cc_platform_has(enum cc_attr attr)
+{
+	switch (attr) {
+	case CC_ATTR_MEM_ENCRYPT:
+		return is_realm_world();
+	default:
+		return false;
+	}
+}
+EXPORT_SYMBOL_GPL(cc_platform_has);
+
 static bool rsi_version_matches(void)
 {
 	unsigned long ver;
diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index 0c4e3ecf989d..229b6d9990f5 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -5,10 +5,12 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -23,14 +25,16 @@ bool rodata_full __ro_after_init = IS_ENABLED(CONFIG_RODATA_FULL_DEFAULT_ENABLED
 bool can_set_direct_map(void)
 {
 	/*
-	 * rodata_full and DEBUG_PAGEALLOC require linear map to be
-	 * mapped at page granularity, so that it is possible to
+	 * rodata_full, DEBUG_PAGEALLOC and a Realm guest all require linear
+	 * map to be mapped at page granularity, so that it is possible to
 	 * protect/unprotect single pages.
 	 *
 	 * KFENCE pool requires page-granular mapping if initialized late.
+	 *
+	 * Realms need to make pages shared/protected at page granularity.
 	 */
 	return rodata_full || debug_pagealloc_enabled() ||
-		arm64_kfence_can_set_direct_map();
+		arm64_kfence_can_set_direct_map() || is_realm_world();
 }
 
 static int change_page_range(pte_t *ptep, unsigned long addr, void *data)
@@ -41,6 +45,7 @@ static int change_page_range(pte_t *ptep, unsigned long addr, void *data)
 	pte = clear_pte_bit(pte, cdata->clear_mask);
 	pte = set_pte_bit(pte, cdata->set_mask);
 
+	/* TODO: Break before make for PROT_NS_SHARED updates */
 	__set_pte(ptep, pte);
 	return 0;
 }
@@ -192,6 +197,43 @@ int set_direct_map_default_noflush(struct page *page)
 				   PAGE_SIZE, change_page_range, &data);
 }
 
+static int __set_memory_encrypted(unsigned long addr,
+				  int numpages,
+				  bool encrypt)
+{
+	unsigned long set_prot = 0, clear_prot = 0;
+	phys_addr_t start, end;
+
+	if (!is_realm_world())
+		return 0;
+
+	WARN_ON(!__is_lm_address(addr));
+	start = __virt_to_phys(addr);
+	end = start + numpages * PAGE_SIZE;
+
+	if (encrypt) {
+		clear_prot = PROT_NS_SHARED;
+		set_memory_range_protected(start, end);
+	} else {
+		set_prot = PROT_NS_SHARED;
+		set_memory_range_shared(start, end);
+	}
+
+	return __change_memory_common(addr, PAGE_SIZE * numpages,
+				      __pgprot(set_prot),
+				      __pgprot(clear_prot));
+}
+
+int set_memory_encrypted(unsigned long addr, int numpages)
+{
+	return __set_memory_encrypted(addr, numpages, true);
+}
+
+int set_memory_decrypted(unsigned long addr, int numpages)
+{
+	return __set_memory_encrypted(addr, numpages, false);
+}
+
 #ifdef CONFIG_DEBUG_PAGEALLOC
 void __kernel_map_pages(struct page *page, int numpages, int enable)
 {
-- 
2.34.1
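
A brief usage sketch, for illustration only (none of this is part of the
patch): a driver running in a realm guest is expected to check
cc_platform_has(CC_ATTR_MEM_ENCRYPT) and use set_memory_decrypted() /
set_memory_encrypted() to move a linear-map buffer into and out of the
shared half of the IPA space before handing it to the host. The
realm_share_page()/realm_unshare_page() helpers below are invented names,
and the header list assumes the usual <linux/set_memory.h> and
<linux/cc_platform.h> entry points for these APIs.

#include <linux/bug.h>
#include <linux/cc_platform.h>
#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/set_memory.h>

/*
 * Hypothetical helpers, not part of this patch: make one linear-map page
 * visible to the host (shared/decrypted) and later pull it back into
 * protected (encrypted) memory. The page must come from the linear map,
 * matching the WARN_ON(!__is_lm_address(addr)) check in pageattr.c above.
 */
static struct page *realm_share_page(void)
{
	struct page *page = alloc_page(GFP_KERNEL);

	if (!page)
		return NULL;

	if (cc_platform_has(CC_ATTR_MEM_ENCRYPT) &&
	    set_memory_decrypted((unsigned long)page_address(page), 1)) {
		__free_page(page);
		return NULL;
	}

	return page;
}

static void realm_unshare_page(struct page *page)
{
	/* Return the page to the protected IPA space before freeing it */
	if (cc_platform_has(CC_ATTR_MEM_ENCRYPT))
		WARN_ON(set_memory_encrypted((unsigned long)page_address(page), 1));

	__free_page(page);
}

Note that a vmalloc() buffer would not satisfy the __is_lm_address()
check, so anything shared this way has to come from the linear map (page
or slab allocators).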