From: Steven Price <steven.price@arm.com>
To: kvm@vger.kernel.org, kvmarm@lists.linux.dev
Cc: Steven Price, Catalin Marinas, Marc Zyngier, Will Deacon, James Morse,
 Oliver Upton, Suzuki K Poulose, Zenghui Yu, linux-arm-kernel@lists.infradead.org,
 linux-kernel@vger.kernel.org, Joey Gouly, Alexandru Elisei, Christoffer Dall,
 Fuad Tabba, linux-coco@lists.linux.dev, Ganapatrao Kulkarni
Subject: [PATCH v2 20/43] arm64: RME: Allow populating initial contents
Date: Fri, 12 Apr 2024 09:42:46 +0100
Message-Id: <20240412084309.1733783-21-steven.price@arm.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20240412084309.1733783-1-steven.price@arm.com>
References: <20240412084056.1733704-1-steven.price@arm.com>
 <20240412084309.1733783-1-steven.price@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The VMM needs to populate the realm with some data before starting
(e.g. a kernel and initrd). This is measured by the RMM and used as
part of the attestation later on.

Co-developed-by: Suzuki K Poulose
Signed-off-by: Suzuki K Poulose
Signed-off-by: Steven Price
---
 arch/arm64/kvm/rme.c | 234 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 234 insertions(+)

diff --git a/arch/arm64/kvm/rme.c b/arch/arm64/kvm/rme.c
index 0a3f823b2446..4aab507f896e 100644
--- a/arch/arm64/kvm/rme.c
+++ b/arch/arm64/kvm/rme.c
@@ -4,6 +4,7 @@
  */
 
 #include <linux/kvm_host.h>
+#include <linux/hugetlb.h>
 
 #include <asm/kvm_emulate.h>
 #include <asm/kvm_mmu.h>
@@ -547,6 +548,227 @@ void kvm_realm_unmap_range(struct kvm *kvm, unsigned long ipa, u64 size,
 		realm_fold_rtt_range(realm, ipa, end);
 }
 
+static int realm_create_protected_data_page(struct realm *realm,
+					    unsigned long ipa,
+					    struct page *dst_page,
+					    struct page *src_page,
+					    unsigned long flags)
+{
+	phys_addr_t dst_phys, src_phys;
+	int ret;
+
+	dst_phys = page_to_phys(dst_page);
+	src_phys = page_to_phys(src_page);
+
+	if (rmi_granule_delegate(dst_phys))
+		return -ENXIO;
+
+	ret = rmi_data_create(virt_to_phys(realm->rd), dst_phys, ipa, src_phys,
+			      flags);
+
+	if (RMI_RETURN_STATUS(ret) == RMI_ERROR_RTT) {
+		/* Create missing RTTs and retry */
+		int level = RMI_RETURN_INDEX(ret);
+
+		ret = realm_create_rtt_levels(realm, ipa, level,
+					      RME_RTT_MAX_LEVEL, NULL);
+		if (ret)
+			goto err;
+
+		ret = rmi_data_create(virt_to_phys(realm->rd), dst_phys, ipa,
+				      src_phys, flags);
+	}
+
+	if (ret)
+		goto err;
+
+	return 0;
+
+err:
+	if (WARN_ON(rmi_granule_undelegate(dst_phys))) {
+		/* Page can't be returned to NS world so is lost */
+		get_page(dst_page);
+	}
+	return -ENXIO;
+}
+
+static int fold_rtt(struct realm *realm, unsigned long addr, int level)
+{
+	phys_addr_t rtt_addr;
+	int ret;
+
+	ret = realm_rtt_fold(realm, addr, level + 1, &rtt_addr);
+	if (ret)
+		return ret;
+
+	free_delegated_page(realm, rtt_addr);
+
+	return 0;
+}
+
+static int populate_par_region(struct kvm *kvm,
+			       phys_addr_t ipa_base,
+			       phys_addr_t ipa_end,
+			       u32 flags)
+{
+	struct realm *realm = &kvm->arch.realm;
+	struct kvm_memory_slot *memslot;
+	gfn_t base_gfn, end_gfn;
+	int idx;
+	phys_addr_t ipa;
+	int ret = 0;
+	struct page *tmp_page;
+	unsigned long data_flags = 0;
+
+	base_gfn = gpa_to_gfn(ipa_base);
+	end_gfn = gpa_to_gfn(ipa_end);
+
+	if (flags & KVM_ARM_RME_POPULATE_FLAGS_MEASURE)
+		data_flags = RMI_MEASURE_CONTENT;
+
+	idx = srcu_read_lock(&kvm->srcu);
+	memslot = gfn_to_memslot(kvm, base_gfn);
+	if (!memslot) {
+		ret = -EFAULT;
+		goto out;
+	}
+
+	/* We require the region to be contained within a single memslot */
+	if (memslot->base_gfn + memslot->npages < end_gfn) {
+		ret = -EINVAL;
+		goto out;
+	}
+
+	tmp_page = alloc_page(GFP_KERNEL);
+	if (!tmp_page) {
+		ret = -ENOMEM;
+		goto out;
+	}
+
+	mmap_read_lock(current->mm);
+
+	ipa = ipa_base;
+	while (ipa < ipa_end) {
+		struct vm_area_struct *vma;
+		unsigned long map_size;
+		unsigned int vma_shift;
+		unsigned long offset;
+		unsigned long hva;
+		struct page *page;
+		kvm_pfn_t pfn;
+		int level;
+
+		hva = gfn_to_hva_memslot(memslot, gpa_to_gfn(ipa));
+		vma = vma_lookup(current->mm, hva);
+		if (!vma) {
+			ret = -EFAULT;
+			break;
+		}
+
+		if (is_vm_hugetlb_page(vma))
+			vma_shift = huge_page_shift(hstate_vma(vma));
+		else
+			vma_shift = PAGE_SHIFT;
+
+		map_size = 1 << vma_shift;
+
+		/*
+		 * FIXME: This causes over mapping, but there's no good
+		 * solution here with the ABI as it stands
+		 */
+		ipa = ALIGN_DOWN(ipa, map_size);
+
+		switch (map_size) {
+		case RME_L2_BLOCK_SIZE:
+			level = 2;
+			break;
+		case PAGE_SIZE:
+			level = 3;
+			break;
+		default:
+			WARN_ONCE(1, "Unsupported vma_shift %d", vma_shift);
+			ret = -EFAULT;
+			break;
+		}
+
+		pfn = gfn_to_pfn_memslot(memslot, gpa_to_gfn(ipa));
+
+		if (is_error_pfn(pfn)) {
+			ret = -EFAULT;
+			break;
+		}
+
+		if (level < RME_RTT_MAX_LEVEL) {
+			/*
+			 * A temporary RTT is needed during the map, precreate
+			 * it, however if there is an error (e.g. missing
+			 * parent tables) this will be handled in the
+			 * realm_create_protected_data_page() call.
+			 */
+			realm_create_rtt_levels(realm, ipa, level,
+						RME_RTT_MAX_LEVEL, NULL);
+		}
+
+		page = pfn_to_page(pfn);
+
+		for (offset = 0; offset < map_size && !ret;
+		     offset += PAGE_SIZE, page++) {
+			phys_addr_t page_ipa = ipa + offset;
+
+			ret = realm_create_protected_data_page(realm, page_ipa,
+							       page, tmp_page,
+							       data_flags);
+		}
+		if (ret)
+			goto err_release_pfn;
+
+		if (level == 2) {
+			ret = fold_rtt(realm, ipa, level);
+			if (ret)
+				goto err_release_pfn;
+		}
+
+		ipa += map_size;
+		kvm_release_pfn_dirty(pfn);
+err_release_pfn:
+		if (ret) {
+			kvm_release_pfn_clean(pfn);
+			break;
+		}
+	}
+
+	mmap_read_unlock(current->mm);
+	__free_page(tmp_page);
+
+out:
+	srcu_read_unlock(&kvm->srcu, idx);
+	return ret;
+}
+
+static int kvm_populate_realm(struct kvm *kvm,
+			      struct kvm_cap_arm_rme_populate_realm_args *args)
+{
+	phys_addr_t ipa_base, ipa_end;
+
+	if (kvm_realm_state(kvm) != REALM_STATE_NEW)
+		return -EINVAL;
+
+	if (!IS_ALIGNED(args->populate_ipa_base, PAGE_SIZE) ||
+	    !IS_ALIGNED(args->populate_ipa_size, PAGE_SIZE))
+		return -EINVAL;
+
+	if (args->flags & ~RMI_MEASURE_CONTENT)
+		return -EINVAL;
+
+	ipa_base = args->populate_ipa_base;
+	ipa_end = ipa_base + args->populate_ipa_size;
+
+	if (ipa_end < ipa_base)
+		return -EINVAL;
+
+	return populate_par_region(kvm, ipa_base, ipa_end, args->flags);
+}
+
 static int find_map_level(struct realm *realm,
 			  unsigned long start,
 			  unsigned long end)
@@ -808,6 +1030,18 @@ int kvm_realm_enable_cap(struct kvm *kvm, struct kvm_enable_cap *cap)
 		r = kvm_init_ipa_range_realm(kvm, &args);
 		break;
 	}
+	case KVM_CAP_ARM_RME_POPULATE_REALM: {
+		struct kvm_cap_arm_rme_populate_realm_args args;
+		void __user *argp = u64_to_user_ptr(cap->args[1]);
+
+		if (copy_from_user(&args, argp, sizeof(args))) {
+			r = -EFAULT;
+			break;
+		}
+
+		r = kvm_populate_realm(kvm, &args);
+		break;
+	}
 	default:
 		r = -EINVAL;
 		break;
-- 
2.34.1
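
[Editorial note, not part of the patch] For context, below is a minimal
VMM-side sketch of how this interface might be driven from userspace. It
assumes the series' convention that RME commands are issued via
KVM_ENABLE_CAP on the VM fd with KVM_CAP_ARM_RME, the sub-command in
args[0] and a pointer to the argument struct in args[1], and that
struct kvm_cap_arm_rme_populate_realm_args and
KVM_ARM_RME_POPULATE_FLAGS_MEASURE come from the UAPI headers added
earlier in the series; the field names below are taken from this patch.

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static int realm_populate(int vm_fd, uint64_t ipa_base, uint64_t size)
{
	struct kvm_cap_arm_rme_populate_realm_args args = {
		/* Both must be page aligned and fit within one memslot */
		.populate_ipa_base = ipa_base,
		.populate_ipa_size = size,
		/* Ask the RMM to measure the contents for attestation */
		.flags = KVM_ARM_RME_POPULATE_FLAGS_MEASURE,
	};
	struct kvm_enable_cap cap = {
		.cap = KVM_CAP_ARM_RME,
		.args[0] = KVM_CAP_ARM_RME_POPULATE_REALM,
		.args[1] = (uint64_t)(uintptr_t)&args,
	};

	/*
	 * Call after copying the kernel/initrd into the memslot-backed
	 * memory and before activating the realm (the realm must still
	 * be in the NEW state).
	 */
	return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
}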