Date: Sat, 4 Mar 2023 14:32:13 +0200
From: Zhi Wang
To: Steven Price
Cc: kvm@vger.kernel.org, kvmarm@lists.linux.dev, Catalin Marinas,
 Marc Zyngier, Will Deacon, James Morse, Oliver Upton,
 Suzuki K Poulose, Zenghui Yu, linux-arm-kernel@lists.infradead.org,
 linux-kernel@vger.kernel.org, Joey Gouly, Alexandru Elisei,
 Christoffer Dall, Fuad Tabba, linux-coco@lists.linux.dev
Subject: Re: [RFC PATCH 09/28] arm64: RME: RTT handling
Message-ID: <20230304143213.00004823@gmail.com>
In-Reply-To: <10be86c3-96ac-399f-850d-2ceaace22e24@arm.com>
References: <20230127112248.136810-1-suzuki.poulose@arm.com>
 <20230127112932.38045-1-steven.price@arm.com>
 <20230127112932.38045-10-steven.price@arm.com>
 <20230213194421.00005f7b@gmail.com>
 <10be86c3-96ac-399f-850d-2ceaace22e24@arm.com>

On Fri, 3 Mar 2023 14:04:56 +0000
Steven Price wrote:

> On 13/02/2023 17:44, Zhi Wang wrote:
> > On Fri, 27 Jan 2023 11:29:13 +0000
> > Steven Price wrote:
> >
> >> The RMM owns the stage 2 page tables for a realm, and KVM must request
> >> that the RMM creates/destroys entries as necessary. The physical pages
> >> to store the page tables are delegated to the realm as required, and can
> >> be undelegated when no longer used.
> >> 
> > 
> > This is only a brief introduction to RTT handling. As this patch is mostly
> > about RTT teardown, it would be better to add more of an introduction to
> > this patch. Also, maybe refine the title to reflect what this patch is
> > actually doing.
> 
> You've a definite point that this patch is mostly about RTT teardown.
> Technically it also adds the RTT creation path (realm_rtt_create) -
> hence the generic patch title.

But realm_rtt_create() seems to be used only in realm_tear_down_rtt_range(),
which makes me wonder where the real RTT creation path is.

> But I'll definitely expand the commit message to mention the complexity
> of tear down which is the bulk of the patch.

That would also be a good place to explain more about the RTT itself.
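For example, the delegate-as-required rule from the commit message could be
summarized with something like the sketch below. To be clear, this is not
code from the patch: the helper name is made up, and it assumes the
rmi_granule_delegate() wrapper introduced earlier in this series.

/*
 * Illustration only: a normal-world page must be delegated to the
 * RMM before it can back an RTT, and it stays unusable by the normal
 * world until it is successfully undelegated again.
 */
static phys_addr_t alloc_rtt_granule(void)
{
	struct page *page = alloc_page(GFP_KERNEL);

	if (!page)
		return PHYS_ADDR_MAX;

	if (rmi_granule_delegate(page_to_phys(page))) {
		/* Delegation failed: the page is still ours to free. */
		__free_page(page);
		return PHYS_ADDR_MAX;
	}

	/* Owned by the RMM from here until rmi_granule_undelegate(). */
	return page_to_phys(page);
}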
> > 
> >> Signed-off-by: Steven Price 
> >> ---
> >>  arch/arm64/include/asm/kvm_rme.h |  19 +++++
> >>  arch/arm64/kvm/mmu.c             |   7 +-
> >>  arch/arm64/kvm/rme.c             | 139 +++++++++++++++++++++++++++++++
> >>  3 files changed, 162 insertions(+), 3 deletions(-)
> >> 
> >> diff --git a/arch/arm64/include/asm/kvm_rme.h b/arch/arm64/include/asm/kvm_rme.h
> >> index a6318af3ed11..eea5118dfa8a 100644
> >> --- a/arch/arm64/include/asm/kvm_rme.h
> >> +++ b/arch/arm64/include/asm/kvm_rme.h
> >> @@ -35,5 +35,24 @@ u32 kvm_realm_ipa_limit(void);
> >>  int kvm_realm_enable_cap(struct kvm *kvm, struct kvm_enable_cap *cap);
> >>  int kvm_init_realm_vm(struct kvm *kvm);
> >>  void kvm_destroy_realm(struct kvm *kvm);
> >> +void kvm_realm_destroy_rtts(struct realm *realm, u32 ia_bits, u32 start_level);
> >> +
> >> +#define RME_RTT_BLOCK_LEVEL	2
> >> +#define RME_RTT_MAX_LEVEL	3
> >> +
> >> +#define RME_PAGE_SHIFT		12
> >> +#define RME_PAGE_SIZE		BIT(RME_PAGE_SHIFT)
> >> +/* See ARM64_HW_PGTABLE_LEVEL_SHIFT() */
> >> +#define RME_RTT_LEVEL_SHIFT(l)	\
> >> +	((RME_PAGE_SHIFT - 3) * (4 - (l)) + 3)
> >> +#define RME_L2_BLOCK_SIZE	BIT(RME_RTT_LEVEL_SHIFT(2))
> >> +
> >> +static inline unsigned long rme_rtt_level_mapsize(int level)
> >> +{
> >> +	if (WARN_ON(level > RME_RTT_MAX_LEVEL))
> >> +		return RME_PAGE_SIZE;
> >> +
> >> +	return (1UL << RME_RTT_LEVEL_SHIFT(level));
> >> +}
> >> 
> >>  #endif
> >> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> >> index 22c00274884a..f29558c5dcbc 100644
> >> --- a/arch/arm64/kvm/mmu.c
> >> +++ b/arch/arm64/kvm/mmu.c
> >> @@ -834,16 +834,17 @@ void stage2_unmap_vm(struct kvm *kvm)
> >>  void kvm_free_stage2_pgd(struct kvm_s2_mmu *mmu)
> >>  {
> >>  	struct kvm *kvm = kvm_s2_mmu_to_kvm(mmu);
> >> -	struct kvm_pgtable *pgt = NULL;
> >> +	struct kvm_pgtable *pgt;
> >> 
> >>  	write_lock(&kvm->mmu_lock);
> >> +	pgt = mmu->pgt;
> >>  	if (kvm_is_realm(kvm) &&
> >>  	    kvm_realm_state(kvm) != REALM_STATE_DYING) {
> >> -		/* TODO: teardown rtts */
> >>  		write_unlock(&kvm->mmu_lock);
> >> +		kvm_realm_destroy_rtts(&kvm->arch.realm, pgt->ia_bits,
> >> +				       pgt->start_level);
> >>  		return;
> >>  	}
> >> -	pgt = mmu->pgt;
> >>  	if (pgt) {
> >>  		mmu->pgd_phys = 0;
> >>  		mmu->pgt = NULL;
> >> diff --git a/arch/arm64/kvm/rme.c b/arch/arm64/kvm/rme.c
> >> index 0c9d70e4d9e6..f7b0e5a779f8 100644
> >> --- a/arch/arm64/kvm/rme.c
> >> +++ b/arch/arm64/kvm/rme.c
> >> @@ -73,6 +73,28 @@ static int rmi_check_version(void)
> >>  	return 0;
> >>  }
> >> 
> >> +static void realm_destroy_undelegate_range(struct realm *realm,
> >> +					   unsigned long ipa,
> >> +					   unsigned long addr,
> >> +					   ssize_t size)
> >> +{
> >> +	unsigned long rd = virt_to_phys(realm->rd);
> >> +	int ret;
> >> +
> >> +	while (size > 0) {
> >> +		ret = rmi_data_destroy(rd, ipa);
> >> +		WARN_ON(ret);
> >> +		ret = rmi_granule_undelegate(addr);
> >> +
> > 
> > As the return value is not documented, what happens if a page undelegate
> > fails? Is it leaked? Some explanation is required here.
> 
> Yes - it's leaked. I'll add a comment to explain the get_page() call.
> 
> Thanks,
> 
> Steve
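Thanks, that makes sense. So the pattern is roughly the following sketch
(the helper name is invented here; only the get_page() trick is from the
patch):

static void undelegate_or_leak(phys_addr_t addr)
{
	/*
	 * If the RMM refuses to release the granule, it must never go
	 * back to the page allocator. Taking an extra reference keeps
	 * the refcount from ever reaching zero, so the page is leaked
	 * deliberately instead of being handed out while still
	 * delegated.
	 */
	if (WARN_ON(rmi_granule_undelegate(addr)))
		get_page(phys_to_page(addr));
}

A comment along those lines in realm_destroy_undelegate_range() would answer
the question for future readers.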
> >> +		if (ret)
> >> +			get_page(phys_to_page(addr));
> >> +
> >> +		addr += PAGE_SIZE;
> >> +		ipa += PAGE_SIZE;
> >> +		size -= PAGE_SIZE;
> >> +	}
> >> +}
> >> +
> >>  static unsigned long create_realm_feat_reg0(struct kvm *kvm)
> >>  {
> >>  	unsigned long ia_bits = VTCR_EL2_IPA(kvm->arch.vtcr);
> >> @@ -170,6 +192,123 @@ static int realm_create_rd(struct kvm *kvm)
> >>  	return r;
> >>  }
> >> 
> >> +static int realm_rtt_destroy(struct realm *realm, unsigned long addr,
> >> +			     int level, phys_addr_t rtt_granule)
> >> +{
> >> +	addr = ALIGN_DOWN(addr, rme_rtt_level_mapsize(level - 1));
> >> +	return rmi_rtt_destroy(rtt_granule, virt_to_phys(realm->rd), addr,
> >> +			       level);
> >> +}
> >> +
> >> +static int realm_destroy_free_rtt(struct realm *realm, unsigned long addr,
> >> +				  int level, phys_addr_t rtt_granule)
> >> +{
> >> +	if (realm_rtt_destroy(realm, addr, level, rtt_granule))
> >> +		return -ENXIO;
> >> +	if (!WARN_ON(rmi_granule_undelegate(rtt_granule)))
> >> +		put_page(phys_to_page(rtt_granule));
> >> +
> >> +	return 0;
> >> +}
> >> +
> >> +static int realm_rtt_create(struct realm *realm,
> >> +			    unsigned long addr,
> >> +			    int level,
> >> +			    phys_addr_t phys)
> >> +{
> >> +	addr = ALIGN_DOWN(addr, rme_rtt_level_mapsize(level - 1));
> >> +	return rmi_rtt_create(phys, virt_to_phys(realm->rd), addr, level);
> >> +}
> >> +
> >> +static int realm_tear_down_rtt_range(struct realm *realm, int level,
> >> +				     unsigned long start, unsigned long end)
> >> +{
> >> +	phys_addr_t rd = virt_to_phys(realm->rd);
> >> +	ssize_t map_size = rme_rtt_level_mapsize(level);
> >> +	unsigned long addr, next_addr;
> >> +	bool failed = false;
> >> +
> >> +	for (addr = start; addr < end; addr = next_addr) {
> >> +		phys_addr_t rtt_addr, tmp_rtt;
> >> +		struct rtt_entry rtt;
> >> +		unsigned long end_addr;
> >> +
> >> +		next_addr = ALIGN(addr + 1, map_size);
> >> +
> >> +		end_addr = min(next_addr, end);
> >> +
> >> +		if (rmi_rtt_read_entry(rd, ALIGN_DOWN(addr, map_size),
> >> +				       level, &rtt)) {
> >> +			failed = true;
> >> +			continue;
> >> +		}
> >> +
> >> +		rtt_addr = rmi_rtt_get_phys(&rtt);
> >> +		WARN_ON(level != rtt.walk_level);
> >> +
> >> +		switch (rtt.state) {
> >> +		case RMI_UNASSIGNED:
> >> +		case RMI_DESTROYED:
> >> +			break;
> >> +		case RMI_TABLE:
> >> +			if (realm_tear_down_rtt_range(realm, level + 1,
> >> +						      addr, end_addr)) {
> >> +				failed = true;
> >> +				break;
> >> +			}
> >> +			if (IS_ALIGNED(addr, map_size) &&
> >> +			    next_addr <= end &&
> >> +			    realm_destroy_free_rtt(realm, addr, level + 1,
> >> +						   rtt_addr))
> >> +				failed = true;
> >> +			break;
> >> +		case RMI_ASSIGNED:
> >> +			WARN_ON(!rtt_addr);
> >> +			/*
> >> +			 * If there is a block mapping, break it now, using the
> >> +			 * spare_page. We are sure to have a valid delegated
> >> +			 * page at spare_page before we enter here, otherwise
> >> +			 * WARN once, which will be followed by further
> >> +			 * warnings.
> >> +			 */
> >> +			tmp_rtt = realm->spare_page;
> >> +			if (level == 2 &&
> >> +			    !WARN_ON_ONCE(tmp_rtt == PHYS_ADDR_MAX) &&
> >> +			    realm_rtt_create(realm, addr,
> >> +					     RME_RTT_MAX_LEVEL, tmp_rtt)) {
> >> +				WARN_ON(1);
> >> +				failed = true;
> >> +				break;
> >> +			}
> >> +			realm_destroy_undelegate_range(realm, addr,
> >> +						       rtt_addr, map_size);
> >> +			/*
> >> +			 * Collapse the last level table and make the spare page
> >> +			 * reusable again.
> >> +			 */
> >> +			if (level == 2 &&
> >> +			    realm_rtt_destroy(realm, addr, RME_RTT_MAX_LEVEL,
> >> +					      tmp_rtt))
> >> +				failed = true;
> >> +			break;
> >> +		case RMI_VALID_NS:
> >> +			WARN_ON(rmi_rtt_unmap_unprotected(rd, addr, level));
> >> +			break;
> >> +		default:
> >> +			WARN_ON(1);
> >> +			failed = true;
> >> +			break;
> >> +		}
> >> +	}
> >> +
> >> +	return failed ? -EINVAL : 0;
> >> +}
> >> +
> >> +void kvm_realm_destroy_rtts(struct realm *realm, u32 ia_bits, u32 start_level)
> >> +{
> >> +	realm_tear_down_rtt_range(realm, start_level, 0, (1UL << ia_bits));
> >> +}
> >> +
> >>  /* Protects access to rme_vmid_bitmap */
> >>  static DEFINE_SPINLOCK(rme_vmid_lock);
> >>  static unsigned long *rme_vmid_bitmap;
> > 
> 
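One last thought: the RMI_ASSIGNED handling at level 2 is the trickiest part
of the walk, so a condensed restatement might be worth adding as a comment.
As I read it, the sequence is roughly the sketch below (the helper name is
made up; the calls are the ones from the patch):

static int realm_break_l2_block(struct realm *realm, unsigned long addr,
				phys_addr_t block_phys, size_t map_size)
{
	phys_addr_t tmp_rtt = realm->spare_page;

	/*
	 * 1. Split the level-2 block entry into a level-3 table backed
	 *    by the pre-delegated spare page.
	 */
	if (realm_rtt_create(realm, addr, RME_RTT_MAX_LEVEL, tmp_rtt))
		return -ENXIO;

	/* 2. Destroy and undelegate every data granule in the range. */
	realm_destroy_undelegate_range(realm, addr, block_phys, map_size);

	/*
	 * 3. Fold the now-empty level-3 table so spare_page can be
	 *    reused for the next block entry.
	 */
	return realm_rtt_destroy(realm, addr, RME_RTT_MAX_LEVEL, tmp_rtt);
}

If that matches your intent, spelling it out in the commit message would go a
long way.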