From: Quentin Perret
To: Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose,
	Catalin Marinas, Will Deacon
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
	linux-kernel@vger.kernel.org, kernel-team@android.com
Subject: [PATCH v3 06/15] KVM: arm64: Implement kvm_pgtable_hyp_unmap() at EL2
Date: Wed, 1 Dec 2021 17:04:00 +0000
Message-Id: <20211201170411.1561936-7-qperret@google.com>
In-Reply-To: <20211201170411.1561936-1-qperret@google.com>
References: <20211201170411.1561936-1-qperret@google.com>

From: Will Deacon

Implement kvm_pgtable_hyp_unmap() which can be used to remove hypervisor
stage-1 mappings at EL2.

Signed-off-by: Will Deacon
Signed-off-by: Quentin Perret
---
 arch/arm64/include/asm/kvm_pgtable.h | 21 ++++++++++
 arch/arm64/kvm/hyp/pgtable.c         | 63 ++++++++++++++++++++++++++++
 2 files changed, 84 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index 027783829584..9d076f36401d 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -251,6 +251,27 @@ void kvm_pgtable_hyp_destroy(struct kvm_pgtable *pgt);
 int kvm_pgtable_hyp_map(struct kvm_pgtable *pgt, u64 addr, u64 size, u64 phys,
			enum kvm_pgtable_prot prot);
 
+/**
+ * kvm_pgtable_hyp_unmap() - Remove a mapping from a hypervisor stage-1 page-table.
+ * @pgt:	Page-table structure initialised by kvm_pgtable_hyp_init().
+ * @addr:	Virtual address from which to remove the mapping.
+ * @size:	Size of the mapping.
+ *
+ * The offset of @addr within a page is ignored, @size is rounded-up to
+ * the next page boundary and @phys is rounded-down to the previous page
+ * boundary.
+ *
+ * TLB invalidation is performed for each page-table entry cleared during the
+ * unmapping operation and the reference count for the page-table page
+ * containing the cleared entry is decremented, with unreferenced pages being
+ * freed. The unmapping operation will stop early if it encounters either an
+ * invalid page-table entry or a valid block mapping which maps beyond the range
+ * being unmapped.
+ *
+ * Return: Number of bytes unmapped, which may be 0.
+ */
+u64 kvm_pgtable_hyp_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size);
+
 /**
  * kvm_get_vtcr() - Helper to construct VTCR_EL2
  * @mmfr0:	Sanitized value of SYS_ID_AA64MMFR0_EL1 register.
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 768a58835153..6ad4cb2d6947 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -463,6 +463,69 @@ int kvm_pgtable_hyp_map(struct kvm_pgtable *pgt, u64 addr, u64 size, u64 phys,
 	return ret;
 }
 
+struct hyp_unmap_data {
+	u64				unmapped;
+	struct kvm_pgtable_mm_ops	*mm_ops;
+};
+
+static int hyp_unmap_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
+			    enum kvm_pgtable_walk_flags flag, void * const arg)
+{
+	kvm_pte_t pte = *ptep, *childp = NULL;
+	u64 granule = kvm_granule_size(level);
+	struct hyp_unmap_data *data = arg;
+	struct kvm_pgtable_mm_ops *mm_ops = data->mm_ops;
+
+	if (!kvm_pte_valid(pte))
+		return -EINVAL;
+
+	if (kvm_pte_table(pte, level)) {
+		childp = kvm_pte_follow(pte, mm_ops);
+
+		if (mm_ops->page_count(childp) != 1)
+			return 0;
+
+		kvm_clear_pte(ptep);
+		dsb(ishst);
+		__tlbi_level(vae2is, __TLBI_VADDR(addr, 0), level);
+	} else {
+		if (end - addr < granule)
+			return -EINVAL;
+
+		kvm_clear_pte(ptep);
+		dsb(ishst);
+		__tlbi_level(vale2is, __TLBI_VADDR(addr, 0), level);
+		data->unmapped += granule;
+	}
+
+	dsb(ish);
+	isb();
+	mm_ops->put_page(ptep);
+
+	if (childp)
+		mm_ops->put_page(childp);
+
+	return 0;
+}
+
+u64 kvm_pgtable_hyp_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size)
+{
+	struct hyp_unmap_data unmap_data = {
+		.mm_ops	= pgt->mm_ops,
+	};
+	struct kvm_pgtable_walker walker = {
+		.cb	= hyp_unmap_walker,
+		.arg	= &unmap_data,
+		.flags	= KVM_PGTABLE_WALK_LEAF | KVM_PGTABLE_WALK_TABLE_POST,
+	};
+
+	if (!pgt->mm_ops->page_count)
+		return 0;
+
+	kvm_pgtable_walk(pgt, addr, size, &walker);
+	return unmap_data.unmapped;
+}
+
 int kvm_pgtable_hyp_init(struct kvm_pgtable *pgt, u32 va_bits,
			 struct kvm_pgtable_mm_ops *mm_ops)
 {
-- 
2.34.0.rc2.393.gf8c9666880-goog
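
[Editor's illustration, not part of the patch above: a hypothetical EL2 caller
could pair the new unmap helper with the existing kvm_pgtable_hyp_map(). The
wrapper function below, the use of the pkvm_pgtable instance, and the error
handling are assumptions made for this sketch; only the kvm_pgtable_hyp_*()
calls, PAGE_SIZE and PAGE_HYP come from the kernel tree.]

	/*
	 * Hypothetical sketch: map one page at an EL2 VA and tear the
	 * mapping down again. 'va' and 'phys' are assumed page-aligned.
	 */
	static int hyp_map_then_unmap_page(u64 va, u64 phys)
	{
		u64 unmapped;
		int ret;

		ret = kvm_pgtable_hyp_map(&pkvm_pgtable, va, PAGE_SIZE,
					  phys, PAGE_HYP);
		if (ret)
			return ret;

		/* ... use the mapping ... */

		/*
		 * kvm_pgtable_hyp_unmap() returns the number of bytes it
		 * actually unmapped and may stop early (e.g. on an invalid
		 * entry), so check that the whole page went away.
		 */
		unmapped = kvm_pgtable_hyp_unmap(&pkvm_pgtable, va, PAGE_SIZE);
		return unmapped == PAGE_SIZE ? 0 : -EINVAL;
	}

[Design note: the walker is installed with KVM_PGTABLE_WALK_LEAF |
KVM_PGTABLE_WALK_TABLE_POST, so leaf entries are cleared and counted first and
each table page is then revisited post-order, where it is freed via put_page()
once its reference count shows no remaining mappings. If the backing mm_ops do
not implement page_count(), kvm_pgtable_hyp_unmap() bails out and reports 0
bytes unmapped.]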