Date: Wed, 20 Apr 2022 14:42:52 -0700
In-Reply-To: <20220420214317.3303360-1-kaleshsingh@google.com>
Message-Id: <20220420214317.3303360-2-kaleshsingh@google.com>
Mime-Version: 1.0
References: <20220420214317.3303360-1-kaleshsingh@google.com>
X-Mailer: git-send-email 2.36.0.rc0.470.gd361397f0d-goog
Subject: [PATCH v8 1/6] KVM: arm64: Introduce hyp_alloc_private_va_range()
From: Kalesh Singh <kaleshsingh@google.com>
Cc: will@kernel.org, maz@kernel.org, qperret@google.com, tabba@google.com,
    surenb@google.com, kernel-team@android.com, Kalesh Singh, James Morse,
    Alexandru Elisei, Suzuki K Poulose, Catalin Marinas, Mark Rutland,
    Andrew Jones, Ard Biesheuvel, Changbin Du, Nick Desaulniers,
    Masahiro Yamada, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org
Content-Type: text/plain; charset="UTF-8"

hyp_alloc_private_va_range() can be used to reserve private VA ranges
in the nVHE hypervisor. Allocations are aligned based on the order of
the requested size.

This will be used to implement stack guard pages for the KVM nVHE
hypervisor (nVHE Hyp mode / not pKVM), in a subsequent patch in the
series.

Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Tested-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Fuad Tabba <tabba@google.com>
---

Changes in v8:
  - Remove !base check in hyp_alloc_private_va_range(), per Marc
  - PAGE_ALIGN the size in __create_hyp_private_mapping(), per Marc

Changes in v7:
  - Add Fuad's Reviewed-by and Tested-by tags.

Changes in v6:
  - Update kernel-doc for hyp_alloc_private_va_range() and add return
    description, per Stephen
  - Update hyp_alloc_private_va_range() to return an int error code,
    per Stephen
  - Replace IS_ERR() checks with IS_ERR_VALUE() check, per Stephen
  - Clean up goto, per Stephen

Changes in v5:
  - Align private allocations based on the order of their size, per Marc

Changes in v4:
  - Handle null ptr in hyp_alloc_private_va_range() and replace
    IS_ERR_OR_NULL checks in callers with IS_ERR checks, per Fuad
  - Fix kernel-doc comments format, per Fuad

Changes in v3:
  - Handle null ptr in IS_ERR_OR_NULL checks, per Mark

 arch/arm64/include/asm/kvm_mmu.h |  1 +
 arch/arm64/kvm/mmu.c             | 64 +++++++++++++++++++++-----------
 2 files changed, 44 insertions(+), 21 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 74735a864eee..a50cbb5ba402 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -154,6 +154,7 @@ static __always_inline unsigned long __kern_hyp_va(unsigned long v)
 int kvm_share_hyp(void *from, void *to);
 void kvm_unshare_hyp(void *from, void *to);
 int create_hyp_mappings(void *from, void *to, enum kvm_pgtable_prot prot);
+int hyp_alloc_private_va_range(size_t size, unsigned long *haddr);
 int create_hyp_io_mappings(phys_addr_t phys_addr, size_t size,
                            void __iomem **kaddr,
                            void __iomem **haddr);
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 53ae2c0640bc..7de1e02ebfd1 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -457,23 +457,22 @@ int create_hyp_mappings(void *from, void *to, enum kvm_pgtable_prot prot)
         return 0;
 }
 
-static int __create_hyp_private_mapping(phys_addr_t phys_addr, size_t size,
-                                        unsigned long *haddr,
-                                        enum kvm_pgtable_prot prot)
+
+/**
+ * hyp_alloc_private_va_range - Allocates a private VA range.
+ * @size:	The size of the VA range to reserve.
+ * @haddr:	The hypervisor virtual start address of the allocation.
+ *
+ * The private virtual address (VA) range is allocated below io_map_base
+ * and aligned based on the order of @size.
+ *
+ * Return: 0 on success or negative error code on failure.
+ */
+int hyp_alloc_private_va_range(size_t size, unsigned long *haddr)
 {
         unsigned long base;
         int ret = 0;
 
-        if (!kvm_host_owns_hyp_mappings()) {
-                base = kvm_call_hyp_nvhe(__pkvm_create_private_mapping,
-                                         phys_addr, size, prot);
-                if (IS_ERR_OR_NULL((void *)base))
-                        return PTR_ERR((void *)base);
-                *haddr = base;
-
-                return 0;
-        }
-
         mutex_lock(&kvm_hyp_pgd_mutex);
 
         /*
@@ -484,8 +483,10 @@ static int __create_hyp_private_mapping(phys_addr_t phys_addr, size_t size,
          *
          * The allocated size is always a multiple of PAGE_SIZE.
          */
-        size = PAGE_ALIGN(size + offset_in_page(phys_addr));
-        base = io_map_base - size;
+        base = io_map_base - PAGE_ALIGN(size);
+
+        /* Align the allocation based on the order of its size */
+        base = ALIGN_DOWN(base, PAGE_SIZE << get_order(size));
 
         /*
          * Verify that BIT(VA_BITS - 1) hasn't been flipped by
@@ -495,19 +496,40 @@ static int __create_hyp_private_mapping(phys_addr_t phys_addr, size_t size,
         if ((base ^ io_map_base) & BIT(VA_BITS - 1))
                 ret = -ENOMEM;
         else
-                io_map_base = base;
+                *haddr = io_map_base = base;
 
         mutex_unlock(&kvm_hyp_pgd_mutex);
 
+        return ret;
+}
+
+static int __create_hyp_private_mapping(phys_addr_t phys_addr, size_t size,
+                                        unsigned long *haddr,
+                                        enum kvm_pgtable_prot prot)
+{
+        unsigned long addr;
+        int ret = 0;
+
+        if (!kvm_host_owns_hyp_mappings()) {
+                addr = kvm_call_hyp_nvhe(__pkvm_create_private_mapping,
+                                         phys_addr, size, prot);
+                if (IS_ERR_VALUE(addr))
+                        return addr;
+                *haddr = addr;
+
+                return 0;
+        }
+
+        size = PAGE_ALIGN(size + offset_in_page(phys_addr));
+        ret = hyp_alloc_private_va_range(size, &addr);
         if (ret)
-                goto out;
+                return ret;
 
-        ret = __create_hyp_mappings(base, size, phys_addr, prot);
+        ret = __create_hyp_mappings(addr, size, phys_addr, prot);
         if (ret)
-                goto out;
+                return ret;
 
-        *haddr = base + offset_in_page(phys_addr);
-out:
+        *haddr = addr + offset_in_page(phys_addr);
         return ret;
 }
 
-- 
2.36.0.rc0.470.gd361397f0d-goog
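
[Editor's illustration, not part of the patch: a minimal standalone sketch of
the order-based alignment the commit message describes, assuming 4 KiB pages.
PAGE_SHIFT, PAGE_ALIGN(), ALIGN_DOWN() and get_order() below are simplified
userspace stand-ins for the kernel definitions, and io_map_base is an
arbitrary example value.]

#include <stdio.h>

#define PAGE_SHIFT      12UL
#define PAGE_SIZE       (1UL << PAGE_SHIFT)

#define PAGE_ALIGN(x)           (((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))
#define ALIGN_DOWN(x, a)        ((x) & ~((a) - 1UL))

/* Smallest order such that (PAGE_SIZE << order) >= size, for size > 0 */
static unsigned long get_order(unsigned long size)
{
        unsigned long order = 0;

        while ((PAGE_SIZE << order) < size)
                order++;
        return order;
}

int main(void)
{
        unsigned long io_map_base = 0xffff800008800000UL;  /* made-up example value */
        unsigned long size = 3 * PAGE_SIZE;                 /* e.g. a three-page mapping */
        unsigned long base;

        /* Mirrors the two steps in hyp_alloc_private_va_range() */
        base = io_map_base - PAGE_ALIGN(size);
        base = ALIGN_DOWN(base, PAGE_SIZE << get_order(size));

        /* A three-page request has order 2, so base lands on a 16 KiB boundary */
        printf("size=%lu order=%lu base=%#lx\n", size, get_order(size), base);
        return 0;
}

The point of aligning to the order of the size is that a later guard-page
scheme can rely on the allocation's start being naturally aligned to its
rounded-up power-of-two footprint.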