From: Kalesh Singh
Date: Wed, 2 Mar 2022 09:31:03 -0800
Subject: Re: [PATCH v4 3/8] KVM: arm64: Add guard pages for KVM nVHE hypervisor stack
To: Marc Zyngier
Cc: Will Deacon, Quentin Perret, Fuad Tabba, Suren Baghdasaryan,
    "Cc: Android Kernel", James Morse, Alexandru Elisei, Suzuki K Poulose,
    Catalin Marinas, Mark Rutland, Mark Brown, Masami Hiramatsu,
    Peter Collingbourne, "Madhavan T. Venkataraman", Andrew Walbran,
    Andrew Scull, "moderated list:ARM64 PORT (AARCH64 ARCHITECTURE)",
    kvmarm, LKML
In-Reply-To: <87tucg6b97.wl-maz@kernel.org>
References: <20220225033548.1912117-1-kaleshsingh@google.com>
    <20220225033548.1912117-4-kaleshsingh@google.com>
    <87tucg6b97.wl-maz@kernel.org>

On Tue, Mar 1, 2022 at 11:53 PM Marc Zyngier wrote:
>
> On Fri, 25 Feb 2022 03:34:48 +0000,
> Kalesh Singh wrote:
> >
> > Maps the stack pages in the flexible private VA range and allocates
> > guard pages below the stack as unbacked VA space. The stack is aligned
> > to twice its size to aid overflow detection (implemented in a subsequent
> > patch in the series).
> >
> > Signed-off-by: Kalesh Singh
> > ---
> >
> > Changes in v4:
> >   - Replace IS_ERR_OR_NULL check with IS_ERR check now that
> >     hyp_alloc_private_va_range() returns an error for null
> >     pointer, per Fuad
> >   - Format comments to < 80 cols, per Fuad
> >
> > Changes in v3:
> >   - Handle null ptr in IS_ERR_OR_NULL checks, per Mark
> >
> >  arch/arm64/include/asm/kvm_asm.h |  1 +
> >  arch/arm64/kvm/arm.c             | 32 +++++++++++++++++++++++++++++---
> >  2 files changed, 30 insertions(+), 3 deletions(-)
> >
> > diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
> > index d5b0386ef765..2e277f2ed671 100644
> > --- a/arch/arm64/include/asm/kvm_asm.h
> > +++ b/arch/arm64/include/asm/kvm_asm.h
> > @@ -169,6 +169,7 @@ struct kvm_nvhe_init_params {
> >  	unsigned long tcr_el2;
> >  	unsigned long tpidr_el2;
> >  	unsigned long stack_hyp_va;
> > +	unsigned long stack_pa;
> >  	phys_addr_t pgd_pa;
> >  	unsigned long hcr_el2;
> >  	unsigned long vttbr;
> > diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> > index ecc5958e27fe..0a83c0e7f838 100644
> > --- a/arch/arm64/kvm/arm.c
> > +++ b/arch/arm64/kvm/arm.c
> > @@ -1541,7 +1541,6 @@ static void cpu_prepare_hyp_mode(int cpu)
> >  	tcr |= (idmap_t0sz & GENMASK(TCR_TxSZ_WIDTH - 1, 0)) << TCR_T0SZ_OFFSET;
> >  	params->tcr_el2 = tcr;
> >
> > -	params->stack_hyp_va = kern_hyp_va(per_cpu(kvm_arm_hyp_stack_page, cpu) + PAGE_SIZE);
> >  	params->pgd_pa = kvm_mmu_get_httbr();
> >  	if (is_protected_kvm_enabled())
> >  		params->hcr_el2 = HCR_HOST_NVHE_PROTECTED_FLAGS;
> > @@ -1990,14 +1989,41 @@ static int init_hyp_mode(void)
> >  	 * Map the Hyp stack pages
> >  	 */
> >  	for_each_possible_cpu(cpu) {
> > +		struct kvm_nvhe_init_params *params = per_cpu_ptr_nvhe_sym(kvm_init_params, cpu);
> >  		char *stack_page = (char *)per_cpu(kvm_arm_hyp_stack_page, cpu);
> > -		err = create_hyp_mappings(stack_page, stack_page + PAGE_SIZE,
> > -					  PAGE_HYP);
> > +		unsigned long stack_hyp_va, guard_hyp_va;
> >
> > +		/*
> > +		 * Private mappings are allocated downwards from io_map_base
> > +		 * so allocate the stack first then the guard page.
> > +		 *
> > +		 * The stack is aligned to twice its size to facilitate overflow
> > +		 * detection.
> > +		 */
> > +		err = __create_hyp_private_mapping(__pa(stack_page), PAGE_SIZE,
> > +						   PAGE_SIZE * 2, &stack_hyp_va, PAGE_HYP);
>
> Right, I guess that's where my earlier ask breaks, as you want an
> alignment that is *larger* than the allocation.
>
> >  		if (err) {
> >  			kvm_err("Cannot map hyp stack\n");
> >  			goto out_err;
> >  		}
> > +
> > +		/* Allocate unbacked private VA range for stack guard page */
> > +		guard_hyp_va = hyp_alloc_private_va_range(PAGE_SIZE, PAGE_SIZE);
>
> Huh. You are implicitly relying on the VA allocator handing you an
> address contiguous with the previous mapping. That's... brave. I'd
> rather you allocate the VA space upfront with the correct alignment
> and then map the single page where it should be in the VA region.
>
> That'd be a lot less fragile.

Agreed. I'll fix it in the next version.
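
Roughly something like the following untested sketch, assuming
__create_hyp_mappings() (currently static in mmu.c) can be exposed
for the single-page mapping:

	/*
	 * Reserve the private VA range for the guard page and the
	 * stack together, aligned to twice the stack size, instead
	 * of relying on two back-to-back allocations happening to
	 * be contiguous.
	 */
	unsigned long hyp_va = hyp_alloc_private_va_range(PAGE_SIZE * 2,
							  PAGE_SIZE * 2);
	if (IS_ERR((void *)hyp_va)) {
		err = PTR_ERR((void *)hyp_va);
		kvm_err("Cannot allocate hyp stack VA range\n");
		goto out_err;
	}

	/*
	 * Map only the stack page at the top of the range and leave
	 * the lower page unbacked, so an overflow faults on the guard
	 * page. With the 2 * PAGE_SIZE alignment, every valid stack
	 * address has the PAGE_SHIFT bit set and every guard-page
	 * address has it clear, which the overflow check added later
	 * in the series can test cheaply.
	 */
	err = __create_hyp_mappings(hyp_va + PAGE_SIZE, PAGE_SIZE,
				    __pa(stack_page), PAGE_HYP);
	if (err) {
		kvm_err("Cannot map hyp stack\n");
		goto out_err;
	}

	/* The stack grows downwards from the end of the range. */
	params->stack_hyp_va = hyp_va + PAGE_SIZE * 2;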
Thanks,
Kalesh

> > +		if (IS_ERR((void *)guard_hyp_va)) {
> > +			err = PTR_ERR((void *)guard_hyp_va);
> > +			kvm_err("Cannot allocate hyp stack guard page\n");
> > +			goto out_err;
> > +		}
> > +
> > +		/*
> > +		 * Save the stack PA in nvhe_init_params. This will be needed
> > +		 * to recreate the stack mapping in protected nVHE mode.
> > +		 * __hyp_pa() won't do the right thing there, since the stack
> > +		 * has been mapped in the flexible private VA space.
> > +		 */
> > +		params->stack_pa = __pa(stack_page) + PAGE_SIZE;
> > +
> > +		params->stack_hyp_va = stack_hyp_va + PAGE_SIZE;
> >  	}
> >
> >  	for_each_possible_cpu(cpu) {
>
> Thanks,
>
> M.
>
> --
> Without deviation from the norm, progress is not possible.