Date: Fri, 19 Mar 2021 10:01:46 +0000
In-Reply-To: <20210319100146.1149909-1-qperret@google.com>
Message-Id: <20210319100146.1149909-39-qperret@google.com>
MIME-Version: 1.0
References: <20210319100146.1149909-1-qperret@google.com>
X-Mailer: git-send-email 2.31.0.rc2.261.g7f71774620-goog
Subject: [PATCH v6 38/38] KVM: arm64: Protect the .hyp sections from the host
From: Quentin Perret <qperret@google.com>
To: catalin.marinas@arm.com, will@kernel.org, maz@kernel.org,
	james.morse@arm.com, julien.thierry.kdev@gmail.com,
	suzuki.poulose@arm.com
Cc: android-kvm@google.com, seanjc@google.com, mate.toth-pal@arm.com,
	linux-kernel@vger.kernel.org, robh+dt@kernel.org,
	linux-arm-kernel@lists.infradead.org, kernel-team@android.com,
	kvmarm@lists.cs.columbia.edu, tabba@google.com, ardb@kernel.org,
	mark.rutland@arm.com, dbrazdil@google.com, qperret@google.com
Content-Type: text/plain; charset="UTF-8"
X-Mailing-List: linux-kernel@vger.kernel.org

When KVM runs in nVHE protected mode, use the host stage 2 to unmap the
hypervisor sections by marking them as owned by the hypervisor itself.
The long-term goal is to ensure the EL2 code can remain robust
regardless of the host's state, so this starts by making sure the host
cannot e.g.
write to the .hyp sections directly.

Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Quentin Perret <qperret@google.com>
---
 arch/arm64/include/asm/kvm_asm.h              |  1 +
 arch/arm64/kvm/arm.c                          | 46 +++++++++++++++++++
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |  2 +
 arch/arm64/kvm/hyp/nvhe/hyp-main.c            |  9 ++++
 arch/arm64/kvm/hyp/nvhe/mem_protect.c         | 33 +++++++++++++
 5 files changed, 91 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 4149283b4cd1..cf8df032b9c3 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -62,6 +62,7 @@
 #define __KVM_HOST_SMCCC_FUNC___pkvm_create_private_mapping	17
 #define __KVM_HOST_SMCCC_FUNC___pkvm_cpu_set_vector		18
 #define __KVM_HOST_SMCCC_FUNC___pkvm_prot_finalize		19
+#define __KVM_HOST_SMCCC_FUNC___pkvm_mark_hyp			20

 #ifndef __ASSEMBLY__
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index d237c378e6fb..368159021dee 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1899,11 +1899,57 @@ void _kvm_host_prot_finalize(void *discard)
 	WARN_ON(kvm_call_hyp_nvhe(__pkvm_prot_finalize));
 }

+static inline int pkvm_mark_hyp(phys_addr_t start, phys_addr_t end)
+{
+	return kvm_call_hyp_nvhe(__pkvm_mark_hyp, start, end);
+}
+
+#define pkvm_mark_hyp_section(__section)		\
+	pkvm_mark_hyp(__pa_symbol(__section##_start),	\
+			__pa_symbol(__section##_end))
+
 static int finalize_hyp_mode(void)
 {
+	int cpu, ret;
+
 	if (!is_protected_kvm_enabled())
 		return 0;

+	ret = pkvm_mark_hyp_section(__hyp_idmap_text);
+	if (ret)
+		return ret;
+
+	ret = pkvm_mark_hyp_section(__hyp_text);
+	if (ret)
+		return ret;
+
+	ret = pkvm_mark_hyp_section(__hyp_rodata);
+	if (ret)
+		return ret;
+
+	ret = pkvm_mark_hyp_section(__hyp_bss);
+	if (ret)
+		return ret;
+
+	ret = pkvm_mark_hyp(hyp_mem_base, hyp_mem_base + hyp_mem_size);
+	if (ret)
+		return ret;
+
+	for_each_possible_cpu(cpu) {
+		phys_addr_t start = virt_to_phys((void *)kvm_arm_hyp_percpu_base[cpu]);
+		phys_addr_t end = start + (PAGE_SIZE << nvhe_percpu_order());
+
+		ret = pkvm_mark_hyp(start, end);
+		if (ret)
+			return ret;
+
+		start = virt_to_phys((void *)per_cpu(kvm_arm_hyp_stack_page, cpu));
+		end = start + PAGE_SIZE;
+		ret = pkvm_mark_hyp(start, end);
+		if (ret)
+			return ret;
+	}
+
 	/*
 	 * Flip the static key upfront as that may no longer be possible
 	 * once the host stage 2 is installed.
diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index d293cb328cc4..42d81ec739fa 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -21,6 +21,8 @@ struct host_kvm {
 extern struct host_kvm host_kvm;

 int __pkvm_prot_finalize(void);
+int __pkvm_mark_hyp(phys_addr_t start, phys_addr_t end);
+
 int kvm_host_prepare_stage2(void *mem_pgt_pool, void *dev_pgt_pool);
 void handle_host_mem_abort(struct kvm_cpu_context *host_ctxt);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 69163f2cbb63..b4eaa7ef13e0 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -156,6 +156,14 @@ static void handle___pkvm_prot_finalize(struct kvm_cpu_context *host_ctxt)
 {
 	cpu_reg(host_ctxt, 1) = __pkvm_prot_finalize();
 }
+
+static void handle___pkvm_mark_hyp(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(phys_addr_t, start, host_ctxt, 1);
+	DECLARE_REG(phys_addr_t, end, host_ctxt, 2);
+
+	cpu_reg(host_ctxt, 1) = __pkvm_mark_hyp(start, end);
+}

 typedef void (*hcall_t)(struct kvm_cpu_context *);

 #define HANDLE_FUNC(x)	[__KVM_HOST_SMCCC_FUNC_##x] = (hcall_t)handle_##x
@@ -180,6 +188,7 @@ static const hcall_t host_hcall[] = {
 	HANDLE_FUNC(__pkvm_create_mappings),
 	HANDLE_FUNC(__pkvm_create_private_mapping),
 	HANDLE_FUNC(__pkvm_prot_finalize),
+	HANDLE_FUNC(__pkvm_mark_hyp),
 };

 static void handle_host_hcall(struct kvm_cpu_context *host_ctxt)
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 77b48c47344d..808e2471091b 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -27,6 +27,8 @@ struct host_kvm host_kvm;
 struct hyp_pool host_s2_mem;
 struct hyp_pool host_s2_dev;

+static const u8 pkvm_hyp_id = 1;
+
 static void *host_s2_zalloc_pages_exact(size_t size)
 {
 	return hyp_alloc_pages(&host_s2_mem, get_order(size));
@@ -182,6 +184,18 @@ static bool find_mem_range(phys_addr_t addr, struct kvm_mem_range *range)
 	return false;
 }

+static bool range_is_memory(u64 start, u64 end)
+{
+	struct kvm_mem_range r1, r2;
+
+	if (!find_mem_range(start, &r1) || !find_mem_range(end, &r2))
+		return false;
+	if (r1.start != r2.start)
+		return false;
+
+	return true;
+}
+
 static inline int __host_stage2_idmap(u64 start, u64 end,
				      enum kvm_pgtable_prot prot,
				      struct hyp_pool *pool)
@@ -229,6 +243,25 @@ static int host_stage2_idmap(u64 addr)
 	return ret;
 }

+int __pkvm_mark_hyp(phys_addr_t start, phys_addr_t end)
+{
+	int ret;
+
+	/*
+	 * host_stage2_unmap_dev_all() currently relies on MMIO mappings being
+	 * non-persistent, so don't allow changing page ownership in MMIO range.
+	 */
+	if (!range_is_memory(start, end))
+		return -EINVAL;
+
+	hyp_spin_lock(&host_kvm.lock);
+	ret = kvm_pgtable_stage2_set_owner(&host_kvm.pgt, start, end - start,
+					   &host_s2_mem, pkvm_hyp_id);
+	hyp_spin_unlock(&host_kvm.lock);
+
+	return ret != -EAGAIN ? ret : 0;
+}
+
 void handle_host_mem_abort(struct kvm_cpu_context *host_ctxt)
 {
 	struct kvm_vcpu_fault_info fault;
--
2.31.0.rc2.261.g7f71774620-goog