Subject: Re: [PATCH v5 3/5] arm64/kvm: context-switch ptrauth register
To: James Morse
Cc: linux-arm-kernel@lists.infradead.org, Marc Zyngier, Catalin Marinas, Will Deacon, Kristina Martsenko, kvmarm@lists.cs.columbia.edu, Ramana Radhakrishnan, Dave Martin, linux-kernel@vger.kernel.org
References: <1548658727-14271-1-git-send-email-amit.kachhap@arm.com> <1548658727-14271-4-git-send-email-amit.kachhap@arm.com>
From: Amit Daniel Kachhap
Message-ID: <6982dfea-8dad-f293-8193-39eb0ca789c3@arm.com>
Date: Thu, 14 Feb 2019 15:46:53 +0530
Hi James,

On 1/31/19 9:55 PM, James Morse wrote:
> Hi Amit,
>
> On 28/01/2019 06:58, Amit Daniel Kachhap wrote:
>> When pointer authentication is supported, a guest may wish to use it.
>> This patch adds the necessary KVM infrastructure for this to work, with
>> a semi-lazy context switch of the pointer auth state.
>>
>> Pointer authentication feature is only enabled when VHE is built
>> into the kernel and present into CPU implementation so only VHE code
>
> ~s/into/in the/?
>
>> paths are modified.
>>
>> When we schedule a vcpu, we disable guest usage of pointer
>> authentication instructions and accesses to the keys. While these are
>> disabled, we avoid context-switching the keys. When we trap the guest
>> trying to use pointer authentication functionality, we change to eagerly
>> context-switching the keys, and enable the feature. The next time the
>> vcpu is scheduled out/in, we start again.
>
>> Pointer authentication consists of address authentication and generic
>> authentication, and CPUs in a system might have varied support for
>> either. Where support for either feature is not uniform, it is hidden
>> from guests via ID register emulation, as a result of the cpufeature
>> framework in the host.
>
>
>> Unfortunately, address authentication and generic authentication cannot
>> be trapped separately, as the architecture provides a single EL2 trap
>> covering both. If we wish to expose one without the other, we cannot
>> prevent a (badly-written) guest from intermittently using a feature
>> which is not uniformly supported (when scheduled on a physical CPU which
>> supports the relevant feature). When the guest is scheduled on a
>> physical CPU lacking the feature, these attempts will result in an UNDEF
>> being taken by the guest.
>
> This won't be fun. Can't KVM check that both are supported on all CPUs to avoid
> this? ...
The above message is confusing, as both checks are actually present. I
will update it.

>
>
>> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
>> index dfcfba7..e1bf2a5 100644
>> --- a/arch/arm64/include/asm/cpufeature.h
>> +++ b/arch/arm64/include/asm/cpufeature.h
>> @@ -612,6 +612,11 @@ static inline bool system_supports_generic_auth(void)
>>  		cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH_IMP_DEF));
>>  }
>>
>> +static inline bool kvm_supports_ptrauth(void)
>> +{
>> +	return system_supports_address_auth() && system_supports_generic_auth();
>> +}
>
> ... oh you do check. Could you cover this in the commit message? (to avoid an
> UNDEF being taken by the guest we ... )
>
> cpufeature.h is a strange place to put this, there are no other kvm symbols in
> there. But there are users of system_supports_foo() in kvm_host.h.

ok, will check.

>
>
>> diff --git a/arch/arm64/kvm/hyp/ptrauth-sr.c b/arch/arm64/kvm/hyp/ptrauth-sr.c
>> new file mode 100644
>> index 0000000..0576c01
>> --- /dev/null
>> +++ b/arch/arm64/kvm/hyp/ptrauth-sr.c
>> @@ -0,0 +1,44 @@
>> +// SPDX-License-Identifier: GPL-2.0
>> +/*
>> + * arch/arm64/kvm/hyp/ptrauth-sr.c: Guest/host ptrauth save/restore
>> + *
>> + * Copyright 2018 Arm Limited
>> + * Author: Mark Rutland
>> + *         Amit Daniel Kachhap
>> + */
>> +#include
>> +#include
>> +
>> +#include
>> +#include
>> +#include
>> +#include
>> +#include
>> +
>> +static __always_inline bool __hyp_text __ptrauth_is_enabled(struct kvm_vcpu *vcpu)
>
> Why __always_inline? Doesn't the compiler decide for 'static' symbols in C files?

This is to make the function pointer-authentication safe. Although it is
placed before the key switch, so it may not be required.
>
>
>> +{
>> +	return IS_ENABLED(CONFIG_ARM64_PTR_AUTH) &&
>> +		vcpu->arch.ctxt.hcr_el2 & (HCR_API | HCR_APK);
>> +}
>> +
>> +void __no_ptrauth __hyp_text __ptrauth_switch_to_guest(struct kvm_vcpu *vcpu,
>> +					struct kvm_cpu_context *host_ctxt,
>> +					struct kvm_cpu_context *guest_ctxt)
>> +{
>> +	if (!__ptrauth_is_enabled(vcpu))
>> +		return;
>> +
>
>> +	ptrauth_keys_store((struct ptrauth_keys *) &host_ctxt->sys_regs[APIAKEYLO_EL1]);
>
> We can't cast part of an array to a structure like this. What happens if the
> compiler inserts padding in struct-ptrauth_keys, or the struct randomization
> thing gets hold of it: https://lwn.net/Articles/722293/

Yes, this has an issue.

>
> If we want to use the helpers that take a struct-ptrauth_keys, we need to keep
> the keys in a struct-ptrauth_keys. To do this we'd need to provide accessors so
> that GET_ONE_REG() of APIAKEYLO_EL1 comes from the struct-ptrauth_keys, instead
> of the sys_reg array.

ok.

>
>
> Wouldn't the host keys be available somewhere else? (they must get transfer to
> secondary CPUs somehow). Can we skip the save step when switching from the host?

Yes, the host save can be done during vcpu_load and it works fine.
However it does not work during the hypervisor configuration stage
(i.e. where HCR_EL2 is saved/restored now), as the keys are different.
>
>> +	ptrauth_keys_switch((struct ptrauth_keys *) &guest_ctxt->sys_regs[APIAKEYLO_EL1]);
>> +}
>
>> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
>> index 1f2d237..c798d0c 100644
>> --- a/arch/arm64/include/asm/kvm_host.h
>> +++ b/arch/arm64/include/asm/kvm_host.h
>> @@ -439,6 +451,18 @@ static inline bool kvm_arch_requires_vhe(void)
>>  	return false;
>>  }
>>
>> +void kvm_arm_vcpu_ptrauth_enable(struct kvm_vcpu *vcpu);
>> +void kvm_arm_vcpu_ptrauth_disable(struct kvm_vcpu *vcpu);
>> +
>> +static inline void kvm_arm_vcpu_ptrauth_reset(struct kvm_vcpu *vcpu)
>> +{
>> +	/* Disable ptrauth and use it in a lazy context via traps */
>> +	if (has_vhe() && kvm_supports_ptrauth())
>> +		kvm_arm_vcpu_ptrauth_disable(vcpu);
>> +}
>
> (Could you move 'has_vhe()' into kvm_supports_ptrauth()? It fits with testing
> the other cpufeatures, and makes this a little more readable when you add
> 'allowed' to it later.)

yes.

>
>
>> diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
>> index 03b36f1..301d332 100644
>> --- a/arch/arm64/kvm/hyp/switch.c
>> +++ b/arch/arm64/kvm/hyp/switch.c
>> @@ -483,6 +483,8 @@ int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
>>  	sysreg_restore_guest_state_vhe(guest_ctxt);
>>  	__debug_switch_to_guest(vcpu);
>>
>> +	__ptrauth_switch_to_guest(vcpu, host_ctxt, guest_ctxt);
>> +
>>  	__set_guest_arch_workaround_state(vcpu);
>>
>>  	do {
>> @@ -494,6 +496,8 @@ int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
>>
>>  	__set_host_arch_workaround_state(vcpu);
>>
>> +	__ptrauth_switch_to_host(vcpu, host_ctxt, guest_ctxt);
>> +
>>  	sysreg_save_guest_state_vhe(guest_ctxt);
>>
>>  	__deactivate_traps(vcpu);
>
> ...This makes me nervous...
>
> __guest_enter() is a function that (might) change the keys, then we change them
> again here. We can't have any signed return address between these two points. I
> don't trust the compiler not to generate any.
>
> ~
>
> I had a chat with some friendly compiler folk...
> because there are two identical
> sequences in kvm_vcpu_run_vhe() and __kvm_vcpu_run_nvhe(), the compiler could
> move the common code to a function it then calls. Apparently this is called
> 'function outlining'.
>
> If the compiler does this, and the guest changes the keys, I think we would fail
> the return address check.
>
> Painting the whole thing with no_prauth would solve this, but this code then
> becomes a target.
> Because the compiler can't anticipate the keys changing, we ought to treat them
> the same way we do the callee saved registers, stack-pointer etc, and
> save/restore them in the __guest_enter() assembly code.
>
> (we can still keep the save/restore in C, but call it from assembly so we know
> nothing new is going on the stack).

I checked this and it seems easy to put inside guest_enter/guest_exit.

//Amit D

>
> Thanks,
>
> James