Subject: Re: [PATCH v6 3/6] arm64/kvm: context-switch ptrauth registers
To: Mark Rutland
Cc: linux-arm-kernel@lists.infradead.org, Christoffer Dall, Marc Zyngier,
 Catalin Marinas, Will Deacon, Andrew Jones, Dave Martin,
 Ramana Radhakrishnan, kvmarm@lists.cs.columbia.edu, Kristina Martsenko,
 linux-kernel@vger.kernel.org, James Morse, Julien Thierry
References: <1550568271-5319-1-git-send-email-amit.kachhap@arm.com>
 <1550568271-5319-4-git-send-email-amit.kachhap@arm.com>
 <20190221122942.GC33673@lakrids.cambridge.arm.com>
From: Amit Daniel Kachhap
Message-ID: <04f19012-3a18-60af-b43f-dc52139633bd@arm.com>
In-Reply-To: <20190221122942.GC33673@lakrids.cambridge.arm.com>
Date: Thu, 28 Feb 2019 14:37:56 +0530

Hi Mark,

On 2/21/19 5:59 PM, Mark Rutland wrote:
> On Tue, Feb 19, 2019 at 02:54:28PM +0530, Amit Daniel Kachhap wrote:
>> From: Mark Rutland
>>
>> When pointer authentication is supported, a guest may wish to use it.
>> This patch adds the necessary KVM infrastructure for this to work, with
>> a semi-lazy context switch of the pointer auth state.
>>
>> Pointer authentication feature is only enabled when VHE is built
>> in the kernel and present into CPU implementation so only VHE code
>> paths are modified.
>
> Nit: s/into/in the/

ok.

>
>>
>> When we schedule a vcpu, we disable guest usage of pointer
>> authentication instructions and accesses to the keys. While these are
>> disabled, we avoid context-switching the keys. When we trap the guest
>> trying to use pointer authentication functionality, we change to eagerly
>> context-switching the keys, and enable the feature. The next time the
>> vcpu is scheduled out/in, we start again. However the host key registers
>> are saved in vcpu load stage as they remain constant for each vcpu
>> schedule.
>>
>> Pointer authentication consists of address authentication and generic
>> authentication, and CPUs in a system might have varied support for
>> either. Where support for either feature is not uniform, it is hidden
>> from guests via ID register emulation, as a result of the cpufeature
>> framework in the host.
>>
>> Unfortunately, address authentication and generic authentication cannot
>> be trapped separately, as the architecture provides a single EL2 trap
>> covering both. If we wish to expose one without the other, we cannot
>> prevent a (badly-written) guest from intermittently using a feature
>> which is not uniformly supported (when scheduled on a physical CPU which
>> supports the relevant feature). Hence, this patch expects both type of
>> authentication to be present in a cpu.
>>
>> Signed-off-by: Mark Rutland
>> [Only VHE, key switch from from assembly, kvm_supports_ptrauth
>> checks, save host key in vcpu_load]
>
> Hmm, why do we need to do the key switch in assembly, given it's not
> used in-kernel right now?
>
> Is that in preparation for in-kernel pointer auth usage? If so, please
> call that out in the commit message.

ok sure.

>
> [...]
>
>> diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
>> index 4e2fb87..5cac605 100644
>> --- a/arch/arm64/kernel/traps.c
>> +++ b/arch/arm64/kernel/traps.c
>> @@ -749,6 +749,7 @@ static const char *esr_class_str[] = {
>>  [ESR_ELx_EC_CP14_LS] = "CP14 LDC/STC",
>>  [ESR_ELx_EC_FP_ASIMD] = "ASIMD",
>>  [ESR_ELx_EC_CP10_ID] = "CP10 MRC/VMRS",
>> + [ESR_ELx_EC_PAC] = "Pointer authentication trap",
>
> For consistency with the other strings, can we please make this "PAC"?

ok. It makes sense.

>
> [...]
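As an aside, the semi-lazy scheme in the quoted commit message boils down to: leave the ptrauth traps set while a vcpu is loaded, and only clear them (and start context-switching the key registers) once the guest actually traps on a ptrauth access. A minimal sketch of that enable path, assuming the HCR_API/HCR_APK trap bits cached in vcpu->arch.hcr_el2 and a hypothetical helper name; this is illustrative, not the patch itself:

	/* Sketch only: called from the EL2 trap handler on the guest's first
	 * ptrauth instruction or key-register access. Setting API/APK stops
	 * further traps, so the keys must then be switched eagerly on every
	 * guest entry/exit. */
	static void vcpu_ptrauth_enable(struct kvm_vcpu *vcpu)
	{
		vcpu->arch.hcr_el2 |= (HCR_API | HCR_APK);
	}

Clearing those bits again when the vcpu is next loaded is what restarts the lazy scheme for the following run.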
>
>> diff --git a/arch/arm64/kvm/hyp/Makefile b/arch/arm64/kvm/hyp/Makefile
>> index 82d1904..17cec99 100644
>> --- a/arch/arm64/kvm/hyp/Makefile
>> +++ b/arch/arm64/kvm/hyp/Makefile
>> @@ -19,6 +19,7 @@ obj-$(CONFIG_KVM_ARM_HOST) += switch.o
>>  obj-$(CONFIG_KVM_ARM_HOST) += fpsimd.o
>>  obj-$(CONFIG_KVM_ARM_HOST) += tlb.o
>>  obj-$(CONFIG_KVM_ARM_HOST) += hyp-entry.o
>> +obj-$(CONFIG_KVM_ARM_HOST) += ptrauth-sr.o
>
> Huh, so we're actually doing the switch in C code...
>
>>  # KVM code is run at a different exception code with a different map, so
>>  # compiler instrumentation that inserts callbacks or checks into the code may
>> diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
>> index 675fdc1..b78cc15 100644
>> --- a/arch/arm64/kvm/hyp/entry.S
>> +++ b/arch/arm64/kvm/hyp/entry.S
>> @@ -64,6 +64,12 @@ ENTRY(__guest_enter)
>>
>>  add x18, x0, #VCPU_CONTEXT
>>
>> +#ifdef CONFIG_ARM64_PTR_AUTH
>> + // Prepare parameter for __ptrauth_switch_to_guest(vcpu, host, guest).
>> + mov x2, x18
>> + bl __ptrauth_switch_to_guest
>> +#endif
>
> ... and conditionally *calling* that switch code from assembly ...
>
>> +
>>  // Restore guest regs x0-x17
>>  ldp x0, x1, [x18, #CPU_XREG_OFFSET(0)]
>>  ldp x2, x3, [x18, #CPU_XREG_OFFSET(2)]
>> @@ -118,6 +124,17 @@ ENTRY(__guest_exit)
>>
>>  get_host_ctxt x2, x3
>>
>> +#ifdef CONFIG_ARM64_PTR_AUTH
>> + // Prepare parameter for __ptrauth_switch_to_host(vcpu, guest, host).
>> + // Save x0, x2 which are used later in callee saved registers.
>> + mov x19, x0
>> + mov x20, x2
>> + sub x0, x1, #VCPU_CONTEXT
>> + ldr x29, [x2, #CPU_XREG_OFFSET(29)]
>> + bl __ptrauth_switch_to_host
>> + mov x0, x19
>> + mov x2, x20
>> +#endif
>
> ... which adds a load of boilerplate for no immediate gain.

Some parameter passing here can still be optimized, since the guest and
host contexts can be derived from the vcpu itself, as James suggested in
his other review comments. I did consider doing all of the save/restore
in assembly, but the host keys are now saved in C at the vcpu_load stage
(as an optimization), so the same C code is reused here as well.

All of this code also becomes useful once in-kernel ptrauth is used, so
I would only revert to the old approach if there are strong objections.

>
> Do we really need to do this in assembly today?

During the last patchset review [1], James gave several supporting
arguments for calling these switch routines from assembly, due to
function outlining between kvm_vcpu_run_vhe() and __kvm_vcpu_run_nvhe().

[1]: https://lkml.org/lkml/2019/1/31/662

Thanks,
Amit D

>
> Thanks,
> Mark.
>
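P.S.: for completeness, here is roughly the shape the C helpers branched to from the entry.S hunks above could take. This is only a sketch under assumptions — the APIAKEY* indices in kvm_cpu_context, the hcr_el2-based "enabled" check and the per-key helpers are illustrative names, not the exact patch code:

	/* Sketch: save/install the instruction-A key; the B, DA, DB and GA
	 * keys would be handled the same way via their own system registers. */
	static void __hyp_text __ptrauth_save_apia(struct kvm_cpu_context *ctxt)
	{
		ctxt->sys_regs[APIAKEYLO_EL1] = read_sysreg_s(SYS_APIAKEYLO_EL1);
		ctxt->sys_regs[APIAKEYHI_EL1] = read_sysreg_s(SYS_APIAKEYHI_EL1);
	}

	static void __hyp_text __ptrauth_restore_apia(struct kvm_cpu_context *ctxt)
	{
		write_sysreg_s(ctxt->sys_regs[APIAKEYLO_EL1], SYS_APIAKEYLO_EL1);
		write_sysreg_s(ctxt->sys_regs[APIAKEYHI_EL1], SYS_APIAKEYHI_EL1);
	}

	void __hyp_text __ptrauth_switch_to_guest(struct kvm_vcpu *vcpu,
						  struct kvm_cpu_context *host_ctxt,
						  struct kvm_cpu_context *guest_ctxt)
	{
		/* Nothing to do until the guest has trapped and had ptrauth
		 * enabled for it (the semi-lazy scheme discussed above). */
		if (!(vcpu->arch.hcr_el2 & (HCR_API | HCR_APK)))
			return;

		__ptrauth_save_apia(host_ctxt);
		__ptrauth_restore_apia(guest_ctxt);
	}

__ptrauth_switch_to_host would do the mirror image (save the guest's keys, reinstall the host's), which is why the __guest_exit hunk above has to preserve x0/x2 around the call.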