Subject: Re: [PATCH v7 7/10] KVM: arm/arm64: context-switch ptrauth registers
To: Julien Thierry, linux-arm-kernel@lists.infradead.org
Cc: Christoffer Dall, Marc Zyngier, Catalin Marinas, Will Deacon,
    Andrew Jones, Dave Martin, Ramana Radhakrishnan,
    kvmarm@lists.cs.columbia.edu, Kristina Martsenko,
    linux-kernel@vger.kernel.org, Mark Rutland, James Morse
References: <1552984243-7689-1-git-send-email-amit.kachhap@arm.com>
 <1552984243-7689-8-git-send-email-amit.kachhap@arm.com>
From: Amit Daniel Kachhap
Message-ID: <44e16529-4792-0ec2-d2d5-f5c14c6bb2c4@arm.com>
Date: Thu, 21 Mar 2019 11:38:35 +0530

Hi Julien,

On 3/20/19 5:43 PM, Julien Thierry wrote:
> Hi Amit,
>
> On 19/03/2019 08:30, Amit Daniel Kachhap wrote:
>> From: Mark Rutland
>>
>> When pointer authentication is supported, a guest may wish to use it.
>> This patch adds the necessary KVM infrastructure for this to work, with
>> a semi-lazy context switch of the pointer auth state.
>>
>> The pointer authentication feature is only enabled when VHE is built
>> into the kernel and present in the CPU implementation, so only VHE code
>> paths are modified.
>>
>> When we schedule a vcpu, we disable guest usage of pointer
>> authentication instructions and accesses to the keys. While these are
>> disabled, we avoid context-switching the keys. When we trap the guest
>> trying to use pointer authentication functionality, we change to eagerly
>> context-switching the keys, and enable the feature. The next time the
>> vcpu is scheduled out/in, we start again. However, the host key save is
>> optimized and implemented inside the ptrauth instruction/register access
>> trap.
>>
>> Pointer authentication consists of address authentication and generic
>> authentication, and CPUs in a system might have varied support for
>> either. Where support for either feature is not uniform, it is hidden
>> from guests via ID register emulation, as a result of the cpufeature
>> framework in the host.
>>
>> Unfortunately, address authentication and generic authentication cannot
>> be trapped separately, as the architecture provides a single EL2 trap
>> covering both. If we wish to expose one without the other, we cannot
>> prevent a (badly-written) guest from intermittently using a feature
>> which is not uniformly supported (when scheduled on a physical CPU which
>> supports the relevant feature). Hence, this patch expects both types of
>> authentication to be present in a CPU.
>>
>> This switch of keys is done from the guest enter/exit assembly as
>> preparation for the upcoming in-kernel pointer authentication support.
>> Hence, these key switching routines are not implemented in C code, as
>> they may cause pointer authentication key signing errors in some
>> situations.
>>
>> Signed-off-by: Mark Rutland
>> [Only VHE, key switch in full assembly, vcpu_has_ptrauth checks,
>>  save host key in ptrauth exception trap]
>> Signed-off-by: Amit Daniel Kachhap
>> Reviewed-by: Julien Thierry
>> Cc: Marc Zyngier
>> Cc: Christoffer Dall
>> Cc: kvmarm@lists.cs.columbia.edu
>> ---
>>  arch/arm64/include/asm/kvm_host.h        |  17 ++++++
>>  arch/arm64/include/asm/kvm_ptrauth_asm.h | 100 +++++++++++++++++++++++++++++++
>>  arch/arm64/kernel/asm-offsets.c          |   6 ++
>>  arch/arm64/kvm/guest.c                   |  14 +++++
>>  arch/arm64/kvm/handle_exit.c             |  24 +++++---
>>  arch/arm64/kvm/hyp/entry.S               |   7 +++
>>  arch/arm64/kvm/reset.c                   |   7 +++
>>  arch/arm64/kvm/sys_regs.c                |  46 +++++++++++++-
>>  virt/kvm/arm/arm.c                       |   2 +
>>  9 files changed, 212 insertions(+), 11 deletions(-)
>>  create mode 100644 arch/arm64/include/asm/kvm_ptrauth_asm.h
>>
[...]
>> +#ifdef CONFIG_ARM64_PTR_AUTH
>> +
>> +#define PTRAUTH_REG_OFFSET(x)	(x - CPU_APIAKEYLO_EL1)
>
> I don't really see the point of this macro.
> You move the pointers of kvm_cpu_contexts to point to where the ptrauth
> registers are (which is in the middle of an array) by adding the offset
> of APIAKEYLO, and then we have to recompute all offsets with this macro.
>
> Why not just pass the kvm_cpu_context pointers to
> ptrauth_save/restore_state and use the already defined offsets
> (CPU_AP*_EL1) directly?
>
> I think this would also allow using one less register for the
> ptrauth_switch_to_* macros.

Actually the values of CPU_AP*_EL1 exceed the ldp/stp immediate range
(i.e. 512), so this was done to keep the immediate offset within range.
The other way would have been to calculate the destination address in a
register, but that would add one more add instruction everywhere. I
should have mentioned this in a comment somewhere.

>
>> +
>> +.macro ptrauth_save_state base, reg1, reg2
>> +	mrs_s	\reg1, SYS_APIAKEYLO_EL1
>> +	mrs_s	\reg2, SYS_APIAKEYHI_EL1
>> +	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIAKEYLO_EL1)]
>> +	mrs_s	\reg1, SYS_APIBKEYLO_EL1
>> +	mrs_s	\reg2, SYS_APIBKEYHI_EL1
>> +	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIBKEYLO_EL1)]
>> +	mrs_s	\reg1, SYS_APDAKEYLO_EL1
>> +	mrs_s	\reg2, SYS_APDAKEYHI_EL1
>> +	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDAKEYLO_EL1)]
>> +	mrs_s	\reg1, SYS_APDBKEYLO_EL1
>> +	mrs_s	\reg2, SYS_APDBKEYHI_EL1
>> +	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDBKEYLO_EL1)]
>> +	mrs_s	\reg1, SYS_APGAKEYLO_EL1
>> +	mrs_s	\reg2, SYS_APGAKEYHI_EL1
>> +	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APGAKEYLO_EL1)]
>> +.endm
>> +
>> +.macro ptrauth_restore_state base, reg1, reg2
>> +	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIAKEYLO_EL1)]
>> +	msr_s	SYS_APIAKEYLO_EL1, \reg1
>> +	msr_s	SYS_APIAKEYHI_EL1, \reg2
>> +	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIBKEYLO_EL1)]
>> +	msr_s	SYS_APIBKEYLO_EL1, \reg1
>> +	msr_s	SYS_APIBKEYHI_EL1, \reg2
>> +	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDAKEYLO_EL1)]
>> +	msr_s	SYS_APDAKEYLO_EL1, \reg1
>> +	msr_s	SYS_APDAKEYHI_EL1, \reg2
>> +	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDBKEYLO_EL1)]
>> +	msr_s	SYS_APDBKEYLO_EL1, \reg1
>> +	msr_s	SYS_APDBKEYHI_EL1, \reg2
>> +	ldp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APGAKEYLO_EL1)]
>> +	msr_s	SYS_APGAKEYLO_EL1, \reg1
>> +	msr_s	SYS_APGAKEYHI_EL1, \reg2
>> +	.endm
>
> Nit: Bad indentation.

oops.

>
>> +
>> +.macro ptrauth_switch_to_guest g_ctxt, reg1, reg2, reg3
>> +	ldr	\reg1, [\g_ctxt, #CPU_HCR_EL2]
>> +	and	\reg1, \reg1, #(HCR_API | HCR_APK)
>> +	cbz	\reg1, skip_switch_to_guest
>> +	add	\reg1, \g_ctxt, #CPU_APIAKEYLO_EL1
>> +	ptrauth_restore_state	\reg1, \reg2, \reg3
>
> As mentioned, we probably can get rid of one temporary register and
> directly pass g_ctxt here.
>
>> +skip_switch_to_guest:
>
> I believe you should use numbered labels for assembly macros, otherwise
> multiple references to that macro will lead to symbol redefinitions.

Yes, agreed.

Thanks,
Amit

>
>> +.endm
>> +
>> +.macro ptrauth_switch_to_host g_ctxt, h_ctxt, reg1, reg2, reg3
>> +	ldr	\reg1, [\g_ctxt, #CPU_HCR_EL2]
>> +	and	\reg1, \reg1, #(HCR_API | HCR_APK)
>> +	cbz	\reg1, skip_switch_to_host
>> +	add	\reg1, \g_ctxt, #CPU_APIAKEYLO_EL1
>> +	ptrauth_save_state	\reg1, \reg2, \reg3
>> +	add	\reg1, \h_ctxt, #CPU_APIAKEYLO_EL1
>> +	ptrauth_restore_state	\reg1, \reg2, \reg3
>> +	isb
>> +skip_switch_to_host:
>
> Same.
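
Ok, will change both labels. For example, something like this (an
untested sketch; the .L__skip_switch name is only illustrative), using
the assembler's \@ macro-invocation counter so each expansion gets a
unique local label:

.macro ptrauth_switch_to_guest g_ctxt, reg1, reg2, reg3
	ldr	\reg1, [\g_ctxt, #CPU_HCR_EL2]
	and	\reg1, \reg1, #(HCR_API | HCR_APK)
	cbz	\reg1, .L__skip_switch\@	// \@ is unique per macro expansion
	add	\reg1, \g_ctxt, #CPU_APIAKEYLO_EL1
	ptrauth_restore_state	\reg1, \reg2, \reg3
.L__skip_switch\@:				// local label, so no symbol redefinition
.endm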
>
> Cheers,
>
> Julien
>
>> +.endm
>> +
>> +#else /* !CONFIG_ARM64_PTR_AUTH */
>> +.macro ptrauth_switch_to_guest g_ctxt, reg1, reg2, reg3
>> +.endm
>> +.macro ptrauth_switch_to_host g_ctxt, h_ctxt, reg1, reg2, reg3
[...]
>>
>
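
P.S.: to make the immediate-range point above a bit more concrete: ldp/stp
encode a signed 7-bit immediate scaled by 8 (byte offsets -512 to +504),
and the CPU_AP*_EL1 offsets into kvm_cpu_context are larger than that, so
they cannot be used directly. Roughly (illustrative only; x0 is the
context pointer, x1-x3 are scratch, and the exact offsets come from
asm-offsets.c):

	/* would not assemble: CPU_APIAKEYLO_EL1 is beyond the stp immediate range */
	// stp	x1, x2, [x0, #CPU_APIAKEYLO_EL1]

	/* so the base is advanced once (add takes a 12-bit immediate), and the
	 * per-key offsets relative to CPU_APIAKEYLO_EL1 then stay within range */
	add	x3, x0, #CPU_APIAKEYLO_EL1
	stp	x1, x2, [x3, #PTRAUTH_REG_OFFSET(CPU_APIAKEYLO_EL1)]	// offset 0
	stp	x1, x2, [x3, #PTRAUTH_REG_OFFSET(CPU_APIBKEYLO_EL1)]	// small offset, in range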