2023-05-16 14:22:06

by Mostafa Saleh

Subject: [PATCH] KVM: arm64: Use different pointer authentication keys for pKVM

When the kernel is compiled with CONFIG_ARM64_PTR_AUTH_KERNEL, it
uses Armv8.3-Pauth for return address protection for the kernel code
including nvhe code in EL2.

The same keys are used in both the kernel (EL1) and the nvhe code (EL2).
This is fine for nvhe, but not when running in protected mode (pKVM),
as the host can't be trusted.

The keys for the hypervisor are generated by the kernel before it
de-privileges; each CPU has different keys. This relies on nvhe code
not being migratable while running.

This patch adds host/hyp save/restore for the keys.
For guest/hyp, they are already handled in common kvm code in
__guest_enter, where they are saved/restored if they are not
trapped.

Signed-off-by: Mostafa Saleh <[email protected]>
---
arch/arm64/kvm/arm.c | 26 +++++++++++++++++++++++++
arch/arm64/kvm/hyp/nvhe/host.S | 35 +++++++++++++++++++++++++++++++++-
2 files changed, 60 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 14391826241c..dd03b52f035d 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -51,6 +51,8 @@ DECLARE_KVM_HYP_PER_CPU(unsigned long, kvm_hyp_vector);
DEFINE_PER_CPU(unsigned long, kvm_arm_hyp_stack_page);
DECLARE_KVM_NVHE_PER_CPU(struct kvm_nvhe_init_params, kvm_init_params);

+DECLARE_KVM_NVHE_PER_CPU(struct kvm_cpu_context, kvm_hyp_ctxt);
+
static bool vgic_present;

static DEFINE_PER_CPU(unsigned char, kvm_arm_hardware_enabled);
@@ -2067,6 +2069,26 @@ static int __init kvm_hyp_init_protection(u32 hyp_va_bits)
return 0;
}

+static void pkvm_hyp_init_ptrauth(void)
+{
+ struct kvm_cpu_context *hyp_ctxt;
+ int cpu;
+
+ for_each_possible_cpu(cpu) {
+ hyp_ctxt = per_cpu_ptr_nvhe_sym(kvm_hyp_ctxt, cpu);
+ hyp_ctxt->sys_regs[APIAKEYLO_EL1] = get_random_long();
+ hyp_ctxt->sys_regs[APIAKEYHI_EL1] = get_random_long();
+ hyp_ctxt->sys_regs[APIBKEYLO_EL1] = get_random_long();
+ hyp_ctxt->sys_regs[APIBKEYHI_EL1] = get_random_long();
+ hyp_ctxt->sys_regs[APDAKEYLO_EL1] = get_random_long();
+ hyp_ctxt->sys_regs[APDAKEYHI_EL1] = get_random_long();
+ hyp_ctxt->sys_regs[APDBKEYLO_EL1] = get_random_long();
+ hyp_ctxt->sys_regs[APDBKEYHI_EL1] = get_random_long();
+ hyp_ctxt->sys_regs[APGAKEYLO_EL1] = get_random_long();
+ hyp_ctxt->sys_regs[APGAKEYHI_EL1] = get_random_long();
+ }
+}
+
/* Inits Hyp-mode on all online CPUs */
static int __init init_hyp_mode(void)
{
@@ -2228,6 +2250,10 @@ static int __init init_hyp_mode(void)
kvm_hyp_init_symbols();

if (is_protected_kvm_enabled()) {
+ if (IS_ENABLED(CONFIG_ARM64_PTR_AUTH_KERNEL) &&
+ cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH))
+ pkvm_hyp_init_ptrauth();
+
init_cpu_logical_map();

if (!init_psci_relay()) {
diff --git a/arch/arm64/kvm/hyp/nvhe/host.S b/arch/arm64/kvm/hyp/nvhe/host.S
index b6c0188c4b35..255ba4af911b 100644
--- a/arch/arm64/kvm/hyp/nvhe/host.S
+++ b/arch/arm64/kvm/hyp/nvhe/host.S
@@ -10,6 +10,7 @@
#include <asm/kvm_arm.h>
#include <asm/kvm_asm.h>
#include <asm/kvm_mmu.h>
+#include <asm/kvm_ptrauth.h>

.text

@@ -37,10 +38,42 @@ SYM_FUNC_START(__host_exit)

/* Save the host context pointer in x29 across the function call */
mov x29, x0
+
+#ifdef CONFIG_ARM64_PTR_AUTH_KERNEL
+alternative_if_not ARM64_HAS_ADDRESS_AUTH
+b __skip_pauth_save
+alternative_else_nop_endif
+
+alternative_if ARM64_KVM_PROTECTED_MODE
+ /* Save kernel ptrauth keys. */
+ add x18, x29, #CPU_APIAKEYLO_EL1
+ ptrauth_save_state x18, x19, x20
+
+ /* Use hyp keys. */
+ adr_this_cpu x18, kvm_hyp_ctxt, x19
+ add x18, x18, #CPU_APIAKEYLO_EL1
+ ptrauth_restore_state x18, x19, x20
+alternative_else_nop_endif
+__skip_pauth_save:
+#endif /* CONFIG_ARM64_PTR_AUTH_KERNEL */
+
bl handle_trap

- /* Restore host regs x0-x17 */
__host_enter_restore_full:
+ /* Restore kernel keys. */
+#ifdef CONFIG_ARM64_PTR_AUTH_KERNEL
+alternative_if_not ARM64_HAS_ADDRESS_AUTH
+b __skip_pauth_restore
+alternative_else_nop_endif
+
+alternative_if ARM64_KVM_PROTECTED_MODE
+ add x18, x29, #CPU_APIAKEYLO_EL1
+ ptrauth_restore_state x18, x19, x20
+alternative_else_nop_endif
+__skip_pauth_restore:
+#endif /* CONFIG_ARM64_PTR_AUTH_KERNEL */
+
+ /* Restore host regs x0-x17 */
ldp x0, x1, [x29, #CPU_XREG_OFFSET(0)]
ldp x2, x3, [x29, #CPU_XREG_OFFSET(2)]
ldp x4, x5, [x29, #CPU_XREG_OFFSET(4)]
--
2.40.1.606.ga4b1b128d6-goog



2023-05-26 21:01:34

by Oliver Upton

Subject: Re: [PATCH] KVM: arm64: Use different pointer authentication keys for pKVM

On Tue, May 16, 2023 at 02:15:31PM +0000, Mostafa Saleh wrote:
> When the kernel is compiled with CONFIG_ARM64_PTR_AUTH_KERNEL, it
> uses Armv8.3-Pauth for return address protection for the kernel code
> including nvhe code in EL2.
>
> Same keys are used in both kernel(EL1) and nvhe code(EL2), this is
> fine for nvhe but not when running in protected mode(pKVM) as the host
> can't be trusted.

But we trust it enough to hand pKVM a fresh set of keys before firing
off? I understand there is some degree of initialization required to get
pKVM off the ground, but I question in this case if key handoff is
strictly necessary.

There are potentially other sources of random directly available at EL2,
such as the SMCCC TRNG ABI or FEAT_RNG. Should pKVM prefer one of these
random implementations and only fall back to host-provided keys if
absolutely necessary?
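
For example, with FEAT_RNG something like the below could seed the keys
at EL2, falling back to the SMCCC TRNG call (or the host) where the
feature is absent. This is only an untested sketch mirroring the shape
of __arm64_rndr() in <asm/archrandom.h>; whether SYS_RNDR_EL0 / __mrs_s
are directly usable like this from the nVHE object is an assumption on
my part:

/* Hypothetical helper: pull 64 bits of entropy from RNDR (FEAT_RNG). */
static inline bool pkvm_rndr(u64 *val)
{
	bool ok;

	/*
	 * A failed read of RNDR leaves PSTATE.Z set, so convert the flag
	 * into a success/failure return value with cset.
	 */
	asm volatile(
		__mrs_s("%0", SYS_RNDR_EL0) "\n"
	"	cset	%w1, ne\n"
	: "=r" (*val), "=r" (ok)
	:
	: "cc");

	return ok;
}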

> The keys for the hypervisor are generated from the kernel before it
> de-privileges, each cpu has different keys, this relies on nvhe code
> not being migratable while running.
>
> This patch adds host/hyp save/restore for the keys.
> For guest/hyp, they are already handled in common kvm code in
> __guest_enter, where they are saved/restored if they are not
> trapped.

Try to avoid "this patch" or any self-referential language in the
changelog. Just directly state what the patch does:

Similar to guest entry/exit, start context switching the pointer
authentication keys on host entry/exit if the feature is in use.

> Signed-off-by: Mostafa Saleh <[email protected]>
> ---
> arch/arm64/kvm/arm.c | 26 +++++++++++++++++++++++++
> arch/arm64/kvm/hyp/nvhe/host.S | 35 +++++++++++++++++++++++++++++++++-
> 2 files changed, 60 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index 14391826241c..dd03b52f035d 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -51,6 +51,8 @@ DECLARE_KVM_HYP_PER_CPU(unsigned long, kvm_hyp_vector);
> DEFINE_PER_CPU(unsigned long, kvm_arm_hyp_stack_page);
> DECLARE_KVM_NVHE_PER_CPU(struct kvm_nvhe_init_params, kvm_init_params);
>
> +DECLARE_KVM_NVHE_PER_CPU(struct kvm_cpu_context, kvm_hyp_ctxt);
> +
> static bool vgic_present;
>
> static DEFINE_PER_CPU(unsigned char, kvm_arm_hardware_enabled);
> @@ -2067,6 +2069,26 @@ static int __init kvm_hyp_init_protection(u32 hyp_va_bits)
> return 0;
> }
>
> +static void pkvm_hyp_init_ptrauth(void)
> +{
> + struct kvm_cpu_context *hyp_ctxt;
> + int cpu;
> +
> + for_each_possible_cpu(cpu) {
> + hyp_ctxt = per_cpu_ptr_nvhe_sym(kvm_hyp_ctxt, cpu);
> + hyp_ctxt->sys_regs[APIAKEYLO_EL1] = get_random_long();
> + hyp_ctxt->sys_regs[APIAKEYHI_EL1] = get_random_long();
> + hyp_ctxt->sys_regs[APIBKEYLO_EL1] = get_random_long();
> + hyp_ctxt->sys_regs[APIBKEYHI_EL1] = get_random_long();
> + hyp_ctxt->sys_regs[APDAKEYLO_EL1] = get_random_long();
> + hyp_ctxt->sys_regs[APDAKEYHI_EL1] = get_random_long();
> + hyp_ctxt->sys_regs[APDBKEYLO_EL1] = get_random_long();
> + hyp_ctxt->sys_regs[APDBKEYHI_EL1] = get_random_long();
> + hyp_ctxt->sys_regs[APGAKEYLO_EL1] = get_random_long();
> + hyp_ctxt->sys_regs[APGAKEYHI_EL1] = get_random_long();
> + }
> +}
> +
> /* Inits Hyp-mode on all online CPUs */
> static int __init init_hyp_mode(void)
> {
> @@ -2228,6 +2250,10 @@ static int __init init_hyp_mode(void)
> kvm_hyp_init_symbols();
>
> if (is_protected_kvm_enabled()) {
> + if (IS_ENABLED(CONFIG_ARM64_PTR_AUTH_KERNEL) &&
> + cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH))
> + pkvm_hyp_init_ptrauth();
> +
> init_cpu_logical_map();
>
> if (!init_psci_relay()) {
> diff --git a/arch/arm64/kvm/hyp/nvhe/host.S b/arch/arm64/kvm/hyp/nvhe/host.S
> index b6c0188c4b35..255ba4af911b 100644
> --- a/arch/arm64/kvm/hyp/nvhe/host.S
> +++ b/arch/arm64/kvm/hyp/nvhe/host.S
> @@ -10,6 +10,7 @@
> #include <asm/kvm_arm.h>
> #include <asm/kvm_asm.h>
> #include <asm/kvm_mmu.h>
> +#include <asm/kvm_ptrauth.h>
>
> .text
>
> @@ -37,10 +38,42 @@ SYM_FUNC_START(__host_exit)
>
> /* Save the host context pointer in x29 across the function call */
> mov x29, x0
> +
> +#ifdef CONFIG_ARM64_PTR_AUTH_KERNEL
> +alternative_if_not ARM64_HAS_ADDRESS_AUTH
> +b __skip_pauth_save
> +alternative_else_nop_endif
> +
> +alternative_if ARM64_KVM_PROTECTED_MODE
> + /* Save kernel ptrauth keys. */
> + add x18, x29, #CPU_APIAKEYLO_EL1
> + ptrauth_save_state x18, x19, x20
> +
> + /* Use hyp keys. */
> + adr_this_cpu x18, kvm_hyp_ctxt, x19
> + add x18, x18, #CPU_APIAKEYLO_EL1
> + ptrauth_restore_state x18, x19, x20
> +alternative_else_nop_endif
> +__skip_pauth_save:
> +#endif /* CONFIG_ARM64_PTR_AUTH_KERNEL */
> +
> bl handle_trap
>
> - /* Restore host regs x0-x17 */
> __host_enter_restore_full:
> + /* Restore kernel keys. */
> +#ifdef CONFIG_ARM64_PTR_AUTH_KERNEL
> +alternative_if_not ARM64_HAS_ADDRESS_AUTH
> +b __skip_pauth_restore
> +alternative_else_nop_endif
> +
> +alternative_if ARM64_KVM_PROTECTED_MODE
> + add x18, x29, #CPU_APIAKEYLO_EL1
> + ptrauth_restore_state x18, x19, x20
> +alternative_else_nop_endif
> +__skip_pauth_restore:
> +#endif /* CONFIG_ARM64_PTR_AUTH_KERNEL */
> +
> + /* Restore host regs x0-x17 */
> ldp x0, x1, [x29, #CPU_XREG_OFFSET(0)]
> ldp x2, x3, [x29, #CPU_XREG_OFFSET(2)]
> ldp x4, x5, [x29, #CPU_XREG_OFFSET(4)]
> --
> 2.40.1.606.ga4b1b128d6-goog
>

--
Thanks,
Oliver

2023-05-29 11:35:09

by Mostafa Saleh

Subject: Re: [PATCH] KVM: arm64: Use different pointer authentication keys for pKVM

Hi Oliver,

Thanks for reviewing the patch.

On Fri, May 26, 2023 at 08:47:52PM +0000, Oliver Upton wrote:
> On Tue, May 16, 2023 at 02:15:31PM +0000, Mostafa Saleh wrote:
> > When the kernel is compiled with CONFIG_ARM64_PTR_AUTH_KERNEL, it
> > uses Armv8.3-Pauth for return address protection for the kernel code
> > including nvhe code in EL2.
> >
> > Same keys are used in both kernel(EL1) and nvhe code(EL2), this is
> > fine for nvhe but not when running in protected mode(pKVM) as the host
> > can't be trusted.
>
> But we trust it enough to hand pKVM a fresh set of keys before firing
> off? I understand there is some degree of initialization required to get
> pKVM off the ground, but I question in this case if key handoff is
> strictly necessary.
>
> There are potentially other sources of random directly available at EL2,
> such as the SMCCC TRNG ABI or FEAT_RNG. Should pKVM prefer one of these
> random implementations and only fall back to host-provided keys if
> absolutely necessary?
>
According to my understanding, the kernel is still completely trusted at
this point (it sets up the initial page table for the hypervisor), so I
believe it should be fine to trust it for the ptrauth keys. However, I
agree that it would be better for the hypervisor to get its own keys from
firmware/hardware when supported. I will add this in V2.

> > The keys for the hypervisor are generated from the kernel before it
> > de-privileges, each cpu has different keys, this relies on nvhe code
> > not being migratable while running.
> >
> > This patch adds host/hyp save/restore for the keys.
> > For guest/hyp, they are already handled in common kvm code in
> > __guest_enter, where they are saved/restored if they are not
> > trapped.
>
> Try to avoid "this patch" or any self-referential language in the
> changelog. Just directly state what the patch does:
>
> Similar to guest entry/exit, start context switching the pointer
>
I will update it in V2.

Thanks,
Mostafa

2023-06-08 22:13:03

by Will Deacon

Subject: Re: [PATCH] KVM: arm64: Use different pointer authentication keys for pKVM

On Mon, May 29, 2023 at 11:17:51AM +0000, Mostafa Saleh wrote:
> On Fri, May 26, 2023 at 08:47:52PM +0000, Oliver Upton wrote:
> > On Tue, May 16, 2023 at 02:15:31PM +0000, Mostafa Saleh wrote:
> > > When the kernel is compiled with CONFIG_ARM64_PTR_AUTH_KERNEL, it
> > > uses Armv8.3-Pauth for return address protection for the kernel code
> > > including nvhe code in EL2.
> > >
> > > Same keys are used in both kernel(EL1) and nvhe code(EL2), this is
> > > fine for nvhe but not when running in protected mode(pKVM) as the host
> > > can't be trusted.
> >
> > But we trust it enough to hand pKVM a fresh set of keys before firing
> > off? I understand there is some degree of initialization required to get
> > pKVM off the ground, but I question in this case if key handoff is
> > strictly necessary.
> >
> > There are potentially other sources of random directly available at EL2,
> > such as the SMCCC TRNG ABI or FEAT_RNG. Should pKVM prefer one of these
> > random implementations and only fall back to host-provided keys if
> > absolutely necessary?
> >
> According to my understanding, the kernel is still completely trusted at
> this point (it sets the initial page table for the hypervisor), so I
> believe it should be fine to trust it for ptrauth keys. However, I agree,
> it would be better if the hypervisor can get its own keys through
> firmware/hardware if supported. I will add this in V2.

I appreciate the sentiment, but I think we should avoid adding additional
complexity here if it adds no security benefit. If nothing else, it adds
pointless overhead, but beyond that it gives the false illusion of a
security boundary.

Prior to deprivilege, the kernel can write to the hypervisor text, modify
its stage-1 page-table and change its data values. I think the pointer
auth keys are the least of our worries if it's compromised...

Will

2023-06-12 09:55:54

by Mostafa Saleh

Subject: Re: [PATCH] KVM: arm64: Use different pointer authentication keys for pKVM

On Thu, Jun 08, 2023 at 10:55:26PM +0100, Will Deacon wrote:
> On Mon, May 29, 2023 at 11:17:51AM +0000, Mostafa Saleh wrote:
> > On Fri, May 26, 2023 at 08:47:52PM +0000, Oliver Upton wrote:
> > > On Tue, May 16, 2023 at 02:15:31PM +0000, Mostafa Saleh wrote:
> > > > When the kernel is compiled with CONFIG_ARM64_PTR_AUTH_KERNEL, it
> > > > uses Armv8.3-Pauth for return address protection for the kernel code
> > > > including nvhe code in EL2.
> > > >
> > > > Same keys are used in both kernel(EL1) and nvhe code(EL2), this is
> > > > fine for nvhe but not when running in protected mode(pKVM) as the host
> > > > can't be trusted.
> > >
> > > But we trust it enough to hand pKVM a fresh set of keys before firing
> > > off? I understand there is some degree of initialization required to get
> > > pKVM off the ground, but I question in this case if key handoff is
> > > strictly necessary.
> > >
> > > There are potentially other sources of random directly available at EL2,
> > > such as the SMCCC TRNG ABI or FEAT_RNG. Should pKVM prefer one of these
> > > random implementations and only fall back to host-provided keys if
> > > absolutely necessary?
> > >
> > According to my understanding, the kernel is still completely trusted at
> > this point (it sets the initial page table for the hypervisor), so I
> > believe it should be fine to trust it for ptrauth keys. However, I agree,
> > it would be better if the hypervisor can get its own keys through
> > firmware/hardware if supported. I will add this in V2.
>
> I appreciate the sentiment, but I think we should avoid adding additional
> complexity here if it adds no security benefit. If nothing else, it adds
> pointless overhead, but beyond that it gives the false illusion of a
> security boundary.
>
> Prior to deprivilege, the kernel can write to the hypervisor text, modify
> its stage-1 page-table and change its data values. I think the pointer
> auth keys are the least of our worries if it's compromised...

Thanks a lot, Will, for explaining this.

Oliver, what do you think about v2 including FEAT_RNG/TRNG support at EL2?

Thanks,
Mostafa

2023-06-12 19:26:19

by Oliver Upton

Subject: Re: [PATCH] KVM: arm64: Use different pointer authentication keys for pKVM

Hi Mostafa,

Thank you for bumping this; I hadn't seen Will's reply.

On Mon, Jun 12, 2023 at 09:20:16AM +0000, Mostafa Saleh wrote:
> On Thu, Jun 08, 2023 at 10:55:26PM +0100, Will Deacon wrote:
> > I appreciate the sentiment, but I think we should avoid adding additional
> > complexity here if it adds no security benefit. If nothing else, it adds
> > pointless overhead, but beyond that it gives the false illusion of a
> > security boundary.
> >
> > Prior to deprivilege, the kernel can write to the hypervisor text, modify
> > its stage-1 page-table and change its data values. I think the pointer
> > auth keys are the least of our worries if it's compromised...

Ack -- well aware of the other issues there :) My whining as it relates
to security was aimed more at the changelog than at the diff of the
patch.

My tilt in terms of the functionality is more about limiting the
interactions between pKVM and the host before the kernel deprivileges.
Longer term, it'd be interesting to have pKVM capable of bootstrapping
itself, such that it could possibly be used in other contexts. So, when
I saw the patch I questioned whether or not the dependency was strictly
necessary.

No reason to boil the ocean as part of this otherwise isolated change.

Tangent: Will you folks need random in EL2 at runtime?

> Thanks a lot Will for explaining this.
>
> Oliver, what do you think for V2, about it including FEAT_RNG/TRNG in EL2?

The patch itself looks good to me. May I suggest something along these
lines for the changelog? I can do it myself to avoid the need for a v2:

"""
When the use of pointer authentication is enabled in the kernel it
applies to both the kernel itself as well as KVM's nVHE hypervisor. The
same keys are used for both the kernel and the nVHE hypervisor, which is
less than desirable for pKVM as the host is not trusted at runtime.

Naturally, the fix is to use a different set of keys for the hypervisor
when running in protected mode. Have the host generate a new set of keys
for the hypervisor before deprivileging the kernel. While there might be
other sources of random directly available at EL2, this keeps the
implementation simple, and the host is trusted anyways until it is
deprivileged.

Since the host and hypervisor no longer share a set of pointer
authentication keys, start context switching them on the host entry/exit
path exactly as we do for guest entry/exit. There is no need to handle
CPU migration as the nVHE code is not migratable in the first place.
"""

--
Thanks,
Oliver

2023-06-13 12:21:08

by Oliver Upton

Subject: Re: [PATCH] KVM: arm64: Use different pointer authentication keys for pKVM

On Tue, 16 May 2023 14:15:31 +0000, Mostafa Saleh wrote:
> When the kernel is compiled with CONFIG_ARM64_PTR_AUTH_KERNEL, it
> uses Armv8.3-Pauth for return address protection for the kernel code
> including nvhe code in EL2.
>
> Same keys are used in both kernel(EL1) and nvhe code(EL2), this is
> fine for nvhe but not when running in protected mode(pKVM) as the host
> can't be trusted.
>
> [...]

Applied to kvmarm/next, thanks!

[1/1] KVM: arm64: Use different pointer authentication keys for pKVM
https://git.kernel.org/kvmarm/kvmarm/c/fb737685beee

--
Best,
Oliver

2023-06-13 16:54:20

by Mostafa Saleh

Subject: Re: [PATCH] KVM: arm64: Use different pointer authentication keys for pKVM

Hi Oliver,

On Tue, Jun 13, 2023 at 12:16:02PM +0000, Oliver Upton wrote:
> On Tue, 16 May 2023 14:15:31 +0000, Mostafa Saleh wrote:
> > When the kernel is compiled with CONFIG_ARM64_PTR_AUTH_KERNEL, it
> > uses Armv8.3-Pauth for return address protection for the kernel code
> > including nvhe code in EL2.
> >
> > Same keys are used in both kernel(EL1) and nvhe code(EL2), this is
> > fine for nvhe but not when running in protected mode(pKVM) as the host
> > can't be trusted.
> >
> > [...]
>
> Applied to kvmarm/next, thanks!
>
> [1/1] KVM: arm64: Use different pointer authentication keys for pKVM
> https://git.kernel.org/kvmarm/kvmarm/c/fb737685beee
>
> --

Thanks! I did more testing and I found a bug in this patch.

It seems there is another entry point to the kernel, kvm_host_psci_cpu_entry,
where pauth was not handled properly. I will investigate this further and
send a v2.

Sorry for the inconvenience!

Thanks,
Mostafa

2023-06-14 13:05:27

by Mostafa Saleh

Subject: Re: [PATCH] KVM: arm64: Use different pointer authentication keys for pKVM

On Tue, Jun 13, 2023 at 5:27 PM Mostafa Saleh <[email protected]> wrote:
>
> Hi Oliver,
>
> On Tue, Jun 13, 2023 at 12:16:02PM +0000, Oliver Upton wrote:
> > On Tue, 16 May 2023 14:15:31 +0000, Mostafa Saleh wrote:
> > > When the kernel is compiled with CONFIG_ARM64_PTR_AUTH_KERNEL, it
> > > uses Armv8.3-Pauth for return address protection for the kernel code
> > > including nvhe code in EL2.
> > >
> > > Same keys are used in both kernel(EL1) and nvhe code(EL2), this is
> > > fine for nvhe but not when running in protected mode(pKVM) as the host
> > > can't be trusted.
> > >
> > > [...]
> >
> > Applied to kvmarm/next, thanks!
> >
> > [1/1] KVM: arm64: Use different pointer authentication keys for pKVM
> > https://git.kernel.org/kvmarm/kvmarm/c/fb737685beee
> >
> > --
>
> Thanks! I did more testing and I found a bug in this patch.
>
> It seems there is another entry point for the kenrel where pauth was
> not handled properly "kvm_host_psci_cpu_entry", I will investigate this
> further and send V2.
>
> Sorry for the inconvenience!
>
> Thanks,
> Mostafa

I investigated the problem further; the actual bug is a missing isb()
after updating the pauth system registers at EL2. I have sent a v2 with
this fix.

When entering from kvm_host_psci_cpu_entry, the hyp keys are not used,
but that path exits directly back to the host, so I believe this should
be fine.
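
For reference, the fix is roughly of the following shape in __host_exit
(a sketch only; the exact placement in v2 may differ slightly):

alternative_if ARM64_KVM_PROTECTED_MODE
	/* Save kernel ptrauth keys. */
	add	x18, x29, #CPU_APIAKEYLO_EL1
	ptrauth_save_state x18, x19, x20

	/* Use hyp keys. */
	adr_this_cpu x18, kvm_hyp_ctxt, x19
	add	x18, x18, #CPU_APIAKEYLO_EL1
	ptrauth_restore_state x18, x19, x20
	/* Ensure the hyp keys take effect before any pac/aut in hyp code. */
	isb
alternative_else_nop_endif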