2022-07-26 07:47:46

by Kalesh Singh

Subject: [PATCH v6 17/17] KVM: arm64: Introduce pkvm_dump_backtrace()

Dumps the pKVM hypervisor backtrace from EL1 by reading the unwound
addresses from the shared stacktrace buffer.

The nVHE hyp backtrace is dumped on hyp_panic(), before panicking the
host.

[ 111.623091] kvm [367]: nVHE call trace:
[ 111.623215] kvm [367]: [<ffff8000090a6570>] __kvm_nvhe_hyp_panic+0xac/0xf8
[ 111.623448] kvm [367]: [<ffff8000090a65cc>] __kvm_nvhe_hyp_panic_bad_stack+0x10/0x10
[ 111.623642] kvm [367]: [<ffff8000090a61e4>] __kvm_nvhe_recursive_death+0x24/0x34
. . .
[ 111.640366] kvm [367]: [<ffff8000090a61e4>] __kvm_nvhe_recursive_death+0x24/0x34
[ 111.640467] kvm [367]: [<ffff8000090a61e4>] __kvm_nvhe_recursive_death+0x24/0x34
[ 111.640574] kvm [367]: [<ffff8000090a5de4>] __kvm_nvhe___kvm_vcpu_run+0x30/0x40c
[ 111.640676] kvm [367]: [<ffff8000090a8b64>] __kvm_nvhe_handle___kvm_vcpu_run+0x30/0x48
[ 111.640778] kvm [367]: [<ffff8000090a88b8>] __kvm_nvhe_handle_trap+0xc4/0x128
[ 111.640880] kvm [367]: [<ffff8000090a7864>] __kvm_nvhe___host_exit+0x64/0x64
[ 111.640996] kvm [367]: ---[ end nVHE call trace ]---

Signed-off-by: Kalesh Singh <[email protected]>
---

Changes in v6:
- Add a range check when dumping the pkvm stacktrace, per Oliver
- Use consistent nVHE call trace delimiters between protected and
non-protected mode, per Oliver
- Fix typo in comment, per Fuad

Changes in v5:
- Move code out from nvhe.h header to handle_exit.c, per Marc
- Fix stacktrace symbolization when CONFIG_RANDOMIZE_BASE is enabled,
per Fuad
- Use regular comments instead of doc comments, per Fuad

arch/arm64/kvm/handle_exit.c | 35 ++++++++++++++++++++++++++++++++++-
1 file changed, 34 insertions(+), 1 deletion(-)
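For reviewers, a rough sketch of the hypervisor-side counterpart that fills the
shared buffer consumed here (it is added earlier in this series; the helper name
and exact bookkeeping below are illustrative assumptions, not a quote of that
patch):

static void pkvm_save_backtrace_entry(unsigned long *stacktrace, int *idx,
				      unsigned long pc)
{
	int size = NVHE_STACKTRACE_SIZE / sizeof(long);

	/*
	 * Illustrative only: record the frame's PC and keep the last slot
	 * free so the trace stays null-terminated, matching the termination
	 * check in pkvm_dump_backtrace() below.
	 */
	if (*idx >= size - 1)
		return;

	stacktrace[(*idx)++] = pc;
	stacktrace[*idx] = 0;
}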

diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index e83e6f735100..c14fc4ba4422 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -371,6 +371,39 @@ static void hyp_dump_backtrace(unsigned long hyp_offset)
kvm_nvhe_dump_backtrace_end();
}

+#ifdef CONFIG_PROTECTED_NVHE_STACKTRACE
+DECLARE_KVM_NVHE_PER_CPU(unsigned long [NVHE_STACKTRACE_SIZE/sizeof(long)],
+			 pkvm_stacktrace);
+
+/*
+ * pkvm_dump_backtrace - Dump the protected nVHE HYP backtrace.
+ *
+ * @hyp_offset: hypervisor offset, used for address translation.
+ *
+ * Dumping of the pKVM HYP backtrace is done by reading the
+ * stack addresses from the shared stacktrace buffer, since the
+ * host cannot directly access hypervisor memory in protected
+ * mode.
+ */
+static void pkvm_dump_backtrace(unsigned long hyp_offset)
+{
+	unsigned long *stacktrace
+		= (unsigned long *) this_cpu_ptr_nvhe_sym(pkvm_stacktrace);
+	int i, size = NVHE_STACKTRACE_SIZE / sizeof(long);
+
+	kvm_nvhe_dump_backtrace_start();
+	/* The saved stacktrace is terminated by a null entry */
+	for (i = 0; i < size && stacktrace[i]; i++)
+		kvm_nvhe_dump_backtrace_entry((void *)hyp_offset, stacktrace[i]);
+	kvm_nvhe_dump_backtrace_end();
+}
+#else /* !CONFIG_PROTECTED_NVHE_STACKTRACE */
+static void pkvm_dump_backtrace(unsigned long hyp_offset)
+{
+	kvm_err("Cannot dump pKVM nVHE stacktrace: !CONFIG_PROTECTED_NVHE_STACKTRACE\n");
+}
+#endif /* CONFIG_PROTECTED_NVHE_STACKTRACE */
+
/*
* kvm_nvhe_dump_backtrace - Dump KVM nVHE hypervisor backtrace.
*
@@ -379,7 +412,7 @@ static void hyp_dump_backtrace(unsigned long hyp_offset)
static void kvm_nvhe_dump_backtrace(unsigned long hyp_offset)
{
if (is_protected_kvm_enabled())
-		return;
+		pkvm_dump_backtrace(hyp_offset);
else
hyp_dump_backtrace(hyp_offset);
}
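
For context (not part of this diff): kvm_nvhe_dump_backtrace() is reached from
the host's hyp panic path wired up earlier in this series, conceptually along
these lines:

	/*
	 * Paraphrase of the existing caller, not a quote: the host's
	 * nvhe_hyp_panic_handler() derives hyp_offset (hyp VA to kernel
	 * image address translation) and dumps the trace before panicking
	 * the host.
	 */
	kvm_nvhe_dump_backtrace(hyp_offset);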
--
2.37.1.359.gd136c6c3e2-goog