From: Joerg Roedel <[email protected]>
Hi,
here is a rebased version of the latest SEV-ES patches. They are now
based on the latest tip/master instead of upstream Linux and include
the changes required by that rebase.
The changes relative to v4 are, in particular:
- Moved early IDT setup code to idt.c, because the idt_descr
and the idt_table are now static
- This required making the stack protector work early (or disabling
  it for idt.c, but I didn't go that route), so MSR_GS_BASE is now
  set up very early too, before calling into any C code that has
  stack-protector checks (a short sketch follows this list).
- As a result, I decided to also move the setup code that is needed
  before the kernel switches to virtual addresses into a C function.
  This should be much easier to maintain.
- paranoid_entry/exit now uses FSGSBASE instructions, so some
refactoring was needed to make that work early for secondary
CPUs too.
- As a result, some AP state is now set up on the boot CPU already,
  like the TSS and the CPU_NODE GDT entry, so that an AP only needs
  to load the descriptors to handle exceptions early.
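To illustrate the MSR_GS_BASE point above: before any C code compiled
with stack-protector instrumentation runs, GS has to point at a valid
per-CPU area, otherwise the canary load through %gs faults. The
following is a minimal, purely illustrative sketch; the function and
variable names are made up for this mail and do not appear in the
series, which does the equivalent work in the early startup path:

	#include <linux/init.h>
	#include <asm/msr.h>
	#include <asm/processor.h>

	/*
	 * Temporary backing for the stack-protector canary (%gs:40)
	 * until the real per-CPU areas are set up.
	 */
	static struct fixed_percpu_data early_pcpu __initdata;

	static void __init sketch_early_gs_setup(void)
	{
		/*
		 * Must run before the first call into C code built with
		 * -fstack-protector; with a zero GS base the canary
		 * access would fault.
		 */
		native_wrmsrl(MSR_GS_BASE, (unsigned long)&early_pcpu);
	}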
The previous versions can be found as a linked list starting here:
https://lore.kernel.org/lkml/[email protected]/
There you will also find more detailed information about SEV-ES in
general and its implications.
Please review.
Thanks,
Joerg
Borislav Petkov (1):
KVM: SVM: Use __packed shorthand
Doug Covelli (1):
x86/vmware: Add VMware specific handling for VMMCALL under SEV-ES
Joerg Roedel (53):
KVM: SVM: Add GHCB Accessor functions
x86/traps: Move pf error codes to <asm/trap_pf.h>
x86/insn: Make inat-tables.c suitable for pre-decompression code
x86/umip: Factor out instruction fetch
x86/umip: Factor out instruction decoding
x86/insn: Add insn_get_modrm_reg_off()
x86/insn: Add insn_has_rep_prefix() helper
x86/boot/compressed/64: Disable red-zone usage
x86/boot/compressed/64: Add IDT Infrastructure
x86/boot/compressed/64: Rename kaslr_64.c to ident_map_64.c
x86/boot/compressed/64: Add page-fault handler
x86/boot/compressed/64: Always switch to own page-table
x86/boot/compressed/64: Don't pre-map memory in KASLR code
x86/boot/compressed/64: Change add_identity_map() to take start and
end
x86/boot/compressed/64: Add stage1 #VC handler
x86/boot/compressed/64: Call set_sev_encryption_mask earlier
x86/boot/compressed/64: Check return value of
kernel_ident_mapping_init()
x86/boot/compressed/64: Add set_page_en/decrypted() helpers
x86/boot/compressed/64: Setup GHCB Based VC Exception handler
x86/boot/compressed/64: Unmap GHCB page before booting the kernel
x86/fpu: Move xgetbv()/xsetbv() into separate header
x86/idt: Move IDT to data segment
x86/idt: Split idt_data setup out of set_intr_gate()
x86/head/64: Install startup GDT
x86/head/64: Setup MSR_GS_BASE before calling into C code
x86/head/64: Load GDT after switch to virtual addresses
x86/head/64: Load segment registers earlier
x86/head/64: Switch to initial stack earlier
x86/head/64: Make fixup_pointer() static inline
x86/head/64: Load IDT earlier
x86/head/64: Move early exception dispatch to C code
x86/head/64: Set CR4.FSGSBASE early
x86/sev-es: Add SEV-ES Feature Detection
x86/sev-es: Print SEV-ES info into kernel log
x86/sev-es: Compile early handler code into kernel image
x86/sev-es: Setup early #VC handler
x86/sev-es: Setup GHCB based boot #VC handler
x86/sev-es: Allocate and Map IST stack for #VC handler
x86/sev-es: Adjust #VC IST Stack on entering NMI handler
x86/dumpstack/64: Add noinstr version of get_stack_info()
x86/entry/64: Add entry code for #VC handler
x86/sev-es: Wire up existing #VC exit-code handlers
x86/sev-es: Handle instruction fetches from user-space
x86/sev-es: Handle MMIO String Instructions
x86/sev-es: Handle #AC Events
x86/sev-es: Handle #DB Events
x86/paravirt: Allow hypervisor specific VMMCALL handling under SEV-ES
x86/realmode: Add SEV-ES specific trampoline entry point
x86/smpboot: Setup TSS for starting AP
x86/head/64: Don't call verify_cpu() on starting APs
x86/head/64: Rename start_cpu0
x86/sev-es: Support CPU offline/online
x86/sev-es: Handle NMI State
Martin Radev (1):
x86/sev-es: Check required CPU features for SEV-ES
Tom Lendacky (19):
KVM: SVM: Add GHCB definitions
x86/cpufeatures: Add SEV-ES CPU feature
x86/sev-es: Add support for handling IOIO exceptions
x86/sev-es: Add CPUID handling to #VC handler
x86/sev-es: Setup per-cpu GHCBs for the runtime handler
x86/sev-es: Add Runtime #VC Exception Handler
x86/sev-es: Handle MMIO events
x86/sev-es: Handle MSR events
x86/sev-es: Handle DR7 read/write events
x86/sev-es: Handle WBINVD Events
x86/sev-es: Handle RDTSC(P) Events
x86/sev-es: Handle RDPMC Events
x86/sev-es: Handle INVD Events
x86/sev-es: Handle MONITOR/MONITORX Events
x86/sev-es: Handle MWAIT/MWAITX Events
x86/sev-es: Handle VMMCALL Events
x86/kvm: Add KVM specific VMMCALL handling under SEV-ES
x86/realmode: Setup AP jump table
x86/efi: Add GHCB mappings when SEV-ES is active
arch/x86/Kconfig | 1 +
arch/x86/boot/Makefile | 2 +-
arch/x86/boot/compressed/Makefile | 9 +-
arch/x86/boot/compressed/cpuflags.c | 4 -
arch/x86/boot/compressed/head_64.S | 32 +-
arch/x86/boot/compressed/ident_map_64.c | 349 +++++
arch/x86/boot/compressed/idt_64.c | 54 +
arch/x86/boot/compressed/idt_handlers_64.S | 77 ++
arch/x86/boot/compressed/kaslr.c | 36 +-
arch/x86/boot/compressed/kaslr_64.c | 153 ---
arch/x86/boot/compressed/misc.c | 7 +
arch/x86/boot/compressed/misc.h | 50 +-
arch/x86/boot/compressed/sev-es.c | 214 +++
arch/x86/entry/entry_64.S | 78 ++
arch/x86/include/asm/cpu.h | 2 +-
arch/x86/include/asm/cpu_entry_area.h | 33 +-
arch/x86/include/asm/cpufeatures.h | 1 +
arch/x86/include/asm/desc_defs.h | 3 +
arch/x86/include/asm/fpu/internal.h | 33 +-
arch/x86/include/asm/fpu/xcr.h | 37 +
arch/x86/include/asm/idtentry.h | 49 +
arch/x86/include/asm/insn-eval.h | 6 +
arch/x86/include/asm/mem_encrypt.h | 5 +
arch/x86/include/asm/msr-index.h | 3 +
arch/x86/include/asm/page_64_types.h | 1 +
arch/x86/include/asm/pgtable.h | 2 +-
arch/x86/include/asm/processor.h | 7 +
arch/x86/include/asm/proto.h | 1 +
arch/x86/include/asm/realmode.h | 4 +
arch/x86/include/asm/segment.h | 2 +-
arch/x86/include/asm/setup.h | 16 +-
arch/x86/include/asm/sev-es.h | 113 ++
arch/x86/include/asm/stacktrace.h | 2 +
arch/x86/include/asm/svm.h | 118 +-
arch/x86/include/asm/trap_pf.h | 24 +
arch/x86/include/asm/trapnr.h | 1 +
arch/x86/include/asm/traps.h | 20 +-
arch/x86/include/asm/x86_init.h | 16 +-
arch/x86/include/uapi/asm/svm.h | 11 +
arch/x86/kernel/Makefile | 1 +
arch/x86/kernel/cpu/amd.c | 3 +-
arch/x86/kernel/cpu/common.c | 37 +-
arch/x86/kernel/cpu/scattered.c | 1 +
arch/x86/kernel/cpu/vmware.c | 50 +-
arch/x86/kernel/dumpstack.c | 7 +-
arch/x86/kernel/dumpstack_64.c | 47 +-
arch/x86/kernel/head64.c | 85 +-
arch/x86/kernel/head_32.S | 4 +-
arch/x86/kernel/head_64.S | 159 ++-
arch/x86/kernel/idt.c | 94 +-
arch/x86/kernel/kvm.c | 35 +-
arch/x86/kernel/nmi.c | 12 +
arch/x86/kernel/sev-es-shared.c | 507 +++++++
arch/x86/kernel/sev-es.c | 1404 ++++++++++++++++++++
arch/x86/kernel/smpboot.c | 10 +-
arch/x86/kernel/traps.c | 56 +
arch/x86/kernel/umip.c | 49 +-
arch/x86/kvm/svm/svm.c | 2 +
arch/x86/lib/insn-eval.c | 130 ++
arch/x86/mm/cpu_entry_area.c | 3 +-
arch/x86/mm/extable.c | 1 +
arch/x86/mm/mem_encrypt.c | 38 +-
arch/x86/mm/mem_encrypt_identity.c | 3 +
arch/x86/platform/efi/efi_64.c | 10 +
arch/x86/realmode/init.c | 24 +-
arch/x86/realmode/rm/header.S | 3 +
arch/x86/realmode/rm/trampoline_64.S | 20 +
arch/x86/tools/gen-insn-attr-x86.awk | 50 +-
tools/arch/x86/tools/gen-insn-attr-x86.awk | 50 +-
69 files changed, 4025 insertions(+), 446 deletions(-)
create mode 100644 arch/x86/boot/compressed/ident_map_64.c
create mode 100644 arch/x86/boot/compressed/idt_64.c
create mode 100644 arch/x86/boot/compressed/idt_handlers_64.S
delete mode 100644 arch/x86/boot/compressed/kaslr_64.c
create mode 100644 arch/x86/boot/compressed/sev-es.c
create mode 100644 arch/x86/include/asm/fpu/xcr.h
create mode 100644 arch/x86/include/asm/sev-es.h
create mode 100644 arch/x86/include/asm/trap_pf.h
create mode 100644 arch/x86/kernel/sev-es-shared.c
create mode 100644 arch/x86/kernel/sev-es.c
--
2.27.0
From: Joerg Roedel <[email protected]>
With the page-fault handler in place, the identity mapping can be built
on demand. So remove the code which manually creates the mappings and
unexport/remove the functions used for it. (An illustrative sketch of
the on-demand fault handling follows the patch.)
Signed-off-by: Joerg Roedel <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
---
arch/x86/boot/compressed/ident_map_64.c | 6 ++----
arch/x86/boot/compressed/kaslr.c | 24 +-----------------------
arch/x86/boot/compressed/misc.h | 10 ----------
3 files changed, 3 insertions(+), 37 deletions(-)
diff --git a/arch/x86/boot/compressed/ident_map_64.c b/arch/x86/boot/compressed/ident_map_64.c
index ecf9353b064d..c63257bf8373 100644
--- a/arch/x86/boot/compressed/ident_map_64.c
+++ b/arch/x86/boot/compressed/ident_map_64.c
@@ -87,11 +87,9 @@ phys_addr_t physical_mask = (1ULL << __PHYSICAL_MASK_SHIFT) - 1;
static struct x86_mapping_info mapping_info;
/*
- * Adds the specified range to what will become the new identity mappings.
- * Once all ranges have been added, the new mapping is activated by calling
- * finalize_identity_maps() below.
+ * Adds the specified range to the identity mappings.
*/
-void add_identity_map(unsigned long start, unsigned long size)
+static void add_identity_map(unsigned long start, unsigned long size)
{
unsigned long end = start + size;
diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index 856dc1c9bb0d..c466fb738de0 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -399,8 +399,6 @@ static void mem_avoid_init(unsigned long input, unsigned long input_size,
*/
mem_avoid[MEM_AVOID_ZO_RANGE].start = input;
mem_avoid[MEM_AVOID_ZO_RANGE].size = (output + init_size) - input;
- add_identity_map(mem_avoid[MEM_AVOID_ZO_RANGE].start,
- mem_avoid[MEM_AVOID_ZO_RANGE].size);
/* Avoid initrd. */
initrd_start = (u64)boot_params->ext_ramdisk_image << 32;
@@ -420,14 +418,10 @@ static void mem_avoid_init(unsigned long input, unsigned long input_size,
;
mem_avoid[MEM_AVOID_CMDLINE].start = cmd_line;
mem_avoid[MEM_AVOID_CMDLINE].size = cmd_line_size;
- add_identity_map(mem_avoid[MEM_AVOID_CMDLINE].start,
- mem_avoid[MEM_AVOID_CMDLINE].size);
/* Avoid boot parameters. */
mem_avoid[MEM_AVOID_BOOTPARAMS].start = (unsigned long)boot_params;
mem_avoid[MEM_AVOID_BOOTPARAMS].size = sizeof(*boot_params);
- add_identity_map(mem_avoid[MEM_AVOID_BOOTPARAMS].start,
- mem_avoid[MEM_AVOID_BOOTPARAMS].size);
/* We don't need to set a mapping for setup_data. */
@@ -436,11 +430,6 @@ static void mem_avoid_init(unsigned long input, unsigned long input_size,
/* Enumerate the immovable memory regions */
num_immovable_mem = count_immovable_mem_regions();
-
-#ifdef CONFIG_X86_VERBOSE_BOOTUP
- /* Make sure video RAM can be used. */
- add_identity_map(0, PMD_SIZE);
-#endif
}
/*
@@ -919,19 +908,8 @@ void choose_random_location(unsigned long input,
warn("Physical KASLR disabled: no suitable memory region!");
} else {
/* Update the new physical address location. */
- if (*output != random_addr) {
- add_identity_map(random_addr, output_size);
+ if (*output != random_addr)
*output = random_addr;
- }
-
- /*
- * This loads the identity mapping page table.
- * This should only be done if a new physical address
- * is found for the kernel, otherwise we should keep
- * the old page table to make it be like the "nokaslr"
- * case.
- */
- finalize_identity_maps();
}
diff --git a/arch/x86/boot/compressed/misc.h b/arch/x86/boot/compressed/misc.h
index 345c90fbc500..ea6174bad699 100644
--- a/arch/x86/boot/compressed/misc.h
+++ b/arch/x86/boot/compressed/misc.h
@@ -98,17 +98,7 @@ static inline void choose_random_location(unsigned long input,
#endif
#ifdef CONFIG_X86_64
-void initialize_identity_maps(void);
-void add_identity_map(unsigned long start, unsigned long size);
-void finalize_identity_maps(void);
extern unsigned char _pgtable[];
-#else
-static inline void initialize_identity_maps(void)
-{ }
-static inline void add_identity_map(unsigned long start, unsigned long size)
-{ }
-static inline void finalize_identity_maps(void)
-{ }
#endif
#ifdef CONFIG_EARLY_PRINTK
--
2.27.0
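For readers following the series, the on-demand mapping the patch above
relies on works roughly like the sketch below. This is illustrative
only: the handler name is hypothetical and the add_identity_map()
signature is the (start, size) form used at this point in the series (a
later patch changes it to take start and end):

	#include <asm/pgtable_types.h>	/* PMD_MASK, PMD_SIZE */
	#include <asm/ptrace.h>		/* struct pt_regs */
	#include <asm/special_insns.h>	/* native_read_cr2() */

	/*
	 * add_identity_map() is static in ident_map_64.c; this sketch
	 * assumes it lives in the same file.
	 */
	static void sketch_boot_page_fault(struct pt_regs *regs,
					   unsigned long error_code)
	{
		unsigned long address = native_read_cr2();

		/*
		 * A page-fault here just means the decompressor touched
		 * memory that is not yet identity-mapped; map the whole
		 * 2M page around the faulting address and return.
		 */
		address &= PMD_MASK;
		add_identity_map(address, PMD_SIZE);
	}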
From: Joerg Roedel <[email protected]>
Factor out the code that fetches the instruction from user space into
a helper function (a usage sketch follows the patch).
No functional changes.
Signed-off-by: Joerg Roedel <[email protected]>
---
arch/x86/include/asm/insn-eval.h | 2 ++
arch/x86/kernel/umip.c | 26 +++++-----------------
arch/x86/lib/insn-eval.c | 38 ++++++++++++++++++++++++++++++++
3 files changed, 46 insertions(+), 20 deletions(-)
diff --git a/arch/x86/include/asm/insn-eval.h b/arch/x86/include/asm/insn-eval.h
index 2b6ccf2c49f1..b8b9ef1bbd06 100644
--- a/arch/x86/include/asm/insn-eval.h
+++ b/arch/x86/include/asm/insn-eval.h
@@ -19,5 +19,7 @@ void __user *insn_get_addr_ref(struct insn *insn, struct pt_regs *regs);
int insn_get_modrm_rm_off(struct insn *insn, struct pt_regs *regs);
unsigned long insn_get_seg_base(struct pt_regs *regs, int seg_reg_idx);
int insn_get_code_seg_params(struct pt_regs *regs);
+int insn_fetch_from_user(struct pt_regs *regs,
+ unsigned char buf[MAX_INSN_SIZE]);
#endif /* _ASM_X86_INSN_EVAL_H */
diff --git a/arch/x86/kernel/umip.c b/arch/x86/kernel/umip.c
index 8d5cbe1bbb3b..c9e5345da793 100644
--- a/arch/x86/kernel/umip.c
+++ b/arch/x86/kernel/umip.c
@@ -317,11 +317,11 @@ static void force_sig_info_umip_fault(void __user *addr, struct pt_regs *regs)
*/
bool fixup_umip_exception(struct pt_regs *regs)
{
- int not_copied, nr_copied, reg_offset, dummy_data_size, umip_inst;
- unsigned long seg_base = 0, *reg_addr;
+ int nr_copied, reg_offset, dummy_data_size, umip_inst;
/* 10 bytes is the maximum size of the result of UMIP instructions */
unsigned char dummy_data[10] = { 0 };
unsigned char buf[MAX_INSN_SIZE];
+ unsigned long *reg_addr;
void __user *uaddr;
struct insn insn;
int seg_defs;
@@ -329,26 +329,12 @@ bool fixup_umip_exception(struct pt_regs *regs)
if (!regs)
return false;
- /*
- * If not in user-space long mode, a custom code segment could be in
- * use. This is true in protected mode (if the process defined a local
- * descriptor table), or virtual-8086 mode. In most of the cases
- * seg_base will be zero as in USER_CS.
- */
- if (!user_64bit_mode(regs))
- seg_base = insn_get_seg_base(regs, INAT_SEG_REG_CS);
-
- if (seg_base == -1L)
- return false;
-
- not_copied = copy_from_user(buf, (void __user *)(seg_base + regs->ip),
- sizeof(buf));
- nr_copied = sizeof(buf) - not_copied;
+ nr_copied = insn_fetch_from_user(regs, buf);
/*
- * The copy_from_user above could have failed if user code is protected
- * by a memory protection key. Give up on emulation in such a case.
- * Should we issue a page fault?
+ * The insn_fetch_from_user above could have failed if user code
+ * is protected by a memory protection key. Give up on emulation
+ * in such a case. Should we issue a page fault?
*/
if (!nr_copied)
return false;
diff --git a/arch/x86/lib/insn-eval.c b/arch/x86/lib/insn-eval.c
index 31600d851fd8..0c4f7ebc261b 100644
--- a/arch/x86/lib/insn-eval.c
+++ b/arch/x86/lib/insn-eval.c
@@ -1369,3 +1369,41 @@ void __user *insn_get_addr_ref(struct insn *insn, struct pt_regs *regs)
return (void __user *)-1L;
}
}
+
+/**
+ * insn_fetch_from_user() - Copy instruction bytes from user-space memory
+ * @regs: Structure with register values as seen when entering kernel mode
+ * @buf: Array to store the fetched instruction
+ *
+ * Gets the linear address of the instruction and copies the instruction bytes
+ * to the buf.
+ *
+ * Returns:
+ *
+ * Number of instruction bytes copied.
+ *
+ * 0 if nothing was copied.
+ */
+int insn_fetch_from_user(struct pt_regs *regs, unsigned char buf[MAX_INSN_SIZE])
+{
+ unsigned long seg_base = 0;
+ int not_copied;
+
+ /*
+ * If not in user-space long mode, a custom code segment could be in
+ * use. This is true in protected mode (if the process defined a local
+ * descriptor table), or virtual-8086 mode. In most of the cases
+ * seg_base will be zero as in USER_CS.
+ */
+ if (!user_64bit_mode(regs)) {
+ seg_base = insn_get_seg_base(regs, INAT_SEG_REG_CS);
+ if (seg_base == -1L)
+ return 0;
+ }
+
+
+ not_copied = copy_from_user(buf, (void __user *)(seg_base + regs->ip),
+ MAX_INSN_SIZE);
+
+ return MAX_INSN_SIZE - not_copied;
+}
--
2.27.0
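As a usage note, a caller of the new helper typically pairs it with the
in-kernel instruction decoder, roughly like the hypothetical sketch
below (function name made up, error handling simplified):

	#include <asm/insn.h>
	#include <asm/insn-eval.h>
	#include <asm/ptrace.h>

	static bool sketch_decode_user_insn(struct pt_regs *regs,
					    struct insn *insn)
	{
		unsigned char buf[MAX_INSN_SIZE];
		int nr_copied;

		nr_copied = insn_fetch_from_user(regs, buf);
		if (!nr_copied)
			return false;	/* nothing readable at regs->ip */

		/* Decode no more than the bytes actually fetched. */
		insn_init(insn, buf, nr_copied, user_64bit_mode(regs));
		insn_get_length(insn);

		return insn->length <= nr_copied;
	}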
From: Tom Lendacky <[email protected]>
Extend the vmcb_save_area with SEV-ES fields and add a new
'struct ghcb' which will be used for guest-hypervisor communication.
(A short usage sketch follows the patch.)
Signed-off-by: Tom Lendacky <[email protected]>
Signed-off-by: Joerg Roedel <[email protected]>
---
arch/x86/include/asm/svm.h | 45 +++++++++++++++++++++++++++++++++++++-
arch/x86/kvm/svm/svm.c | 2 ++
2 files changed, 46 insertions(+), 1 deletion(-)
diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h
index 8a1f5382a4ea..9a3e0b802716 100644
--- a/arch/x86/include/asm/svm.h
+++ b/arch/x86/include/asm/svm.h
@@ -200,13 +200,56 @@ struct __attribute__ ((__packed__)) vmcb_save_area {
u64 br_to;
u64 last_excp_from;
u64 last_excp_to;
+
+ /*
+ * The following part of the save area is valid only for
+ * SEV-ES guests when referenced through the GHCB.
+ */
+ u8 reserved_7[104];
+ u64 reserved_8; /* rax already available at 0x01f8 */
+ u64 rcx;
+ u64 rdx;
+ u64 rbx;
+ u64 reserved_9; /* rsp already available at 0x01d8 */
+ u64 rbp;
+ u64 rsi;
+ u64 rdi;
+ u64 r8;
+ u64 r9;
+ u64 r10;
+ u64 r11;
+ u64 r12;
+ u64 r13;
+ u64 r14;
+ u64 r15;
+ u8 reserved_10[16];
+ u64 sw_exit_code;
+ u64 sw_exit_info_1;
+ u64 sw_exit_info_2;
+ u64 sw_scratch;
+ u8 reserved_11[56];
+ u64 xcr0;
+ u8 valid_bitmap[16];
+ u64 x87_state_gpa;
+};
+
+struct __attribute__ ((__packed__)) ghcb {
+ struct vmcb_save_area save;
+ u8 reserved_save[2048 - sizeof(struct vmcb_save_area)];
+
+ u8 shared_buffer[2032];
+
+ u8 reserved_1[10];
+ u16 protocol_version; /* negotiated SEV-ES/GHCB protocol version */
+ u32 ghcb_usage;
};
static inline void __unused_size_checks(void)
{
- BUILD_BUG_ON(sizeof(struct vmcb_save_area) != 0x298);
+ BUILD_BUG_ON(sizeof(struct vmcb_save_area) != 1032);
BUILD_BUG_ON(sizeof(struct vmcb_control_area) != 256);
+ BUILD_BUG_ON(sizeof(struct ghcb) != 4096);
}
struct __attribute__ ((__packed__)) vmcb {
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index c0da4dd78ac5..af6dc22437c1 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4099,6 +4099,8 @@ static struct kvm_x86_init_ops svm_init_ops __initdata = {
static int __init svm_init(void)
{
+ __unused_size_checks();
+
return kvm_init(&svm_init_ops, sizeof(struct vcpu_svm),
__alignof__(struct vcpu_svm), THIS_MODULE);
}
--
2.27.0
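To put the new structure in context: at runtime a #VC handler fills in
the GHCB save area plus the sw_exit_* fields and then hands control to
the hypervisor via VMGEXIT (rep; vmmcall). Below is a rough sketch,
assuming the ghcb_set_*() helpers from the "KVM: SVM: Add GHCB Accessor
functions" patch and a VMGEXIT() wrapper; GHCB_USAGE_STANDARD is a
made-up placeholder for the value written to ghcb_usage:

	static void sketch_ghcb_exit(struct ghcb *ghcb, u64 exit_code,
				     u64 exit_info_1, u64 exit_info_2)
	{
		ghcb_set_sw_exit_code(ghcb, exit_code);
		ghcb_set_sw_exit_info_1(ghcb, exit_info_1);
		ghcb_set_sw_exit_info_2(ghcb, exit_info_2);

		ghcb->ghcb_usage = GHCB_USAGE_STANDARD;	/* assumed name */

		/* World switch to the hypervisor. */
		VMGEXIT();

		/*
		 * On return, sw_exit_info_1/2 carry the hypervisor's
		 * result for the emulated exit.
		 */
	}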
Hi Joerg,
> On Jul 24, 2020, at 9:02 AM, Joerg Roedel <[email protected]> wrote:
>
> From: Joerg Roedel <[email protected]>
>
> Hi,
>
> here is a rebased version of the latest SEV-ES patches. They are now
> based on latest tip/master instead of upstream Linux and include the
> necessary changes.
Thanks for the updated patches! I applied this patch-set onto commit
01634f2bd42e ("Merge branch 'x86/urgent’”) from your tree. It boots,
but CPU 1 (on a two-CPU VM) is offline at boot, and `chcpu -e 1` returns:
chcpu: CPU 1 enable failed: Input/output error
with nothing in dmesg to indicate why it failed. The first thing I thought
of was anything relating to the AP jump table, but I haven’t changed
anything there on the hypervisor side. Let me know what other data I can
provide for you.
Mike
Hi Mike,
On Thu, Jul 30, 2020 at 01:27:48AM +0000, Mike Stunes wrote:
> Thanks for the updated patches! I applied this patch-set onto commit
> 01634f2bd42e ("Merge branch 'x86/urgent'") from your tree. It boots,
> but CPU 1 (on a two-CPU VM) is offline at boot, and `chcpu -e 1` returns:
>
> chcpu: CPU 1 enable failed: Input/output error
>
> with nothing in dmesg to indicate why it failed. The first thing I thought
> of was anything relating to the AP jump table, but I haven’t changed
> anything there on the hypervisor side. Let me know what other data I can
> provide for you.
Hard to tell, have you enabled FSGSBASE in the guest? If yes, can you
try to disable it?
Regards,
Joerg
Hi Joerg,
> On Jul 30, 2020, at 5:26 AM, Joerg Roedel <[email protected]> wrote:
>
> Hi Mike,
>
> On Thu, Jul 30, 2020 at 01:27:48AM +0000, Mike Stunes wrote:
>> Thanks for the updated patches! I applied this patch-set onto commit
>> 01634f2bd42e ("Merge branch 'x86/urgent'") from your tree. It boots,
>> but CPU 1 (on a two-CPU VM) is offline at boot, and `chcpu -e 1` returns:
>>
>> chcpu: CPU 1 enable failed: Input/output error
>>
>> with nothing in dmesg to indicate why it failed. The first thing I thought
>> of was anything relating to the AP jump table, but I haven’t changed
>> anything there on the hypervisor side. Let me know what other data I can
>> provide for you.
>
> Hard to tell, have you enabled FSGSBASE in the guest? If yes, can you
> try to disable it?
Yes, FSGSBASE was enabled. If I disable it*, this kernel boots fine, with
both CPUs online.
*That is, by forcing guest-CPUID[7].EBX bit 0 to 0.
Thanks!
Mike
Hi Mike,
On Thu, Jul 30, 2020 at 11:23:50PM +0000, Mike Stunes wrote:
> Yes, FSGSBASE was enabled. If I disable it*, this kernel boots fine, with
> both CPUs online.
>
> *That is, by forcing guest-CPUID[7].EBX bit 0 to 0.
Can you please test whether
https://git.kernel.org/pub/scm/linux/kernel/git/joro/linux.git/log/?h=sev-es-client-tip-5.9
still triggers this issue on your side?
Thanks,
Joerg
Hi Joerg,
> On Aug 18, 2020, at 8:07 AM, Joerg Roedel <[email protected]> wrote:
>
> Can you please test whether
>
> https://git.kernel.org/pub/scm/linux/kernel/git/joro/linux.git/log/?h=sev-es-client-tip-5.9
>
> still triggers this issue on your side?
Yes, I still see the issue — APs are offline after boot. I’ll spend some time seeing if I can figure out what the problem is. Thanks!
Mike
Hi Mike,
On Thu, Aug 20, 2020 at 12:58:13AM +0000, Mike Stunes wrote:
> Yes, I still see the issue — APs are offline after boot. I’ll spend
> some time seeing if I can figure out what the problem is. Thanks!
Thanks. I think the first step here would be to find out where on the
APs (which RIP) the first #VC exception happens. My guess is that the
#VC entry code triggers the next exception when it tries to execute the
FSGSBASE instructions.
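One crude way to capture that RIP without relying on console output
from the AP would be to stash it in a global that the boot CPU can dump
later; purely an illustration, not code from the series:

	#include <asm/ptrace.h>

	/* Hypothetical instrumentation for the early #VC path. */
	static unsigned long ap_first_vc_rip;

	static void note_ap_vc(struct pt_regs *regs)
	{
		/* Record only the very first #VC RIP seen on an AP. */
		if (!ap_first_vc_rip)
			ap_first_vc_rip = regs->ip;
	}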
Regards,
Joerg
Hi Mike,
On Thu, Aug 20, 2020 at 12:58:13AM +0000, Mike Stunes wrote:
> Yes, I still see the issue — APs are offline after boot. I’ll spend
> some time seeing if I can figure out what the problem is. Thanks!
Tom and a few others debugged another FSGSBASE issue yesterday, which I
think might also be the cause of the AP startup problems you are seeing
(if you are testing on Rome).
Can you try to disable support for RDPID in the guest, but keep FSGSBASE
enabled?
Thanks,
Joerg
Hi Joerg,
> On Aug 21, 2020, at 1:05 AM, Joerg Roedel <[email protected]> wrote:
>
> Tom and a few others debugged another FSGSBASE issue yesterday, which I
> think might also be the cause for the AP startup problems you are
> seeing (if you test on Rome).
>
> Can you try to disable support for RDPID in the guest, but keep fsgsbase
> enabled?
Yes, that fixes the problem — I can see both CPUs running now. Thanks!
Mike
Hi Mike,
On Fri, Aug 21, 2020 at 05:42:16PM +0000, Mike Stunes wrote:
> Yes, that fixes the problem — I can see both CPUs running now. Thanks!
Thanks a lot for testing; it is good to have this issue resolved.
Regards,
Joerg