2018-05-15 12:53:41

by Dongjiu Geng

Subject: [PATCH v12 0/4] set VSESR_EL2 by user space and support NOTIFY_SEI notification

1. Detect whether KVM can set the guest SError syndrome.
2. Support setting VSESR_EL2 and injecting an SError from user space.
3. Support live migration by preserving the SError pending state and the VSESR_EL2 value
(a user-space sketch follows below).
4. ACPI 6.1 adds support for NOTIFY_SEI as a GHES notification mechanism, so support this
notification in software: KVM or kernel arch code calls handle_guest_sei() to let the APEI
driver handle this notification.
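
As a rough illustration of how user space might drive the new interface (this sketch is
not part of the series; 'vcpu_fd' is assumed to be an already-created KVM vcpu fd, and
the headers must carry the arm64 struct kvm_vcpu_events added by patch 2):

#include <linux/kvm.h>
#include <string.h>
#include <sys/ioctl.h>

/* Source side of a migration: read out any pending SError and its ESR. */
static int save_serror_state(int vcpu_fd, struct kvm_vcpu_events *ev)
{
	memset(ev, 0, sizeof(*ev));
	return ioctl(vcpu_fd, KVM_GET_VCPU_EVENTS, ev);
}

/* Destination side: re-pend the SError with the same syndrome. */
static int restore_serror_state(int vcpu_fd, struct kvm_vcpu_events *ev)
{
	return ioctl(vcpu_fd, KVM_SET_VCPU_EVENTS, ev);
}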

Changes since V11:
Address James's comments, thanks James
1. Align struct kvm_vcpu_events to 64 bytes
2. Avoid exposing a stale ESR value in kvm_arm_vcpu_get_events()
3. Rename the variable 'injected' to 'serror_pending' in kvm_arm_vcpu_set_events()
4. Use sizeof(events) instead of sizeof(struct kvm_vcpu_events) in kvm_arch_vcpu_ioctl()
5. Update the patches' commit messages and documentation


Changes since V10:
Address James's comments, thanks James
1. Merge the helper function with its caller.
2. Move the ISS mask into pend_guest_serror() to clear the top bits
3. Align the kvm_vcpu_events struct to 4 bytes
4. Add sanity checks in kvm_arm_vcpu_set_events()
5. Check kvm_arm_vcpu_get/set_events()'s return values.
6. Initialise kvm_vcpu_events to 0 so that padding transferred to user space doesn't
contain kernel stack data.


Dongjiu Geng (4):
arm64: KVM: export the capability to set guest SError syndrome
arm/arm64: KVM: Add KVM_GET/SET_VCPU_EVENTS
ACPI / APEI: Add SEI notification type support for ARMv8
arm64: handle NOTIFY_SEI notification by the APEI driver

Documentation/virtual/kvm/api.txt | 42 ++++++++++++++++++++++++++--
arch/arm/include/asm/kvm_host.h | 6 ++++
arch/arm/kvm/guest.c | 12 ++++++++
arch/arm64/include/asm/kvm_emulate.h | 5 ++++
arch/arm64/include/asm/kvm_host.h | 7 +++++
arch/arm64/include/asm/system_misc.h | 1 +
arch/arm64/include/uapi/asm/kvm.h | 13 +++++++++
arch/arm64/kernel/traps.c | 4 +++
arch/arm64/kvm/guest.c | 36 ++++++++++++++++++++++++
arch/arm64/kvm/inject_fault.c | 7 ++++-
arch/arm64/kvm/reset.c | 4 +++
arch/arm64/mm/fault.c | 10 +++++++
drivers/acpi/apei/Kconfig | 15 ++++++++++
drivers/acpi/apei/ghes.c | 53 ++++++++++++++++++++++++++++++++++++
include/acpi/ghes.h | 1 +
include/uapi/linux/kvm.h | 1 +
virt/kvm/arm/arm.c | 21 ++++++++++++++
17 files changed, 234 insertions(+), 4 deletions(-)

--
1.9.1



2018-05-15 12:54:35

by Dongjiu Geng

Subject: [PATCH v12 4/4] arm64: handle NOTIFY_SEI notification by the APEI driver

Add a helper to handle the NOTIFY_SEI notification. When the kernel
receives a NOTIFY_SEI notification, it calls this helper to let the
APEI driver handle the notification.

Signed-off-by: Dongjiu Geng <[email protected]>
---
arch/arm64/include/asm/system_misc.h | 1 +
arch/arm64/kernel/traps.c | 4 ++++
arch/arm64/mm/fault.c | 10 ++++++++++
3 files changed, 15 insertions(+)

diff --git a/arch/arm64/include/asm/system_misc.h b/arch/arm64/include/asm/system_misc.h
index 07aa8e3..9ee13ad 100644
--- a/arch/arm64/include/asm/system_misc.h
+++ b/arch/arm64/include/asm/system_misc.h
@@ -57,6 +57,7 @@ void hook_debug_fault_code(int nr, int (*fn)(unsigned long, unsigned int,
})

int handle_guest_sea(phys_addr_t addr, unsigned int esr);
+int handle_guest_sei(void);

#endif /* __ASSEMBLY__ */

diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index bbb0fde..d888eb2 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -681,6 +681,10 @@ bool arm64_is_fatal_ras_serror(struct pt_regs *regs, unsigned int esr)
{
u32 aet = arm64_ras_serror_get_severity(esr);

+ /* The APEI driver may handle this RAS error. */
+ if (!handle_guest_sei())
+ return false;
+
switch (aet) {
case ESR_ELx_AET_CE: /* corrected error */
case ESR_ELx_AET_UEO: /* restartable, not yet consumed */
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index f76bb2c..8f29bd8 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -683,6 +683,16 @@ int handle_guest_sea(phys_addr_t addr, unsigned int esr)
return ret;
}

+int handle_guest_sei(void)
+{
+ int ret = -ENOENT;
+
+ if (IS_ENABLED(CONFIG_ACPI_APEI_SEI))
+ ret = ghes_notify_sei();
+
+ return ret;
+}
+
asmlinkage void __exception do_mem_abort(unsigned long addr, unsigned int esr,
struct pt_regs *regs)
{
--
1.9.1


2018-05-15 12:54:47

by Dongjiu Geng

Subject: [PATCH v12 1/4] arm64: KVM: export the capability to set guest SError syndrome

With the arm64 RAS Extension, user space can inject a virtual SError
with a specified ESR, so user space needs to know whether KVM supports
injecting such an SError. This interface adds a query for that capability.

KVM checks whether the system supports the RAS Extension; if it does,
KVM returns true to user space, otherwise it returns false.
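
An illustrative user-space probe (not part of this patch; 'kvm_fd' is assumed to
be an open /dev/kvm file descriptor):

#include <linux/kvm.h>
#include <sys/ioctl.h>

static int can_set_serror_esr(int kvm_fd)
{
	/* KVM_CHECK_EXTENSION returns a positive value when the capability exists. */
	return ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_ARM_INJECT_SERROR_ESR) > 0;
}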

Signed-off-by: Dongjiu Geng <[email protected]>
Reviewed-by: James Morse <[email protected]>

Changes from V11:
1. Update the commit message
2. Update Documentation/virtual/kvm/api.txt
---
Documentation/virtual/kvm/api.txt | 11 +++++++++++
arch/arm64/kvm/reset.c | 3 +++
include/uapi/linux/kvm.h | 1 +
3 files changed, 15 insertions(+)

diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
index fc3ae95..66494a5 100644
--- a/Documentation/virtual/kvm/api.txt
+++ b/Documentation/virtual/kvm/api.txt
@@ -4415,3 +4415,14 @@ Parameters: none
This capability indicates if the flic device will be able to get/set the
AIS states for migration via the KVM_DEV_FLIC_AISM_ALL attribute and allows
to discover this without having to create a flic device.
+
+8.14 KVM_CAP_ARM_INJECT_SERROR_ESR
+
+Architectures: arm, arm64
+
+This capability indicates that userspace can specify the syndrome value reported
+to the guest OS when the guest takes a virtual SError interrupt exception.
+If KVM has this capability, userspace can only specify the ISS field of the ESR
+syndrome; it cannot specify the EC field, which is not under KVM's control.
+If this virtual SError is taken to EL1 using AArch64, this value will be reported
+in the ISS field of ESR_EL1.
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index 3256b92..38c8a64 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -77,6 +77,9 @@ int kvm_arch_dev_ioctl_check_extension(struct kvm *kvm, long ext)
case KVM_CAP_ARM_PMU_V3:
r = kvm_arm_support_pmu_v3();
break;
+ case KVM_CAP_ARM_INJECT_SERROR_ESR:
+ r = cpus_have_const_cap(ARM64_HAS_RAS_EXTN);
+ break;
case KVM_CAP_SET_GUEST_DEBUG:
case KVM_CAP_VCPU_ATTRIBUTES:
r = 1;
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 8fb90a0..3587b33 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -934,6 +934,7 @@ struct kvm_ppc_resize_hpt {
#define KVM_CAP_S390_AIS_MIGRATION 150
#define KVM_CAP_PPC_GET_CPU_CHAR 151
#define KVM_CAP_S390_BPB 152
+#define KVM_CAP_ARM_INJECT_SERROR_ESR 153

#ifdef KVM_CAP_IRQ_ROUTING

--
1.9.1


2018-05-15 12:55:16

by Dongjiu Geng

Subject: [PATCH v12 2/4] arm/arm64: KVM: Add KVM_GET/SET_VCPU_EVENTS

For migrating VMs, user space may need to know the exception state.
For example, if KVM has made an SError pending on machine A, then
after migration to machine B, KVM also needs to pend an SError there.

This new IOCTL exports the user-invisible state related to SError.
Together with the appropriate user space changes, user space can get/set
the SError exception state for migration, snapshot and suspend.
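
For illustration only (this helper is not part of the patch), user space could
pend an SError with a chosen ISS like this, assuming 'vcpu_fd' is an existing
vcpu fd and the host advertised KVM_CAP_ARM_INJECT_SERROR_ESR:

#include <linux/kvm.h>
#include <string.h>
#include <sys/ioctl.h>

static int pend_serror_with_esr(int vcpu_fd, __u64 iss)
{
	struct kvm_vcpu_events ev;

	memset(&ev, 0, sizeof(ev));
	ev.exception.serror_pending = 1;
	ev.exception.serror_has_esr = 1;
	ev.exception.serror_esr = iss;	/* KVM keeps only the ISS bits */

	return ioctl(vcpu_fd, KVM_SET_VCPU_EVENTS, &ev);
}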

Signed-off-by: Dongjiu Geng <[email protected]>

Changes since V11:
Address James's comments, thanks James
1. Align struct kvm_vcpu_events to 64 bytes
2. Avoid exposing a stale ESR value in kvm_arm_vcpu_get_events()
3. Rename the variable 'injected' to 'serror_pending' in kvm_arm_vcpu_set_events()
4. Use sizeof(events) instead of sizeof(struct kvm_vcpu_events) in kvm_arch_vcpu_ioctl()

Changes since V10:
Address James's comments, thanks James
1. Merge the helper function with its caller.
2. Move the ISS mask into pend_guest_serror() to clear the top bits
3. Align the kvm_vcpu_events struct to 4 bytes
4. Add sanity checks in kvm_arm_vcpu_set_events()
5. Check kvm_arm_vcpu_get/set_events()'s return values.
6. Initialise kvm_vcpu_events to 0 so that padding transferred to user space doesn't
contain kernel stack data.
---
Documentation/virtual/kvm/api.txt | 31 ++++++++++++++++++++++++++++---
arch/arm/include/asm/kvm_host.h | 6 ++++++
arch/arm/kvm/guest.c | 12 ++++++++++++
arch/arm64/include/asm/kvm_emulate.h | 5 +++++
arch/arm64/include/asm/kvm_host.h | 7 +++++++
arch/arm64/include/uapi/asm/kvm.h | 13 +++++++++++++
arch/arm64/kvm/guest.c | 36 ++++++++++++++++++++++++++++++++++++
arch/arm64/kvm/inject_fault.c | 7 ++++++-
arch/arm64/kvm/reset.c | 1 +
virt/kvm/arm/arm.c | 21 +++++++++++++++++++++
10 files changed, 135 insertions(+), 4 deletions(-)

diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
index 66494a5..36a9dc3 100644
--- a/Documentation/virtual/kvm/api.txt
+++ b/Documentation/virtual/kvm/api.txt
@@ -819,11 +819,13 @@ struct kvm_clock_data {

Capability: KVM_CAP_VCPU_EVENTS
Extended by: KVM_CAP_INTR_SHADOW
-Architectures: x86
+Architectures: x86, arm, arm64
Type: vm ioctl
Parameters: struct kvm_vcpu_event (out)
Returns: 0 on success, -1 on error

+X86:
+
Gets currently pending exceptions, interrupts, and NMIs as well as related
states of the vcpu.

@@ -865,15 +867,32 @@ Only two fields are defined in the flags field:
- KVM_VCPUEVENT_VALID_SMM may be set in the flags field to signal that
smi contains a valid state.

+ARM, ARM64:
+
+Gets currently pending SError exceptions as well as related states of the vcpu.
+
+struct kvm_vcpu_events {
+ struct {
+ __u8 serror_pending;
+ __u8 serror_has_esr;
+ /* Align it to 8 bytes */
+ __u8 pad[6];
+ __u64 serror_esr;
+ } exception;
+ __u32 reserved[12];
+};
+
4.32 KVM_SET_VCPU_EVENTS

-Capability: KVM_CAP_VCPU_EVENTS
+Capability: KVM_CAP_VCPU_EVENTS
Extended by: KVM_CAP_INTR_SHADOW
-Architectures: x86
+Architectures: x86, arm, arm64
Type: vm ioctl
Parameters: struct kvm_vcpu_event (in)
Returns: 0 on success, -1 on error

+X86:
+
Set pending exceptions, interrupts, and NMIs as well as related states of the
vcpu.

@@ -894,6 +913,12 @@ shall be written into the VCPU.

KVM_VCPUEVENT_VALID_SMM can only be set if KVM_CAP_X86_SMM is available.

+ARM, ARM64:
+
+Set pending SError exceptions as well as related states of the vcpu.
+
+See KVM_GET_VCPU_EVENTS for the data structure.
+

4.33 KVM_GET_DEBUGREGS

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index ef54013..d81621e 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -211,6 +211,12 @@ struct kvm_vcpu_stat {
int kvm_arm_copy_reg_indices(struct kvm_vcpu *vcpu, u64 __user *indices);
int kvm_arm_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg);
int kvm_arm_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg);
+int kvm_arm_vcpu_get_events(struct kvm_vcpu *vcpu,
+ struct kvm_vcpu_events *events);
+
+int kvm_arm_vcpu_set_events(struct kvm_vcpu *vcpu,
+ struct kvm_vcpu_events *events);
+
unsigned long kvm_call_hyp(void *hypfn, ...);
void force_vm_exit(const cpumask_t *mask);

diff --git a/arch/arm/kvm/guest.c b/arch/arm/kvm/guest.c
index 1e0784e..39f895d 100644
--- a/arch/arm/kvm/guest.c
+++ b/arch/arm/kvm/guest.c
@@ -248,6 +248,18 @@ int kvm_arch_vcpu_ioctl_set_sregs(struct kvm_vcpu *vcpu,
return -EINVAL;
}

+int kvm_arm_vcpu_get_events(struct kvm_vcpu *vcpu,
+ struct kvm_vcpu_events *events)
+{
+ return -EINVAL;
+}
+
+int kvm_arm_vcpu_set_events(struct kvm_vcpu *vcpu,
+ struct kvm_vcpu_events *events)
+{
+ return -EINVAL;
+}
+
int __attribute_const__ kvm_target_cpu(void)
{
switch (read_cpuid_part()) {
diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index 413dc82..3294885 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -71,6 +71,11 @@ static inline void vcpu_set_hcr(struct kvm_vcpu *vcpu, unsigned long hcr)
vcpu->arch.hcr_el2 = hcr;
}

+static inline unsigned long vcpu_get_vsesr(struct kvm_vcpu *vcpu)
+{
+ return vcpu->arch.vsesr_el2;
+}
+
static inline void vcpu_set_vsesr(struct kvm_vcpu *vcpu, u64 vsesr)
{
vcpu->arch.vsesr_el2 = vsesr;
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index a73f63a..1125540 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -326,6 +326,11 @@ struct kvm_vcpu_stat {
int kvm_arm_copy_reg_indices(struct kvm_vcpu *vcpu, u64 __user *indices);
int kvm_arm_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg);
int kvm_arm_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg);
+int kvm_arm_vcpu_get_events(struct kvm_vcpu *vcpu,
+ struct kvm_vcpu_events *events);
+
+int kvm_arm_vcpu_set_events(struct kvm_vcpu *vcpu,
+ struct kvm_vcpu_events *events);

#define KVM_ARCH_WANT_MMU_NOTIFIER
int kvm_unmap_hva(struct kvm *kvm, unsigned long hva);
@@ -354,6 +359,8 @@ void handle_exit_early(struct kvm_vcpu *vcpu, struct kvm_run *run,
int kvm_perf_init(void);
int kvm_perf_teardown(void);

+void kvm_set_sei_esr(struct kvm_vcpu *vcpu, u64 syndrome);
+
struct kvm_vcpu *kvm_mpidr_to_vcpu(struct kvm *kvm, unsigned long mpidr);

static inline void __cpu_init_hyp_mode(phys_addr_t pgd_ptr,
diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
index 9abbf30..d815b67 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -39,6 +39,7 @@
#define __KVM_HAVE_GUEST_DEBUG
#define __KVM_HAVE_IRQ_LINE
#define __KVM_HAVE_READONLY_MEM
+#define __KVM_HAVE_VCPU_EVENTS

#define KVM_COALESCED_MMIO_PAGE_OFFSET 1

@@ -153,6 +154,18 @@ struct kvm_sync_regs {
struct kvm_arch_memory_slot {
};

+/* for KVM_GET/SET_VCPU_EVENTS */
+struct kvm_vcpu_events {
+ struct {
+ __u8 serror_pending;
+ __u8 serror_has_esr;
+ /* Align it to 8 bytes */
+ __u8 pad[6];
+ __u64 serror_esr;
+ } exception;
+ __u32 reserved[12];
+};
+
/* If you need to interpret the index values, here is the key: */
#define KVM_REG_ARM_COPROC_MASK 0x000000000FFF0000
#define KVM_REG_ARM_COPROC_SHIFT 16
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 5c7f657..f9b7e8f 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -277,6 +277,42 @@ int kvm_arch_vcpu_ioctl_set_sregs(struct kvm_vcpu *vcpu,
return -EINVAL;
}

+int kvm_arm_vcpu_get_events(struct kvm_vcpu *vcpu,
+ struct kvm_vcpu_events *events)
+{
+ events->exception.serror_pending = (vcpu_get_hcr(vcpu) & HCR_VSE);
+ events->exception.serror_has_esr =
+ cpus_have_const_cap(ARM64_HAS_RAS_EXTN) &&
+ (!!vcpu_get_vsesr(vcpu));
+
+ if (events->exception.serror_pending &&
+ events->exception.serror_has_esr)
+ events->exception.serror_esr = vcpu_get_vsesr(vcpu);
+ else
+ events->exception.serror_esr = 0;
+
+ return 0;
+}
+
+int kvm_arm_vcpu_set_events(struct kvm_vcpu *vcpu,
+ struct kvm_vcpu_events *events)
+{
+ bool serror_pending = events->exception.serror_pending;
+ bool has_esr = events->exception.serror_has_esr;
+
+ if (serror_pending && has_esr) {
+ if (!cpus_have_const_cap(ARM64_HAS_RAS_EXTN))
+ return -EINVAL;
+
+ kvm_set_sei_esr(vcpu, events->exception.serror_esr);
+
+ } else if (serror_pending) {
+ kvm_inject_vabt(vcpu);
+ }
+
+ return 0;
+}
+
int __attribute_const__ kvm_target_cpu(void)
{
unsigned long implementor = read_cpuid_implementor();
diff --git a/arch/arm64/kvm/inject_fault.c b/arch/arm64/kvm/inject_fault.c
index 60666a0..aa0358a 100644
--- a/arch/arm64/kvm/inject_fault.c
+++ b/arch/arm64/kvm/inject_fault.c
@@ -166,7 +166,7 @@ void kvm_inject_undefined(struct kvm_vcpu *vcpu)

static void pend_guest_serror(struct kvm_vcpu *vcpu, u64 esr)
{
- vcpu_set_vsesr(vcpu, esr);
+ vcpu_set_vsesr(vcpu, esr & ESR_ELx_ISS_MASK);
vcpu_set_hcr(vcpu, vcpu_get_hcr(vcpu) | HCR_VSE);
}

@@ -186,3 +186,8 @@ void kvm_inject_vabt(struct kvm_vcpu *vcpu)
{
pend_guest_serror(vcpu, ESR_ELx_ISV);
}
+
+void kvm_set_sei_esr(struct kvm_vcpu *vcpu, u64 syndrome)
+{
+ pend_guest_serror(vcpu, syndrome);
+}
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index 38c8a64..20e919a 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -82,6 +82,7 @@ int kvm_arch_dev_ioctl_check_extension(struct kvm *kvm, long ext)
break;
case KVM_CAP_SET_GUEST_DEBUG:
case KVM_CAP_VCPU_ATTRIBUTES:
+ case KVM_CAP_VCPU_EVENTS:
r = 1;
break;
default:
diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index 7e3941f..e466f5d 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -1051,6 +1051,27 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
return -EFAULT;
return kvm_arm_vcpu_has_attr(vcpu, &attr);
}
+ case KVM_GET_VCPU_EVENTS: {
+ struct kvm_vcpu_events events;
+
+ memset(&events, 0, sizeof(events));
+ if (kvm_arm_vcpu_get_events(vcpu, &events))
+ return -EINVAL;
+
+ if (copy_to_user(argp, &events, sizeof(events)))
+ return -EFAULT;
+
+ return 0;
+ }
+ case KVM_SET_VCPU_EVENTS: {
+ struct kvm_vcpu_events events;
+
+ if (copy_from_user(&events, argp,
+ sizeof(struct kvm_vcpu_events)))
+ return -EFAULT;
+
+ return kvm_arm_vcpu_set_events(vcpu, &events);
+ }
default:
return -EINVAL;
}
--
1.9.1


2018-05-15 12:56:59

by Dongjiu Geng

Subject: [PATCH v12 3/4] ACPI / APEI: Add SEI notification type support for ARMv8

ACPI 6.x adds support for NOTIFY_SEI as a GHES notification
mechanism, so add new GHES notification handling functions.
Expose the API ghes_notify_sei() to arch code; arch code will call
this API when it receives a NOTIFY_SEI notification.

Signed-off-by: Dongjiu Geng <[email protected]>

Note:
Firmware follows the SError masking rule: if the SError is masked,
firmware will not deliver the NOTIFY_SEI notification.
---
drivers/acpi/apei/Kconfig | 15 ++++++++++++++
drivers/acpi/apei/ghes.c | 53 +++++++++++++++++++++++++++++++++++++++++++++++
include/acpi/ghes.h | 1 +
3 files changed, 69 insertions(+)

diff --git a/drivers/acpi/apei/Kconfig b/drivers/acpi/apei/Kconfig
index 52ae543..ff4afc3 100644
--- a/drivers/acpi/apei/Kconfig
+++ b/drivers/acpi/apei/Kconfig
@@ -55,6 +55,21 @@ config ACPI_APEI_SEA
option allows the OS to look for such hardware error record, and
take appropriate action.

+config ACPI_APEI_SEI
+ bool "APEI SError(System Error) Interrupt logging/recovering support"
+ depends on ARM64 && ACPI_APEI_GHES
+ default y
+ help
+ This option should be enabled if the system supports
+ firmware first handling of SEI (SError interrupt).
+
+ SEI happens with asynchronous external abort for errors on device
+ memory reads on ARMv8 systems. If a system supports firmware first
+ handling of SEI, the platform analyzes and handles hardware error
+ notifications from SEI, and it may then form a hardware error record for
+ the OS to parse and handle. This option allows the OS to look for
+ such hardware error record, and take appropriate action.
+
config ACPI_APEI_MEMORY_FAILURE
bool "APEI memory error recovering support"
depends on ACPI_APEI && MEMORY_FAILURE
diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
index 1efefe9..33f77ae 100644
--- a/drivers/acpi/apei/ghes.c
+++ b/drivers/acpi/apei/ghes.c
@@ -827,6 +827,46 @@ static inline void ghes_sea_add(struct ghes *ghes) { }
static inline void ghes_sea_remove(struct ghes *ghes) { }
#endif /* CONFIG_ACPI_APEI_SEA */

+#ifdef CONFIG_ACPI_APEI_SEI
+static LIST_HEAD(ghes_sei);
+
+/*
+ * Return 0 only if one of the SEI error sources successfully reported an error
+ * record sent from the firmware.
+ */
+int ghes_notify_sei(void)
+{
+ struct ghes *ghes;
+ int ret = -ENOENT;
+
+ rcu_read_lock();
+ list_for_each_entry_rcu(ghes, &ghes_sei, list) {
+ if (!ghes_proc(ghes))
+ ret = 0;
+ }
+ rcu_read_unlock();
+ return ret;
+}
+
+static void ghes_sei_add(struct ghes *ghes)
+{
+ mutex_lock(&ghes_list_mutex);
+ list_add_rcu(&ghes->list, &ghes_sei);
+ mutex_unlock(&ghes_list_mutex);
+}
+
+static void ghes_sei_remove(struct ghes *ghes)
+{
+ mutex_lock(&ghes_list_mutex);
+ list_del_rcu(&ghes->list);
+ mutex_unlock(&ghes_list_mutex);
+ synchronize_rcu();
+}
+#else /* CONFIG_ACPI_APEI_SEI */
+static inline void ghes_sei_add(struct ghes *ghes) { }
+static inline void ghes_sei_remove(struct ghes *ghes) { }
+#endif /* CONFIG_ACPI_APEI_SEI */
+
#ifdef CONFIG_HAVE_ACPI_APEI_NMI
/*
* printk is not safe in NMI context. So in NMI handler, we allocate
@@ -1055,6 +1095,13 @@ static int ghes_probe(struct platform_device *ghes_dev)
goto err;
}
break;
+ case ACPI_HEST_NOTIFY_SEI:
+ if (!IS_ENABLED(CONFIG_ACPI_APEI_SEI)) {
+ pr_warn(GHES_PFX "Generic hardware error source: %d notified via SEI is not supported!\n",
+ generic->header.source_id);
+ goto err;
+ }
+ break;
case ACPI_HEST_NOTIFY_NMI:
if (!IS_ENABLED(CONFIG_HAVE_ACPI_APEI_NMI)) {
pr_warn(GHES_PFX "Generic hardware error source: %d notified via NMI interrupt is not supported!\n",
@@ -1126,6 +1173,9 @@ static int ghes_probe(struct platform_device *ghes_dev)
case ACPI_HEST_NOTIFY_SEA:
ghes_sea_add(ghes);
break;
+ case ACPI_HEST_NOTIFY_SEI:
+ ghes_sei_add(ghes);
+ break;
case ACPI_HEST_NOTIFY_NMI:
ghes_nmi_add(ghes);
break;
@@ -1179,6 +1229,9 @@ static int ghes_remove(struct platform_device *ghes_dev)
case ACPI_HEST_NOTIFY_SEA:
ghes_sea_remove(ghes);
break;
+ case ACPI_HEST_NOTIFY_SEI:
+ ghes_sei_remove(ghes);
+ break;
case ACPI_HEST_NOTIFY_NMI:
ghes_nmi_remove(ghes);
break;
diff --git a/include/acpi/ghes.h b/include/acpi/ghes.h
index 8feb0c8..9ba59e2 100644
--- a/include/acpi/ghes.h
+++ b/include/acpi/ghes.h
@@ -120,5 +120,6 @@ static inline void *acpi_hest_get_next(struct acpi_hest_generic_data *gdata)
section = acpi_hest_get_next(section))

int ghes_notify_sea(void);
+int ghes_notify_sei(void);

#endif /* GHES_H */
--
1.9.1