From: Lv Ying <lvying6@huawei.com>
Subject: [RFC PATCH v2 1/1] ACPI: APEI: Make memory_failure() triggered by synchronous errors execute in the current context
Date: Fri, 9 Dec 2022 17:54:07 +0800
Message-ID: <20221209095407.383211-2-lvying6@huawei.com>
In-Reply-To: <20221209095407.383211-1-lvying6@huawei.com>
References: <20221209095407.383211-1-lvying6@huawei.com>
X-Mailer: git-send-email 2.36.1
X-Mailing-List: linux-kernel@vger.kernel.org

An uncorrected memory error that is detected by an external component and
notified via an IRQ can be called an asynchronous error. If an error is
detected as a result of a user-space process accessing a corrupt memory
location, the CPU may take an abort. On arm64 this is a 'synchronous
external abort', and on a firmware-first system it is notified via
NOTIFY_SEA; this can be called a synchronous error.

Currently, synchronous and asynchronous errors both use
memory_failure_queue() to schedule memory_failure() to execute in kworker
context. Commit 7f17b4a121d0 ("ACPI: APEI: Kick the memory_failure() queue
for synchronous errors") made a task_work pending to flush out the queue,
but cancel_work_sync() in memory_failure_queue_kick() lets memory_failure()
execute in kworker context first, where it consumes the synchronous error
info from the kfifo; the task_work that runs later then finds nothing in
the kfifo, which does not work as expected.
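To make the failure mode concrete, the pre-patch flow is roughly the
following (condensed from the code this patch changes; bodies are
abbreviated, so this is an illustrative sketch rather than the literal
kernel source):

/* Notification path (IRQ or SEA): queue the error, kick a kworker. */
void memory_failure_queue(unsigned long pfn, int flags)
{
	/* ... */
	if (kfifo_put(&mf_cpu->fifo, entry))
		schedule_work_on(smp_processor_id(), &mf_cpu->work);
	/* ... */
}

/* task_work path, run just before returning to user-space (SEA only). */
void memory_failure_queue_kick(int cpu)
{
	/* ... */
	/*
	 * If the kworker has already started, cancel_work_sync() waits for
	 * it to finish; that kworker drains the kfifo first, so the direct
	 * call below finds the queue empty, or picks up an entry queued by
	 * an unrelated, IRQ-notified asynchronous error.
	 */
	cancel_work_sync(&mf_cpu->work);
	memory_failure_work_func(&mf_cpu->work);
}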
Even worse, synchronous error notification has NMI-like properties (it can
interrupt IRQ-masked code), so the task_work may pick up the wrong kfifo
entry, one queued by an interrupted asynchronous error that was notified
via IRQ.

Because memory_failure() triggered by a synchronous exception currently
runs in kworker context, the early_kill mode of memory_failure() sends the
wrong si_code with the SIGBUS signal: the current process is a kworker
thread, so the actual user-space process accessing the corrupt memory
location is collected by find_early_kill_thread() and is then sent SIGBUS
with si_code BUS_MCEERR_AO instead of BUS_MCEERR_AR. The machine manager
(kvm) uses si_code BUS_MCEERR_AO for 'action optional' early notifications
and BUS_MCEERR_AR for 'action required' synchronous/late notifications.

Make memory_failure() triggered by synchronous errors execute in the
current context. The workqueue is then no longer needed for synchronous
errors; use task_work to handle them directly. Since synchronous and
asynchronous errors share the same kfifo, use the MF_ACTION_REQUIRED flag
to distinguish them. Asynchronous error handling stays the same as before.

It is currently hard to tell synchronous errors apart in APEI in general.
What is certain is that errors reported via SEA are synchronous, so for now
only errors reported by SEA are distinguished and handled in the current
context.

Signed-off-by: Lv Ying <lvying6@huawei.com>
---
 drivers/acpi/apei/ghes.c | 20 ++++++++++--------
 mm/memory-failure.c      | 50 ++++++++++++++++++++++++++++++-----------
 2 files changed, 48 insertions(+), 22 deletions(-)

diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
index 9952f3a792ba..19d62ec2177f 100644
--- a/drivers/acpi/apei/ghes.c
+++ b/drivers/acpi/apei/ghes.c
@@ -423,8 +423,8 @@ static void ghes_clear_estatus(struct ghes *ghes,
 
 /*
  * Called as task_work before returning to user-space.
- * Ensure any queued work has been done before we return to the context that
- * triggered the notification.
+ * Ensure any queued corrupt page for synchronous errors has been handled
+ * before we return to the user context that triggered the notification.
  */
 static void ghes_kick_task_work(struct callback_head *head)
 {
@@ -461,7 +461,7 @@ static bool ghes_do_memory_failure(u64 physical_addr, int flags)
 }
 
 static bool ghes_handle_memory_failure(struct acpi_hest_generic_data *gdata,
-				       int sev)
+				       int sev, int notify_type)
 {
 	int flags = -1;
 	int sec_sev = ghes_severity(gdata->error_severity);
@@ -475,7 +475,7 @@ static bool ghes_handle_memory_failure(struct acpi_hest_generic_data *gdata,
 	    (gdata->flags & CPER_SEC_ERROR_THRESHOLD_EXCEEDED))
 		flags = MF_SOFT_OFFLINE;
 	if (sev == GHES_SEV_RECOVERABLE && sec_sev == GHES_SEV_RECOVERABLE)
-		flags = 0;
+		flags = (notify_type == ACPI_HEST_NOTIFY_SEA) ? MF_ACTION_REQUIRED : 0;
 
 	if (flags != -1)
 		return ghes_do_memory_failure(mem_err->physical_addr, flags);
@@ -483,7 +483,8 @@ static bool ghes_handle_memory_failure(struct acpi_hest_generic_data *gdata,
 	return false;
 }
 
-static bool ghes_handle_arm_hw_error(struct acpi_hest_generic_data *gdata, int sev)
+static bool ghes_handle_arm_hw_error(struct acpi_hest_generic_data *gdata, int sev,
+				     int notify_type)
 {
 	struct cper_sec_proc_arm *err = acpi_hest_get_payload(gdata);
 	bool queued = false;
@@ -510,7 +511,9 @@ static bool ghes_handle_arm_hw_error(struct acpi_hest_generic_data *gdata, int s
 		 * and don't filter out 'corrected' error here.
		 */
 		if (is_cache && has_pa) {
-			queued = ghes_do_memory_failure(err_info->physical_fault_addr, 0);
+			queued = ghes_do_memory_failure(err_info->physical_fault_addr,
+							(notify_type == ACPI_HEST_NOTIFY_SEA) ?
+							MF_ACTION_REQUIRED : 0);
 			p += err_info->length;
 			continue;
 		}
@@ -631,6 +634,7 @@ static bool ghes_do_proc(struct ghes *ghes,
 	const guid_t *fru_id = &guid_null;
 	char *fru_text = "";
 	bool queued = false;
+	int notify_type = ghes->generic->notify.type;
 
 	sev = ghes_severity(estatus->error_severity);
 	apei_estatus_for_each_section(estatus, gdata) {
@@ -648,13 +652,13 @@ static bool ghes_do_proc(struct ghes *ghes,
 			ghes_edac_report_mem_error(sev, mem_err);
 
 			arch_apei_report_mem_error(sev, mem_err);
-			queued = ghes_handle_memory_failure(gdata, sev);
+			queued = ghes_handle_memory_failure(gdata, sev, notify_type);
 		}
 		else if (guid_equal(sec_type, &CPER_SEC_PCIE)) {
 			ghes_handle_aer(gdata);
 		}
 		else if (guid_equal(sec_type, &CPER_SEC_PROC_ARM)) {
-			queued = ghes_handle_arm_hw_error(gdata, sev);
+			queued = ghes_handle_arm_hw_error(gdata, sev, notify_type);
 		}
 		else {
 			void *err = acpi_hest_get_payload(gdata);
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index bead6bccc7f2..82238ec86acd 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -2204,7 +2204,11 @@ struct memory_failure_cpu {
 static DEFINE_PER_CPU(struct memory_failure_cpu, memory_failure_cpu);
 
 /**
- * memory_failure_queue - Schedule handling memory failure of a page.
+ * memory_failure_queue - Queue the handling of a memory failure of a page.
+ * - For an asynchronous error, schedule memory_failure() to run on the page
+ *   in a kworker thread.
+ * - For a synchronous error, only put the corrupt memory info into the
+ *   kfifo; task_work will handle it before returning to user-space.
  * @pfn: Page Number of the corrupted page
  * @flags: Flags for memory failure handling
  *
@@ -2217,6 +2221,11 @@ static DEFINE_PER_CPU(struct memory_failure_cpu, memory_failure_cpu);
  * happen outside the current execution context (e.g. when
  * detected by a background scrubber)
  *
+ * This function can also be used for synchronous errors, which are detected
+ * as a result of user-space accessing a corrupt memory location: it only puts
+ * the memory error info into the kfifo, and task_work then gets and handles
+ * it in the current execution context instead of a scheduled kworker.
+ *
  * Can run in IRQ context.
 */
 void memory_failure_queue(unsigned long pfn, int flags)
@@ -2230,9 +2239,10 @@ void memory_failure_queue(unsigned long pfn, int flags)
 
 	mf_cpu = &get_cpu_var(memory_failure_cpu);
 	spin_lock_irqsave(&mf_cpu->lock, proc_flags);
-	if (kfifo_put(&mf_cpu->fifo, entry))
-		schedule_work_on(smp_processor_id(), &mf_cpu->work);
-	else
+	if (kfifo_put(&mf_cpu->fifo, entry)) {
+		if (!(entry.flags & MF_ACTION_REQUIRED))
+			schedule_work_on(smp_processor_id(), &mf_cpu->work);
+	} else
 		pr_err("buffer overflow when queuing memory failure at %#lx\n",
 		       pfn);
 	spin_unlock_irqrestore(&mf_cpu->lock, proc_flags);
@@ -2240,12 +2250,15 @@ void memory_failure_queue(unsigned long pfn, int flags)
 }
 EXPORT_SYMBOL_GPL(memory_failure_queue);
 
-static void memory_failure_work_func(struct work_struct *work)
+/*
+ * (a)synchronous error info should be consumed by the corresponding handler
+ */
+static void __memory_failure_work_func(struct work_struct *work, bool sync)
 {
 	struct memory_failure_cpu *mf_cpu;
 	struct memory_failure_entry entry = { 0, };
 	unsigned long proc_flags;
-	int gotten;
+	int gotten, ret;
 
 	mf_cpu = container_of(work, struct memory_failure_cpu, work);
 	for (;;) {
@@ -2256,22 +2269,31 @@ static void memory_failure_work_func(struct work_struct *work)
 			break;
 		if (entry.flags & MF_SOFT_OFFLINE)
 			soft_offline_page(entry.pfn, entry.flags);
-		else
-			memory_failure(entry.pfn, entry.flags);
+		else {
+			if (sync && (entry.flags & MF_ACTION_REQUIRED)) {
+				ret = memory_failure(entry.pfn, entry.flags);
+				if (ret == -EHWPOISON || ret == -EOPNOTSUPP)
+					return;
+
+				pr_err("Memory error not recovered");
+				force_sig(SIGBUS);
+			} else if (!sync && !(entry.flags & MF_ACTION_REQUIRED))
+				memory_failure(entry.pfn, entry.flags);
+		}
 	}
 }
 
-/*
- * Process memory_failure work queued on the specified CPU.
- * Used to avoid return-to-userspace racing with the memory_failure workqueue.
- */
+static void memory_failure_work_func(struct work_struct *work)
+{
+	__memory_failure_work_func(work, false);
+}
+
 void memory_failure_queue_kick(int cpu)
 {
 	struct memory_failure_cpu *mf_cpu;
 
 	mf_cpu = &per_cpu(memory_failure_cpu, cpu);
-	cancel_work_sync(&mf_cpu->work);
-	memory_failure_work_func(&mf_cpu->work);
+	__memory_failure_work_func(&mf_cpu->work, true);
 }
 
 static int __init memory_failure_init(void)
-- 
2.36.1
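For context, and not as part of the patch: a user-space consumer such as a
VMM tells the two SIGBUS notifications mentioned above apart via si_code.
A minimal illustrative sketch of such a handler (a standalone example
program, names invented for illustration):

#define _GNU_SOURCE	/* for BUS_MCEERR_AR / BUS_MCEERR_AO */
#include <signal.h>
#include <string.h>

/* Sketch of a user-space SIGBUS handler distinguishing the si_code
 * values discussed in the commit message. */
static void sigbus_handler(int sig, siginfo_t *si, void *ctx)
{
	if (si->si_code == BUS_MCEERR_AR) {
		/* 'Action required': this thread accessed the poisoned page
		 * at si->si_addr and must not retry the access; a VMM would
		 * typically inject a synchronous error into the guest. */
	} else if (si->si_code == BUS_MCEERR_AO) {
		/* 'Action optional': the error was found asynchronously
		 * (e.g. by a scrubber); handling may be deferred. */
	}
}

int main(void)
{
	struct sigaction sa;

	memset(&sa, 0, sizeof(sa));
	sa.sa_sigaction = sigbus_handler;
	sa.sa_flags = SA_SIGINFO;
	sigemptyset(&sa.sa_mask);
	sigaction(SIGBUS, &sa, NULL);
	/* ... touch memory, run guest, etc. ... */
	return 0;
}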