Message-ID: <91ec9413-8045-428e-d7e6-9327d63685d1@huawei.com>
Date: Fri, 9 Dec 2022 09:56:09 +0800
From: Kefeng Wang <wangkefeng.wang@huawei.com>
Subject: Re: [PATCH] mm: hwpoison: support recovery from ksm_might_need_to_copy()
In-Reply-To: <20221209021041.192835-1-wangkefeng.wang@huawei.com>
References: <20221209021041.192835-1-wangkefeng.wang@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Sorry, please ignore it, will resend.

On 2022/12/9 10:10, Kefeng Wang wrote:
> When the kernel copies a page in ksm_might_need_to_copy() but runs
> into an uncorrectable memory error, it will crash, since the poisoned
> page is consumed by the kernel. This is similar to copy-on-write
> poison recovery: when an error is detected during the page copy,
> return VM_FAULT_HWPOISON, which lets us avoid a system crash. Note
> that memory failure on a KSM page will be skipped, but
> memory_failure_queue() is still called, to stay consistent with the
> general memory-failure process.
>
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
> ---
>  mm/ksm.c      | 8 ++++++--
>  mm/memory.c   | 3 +++
>  mm/swapfile.c | 2 +-
>  3 files changed, 10 insertions(+), 3 deletions(-)
>
> diff --git a/mm/ksm.c b/mm/ksm.c
> index f1e06b1d47f3..356e93b85287 100644
> --- a/mm/ksm.c
> +++ b/mm/ksm.c
> @@ -2629,8 +2629,12 @@ struct page *ksm_might_need_to_copy(struct page *page,
>  		new_page = NULL;
>  	}
>  	if (new_page) {
> -		copy_user_highpage(new_page, page, address, vma);
> -
> +		if (copy_mc_user_highpage(new_page, page, address, vma)) {
> +			put_page(new_page);
> +			new_page = ERR_PTR(-EHWPOISON);
> +			memory_failure_queue(page_to_pfn(page), 0);
> +			return new_page;
> +		}
>  		SetPageDirty(new_page);
>  		__SetPageUptodate(new_page);
>  		__SetPageLocked(new_page);
> diff --git a/mm/memory.c b/mm/memory.c
> index 2615fa615be4..bb7b35e42297 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -3840,6 +3840,9 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  			if (unlikely(!page)) {
>  				ret = VM_FAULT_OOM;
>  				goto out_page;
> +			} else if (unlikely(PTR_ERR(page) == -EHWPOISON)) {
> +				ret = VM_FAULT_HWPOISON;
> +				goto out_page;
>  			}
>  			folio = page_folio(page);
>
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index f670ffb7df7e..763ff6a8a576 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -1767,7 +1767,7 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
>
>  	swapcache = page;
>  	page = ksm_might_need_to_copy(page, vma, addr);
> -	if (unlikely(!page))
> +	if (IS_ERR_OR_NULL(!page))

should be IS_ERR_OR_NULL(page)

>  		return -ENOMEM;
>
>  	pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
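
For anyone following along, the contract change is the important part:
ksm_might_need_to_copy() previously returned either a usable page or
NULL, and with this patch it can also return ERR_PTR(-EHWPOISON), so
callers need an IS_ERR_OR_NULL() test plus a PTR_ERR() check instead
of a plain NULL test (hence the bug noted above). Below is a minimal,
self-contained userspace sketch of that convention, nothing more; the
ERR_PTR()/PTR_ERR()/IS_ERR_OR_NULL() definitions are simplified
stand-ins for the kernel's <linux/err.h> helpers, and
might_need_to_copy() is a hypothetical mock, not the real KSM code:

#include <stdio.h>

#define MAX_ERRNO	4095
#define EHWPOISON	133	/* Linux: "Memory page has hardware error" */

/* Simplified stand-ins for the kernel's <linux/err.h> helpers:
 * error codes are encoded in the top MAX_ERRNO values of the
 * pointer range, which no valid page pointer can occupy. */
static inline void *ERR_PTR(long error)
{
	return (void *)error;
}

static inline long PTR_ERR(const void *ptr)
{
	return (long)ptr;
}

static inline int IS_ERR_OR_NULL(const void *ptr)
{
	return !ptr || (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

struct page {
	int poisoned;
};

/* Hypothetical mock of the new contract: a usable page on success,
 * NULL when allocation fails, ERR_PTR(-EHWPOISON) when the copy hits
 * an uncorrectable error (the copy_mc_user_highpage() failure path). */
static struct page *might_need_to_copy(struct page *page)
{
	if (!page)
		return NULL;		/* models allocation failure */
	if (page->poisoned)
		return ERR_PTR(-EHWPOISON);
	return page;
}

int main(void)
{
	struct page good = { .poisoned = 0 };
	struct page bad = { .poisoned = 1 };
	struct page *inputs[] = { &good, &bad, NULL };

	for (int i = 0; i < 3; i++) {
		struct page *page = might_need_to_copy(inputs[i]);

		/* Mirrors the corrected unuse_pte()/do_swap_page() checks:
		 * a plain !page test would miss the ERR_PTR() case, and the
		 * error pointer would be dereferenced later. */
		if (IS_ERR_OR_NULL(page)) {
			if (page && PTR_ERR(page) == -EHWPOISON)
				printf("input %d: VM_FAULT_HWPOISON\n", i);
			else
				printf("input %d: -ENOMEM\n", i);
			continue;
		}
		printf("input %d: page ok\n", i);
	}
	return 0;
}

With the do_swap_page() hunk above, the -EHWPOISON error pointer is
turned into VM_FAULT_HWPOISON, so the faulting task is signaled rather
than the kernel consuming the poisoned data during the copy.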