Subject: Re: [PATCH 1/1] mm: memory-failure: Re-split hw-poisoned huge page on -EAGAIN
To: Qiuxu Zhuo
CC: HORIGUCHI NAOYA
References: <20231215081204.8802-1-qiuxu.zhuo@intel.com>
From: Miaohe Lin
Message-ID: <81eebf23-fce3-3bb3-857d-8aab5a75d788@huawei.com>
Date: Tue, 19 Dec 2023 19:50:46 +0800
In-Reply-To: <20231215081204.8802-1-qiuxu.zhuo@intel.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On 2023/12/15 16:12, Qiuxu Zhuo wrote:
> During the process of splitting a hw-poisoned huge page, it is possible
> for the reference count of the huge page to be increased by the threads
> within the affected process, leading to a failure in splitting the
> hw-poisoned huge page with an error code of -EAGAIN.
> 
> This issue can be reproduced when doing memory error injection to a
> multiple-thread process, and the error occurs within a huge page.
> The call path with the returned -EAGAIN during the testing is shown below:
> 
>   memory_failure()
>     try_to_split_thp_page()
>       split_huge_page()
>         split_huge_page_to_list() {
>           ...
>           Step A: can_split_folio()  - Checked that the thp can be split.
>           Step B: unmap_folio()
>           Step C: folio_ref_freeze() - Failed and returned -EAGAIN.
>           ...
>         }
> 
> The testing logs indicated that some huge pages were split successfully
> via the call path above (Step C was successful for these huge pages).
> However, some huge pages failed to split due to a failure at Step C, and
> it was observed that the reference count of the huge page increased between
> Step A and Step C.
> 
> Testing has shown that after receiving -EAGAIN, simply re-splitting the
> hw-poisoned huge page within memory_failure() always results in the same
> -EAGAIN. This is possible because memory_failure() is executed in the
> currently affected process. Before this process exits memory_failure() and
> is terminated, its threads could increase the reference count of the
> hw-poisoned page.
> 
> To address this issue, employ the kernel worker to re-split the hw-poisoned
> huge page. By the time this worker begins re-splitting the hw-poisoned huge
> page, the affected process has already been terminated, preventing its
> threads from increasing the reference count. Experimental results have
> consistently shown that this worker successfully re-splits these
> hw-poisoned huge pages on its first attempt.
> 
> The kernel log (before):
> [ 1116.862895] Memory failure: 0x4097fa7: recovery action for unsplit thp: Ignored
> 
> The kernel log (after):
> [ 793.573536] Memory failure: 0x2100dda: recovery action for unsplit thp: Delayed
> [ 793.574666] Memory failure: 0x2100dda: split unsplit thp successfully.
> 
> Signed-off-by: Qiuxu Zhuo

Thanks for your patch. Apart from the comment from Naoya, I have some
questions about the code itself.

> ---
>  mm/memory-failure.c | 73 +++++++++++++++++++++++++++++++++++++++++++--
>  1 file changed, 71 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> index 660c21859118..0db4cf712a78 100644
> --- a/mm/memory-failure.c
> +++ b/mm/memory-failure.c
> @@ -72,6 +72,60 @@ atomic_long_t num_poisoned_pages __read_mostly = ATOMIC_LONG_INIT(0);
> 
>  static bool hw_memory_failure __read_mostly = false;
> 
> +#define SPLIT_THP_MAX_RETRY_CNT 10
> +#define SPLIT_THP_INIT_DELAYED_MS 1
> +
> +static bool split_thp_pending;
> +
> +struct split_thp_req {
> +	struct delayed_work work;
> +	struct page *thp;
> +	int retries;
> +};
> +
> +static void split_thp_work_fn(struct work_struct *work)
> +{
> +	struct split_thp_req *req = container_of(work, typeof(*req), work.work);
> +	int ret;
> +
> +	/* Split the thp. */
> +	get_page(req->thp);

Can req->thp be freed by the time split_thp_work_fn() runs? (A rough sketch
of a possible way to pin the page is at the end of this mail.)

> +	lock_page(req->thp);
> +	ret = split_huge_page(req->thp);
> +	unlock_page(req->thp);
> +	put_page(req->thp);
> +
> +	/* Retry with an exponential backoff. */
> +	if (ret && ++req->retries < SPLIT_THP_MAX_RETRY_CNT) {
> +		schedule_delayed_work(to_delayed_work(work),
> +			msecs_to_jiffies(SPLIT_THP_INIT_DELAYED_MS << req->retries));
> +		return;
> +	}
> +
> +	pr_err("%#lx: split unsplit thp %ssuccessfully.\n", page_to_pfn(req->thp), ret ? "un" : "");
> +	kfree(req);
> +	split_thp_pending = false;

split_thp_pending is not protected against split_thp_delayed?
Though this race should be benign.
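
For illustration only, below are two rough, untested sketches on top of the
structures this patch already adds; the helper names (queue_split_thp_work(),
split_thp_try_start(), split_thp_finish()) are made up for this mail and are
not part of the patch.

For the get_page() question, the extra reference could be taken in the
context that queues the work instead of inside the worker, so the page
cannot go away in the window between scheduling and execution:

	/* Sketch only: pin the page before the work is queued. */
	static void queue_split_thp_work(struct split_thp_req *req, struct page *thp)
	{
		req->thp = thp;
		get_page(thp);	/* pin across the asynchronous split */
		schedule_delayed_work(&req->work,
				      msecs_to_jiffies(SPLIT_THP_INIT_DELAYED_MS));
	}

The matching put_page() would then move to the point where the worker
finally succeeds or gives up after the last retry.

For the pending flag, an atomic bit would make the set/clear pair race-free
even if, as said above, the race is probably benign:

	/* Sketch only: replace the plain bool with an atomic bit. */
	#define SPLIT_THP_PENDING_BIT	0
	static unsigned long split_thp_flags;

	static bool split_thp_try_start(void)
	{
		/* Only the caller that actually sets the bit proceeds. */
		return !test_and_set_bit(SPLIT_THP_PENDING_BIT, &split_thp_flags);
	}

	static void split_thp_finish(void)
	{
		/* Pairs with test_and_set_bit() in split_thp_try_start(). */
		clear_bit(SPLIT_THP_PENDING_BIT, &split_thp_flags);
	}

Thanks.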