From: Shiyang Ruan <ruansy.fnst@fujitsu.com>
To: , , , , ,
CC: , , , , ,
Subject: [PATCH RESEND v6 3/9] mm: factor helpers for memory_failure_dev_pagemap
Date: Fri, 30 Jul 2021 18:01:52 +0800
Message-ID: <20210730100158.3117319-4-ruansy.fnst@fujitsu.com>
X-Mailer: git-send-email 2.32.0
In-Reply-To: <20210730100158.3117319-1-ruansy.fnst@fujitsu.com>
References: <20210730100158.3117319-1-ruansy.fnst@fujitsu.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 7BIT
Content-Type: text/plain; charset=US-ASCII
X-Mailing-List: linux-kernel@vger.kernel.org

The memory_failure_dev_pagemap() code is already a bit complex before the
RMAP feature for fsdax is introduced. Factor out some helper functions to
simplify this code.

Signed-off-by: Shiyang Ruan <ruansy.fnst@fujitsu.com>
---
 mm/memory-failure.c | 101 +++++++++++++++++++++++++-------------------
 1 file changed, 57 insertions(+), 44 deletions(-)

diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index eefd823deb67..3bdfcb45f66e 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1423,6 +1423,60 @@ static int try_to_split_thp_page(struct page *page, const char *msg)
 	return 0;
 }
 
+static void unmap_and_kill(struct list_head *to_kill, unsigned long pfn,
+		struct address_space *mapping, pgoff_t index, int flags)
+{
+	struct to_kill *tk;
+	unsigned long size = 0;
+
+	list_for_each_entry(tk, to_kill, nd)
+		if (tk->size_shift)
+			size = max(size, 1UL << tk->size_shift);
+	if (size) {
+		/*
+		 * Unmap the largest mapping to avoid breaking up device-dax
+		 * mappings which are constant size. The actual size of the
+		 * mapping being torn down is communicated in siginfo, see
+		 * kill_proc()
+		 */
+		loff_t start = (index << PAGE_SHIFT) & ~(size - 1);
+
+		unmap_mapping_range(mapping, start, size, 0);
+	}
+
+	kill_procs(to_kill, flags & MF_MUST_KILL, false, pfn, flags);
+}
+
+static int mf_generic_kill_procs(unsigned long long pfn, int flags)
+{
+	struct page *page = pfn_to_page(pfn);
+	LIST_HEAD(to_kill);
+	dax_entry_t cookie;
+
+	/*
+	 * Prevent the inode from being freed while we are interrogating
+	 * the address_space, typically this would be handled by
+	 * lock_page(), but dax pages do not use the page lock. This
+	 * also prevents changes to the mapping of this pfn until
+	 * poison signaling is complete.
+	 */
+	cookie = dax_lock_page(page);
+	if (!cookie)
+		return -EBUSY;
+	/*
+	 * Unlike System-RAM there is no possibility to swap in a
+	 * different physical page at a given virtual address, so all
+	 * userspace consumption of ZONE_DEVICE memory necessitates
+	 * SIGBUS (i.e. MF_MUST_KILL)
+	 */
+	flags |= MF_ACTION_REQUIRED | MF_MUST_KILL;
+	collect_procs(page, &to_kill, true);
+
+	unmap_and_kill(&to_kill, pfn, page->mapping, page->index, flags);
+	dax_unlock_page(page, cookie);
+	return 0;
+}
+
 static int memory_failure_hugetlb(unsigned long pfn, int flags)
 {
 	struct page *p = pfn_to_page(pfn);
@@ -1512,13 +1566,8 @@ static int memory_failure_dev_pagemap(unsigned long pfn, int flags,
 		struct dev_pagemap *pgmap)
 {
 	struct page *page = pfn_to_page(pfn);
-	const bool unmap_success = true;
-	unsigned long size = 0;
-	struct to_kill *tk;
 	LIST_HEAD(tokill);
 	int rc = -EBUSY;
-	loff_t start;
-	dax_entry_t cookie;
 
 	if (flags & MF_COUNT_INCREASED)
 		/*
@@ -1532,20 +1581,9 @@ static int memory_failure_dev_pagemap(unsigned long pfn, int flags,
 		goto out;
 	}
 
-	/*
-	 * Prevent the inode from being freed while we are interrogating
-	 * the address_space, typically this would be handled by
-	 * lock_page(), but dax pages do not use the page lock. This
-	 * also prevents changes to the mapping of this pfn until
-	 * poison signaling is complete.
-	 */
-	cookie = dax_lock_page(page);
-	if (!cookie)
-		goto out;
-
 	if (hwpoison_filter(page)) {
 		rc = 0;
-		goto unlock;
+		goto out;
 	}
 
 	if (pgmap->type == MEMORY_DEVICE_PRIVATE) {
@@ -1553,7 +1591,7 @@ static int memory_failure_dev_pagemap(unsigned long pfn, int flags,
 		 * TODO: Handle HMM pages which may need coordination
 		 * with device-side memory.
 		 */
-		goto unlock;
+		goto out;
 	}
 
 	/*
@@ -1562,32 +1600,7 @@ static int memory_failure_dev_pagemap(unsigned long pfn, int flags,
 	 */
 	SetPageHWPoison(page);
 
-	/*
-	 * Unlike System-RAM there is no possibility to swap in a
-	 * different physical page at a given virtual address, so all
-	 * userspace consumption of ZONE_DEVICE memory necessitates
-	 * SIGBUS (i.e. MF_MUST_KILL)
-	 */
-	flags |= MF_ACTION_REQUIRED | MF_MUST_KILL;
-	collect_procs(page, &tokill, flags & MF_ACTION_REQUIRED);
-
-	list_for_each_entry(tk, &tokill, nd)
-		if (tk->size_shift)
-			size = max(size, 1UL << tk->size_shift);
-	if (size) {
-		/*
-		 * Unmap the largest mapping to avoid breaking up
-		 * device-dax mappings which are constant size. The
-		 * actual size of the mapping being torn down is
-		 * communicated in siginfo, see kill_proc()
-		 */
-		start = (page->index << PAGE_SHIFT) & ~(size - 1);
-		unmap_mapping_range(page->mapping, start, size, 0);
-	}
-	kill_procs(&tokill, flags & MF_MUST_KILL, !unmap_success, pfn, flags);
-	rc = 0;
-unlock:
-	dax_unlock_page(page, cookie);
+	mf_generic_kill_procs(pfn, flags);
 out:
 	/* drop pgmap ref acquired in caller */
 	put_dev_pagemap(pgmap);
-- 
2.32.0
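
Aside (not part of the patch): the offset rounding used by the new
unmap_and_kill() helper, start = (index << PAGE_SHIFT) & ~(size - 1), can be
illustrated with a small standalone userspace sketch. The values below are
made up for the example, and PAGE_SHIFT is assumed to be 12 (4 KiB pages, as
on x86-64).

#include <stdio.h>

#define PAGE_SHIFT 12	/* assumed 4 KiB pages, as on x86-64 */

int main(void)
{
	/* Example: the poisoned page sits at index 0x203 of the file's
	 * address_space, and the largest mapping found among the processes
	 * to kill is a 2 MiB (PMD-sized) device-dax mapping. */
	unsigned long long index = 0x203;
	unsigned long long size  = 1ULL << 21;

	/* Same rounding as in unmap_and_kill(): align the poisoned page's
	 * byte offset down to a multiple of 'size', so the whole
	 * constant-size mapping is unmapped rather than split. */
	unsigned long long start = (index << PAGE_SHIFT) & ~(size - 1);

	printf("page offset: %#llx\n", index << PAGE_SHIFT);		/* 0x203000 */
	printf("unmap start: %#llx, length: %#llx\n", start, size);	/* 0x200000, 0x200000 */
	return 0;
}

So the unmap covers the entire 2 MiB mapping containing the poisoned page,
which is why the comment in the patch talks about unmapping the largest
mapping to avoid breaking up constant-size device-dax mappings.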