Subject: Re: [PATCH v2 09/12] mm: hwpoison: soft offline supports thp migration
To: Naoya Horiguchi
Cc: "linux-mm@kvack.org", "Kirill A. Shutemov", Hugh Dickins, Andrew Morton,
 Dave Hansen, Andrea Arcangeli, Mel Gorman, Michal Hocko, Vlastimil Babka,
 Pavel Emelyanov, Zi Yan, "linux-kernel@vger.kernel.org", Naoya Horiguchi
From: Balbir Singh
Date: Tue, 15 Nov 2016 10:22:02 +1100
In-Reply-To: <20161110235853.GB22792@hori1.linux.bs1.fc.nec.co.jp>
References: <1478561517-4317-1-git-send-email-n-horiguchi@ah.jp.nec.com>
 <1478561517-4317-10-git-send-email-n-horiguchi@ah.jp.nec.com>
 <6e9aa943-31ea-5b08-8459-2e6a85940546@gmail.com>
 <20161110235853.GB22792@hori1.linux.bs1.fc.nec.co.jp>

On 11/11/16 10:58, Naoya Horiguchi wrote:
> On Thu, Nov 10, 2016 at 09:31:10PM +1100, Balbir Singh wrote:
>>
>>
>> On 08/11/16 10:31, Naoya Horiguchi wrote:
>>> This patch enables thp migration for soft offline.
>>>
>>> Signed-off-by: Naoya Horiguchi
>>> ---
>>>  mm/memory-failure.c | 31 ++++++++++++-------------------
>>>  1 file changed, 12 insertions(+), 19 deletions(-)
>>>
>>> diff --git v4.9-rc2-mmotm-2016-10-27-18-27/mm/memory-failure.c v4.9-rc2-mmotm-2016-10-27-18-27_patched/mm/memory-failure.c
>>> index 19e796d..6cc8157 100644
>>> --- v4.9-rc2-mmotm-2016-10-27-18-27/mm/memory-failure.c
>>> +++ v4.9-rc2-mmotm-2016-10-27-18-27_patched/mm/memory-failure.c
>>> @@ -1485,7 +1485,17 @@ static struct page *new_page(struct page *p, unsigned long private, int **x)
>>>  	if (PageHuge(p))
>>>  		return alloc_huge_page_node(page_hstate(compound_head(p)),
>>>  						nid);
>>> -	else
>>> +	else if (thp_migration_supported() && PageTransHuge(p)) {
>>> +		struct page *thp;
>>> +
>>> +		thp = alloc_pages_node(nid,
>>> +			(GFP_TRANSHUGE | __GFP_THISNODE) & ~__GFP_RECLAIM,
>>> +			HPAGE_PMD_ORDER);
>>> +		if (!thp)
>>> +			return NULL;
>>
>> Just wondering: if new_page() fails, migration of that entry fails. Do we then
>> split and migrate? I guess this applies to THP migration in general.
>
> Yes, that's not implemented yet, but it can be helpful.
>
> I think that there are two types of callers of page migration:
> one is a caller that specifies the target pages individually (like move_pages
> and soft offline), and another is a caller that specifies the target pages
> on a (physical/virtual) address range basis.
> Maybe the former ones want to fall back immediately to split-and-retry if
> thp migration fails, and the latter ones want to retry thp migration more.
> If this makes sense, we can make some more changes to the retry logic to fit
> the situation.
>

I think we definitely need the retry-with-split option, but maybe we can
build it on top of this series.

Balbir Singh.