From: Naoya Horiguchi
To: Balbir Singh
Cc: linux-mm@kvack.org, Kirill A. Shutemov, Hugh Dickins, Andrew Morton,
    Dave Hansen, Andrea Arcangeli, Mel Gorman, Michal Hocko, Vlastimil Babka,
    Pavel Emelyanov, Zi Yan, linux-kernel@vger.kernel.org, Naoya Horiguchi
Subject: Re: [PATCH v2 09/12] mm: hwpoison: soft offline supports thp migration
Date: Thu, 10 Nov 2016 23:58:54 +0000
Message-ID: <20161110235853.GB22792@hori1.linux.bs1.fc.nec.co.jp>
In-Reply-To: <6e9aa943-31ea-5b08-8459-2e6a85940546@gmail.com>
References: <1478561517-4317-1-git-send-email-n-horiguchi@ah.jp.nec.com>
 <1478561517-4317-10-git-send-email-n-horiguchi@ah.jp.nec.com>
 <6e9aa943-31ea-5b08-8459-2e6a85940546@gmail.com>

On Thu, Nov 10, 2016 at 09:31:10PM +1100, Balbir Singh wrote:
> On 08/11/16 10:31, Naoya Horiguchi wrote:
> > This patch enables thp migration for soft offline.
> >
> > Signed-off-by: Naoya Horiguchi
> > ---
> >  mm/memory-failure.c | 31 ++++++++++++-------------------
> >  1 file changed, 12 insertions(+), 19 deletions(-)
> >
> > diff --git v4.9-rc2-mmotm-2016-10-27-18-27/mm/memory-failure.c v4.9-rc2-mmotm-2016-10-27-18-27_patched/mm/memory-failure.c
> > index 19e796d..6cc8157 100644
> > --- v4.9-rc2-mmotm-2016-10-27-18-27/mm/memory-failure.c
> > +++ v4.9-rc2-mmotm-2016-10-27-18-27_patched/mm/memory-failure.c
> > @@ -1485,7 +1485,17 @@ static struct page *new_page(struct page *p, unsigned long private, int **x)
> >  	if (PageHuge(p))
> >  		return alloc_huge_page_node(page_hstate(compound_head(p)),
> >  						   nid);
> > -	else
> > +	else if (thp_migration_supported() && PageTransHuge(p)) {
> > +		struct page *thp;
> > +
> > +		thp = alloc_pages_node(nid,
> > +			(GFP_TRANSHUGE | __GFP_THISNODE) & ~__GFP_RECLAIM,
> > +			HPAGE_PMD_ORDER);
> > +		if (!thp)
> > +			return NULL;
>
> Just wondering: if new_page() fails, migration of that entry fails. Do we then
> split and migrate? I guess this applies to THP migration in general.

Yes, that's not implemented yet, but it could be helpful. I think there are
two types of callers of page migration: one specifies the target pages
individually (like move_pages() and soft offline), and the other specifies
the target pages on a (physical/virtual) address range basis. The former
probably wants to fall back immediately to split-and-retry when thp
migration fails, while the latter wants to retry thp migration harder.
If this makes sense, we can make further changes to the retry logic to fit
each situation.

Thanks,
Naoya Horiguchi
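
As an illustration of the fallback discussed above, here is a minimal sketch
of a "split and retry" path for callers that target individual pages (soft
offline, move_pages()). The helper name migrate_or_split_thp() and its
placement are hypothetical and not part of this series; the sketch only
assumes the existing migrate_pages(), isolate_lru_page(),
putback_movable_pages() and split_huge_page() interfaces, that @page is a
thp head page, and that the caller already holds a reference on it (for
soft offline, via get_hwpoison_page()).

#include <linux/mm.h>
#include <linux/swap.h>		/* isolate_lru_page() */
#include <linux/migrate.h>	/* migrate_pages(), putback_movable_pages() */
#include <linux/huge_mm.h>	/* split_huge_page() */
#include <linux/pagemap.h>	/* lock_page(), unlock_page() */

/*
 * Illustrative sketch only: try to migrate a single (possibly huge) page;
 * if thp migration fails, split the thp and ask the caller to retry with
 * the resulting base pages.
 */
static int migrate_or_split_thp(struct page *page, new_page_t get_new_page)
{
	LIST_HEAD(pagelist);
	int ret;

	if (isolate_lru_page(page))
		return -EBUSY;
	list_add(&page->lru, &pagelist);

	ret = migrate_pages(&pagelist, get_new_page, NULL, 0,
			    MIGRATE_SYNC, MR_MEMORY_FAILURE);
	if (!ret)
		return 0;
	if (ret > 0)		/* some pages were left unmigrated */
		ret = -EBUSY;

	/* Migration failed: put the thp back on the LRU first ... */
	putback_movable_pages(&pagelist);

	/*
	 * ... then split it (split_huge_page() needs the page locked) and
	 * report -EAGAIN so the caller retries with the base pages.
	 */
	if (PageTransHuge(page)) {
		lock_page(page);
		if (!split_huge_page(page))
			ret = -EAGAIN;
		unlock_page(page);
	}
	return ret;
}

A range-based caller (memory hotremove, mbind(), and the like) would
presumably keep the thp intact and retry migrate_pages() a few more times
before resorting to a split, which matches the distinction drawn above.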