Subject: Re: [RFC PATCH 3/5] mm, hugetlb: do not rely on overcommit limit during migration
From: Mike Kravetz
To: Michal Hocko
Cc: linux-mm@kvack.org, Naoya Horiguchi, Andrew Morton, LKML
Date: Thu, 14 Dec 2017 12:57:54 -0800
Message-ID: <1ce15f58-4b39-3e03-d0e3-4cd30bcc69b9@oracle.com>
In-Reply-To: <20171214074053.GC16951@dhcp22.suse.cz>

On 12/13/2017 11:40 PM, Michal Hocko wrote:
> On Wed 13-12-17 15:35:33, Mike Kravetz wrote:
>> On 12/04/2017 06:01 AM, Michal Hocko wrote:
> [...]
>>> Before migration
>>> /sys/devices/system/node/node0/hugepages/hugepages-2048kB/free_hugepages:0
>>> /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages:1
>>> /sys/devices/system/node/node0/hugepages/hugepages-2048kB/surplus_hugepages:0
>>> /sys/devices/system/node/node1/hugepages/hugepages-2048kB/free_hugepages:0
>>> /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages:0
>>> /sys/devices/system/node/node1/hugepages/hugepages-2048kB/surplus_hugepages:0
>>>
>>> After
>>>
>>> /sys/devices/system/node/node0/hugepages/hugepages-2048kB/free_hugepages:0
>>> /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages:0
>>> /sys/devices/system/node/node0/hugepages/hugepages-2048kB/surplus_hugepages:0
>>> /sys/devices/system/node/node1/hugepages/hugepages-2048kB/free_hugepages:0
>>> /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages:1
>>> /sys/devices/system/node/node1/hugepages/hugepages-2048kB/surplus_hugepages:0
>>>
>>> With the previous implementation, both nodes would have nr_hugepages:1
>>> until the page is freed.
>>
>> With the previous implementation, the migration would have failed unless
>> nr_overcommit_hugepages was explicitly set.  Correct?
>
> yes
>
> [...]
>
>> In the previous version of this patch, I asked about the handling of 'free'
>> huge pages.  I did a little digging and IIUC, we do not attempt migration
>> of free huge pages.  The routine isolate_huge_page() has this check:
>>
>> 	if (!page_huge_active(page) || !get_page_unless_zero(page)) {
>> 		ret = false;
>> 		goto unlock;
>> 	}
>>
>> I believe one of your motivations for this effort was memory offlining.
>> So, this implies that a memory area cannot be offlined if it contains
>> a free (not in use) huge page?
>
> do_migrate_range will ignore this free huge page and then we will free
> it up in dissolve_free_huge_pages.
>
>> Just FYI, and maybe something we want to address later.
>
> Maybe yes. The free pool might be reserved, which would make
> dissolve_free_huge_pages fail. Maybe we can be more clever and
> allocate a new huge page in that case.

I don't think we need to try and do anything more clever right now.
I was just a little confused about the hotplug code.  Thanks for
the explanation.
-- 
Mike Kravetz

>
>> My other issues were addressed.
>>
>> Reviewed-by: Mike Kravetz
>
> Thanks!
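
For reference, the "reserved pool" failure Michal mentions above comes from
a check in the per-page helper used by dissolve_free_huge_pages().  Below is
a rough, simplified sketch of that logic as I recall it; the helper name,
fields, and exact flow are reconstructed from memory of mm/hugetlb.c around
this time and are not the verbatim upstream code:

	/*
	 * Sketch: free an unused huge page back to the buddy allocator so
	 * its range can be offlined.  Fails with -EBUSY when every
	 * remaining free huge page is needed to back a reservation.
	 * (Simplified, from memory -- not the exact upstream source.)
	 */
	static int dissolve_free_huge_page(struct page *page)
	{
		int rc = 0;

		spin_lock(&hugetlb_lock);
		if (PageHuge(page) && !page_count(page)) {
			struct page *head = compound_head(page);
			struct hstate *h = page_hstate(head);
			int nid = page_to_nid(head);

			/*
			 * Dissolving this page would leave the reserved
			 * pool short, so refuse.  This is the case where
			 * allocating a replacement huge page first, as
			 * suggested above, could help.
			 */
			if (h->free_huge_pages - h->resv_huge_pages == 0) {
				rc = -EBUSY;
				goto out;
			}

			/* Remove from the free list and hand back to buddy. */
			list_del(&head->lru);
			h->free_huge_pages--;
			h->free_huge_pages_node[nid]--;
			h->max_huge_pages--;
			update_and_free_page(h, head);
		}
	out:
		spin_unlock(&hugetlb_lock);
		return rc;
	}

In other words, if every remaining free huge page of the hstate is spoken
for by a reservation, the offline path cannot dissolve the page and the
offline fails, which is exactly the corner case discussed above.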