From: David Hildenbrand
Organization: Red Hat GmbH
To: Oscar Salvador
Cc: Muchun Song, corbet@lwn.net, mike.kravetz@oracle.com, tglx@linutronix.de,
    mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com,
    dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org,
    viro@zeniv.linux.org.uk, akpm@linux-foundation.org, paulmck@kernel.org,
    mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com,
    rdunlap@infradead.org, oneukum@suse.com, anshuman.khandual@arm.com,
    jroedel@suse.de, almasrymina@google.com, rientjes@google.com,
    willy@infradead.org, mhocko@suse.com, song.bao.hua@hisilicon.com,
    naoya.horiguchi@nec.com, duanxiongchun@bytedance.com,
    linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, linux-fsdevel@vger.kernel.org
Subject: Re: [PATCH v13 05/12] mm: hugetlb: allocate the vmemmap pages
 associated with each HugeTLB page
Date: Tue, 26 Jan 2021 16:10:53 +0100
Message-ID: <259b9669-0515-01a2-d714-617011f87194@redhat.com>
In-Reply-To: <20210126145819.GB16870@linux>
References: <20210117151053.24600-1-songmuchun@bytedance.com>
 <20210117151053.24600-6-songmuchun@bytedance.com>
 <20210126092942.GA10602@linux>
 <6fe52a7e-ebd8-f5ce-1fcd-5ed6896d3797@redhat.com>
 <20210126145819.GB16870@linux>

On 26.01.21 15:58, Oscar Salvador wrote:
> On Tue, Jan 26, 2021 at 10:36:21AM +0100, David Hildenbrand wrote:
>> I think either keep it completely
>> simple (only free vmemmap of hugetlb
>> pages allocated early during boot - which is what's not sufficient for
>> some use cases) or implement the full thing properly (meaning, solve
>> most challenging issues to get the basics running).
>>
>> I don't want to have some easy parts of complex features merged (e.g.,
>> breaking other stuff as you indicate below), and later finding out "it's
>> not that easy" again and being stuck with it forever.
>
> Well, we could try an optimistic allocation, without tricky looping.
> If that fails, refuse to shrink the pool at that moment.
>
> The user could always try to shrink it later via the
> /proc/sys/vm/nr_hugepages interface.
>
> But I am just thinking out loud..

The real issue seems to be discarding the vmemmap on any memory that has
movability constraints - CMA and ZONE_MOVABLE; otherwise, as discussed, we
can reuse parts of the thing we're freeing for the vmemmap. Not that it
would be ideal: that once-a-huge-page thing will never ever be a huge page
again - but if it helps with OOM in corner cases, sure.

Possible simplification: don't perform the optimization for now for free
huge pages residing on ZONE_MOVABLE or CMA. Certainly not perfect: what
happens when migrating a huge page from ZONE_NORMAL to (ZONE_MOVABLE|CMA)?

>>> Of course, this means that e.g. memory hotplug (hot-remove) will not
>>> fully work when this is in place, but well.
>>
>> Can you elaborate? Are we talking about having hugepages in
>> ZONE_MOVABLE that are not migratable (and/or dissolvable) anymore? Then
>> a clear NACK from my side.
>
> Pretty much, yeah.

Note that we most likely soon have to tackle migrating/dissolving (free)
hugetlbfs pages from alloc_contig_range() context - e.g., for CMA
allocations. That's certainly something to keep in mind regarding any
approaches that already break offline_pages().

-- 
Thanks,

David / dhildenb