Date: Thu, 16 Jun 2022 18:16:36 +0800
From: Muchun Song
To: David Hildenbrand
Cc: corbet@lwn.net, akpm@linux-foundation.org, paulmck@kernel.org,
	mike.kravetz@oracle.com, osalvador@suse.de, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	duanxiongchun@bytedance.com, smuchun@gmail.com
Subject: Re: [PATCH v2 2/2] mm: memory_hotplug: introduce SECTION_CANNOT_OPTIMIZE_VMEMMAP
References: <20220520025538.21144-1-songmuchun@bytedance.com>
	<20220520025538.21144-3-songmuchun@bytedance.com>
	<53024884-0182-df5f-9ca2-00652c64ce36@redhat.com>
On Thu, Jun 16, 2022 at 09:21:35AM +0200, David Hildenbrand wrote:
> On 16.06.22 04:45, Muchun Song wrote:
> > On Wed, Jun 15, 2022 at 11:51:49AM +0200, David Hildenbrand wrote:
> >> On 20.05.22 04:55, Muchun Song wrote:
> >>> For now, the hugetlb_free_vmemmap feature is not compatible with the
> >>> memory_hotplug.memmap_on_memory feature, and hugetlb_free_vmemmap
> >>> takes precedence over memory_hotplug.memmap_on_memory. However, some
> >>> users want memory_hotplug.memmap_on_memory to take precedence over
> >>> hugetlb_free_vmemmap, since memmap_on_memory makes memory hotplug
> >>> more likely to succeed in close-to-OOM situations. So hard-coding
> >>> hugetlb_free_vmemmap to take precedence is neither wise nor elegant.
> >>> The proper approach is to have hugetlb_vmemmap.c check whether the
> >>> section that the HugeTLB pages belong to can be optimized. If the
> >>> section's vmemmap pages are allocated from the added memory block
> >>> itself, hugetlb_free_vmemmap should refuse to optimize the vmemmap;
> >>> otherwise, do the optimization. Then both kernel parameters are
> >>> compatible. So this patch introduces SECTION_CANNOT_OPTIMIZE_VMEMMAP
> >>> to indicate whether the section can be optimized.
> >>>
> >>
> >> In theory, we have that information stored in the relevant memory
> >> block, but I assume that lookup in the xarray + locking is
> >> impractical.
> >>
> >> I wonder if we can derive that information simply from the vmemmap
> >> pages themselves, because *drumroll*
> >>
> >> For one vmemmap page (the first one), the vmemmap corresponds to
> >> itself -- what?!
> >>
> >>
> >> [ hotplugged memory ]
> >> [ memmap ][ usable memory ]
> >>  |   |    |
> >> ^--- |    |
> >> ^-------  |
> >> ^----------------------
> >>
> >> The memmap of the first page of hotplugged memory falls onto itself.
> >> We'd have to derive that condition from the actual "usable memory".
> >>
> >>
> >> We currently support memmap_on_memory only within fixed-size memory
> >> blocks. So "hotplugged memory" is guaranteed to be aligned to
> >> memory_block_size_bytes(), and the size is memory_block_size_bytes().
> >>
> >> If we had a page falling into usable memory, we'd simply look up the
> >> first page and test whether the vmemmap maps to itself.
> >>
> >
> > I think this can work. Should we use this approach in the next
> > version?
> >
>
> Either that or, preferably, flagging the vmemmap pages eventually.
> That might be more future-proof.
>

All right. I think we can go with the above approach, and we can move
to the flag-based approach in the future if needed.

> >>
> >> Of course, once we support variable-sized memory blocks, it would be
> >> different.
> >>
> >>
> >> An easier/more future-proof approach might simply be flagging the
> >> vmemmap pages as being special. We'd reuse page flags for that which
> >> don't have semantics yet (i.e., PG_reserved indicates a boot-time
> >> allocation via memblock).
> >>
> >
> > I think you mean flagging the vmemmap pages' struct pages as
> > PG_reserved if they can be optimized, right? When the vmemmap pages
> > are allocated in hugetlb_vmemmap_alloc(), is it valid to flag them as
> > PG_reserved (they are allocated from the buddy allocator, not
> > memblock)?
> >
>
> Sorry, I wasn't clear. I'd flag them with some other
> not-yet-used-for-vmemmap-pages flag. Reusing PG_reserved could result
> in trouble.
>

Sorry, I thought you were suggesting reusing PG_reserved. My bad, I
misread.

Thanks.