Message-ID: <3ab6ea38-5a9b-af4f-3c94-b75dce682bc1@suse.cz>
Date: Fri, 9 Dec 2022 23:23:50 +0100
Subject: Re: [PATCHv8 02/14] mm: Add support for unaccepted memory
From: Vlastimil Babka
To: "Kirill A. Shutemov"
Cc: "Kirill A. Shutemov", Borislav Petkov, Andy Lutomirski,
 Sean Christopherson, Andrew Morton, Joerg Roedel, Ard Biesheuvel,
 Andi Kleen, Kuppuswamy Sathyanarayanan, David Rientjes, Tom Lendacky,
 Thomas Gleixner, Peter Zijlstra, Paolo Bonzini, Ingo Molnar,
 Dario Faggioli, Dave Hansen, Mike Rapoport, David Hildenbrand,
 Mel Gorman, marcelo.cerri@canonical.com, tim.gardner@canonical.com,
 khalid.elmously@canonical.com, philip.cox@canonical.com,
 aarcange@redhat.com, peterx@redhat.com, x86@kernel.org,
 linux-mm@kvack.org, linux-coco@lists.linux.dev,
 linux-efi@vger.kernel.org, linux-kernel@vger.kernel.org, Mike Rapoport
References: <20221207014933.8435-1-kirill.shutemov@linux.intel.com>
 <20221207014933.8435-3-kirill.shutemov@linux.intel.com>
 <20221209192616.dg4cbe7mgh3axv5h@box.shutemov.name>
In-Reply-To: <20221209192616.dg4cbe7mgh3axv5h@box.shutemov.name>

On 12/9/22 20:26, Kirill A. Shutemov wrote:
>> >  #ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
>> >  	/*
>> >  	 * Watermark failed for this zone, but see if we can
>> > @@ -4299,6 +4411,9 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
>> >
>> >  			return page;
>> >  		} else {
>> > +			if (try_to_accept_memory(zone))
>> > +				goto try_this_zone;
>>
>> On the other hand, here we failed the full rmqueue(), including the
>> potentially fragmenting fallbacks, so I'm worried that before we finally
>> fail all of that and resort to accepting more memory, we already fragmented
>> the already accepted memory, more than necessary.
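(For anyone following along, the control flow of the quoted hunk can be modeled as a standalone toy sketch; all names and the counters here are illustrative, not the actual kernel code:)

```c
#include <stdbool.h>

/* Toy model of the quoted hunk: if taking a page from the zone's free
 * lists fails, try to accept more memory and retry the same zone before
 * giving up. Illustrative only; not the real kernel API. */
struct toy_zone0 {
	int free;        /* pages ready to hand out */
	int unaccepted;  /* pages not yet accepted from the host */
};

/* Stands in for try_to_accept_memory(): accept one page's worth. */
static bool toy_try_accept(struct toy_zone0 *z)
{
	if (z->unaccepted == 0)
		return false;
	z->unaccepted--;
	z->free++;
	return true;
}

/* Mirrors: if rmqueue() fails and try_to_accept_memory(zone) succeeds,
 * goto try_this_zone. */
static bool toy_get_page(struct toy_zone0 *z)
{
try_this_zone:
	if (z->free > 0) {
		z->free--;
		return true;
	}
	if (toy_try_accept(z))
		goto try_this_zone;
	return false;
}
```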
>
> I'm not sure I follow. We accept memory in pageblock chunks. Do we want to
> allocate from a free pageblock if we have other memory to tap from? It
> doesn't make sense to me.

The fragmentation avoidance based on migratetype does work with pageblock
granularity, so yeah, if you accept a single pageblock worth of memory and
then (through __rmqueue_fallback()) end up serving both movable and
unmovable allocations from it, the whole fragmentation avoidance mechanism
is defeated, and you end up with unmovable allocations (e.g. page tables)
scattered over many pageblocks and an inability to allocate any huge pages.

>> So one way to prevent that would be to move the acceptance into rmqueue(),
>> to happen before __rmqueue_fallback(), which I originally had in mind and
>> maybe suggested previously.
>
> I guess it should be pretty straightforward to fail __rmqueue_fallback()
> if there's a non-empty unaccepted_pages list and steer to
> try_to_accept_memory() this way.

That could be a way indeed. We do have ALLOC_NOFRAGMENT, which it might be
possible to employ here.

But maybe the zone_watermark_fast() modification would be simpler yet
sufficient. It makes sense to me that we'd try to keep a high watermark's
worth of pre-accepted memory: zone_watermark_fast() would fail at the low
watermark, so we could try accepting (high - low) at a time instead of a
single pageblock.

> But I still don't understand why.

To avoid what I described above.

>> But maybe a less intrusive and more robust way would be to track how much
>> memory is unaccepted and actually decrement that amount from free memory
>> in zone_watermark_fast(), in order to force earlier failure of that check
>> and thus accept more memory, giving us a buffer of truly accepted and
>> available memory up to the high watermark, which should hopefully prevent
>> most of the fallbacks. Then the code I flagged above as currently
>> unnecessary would make perfect sense.
>
> The next patch adds per-node unaccepted memory accounting.
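The watermark idea sketched above, as a toy model (all names and numbers here are made up for illustration; this is not the actual zone_watermark_fast() code):

```c
#include <stdbool.h>

/* Toy model: subtract unaccepted pages from the free count in the fast
 * watermark check, so the check fails early and the caller accepts more
 * memory before resorting to fragmenting fallbacks. Illustrative only. */
struct toy_zone {
	long free_pages;
	long unaccepted_pages;
	long watermark_low;
	long watermark_high;
};

static bool toy_watermark_fast(const struct toy_zone *z)
{
	/* Count only truly accepted memory as available. */
	return z->free_pages - z->unaccepted_pages > z->watermark_low;
}

/* How much to accept when the check fails: up to (high - low) at a time,
 * as suggested, rather than a single pageblock. */
static long toy_accept_batch(const struct toy_zone *z)
{
	long batch = z->watermark_high - z->watermark_low;
	return batch < z->unaccepted_pages ? batch : z->unaccepted_pages;
}
```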
> We can move it per-zone if it would help.

Right.

>> And maybe Mel will have some ideas as well.
>
> I don't have much expertise in the page allocator. Any input is valuable.
>
>> > +
>> >  #ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
>> >  	/* Try again if zone has deferred pages */
>> >  	if (static_branch_unlikely(&deferred_pages)) {
>> > @@ -6935,6 +7050,10 @@ static void __meminit zone_init_free_lists(struct zone *zone)
>> >  		INIT_LIST_HEAD(&zone->free_area[order].free_list[t]);
>> >  		zone->free_area[order].nr_free = 0;
>> >  	}
>> > +
>> > +#ifdef CONFIG_UNACCEPTED_MEMORY
>> > +	INIT_LIST_HEAD(&zone->unaccepted_pages);
>> > +#endif
>> >  }
>> >
>> >  /*
>> >
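To make the fragmentation concern concrete, here is a toy model of the other idea discussed above (failing the fallback path while unaccepted memory remains); everything here is invented for illustration and is not the kernel's __rmqueue_fallback():

```c
#include <stdbool.h>

/* Toy model: if a zone still has unaccepted memory, prefer accepting a
 * fresh pageblock for the requested migratetype over stealing a pageblock
 * from another migratetype, so unmovable and movable allocations stay
 * separated. Illustrative only. */
enum toy_migratetype { TOY_UNMOVABLE, TOY_MOVABLE, TOY_NR_TYPES };

struct toy_zone2 {
	long nr_unaccepted;              /* pageblocks not yet accepted */
	long free_blocks[TOY_NR_TYPES];  /* free pageblocks per type */
};

/* Accept one pageblock and assign it to the requested migratetype. */
static bool toy_accept_one(struct toy_zone2 *z, enum toy_migratetype mt)
{
	if (z->nr_unaccepted == 0)
		return false;
	z->nr_unaccepted--;
	z->free_blocks[mt]++;
	return true;
}

/* Allocation path: native free list first, then acceptance, and only
 * steal from the other migratetype (fragmenting) as a last resort. */
static bool toy_alloc_block(struct toy_zone2 *z, enum toy_migratetype mt,
			    bool *stole)
{
	*stole = false;
	if (z->free_blocks[mt] > 0) {
		z->free_blocks[mt]--;
		return true;
	}
	if (toy_accept_one(z, mt)) {
		z->free_blocks[mt]--;
		return true;
	}
	enum toy_migratetype other =
		(mt == TOY_MOVABLE) ? TOY_UNMOVABLE : TOY_MOVABLE;
	if (z->free_blocks[other] > 0) {
		z->free_blocks[other]--;
		*stole = true;
		return true;
	}
	return false;
}
```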