Date: Tue, 18 Jul 2023 13:34:28 +0100
From: Mel Gorman
To: "Huang, Ying"
Cc: Michal Hocko, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    Arjan Van De Ven, Andrew Morton, Vlastimil Babka, David Hildenbrand,
    Johannes Weiner, Dave Hansen, Pavel Tatashin, Matthew Wilcox
Subject: Re: [RFC 2/2] mm: alloc/free depth based PCP high auto-tuning
Message-ID: <20230718123428.jcy4avtjg3rhuh7i@techsingularity.net>
References: <20230710065325.290366-1-ying.huang@intel.com>
 <20230710065325.290366-3-ying.huang@intel.com>
 <20230712090526.thk2l7sbdcdsllfi@techsingularity.net>
 <871qhcdwa1.fsf@yhuang6-desk2.ccr.corp.intel.com>
 <20230714140710.5xbesq6xguhcbyvi@techsingularity.net>
 <87pm4qdhk4.fsf@yhuang6-desk2.ccr.corp.intel.com>
 <20230717135017.7ro76lsaninbazvf@techsingularity.net>
 <87lefeca2z.fsf@yhuang6-desk2.ccr.corp.intel.com>
In-Reply-To: <87lefeca2z.fsf@yhuang6-desk2.ccr.corp.intel.com>

On Tue, Jul 18, 2023 at 08:55:16AM +0800, Huang, Ying wrote:
> Mel Gorman writes:
> 
> > On Mon, Jul 17, 2023 at 05:16:11PM +0800, Huang, Ying wrote:
> >> Mel Gorman writes:
> >> 
> >> > Batch should have a much lower maximum than high because it's a deferred cost
> >> > that gets assigned to an arbitrary task. The worst case is where a process
> >> > that is a light user of the allocator incurs the full cost of a refill/drain.
> >> > 
> >> > Again, intuitively this may be PID Control problem for the "Mix" case
> >> > to estimate the size of high required to minimise drains/allocs as each
> >> > drain/alloc is potentially a lock contention. The catchall for corner
> >> > cases would be to decay high from vmstat context based on pcp->expires. The
> >> > decay would prevent the "high" being pinned at an artifically high value
> >> > without any zone lock contention for prolonged periods of time and also
> >> > mitigate worst-case due to state being per-cpu. The downside is that "high"
> >> > would also oscillate for a continuous steady allocation pattern as the PID
> >> > control might pick an ideal value suitable for a long period of time with
> >> > the "decay" disrupting that ideal value.
> >> 
> >> Maybe we can track the minimal value of pcp->count. If it's small
> >> enough recently, we can avoid to decay pcp->high. Because the pages in
> >> PCP are used for allocations instead of idle.
> > 
> > Implement as a separate patch. I suspect this type of heuristic will be
> > very benchmark specific and the complexity may not be worth it in the
> > general case.
> 
> OK.
> 
> >> Another question is as follows.
> >> 
> >> For example, on CPU A, a large number of pages are freed, and we
> >> maximize batch and high. So, a large number of pages are put in PCP.
> >> Then, the possible situations may be,
> >> 
> >> a) a large number of pages are allocated on CPU A after some time
> >> b) a large number of pages are allocated on another CPU B
> >> 
> >> For a), we want the pages are kept in PCP of CPU A as long as possible.
> >> For b), we want the pages are kept in PCP of CPU A as short as possible.
> >> I think that we need to balance between them. What is the reasonable
> >> time to keep pages in PCP without many allocations?
> >> 
> > 
> > This would be a case where you're relying on vmstat to drain the PCP after
> > a period of time as it is a corner case.
> 
> Yes. The remaining question is how long should "a period of time" be?
Match the time used for draining "remote" pages from the PCP lists. The
choice is arbitrary and no matter what value is chosen, it'll be possible
to build an adverse workload.

> If it's long, the pages in PCP can be used for allocation after some
> time. If it's short the pages can be put in buddy, so can be used by
> other workloads if needed.
> 

Assume that the main reason to expire pages and put them back on the
buddy list is to avoid premature allocation failures due to pages pinned
on the PCP. Once pages are going back onto the buddy list and the expiry
is hit, it might as well be assumed that the pages are cache-cold. Some
bad corner cases should be mitigated by disabling the adaptive sizing
when reclaim is active.

The big remaining corner case to watch out for is where the sum of the
boosted pcp->high values exceeds the low watermark. If that should ever
happen then a premature OOM is possible because the watermarks look fine,
so no reclaim is active, but no pages are actually available. It may even
be the case that the sum of pcp->high should not exceed *min*, as that
corner case means that processes may prematurely enter direct reclaim
(not as bad as OOM but still bad).

> Anyway, I will do some experiment for that.
> 
> > You cannot reasonably detect the pattern on two separate per-cpu lists
> > without either inspecting remote CPU state or maintaining global
> > state. Either would incur cache miss penalties that probably cost more
> > than the heuristic saves.
> 
> Yes. Totally agree.
> 
> Best Regards,
> Huang, Ying

-- 
Mel Gorman
SUSE Labs
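
To make the behaviour discussed in the thread concrete, below is a minimal
userspace sketch of the decay-plus-clamp idea: skip the decay when the
per-cpu list was recently close to empty (the pcp->count minimum tracking
suggestion), otherwise decay "high" towards a floor (the vmstat catch-all),
and cap any boosted "high" so the sum across CPUs stays below the zone low
watermark. All structure, field and function names here are hypothetical
stand-ins for illustration only; this is not the kernel's actual per-cpu
pageset code or API.

/*
 * Illustrative model only -- not kernel code. The decay rate, the
 * "recently near empty" threshold and the clamp policy are arbitrary
 * choices made purely to show the shape of the heuristic.
 */
#include <stdio.h>

struct pcp_model {
	int count;      /* pages currently on the per-cpu free list */
	int high;       /* auto-tuned "high" limit for the list */
	int high_min;   /* floor that decay never drops below */
	int count_min;  /* lowest count observed since the last tick */
};

/*
 * Periodic decay, imagined as running from a vmstat-style tick. If the
 * list came close to empty since the last tick, the cached pages are
 * clearly being consumed, so leave "high" alone. Otherwise decay "high"
 * towards its floor so a burst of frees cannot pin an inflated value
 * indefinitely.
 */
static void pcp_decay_high(struct pcp_model *pcp)
{
	if (pcp->count_min < pcp->high / 4) {
		/* Pages were recently in use: do not decay. */
		pcp->count_min = pcp->count;
		return;
	}

	pcp->high -= pcp->high / 8;
	if (pcp->high < pcp->high_min)
		pcp->high = pcp->high_min;
	pcp->count_min = pcp->count;
}

/*
 * Watermark corner case: cap a boosted "high" so the sum of all per-CPU
 * highs stays below the zone low (or min) watermark; otherwise pages
 * pinned on PCP lists could cause premature direct reclaim or OOM while
 * the watermarks still look healthy.
 */
static int pcp_clamp_high(int requested_high, int nr_cpus, long low_wmark)
{
	long per_cpu_budget = low_wmark / nr_cpus;

	return requested_high < per_cpu_budget ?
			requested_high : (int)per_cpu_budget;
}

int main(void)
{
	struct pcp_model pcp = {
		.count = 900, .high = 1024, .high_min = 64, .count_min = 800,
	};

	/* Assume 16 CPUs and a zone low watermark of 8192 pages. */
	pcp.high = pcp_clamp_high(pcp.high, 16, 8192);
	/* count_min stayed high, so this tick decays "high". */
	pcp_decay_high(&pcp);
	printf("high after clamp and decay: %d\n", pcp.high);

	return 0;
}

In the real kernel the equivalent checks would have to live in the page
allocator's per-cpu pageset handling and coordinate with the existing
vmstat expiry; this toy model makes no attempt at that.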