Date: Thu, 12 May 2022 12:37:43 -0700
From: Andrew Morton
To: Mel Gorman
Cc: Nicolas Saenz Julienne, Marcelo Tosatti, Vlastimil Babka,
	Michal Hocko, LKML, Linux-MM
Subject: Re: [PATCH 6/6] mm/page_alloc: Remotely drain per-cpu lists
Message-Id: <20220512123743.5be26b3ad4413f20d5f46564@linux-foundation.org>
In-Reply-To: <20220512085043.5234-7-mgorman@techsingularity.net>
References: <20220512085043.5234-1-mgorman@techsingularity.net>
	<20220512085043.5234-7-mgorman@techsingularity.net>

On Thu, 12 May 2022 09:50:43 +0100 Mel Gorman wrote:

> From: Nicolas Saenz Julienne
>
> Some setups, notably NOHZ_FULL CPUs, are too busy to handle the per-cpu
> drain work queued by __drain_all_pages(). So introduce a new mechanism
> to remotely drain the per-cpu lists.
> This is made possible by the new per-cpu spinlocks in
> 'struct per_cpu_pages', which can be locked remotely. A benefit of the
> new scheme is that drain operations are now migration safe.
>
> There was no observed performance degradation vs. the previous scheme.
> Both netperf and hackbench were run in parallel with triggering the
> __drain_all_pages(NULL, true) code path around ~100 times per second.
> The new scheme performs a bit better (~5%), although the important
> point here is that there are no performance regressions vs. the
> previous mechanism. Per-cpu list draining happens only in slow paths.
>
> Minchan Kim tested this independently and reported:
>
>     My workload does not involve NOHZ CPUs, but it runs apps under
>     heavy memory pressure, so they go into direct reclaim and get
>     stuck in drain_all_pages until the queued work runs.
>
>     unit: nanosecond
>     max(dur)    avg(dur)            count(dur)
>     166713013   487511.77786438033  1283
>
>     From the traces, the system hit drain_all_pages 1283 times; the
>     worst case was 166ms and the average 487us.
>
>     The other problem was alloc_contig_range in CMA. The PCP draining
>     sometimes takes several hundred milliseconds even when there is no
>     memory pressure and only a few pages need migrating out, because
>     the CPUs are fully booked.
>
>     Your patch perfectly removed that wasted time.

I'm not getting a sense here of the overall effect upon userspace
performance. As Thomas said last year in
https://lkml.kernel.org/r/87v92sgt3n.ffs@tglx :

: The changelogs and the cover letter have a distinct void vs. that which
: means this is just another example of 'scratch my itch' changes w/o
: proper justification.

Is there more to all of this than itchiness and if so, well, you know
the rest ;)
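
For illustration, here is a minimal userspace sketch of the locking idea
the changelog describes. It is not the kernel code: names such as pcp,
pcp_free() and pcp_drain_all() are hypothetical stand-ins, and pthread
mutexes model the new per-cpu spinlocks in 'struct per_cpu_pages'. Once
each CPU's cache has its own lock, any CPU can drain every cache
directly, without queueing work on the remote CPUs:

/*
 * Userspace model of remote per-cpu draining.  Each "CPU" owns a small
 * cache of free pages protected by its own lock, so a draining CPU can
 * empty every cache itself instead of scheduling work on each CPU.
 * Build with: cc -pthread pcp_model.c
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define NR_CPUS 4

struct page {
	struct page *next;
};

/* Stand-in for 'struct per_cpu_pages' with its new spinlock. */
struct pcp {
	pthread_mutex_t lock;	/* models the per-cpu spinlock */
	struct page *list;	/* cached free pages */
	int count;
};

static struct pcp pcp_of[NR_CPUS];

/* Fast path: a CPU frees a page into its own cache, local lock only. */
static void pcp_free(int cpu, struct page *page)
{
	struct pcp *p = &pcp_of[cpu];

	pthread_mutex_lock(&p->lock);
	page->next = p->list;
	p->list = page;
	p->count++;
	pthread_mutex_unlock(&p->lock);
}

/*
 * Slow path: drain every CPU's cache back to the allocator (free()
 * stands in for the buddy allocator here).  No IPIs and no workqueue
 * items; the remote CPUs never have to stop what they are doing.
 */
static void pcp_drain_all(void)
{
	for (int cpu = 0; cpu < NR_CPUS; cpu++) {
		struct pcp *p = &pcp_of[cpu];

		pthread_mutex_lock(&p->lock);
		while (p->list) {
			struct page *page = p->list;

			p->list = page->next;
			p->count--;
			free(page);
		}
		pthread_mutex_unlock(&p->lock);
	}
}

int main(void)
{
	for (int cpu = 0; cpu < NR_CPUS; cpu++) {
		pthread_mutex_init(&pcp_of[cpu].lock, NULL);
		for (int i = 0; i < 8; i++)
			pcp_free(cpu, malloc(sizeof(struct page)));
	}

	pcp_drain_all();	/* any CPU could run this */

	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		printf("cpu %d: %d cached pages\n", cpu, pcp_of[cpu].count);
	return 0;
}

The real patch is more involved (the spinlock interacts with IRQ state,
and drained pages return to the buddy free lists), but the shape of
pcp_drain_all() above is the part that replaces the per-cpu work items
that NOHZ_FULL CPUs were too busy to run.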