Date: Fri, 18 Oct 2019 15:09:59 +0100
From: Mel Gorman
To: Michal Hocko
Cc: Andrew Morton, Vlastimil Babka, Thomas Gleixner, Matt Fleming,
    Borislav Petkov, Linux-MM, Linux Kernel Mailing List
Subject: Re: [PATCH 2/3] mm, meminit: Recalculate pcpu batch and high limits after init completes
Message-ID: <20191018140959.GK3321@techsingularity.net>
References: <20191018105606.3249-1-mgorman@techsingularity.net>
 <20191018105606.3249-3-mgorman@techsingularity.net>
 <20191018130127.GP5017@dhcp22.suse.cz>
In-Reply-To: <20191018130127.GP5017@dhcp22.suse.cz>
User-Agent: Mutt/1.10.1 (2018-07-13)
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, Oct 18, 2019 at 03:01:27PM +0200, Michal Hocko wrote:
> On Fri 18-10-19 11:56:05, Mel Gorman wrote:
> > Deferred memory initialisation updates zone->managed_pages during
> > the initialisation phase but before that finishes, the per-cpu page
> > allocator (pcpu) calculates the number of pages allocated/freed in
> > batches as well as the maximum number of pages allowed on a per-cpu list.
> > As zone->managed_pages is not up to date yet, the pcpu initialisation
> > calculates inappropriately low batch and high values.
> >
> > This increases zone lock contention quite severely in some cases with the
> > degree of severity depending on how many CPUs share a local zone and the
> > size of the zone. A private report indicated that kernel build times were
> > excessive with extremely high system CPU usage. A perf profile indicated
> > that a large chunk of time was lost on zone->lock contention.
> >
> > This patch recalculates the pcpu batch and high values after deferred
> > initialisation completes on each node. It was tested on a 2-socket AMD
> > EPYC 2 machine using a kernel compilation workload -- allmodconfig and
> > all available CPUs.
> >
> > mmtests configuration: config-workload-kernbench-max
> > Configuration was modified to build on a fresh XFS partition.
> >
> > kernbench
> >                                 5.4.0-rc3              5.4.0-rc3
> >                                   vanilla         resetpcpu-v1r1
> > Amean     user-256    13249.50 (   0.00%)    15928.40 * -20.22%*
> > Amean     syst-256    14760.30 (   0.00%)     4551.77 *  69.16%*
> > Amean     elsp-256      162.42 (   0.00%)      118.46 *  27.06%*
> > Stddev    user-256       42.97 (   0.00%)       50.83 ( -18.30%)
> > Stddev    syst-256      336.87 (   0.00%)       33.70 (  90.00%)
> > Stddev    elsp-256        2.46 (   0.00%)        0.81 (  67.01%)
> >
> >                           5.4.0-rc3      5.4.0-rc3
> >                             vanilla resetpcpu-v1r1
> > Duration User            39766.24       47802.92
> > Duration System          44298.10       13671.93
> > Duration Elapsed           519.11         387.65
> >
> > The patch reduces system CPU usage by 69.16% and total build time by
> > 27.06%. The variance of system CPU usage is also much reduced.
>
> The fix makes sense. It would be nice to see the difference in the batch
> sizes from the initial setup compared to the one after the deferred
> initialization is done
>

Before the patch, this was the breakdown of batch and high values over
all zones on the system:

    256  batch: 1
    256  batch: 63
    512  batch: 7
    256  high:  0
    256  high:  378
    512  high:  42

i.e. 512 pcpu pagesets had a batch limit of 7 and a high limit of 42.
These were for the NORMAL zones on the system. After the patch:

    256  batch: 1
    768  batch: 63
    256  high:  0
    768  high:  378

> > Cc: stable@vger.kernel.org # v4.15+
>
> Hmm, are you sure about 4.15? Doesn't this go all the way down to
> deferred initialization? I do not see any recent changes on when
> setup_per_cpu_pageset is called.
>

No, I'm not 100% sure. It looks like this was always an issue from the
code, but it did not happen on at least one 4.12-based distribution
kernel for reasons that are non-obvious. Either way, the tag should have
been "v4.1+".

Thanks.

-- 
Mel Gorman
SUSE Labs
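[Editorial aside, not part of the thread: the batch and high numbers quoted above can be reproduced with a simplified model of the kernel's zone_batchsize() plus the high = 6 * batch rule from pageset_set_batch(). This is a rough sketch assuming 4 KiB pages and a hypothetical helper name, not the kernel source itself.]

```python
# Simplified model of how the per-cpu allocator batch size is derived
# from zone->managed_pages (cf. zone_batchsize()), with the high limit
# taken as six batches (cf. pageset_set_batch()). Assumes 4 KiB pages.

PAGE_SIZE = 4096

def rounddown_pow_of_two(n: int) -> int:
    """Largest power of two <= n (requires n >= 1)."""
    return 1 << (n.bit_length() - 1)

def zone_batch_high(managed_pages: int) -> tuple[int, int]:
    # Start with ~0.1% of the zone, capped at 1 MiB worth of pages.
    batch = min(managed_pages // 1024, (1024 * 1024) // PAGE_SIZE)
    batch //= 4                    # keep zone->lock hold times short
    batch = max(batch, 1)
    # Round down to a power-of-two-minus-one value.
    batch = rounddown_pow_of_two(batch + batch // 2) - 1
    return batch, 6 * batch

# A large, fully initialised zone yields the post-patch values:
print(zone_batch_high(16 * 1024 * 1024))   # (63, 378)
# A zone whose managed_pages still holds the small pre-deferred-init
# count yields the low pre-patch values:
print(zone_batch_high(30000))              # (7, 42)
```

The model shows why the bug matters: batch and high are monotone in managed_pages, so computing them before deferred initialisation finishes locks in values sized for a nearly empty zone.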