Date: Fri, 8 Jun 2018 10:24:51 +0100
From: Mel Gorman <mgorman@techsingularity.net>
To: Jirka Hladky
Cc: Jakub Racek, linux-kernel, "Rafael J. Wysocki", Len Brown,
 linux-acpi@vger.kernel.org
Wysocki" , Len Brown , linux-acpi@vger.kernel.org Subject: Re: [4.17 regression] Performance drop on kernel-4.17 visible on Stream, Linpack and NAS parallel benchmarks Message-ID: <20180608092451.mwzr6pvxh2cprzju@techsingularity.net> References: <20180606122731.GB27707@jra-laptop.brq.redhat.com> <20180607123915.avrqbpp4adgj7ck4@techsingularity.net> <20180608074057.jtxczsw3jwx6boti@techsingularity.net> MIME-Version: 1.0 Content-Type: text/plain; charset=iso-8859-15 Content-Disposition: inline In-Reply-To: User-Agent: NeoMutt/20170912 (1.9.0) Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Fri, Jun 08, 2018 at 10:49:03AM +0200, Jirka Hladky wrote: > Hi Mel, > > automatic NUMA balancing doesn't run long enough to migrate all the > > memory. That would definitely be the case for STREAM. > > This could explain the behavior we observe. stream is running ~20 seconds > at the moment. I can easily change the runtime by changing the number of > iterations. What is the time period when you expect the memory to be fully > migrated? > Unknown and unknowable. It depends entirely on the reference pattern of the different threads. If they are fully parallelised with private buffers that are page-aligned then I expect it to be quick (to pass the 2-reference filter). If threads are sharing data on a 4K (base page case) or 2M boundary (THP enabled) then it may take longer as two or more threads will disagree on what the appropriate placement for a page is. > I have now checked numastat logs and after 15 seconds I see roughly 80MiB > out of 200MiB of the allocated memory migrated for each of 10 processes > which have changed the NUMA CPU node after started. This is on 2 > socket Gold 6126 CPU @ 2.60GHz server with DDR4 2666 MHz. That's 800 MiB of > memory migrated in 15 seconds which is results in the average migration > rate of 50MiB/s - is this an expected value? > I expect that to be far short of the capabilities of the machine. Again, migrations can be delayed indefinitely if threads have buffers that are not page-aligned (4K or 2M depending). -- Mel Gorman SUSE Labs