Date: Wed, 9 May 2018 23:02:21 +0800
From: Aaron Lu
To: Johannes Weiner
Cc: kernel test robot, lkp@01.org, linux-kernel@vger.kernel.org
Subject: Re: [LKP] [lkp-robot] [mm] e27be240df: will-it-scale.per_process_ops -27.2% regression
Message-ID: <20180509150221.GA4848@intel.com>
References: <20180508053451.GD30203@yexl-desktop> <20180508172640.GB24175@cmpxchg.org> <20180509023211.GB10016@intel.com>
In-Reply-To: <20180509023211.GB10016@intel.com>
User-Agent: Mutt/1.9.2 (2017-12-15)
On Wed, May 09, 2018 at 10:32:11AM +0800, Aaron Lu wrote:
> On Tue, May 08, 2018 at 01:26:40PM -0400, Johannes Weiner wrote:
> > Hello,
> >
> > On Tue, May 08, 2018 at 01:34:51PM +0800, kernel test robot wrote:
> > > FYI, we noticed a -27.2% regression of will-it-scale.per_process_ops due to commit:
> > >
> > > commit: e27be240df53f1a20c659168e722b5d9f16cc7f4 ("mm: memcg: make sure memory.events is uptodate when waking pollers")
> > > https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
> > >
> > > in testcase: will-it-scale
> > > on test machine: 72 threads Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz with 128G memory
> > > with following parameters:
> > >
> > > 	nr_task: 100%
> > > 	mode: process
> > > 	test: page_fault3
> > > 	cpufreq_governor: performance
> > >
> > > test-description: Will It Scale takes a testcase and runs it from 1 through to n parallel copies to see if the testcase will scale. It builds both a process and threads based test in order to see any differences between the two.
> > > test-url: https://github.com/antonblanchard/will-it-scale
> >
> > This is surprising. Do you run these tests in a memory cgroup with a
> > limit set? Can you dump that cgroup's memory.events after the run?
>
> There is no cgroup related setup so yes, this is surprising.
> But the result is quite stable, I have just confirmed on another
> Haswell-EP machine.
>
> perf shows increased cycles spent for lock_page_memcg and
> unlock_page_memcg, maybe this can shed some light. Full profile for this
> commit and its parent are attached.
>
> I have also attached dmesg for both commits in case they are useful,
> please feel free to let me know if you need any other information. We
> also collected a ton of other information during the run like
> /proc/vmstat, /proc/meminfo, /proc/interrupts etc.

A test on Broadwell-EP also showed a 35% regression; here is a list of
functions that take more CPU cycles with this commit, according to perf:

a38c015f3156895b e27be240df53f1a20c659168e7
---------------- --------------------------
         %stddev     %change         %stddev
             \          |                \
  58033709           -35.0%   37727244        will-it-scale.workload
       ...
       ...
      3.82            +6.1        9.97        perf-profile.self.cycles-pp.handle_mm_fault
      3.19            +6.2        9.37        perf-profile.self.cycles-pp.page_remove_rmap
      0.25            +6.5        6.71        perf-profile.self.cycles-pp.__unlock_page_memcg
      3.63            +7.5       11.15        perf-profile.self.cycles-pp.page_add_file_rmap
      0.60            +8.1        8.70        perf-profile.self.cycles-pp.lock_page_memcg
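
For anyone who wants to poke at this without the full LKP setup: the
page_fault3 workload essentially has each worker process repeatedly
write-faulting a shared, file-backed mapping, which is why the fault and
file-rmap paths dominate the profile above. Below is a minimal
single-process sketch of that pattern; the mapping size, pass count, and
temp-file handling are simplifications of my own and do not match the
will-it-scale source exactly, so treat it as an illustration rather than
the actual testcase (the real one is at the test-url above).

	#include <stdio.h>
	#include <stdlib.h>
	#include <sys/mman.h>
	#include <unistd.h>

	#define MEMSIZE (128UL * 1024 * 1024)	/* per-worker mapping size (assumed) */

	int main(void)
	{
		char tmpfile[] = "/tmp/page_fault3.XXXXXX";
		long page_size = sysconf(_SC_PAGESIZE);
		unsigned long off, passes = 0;
		char *map;
		int fd, i;

		fd = mkstemp(tmpfile);
		if (fd < 0 || ftruncate(fd, MEMSIZE) < 0) {
			perror("setup");
			return 1;
		}
		unlink(tmpfile);

		for (i = 0; i < 100; i++) {
			map = mmap(NULL, MEMSIZE, PROT_READ | PROT_WRITE,
				   MAP_SHARED, fd, 0);
			if (map == MAP_FAILED) {
				perror("mmap");
				return 1;
			}

			/*
			 * One write fault per page: page_add_file_rmap on the
			 * fault, page_remove_rmap when the mapping is torn down,
			 * with lock_page_memcg/unlock_page_memcg taken around
			 * the page state updates.
			 */
			for (off = 0; off < MEMSIZE; off += page_size)
				map[off] = '*';

			munmap(map, MEMSIZE);
			passes++;
		}

		printf("passes: %lu\n", passes);
		close(fd);
		return 0;
	}

Running one copy of this per CPU roughly corresponds to the
nr_task: 100% / mode: process configuration used in the report.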