Date: Thu, 28 Apr 2016 13:17:08 +0800
From: Aaron Lu
To: Michal Hocko
Cc: "Huang, Ying", kernel test robot, Stephen Rothwell, Tetsuo Handa,
	Hillf Danton, LKML, Johannes Weiner, Mel Gorman, David Rientjes,
	Andrew Morton, lkp@01.org, KAMEZAWA Hiroyuki, linux-mm@kvack.org
Subject: Re: [LKP] [lkp] [mm, oom] faad2185f4: vm-scalability.throughput -11.8% regression
Message-ID: <20160428051659.GA10843@aaronlu.sh.intel.com>
References: <20160427031556.GD29014@yexl-desktop>
	<20160427073617.GA2179@dhcp22.suse.cz>
	<87fuu7iht0.fsf@yhuang-dev.intel.com>
	<20160427083733.GE2179@dhcp22.suse.cz>
	<87bn4vigpc.fsf@yhuang-dev.intel.com>
	<20160427091718.GG2179@dhcp22.suse.cz>
In-Reply-To: <20160427091718.GG2179@dhcp22.suse.cz>

On Wed, Apr 27, 2016 at 11:17:19AM +0200, Michal Hocko wrote:
> On Wed 27-04-16 16:44:31, Huang, Ying wrote:
> > Michal Hocko writes:
> >
> > > On Wed 27-04-16 16:20:43, Huang, Ying wrote:
> > >> Michal Hocko writes:
> > >>
> > >> > On Wed 27-04-16 11:15:56, kernel test robot wrote:
> > >> >> FYI, we noticed vm-scalability.throughput -11.8% regression with the following commit:
> > >> >
> > >> > Could you be more specific what the test does please?
> > >>
> > >> The sub-testcase of vm-scalability is swap-w-rand. An RAM emulated pmem
> > >> device is used as a swap device, and a test program will allocate/write
> > >> anonymous memory randomly to exercise page allocation, reclaiming, and
> > >> swapping in code path.
> > >
> > > Can I download the test with the setup to play with this?
> >
> > There are reproduce steps in the original report email.
> >
> > To reproduce:
> >
> > git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
> > cd lkp-tests
> > bin/lkp install job.yaml # job file is attached in this email
> > bin/lkp run job.yaml
> >
> > The job.yaml and kconfig file are attached in the original report email.
>
> Thanks for the instructions. My bad I have overlooked that in the
> initial email. I have checked the configuration file and it seems rather
> hardcoded for a particular HW. It expects a machine with 128G and
> reserves 96G!4G which might lead to different amount of memory in the
> end depending on the particular memory layout.

Indeed, the job file needs manual change. The attached job file is the
one we used on the test machine.

> Before I go and try to recreate a similar setup, how stable are the
> results from this test. Random access pattern sounds like rather
> volatile to be consider for a throughput test. Or is there any other
> side effect I am missing and something fails which didn't use to
> previously.

I have the same doubt too, but the results look really stable (only for
commit 0da9597ac9c0, see below for more explanation).
We did 8 runs for this report and the standard deviation (represented by
the %stddev shown in the original report) is used to show exactly this.
I just checked the results again and found that all 8 runs for your
commit faad2185f482 OOMed; only 1 of them was able to finish the test
before the OOM occurred and got a throughput value of 38653.

The source code for this test is here:
https://git.kernel.org/cgit/linux/kernel/git/wfg/vm-scalability.git/tree/usemem.c

And it's started as:

./usemem --runtime 300 -n 16 --random 6368538624

which means: fork 16 processes, each dealing with around 6GiB of data.
By dealing here, I mean each process will mmap an anonymous region of
6GiB size and then write data to random places in that area, which will
trigger swapouts and swapins once memory is used up (since the system
has 128GiB memory and 96GiB of it is used by the pmem driver as swap
space, memory will be used up after a little while). A rough standalone
sketch of this access pattern is appended at the end of this mail.

So I guess the question here is: after the OOM rework, is the OOM
expected for such a case? If so, then we can ignore this report.
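
For reference, something like the toy program below captures the
per-process behavior described above. This is only a minimal sketch,
not the actual usemem.c: the process count, region size and runtime
mirror the command line quoted above, while the 4KiB page granularity
and the memset write pattern are my own simplification.

/*
 * Sketch (not usemem.c): each child mmaps a ~6GiB anonymous region and
 * writes to random pages in it until the runtime expires. Once physical
 * memory is exhausted, the random writes start causing swapouts/swapins.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/wait.h>

#define NR_PROCS   16
#define REGION_SZ  6368538624UL   /* bytes per process, ~6GiB */
#define RUNTIME    300            /* seconds */
#define PAGE_SZ    4096           /* assumed page size */

static void random_writer(void)
{
	char *area;
	size_t nr_pages = REGION_SZ / PAGE_SZ;
	time_t end = time(NULL) + RUNTIME;

	area = mmap(NULL, REGION_SZ, PROT_READ | PROT_WRITE,
		    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (area == MAP_FAILED) {
		perror("mmap");
		exit(1);
	}

	srand(getpid());
	while (time(NULL) < end) {
		/* touch a random page: faults in fresh pages at first,
		 * then causes swap traffic once memory is used up */
		size_t page = (size_t)rand() % nr_pages;
		memset(area + page * PAGE_SZ, 0xaa, PAGE_SZ);
	}
	exit(0);
}

int main(void)
{
	int i;

	for (i = 0; i < NR_PROCS; i++)
		if (fork() == 0)
			random_writer();

	for (i = 0; i < NR_PROCS; i++)
		wait(NULL);

	return 0;
}

With 16 such processes the anonymous working set is roughly 95GiB,
well above the ~32GiB left after the pmem reservation, so heavy
swapping (and, with the reworked OOM detection, possibly an OOM kill)
is expected once the regions have been populated.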