Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
        id S932303AbaGaIsl (ORCPT); Thu, 31 Jul 2014 04:48:41 -0400
Received: from cantor2.suse.de ([195.135.220.15]:54287 "EHLO mx2.suse.de"
        rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
        id S932239AbaGaIsk (ORCPT); Thu, 31 Jul 2014 04:48:40 -0400
Date: Thu, 31 Jul 2014 09:48:32 +0100
From: Mel Gorman
To: Aaron Lu
Cc: Stephen Rothwell, LKML, lkp@01.org
Subject: Re: [LKP] [mm] b72fd1470c9: -41.7% perf-profile.cpu-cycles.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_current.__page_cache_alloc.pagecache_get_page
Message-ID: <20140731084832.GP10819@suse.de>
References: <20140731055035.GB19742@aaronlu.sh.intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-15
Content-Disposition: inline
In-Reply-To: <20140731055035.GB19742@aaronlu.sh.intel.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Sender: linux-kernel-owner@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Jul 31, 2014 at 01:50:35PM +0800, Aaron Lu wrote:
> FYI, we noticed the below changes on
>
> git://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
> commit b72fd1470c9735f53485d089aa918dc327a86077 ("mm: rearrange zone fields into read-only, page alloc, statistics and page reclaim lines")
>
> test case: lkp-st02/dd-write/5m-11HDD-JBOD-cfq-xfs-10dd
>
> e28c951ff01a805           b72fd1470c9735f53485d089a
> ---------------           -------------------------
>       1.06 ~ 6%   -41.7%        0.62 ~ 3%  TOTAL perf-profile.cpu-cycles.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_current.__page_cache_alloc.pagecache_get_page
>       1.34 ~ 2%   -19.8%        1.07 ~ 2%  TOTAL perf-profile.cpu-cycles.__block_write_begin.xfs_vm_write_begin.generic_perform_write.xfs_file_buffered_aio_write.xfs_file_write_iter
>       1.19 ~ 5%   -12.1%        1.05 ~ 4%  TOTAL perf-profile.cpu-cycles.copy_from_user_atomic_iovec.iov_iter_copy_from_user_atomic.generic_perform_write.xfs_file_buffered_aio_write.xfs_file_write_iter
>       2.78 ~ 1%   -16.3%        2.32 ~ 4%  TOTAL perf-profile.cpu-cycles.__clear_user.read_zero.read_zero.vfs_read.sys_read
>   2.96e+09 ~ 4%    -5.2%    2.806e+09 ~ 0%  TOTAL perf-stat.cache-misses
>   3.86e+12 ~ 5%    -5.2%    3.658e+12 ~ 1%  TOTAL perf-stat.ref-cycles
>
> Legend:
>         ~XX%    - stddev percent
>         [+-]XX% - change percent
>

I'm not exactly sure what I'm reading here. I think it is reporting the
CPU cycles and cache misses incurred in various kernel functions. It's
not clear what the units are, but it looks like percentages of overall
cycles spent in the reported functions. That may or may not be good,
depending on whether there is a higher cost elsewhere pushing the
percentages down, but that detail is not in the report. It looks like
this is reporting that fewer clock cycles are being spent and fewer
cache misses are being incurred. What is the problem?

--
Mel Gorman
SUSE Labs
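
For what it's worth, a rough back-of-the-envelope check of the numbers
quoted above, assuming the perf-profile figures are percentages of total
cycles and using perf-stat.ref-cycles as a stand-in for that total (both
assumptions, not something the report states):

    # Hypothetical sanity check, not part of the LKP report.
    # Assumes the perf-profile values are percent-of-total-cycles and
    # treats perf-stat.ref-cycles as a rough proxy for total cycles.
    base_total, patched_total = 3.86e12, 3.658e12  # perf-stat.ref-cycles
    base_pct, patched_pct = 1.06, 0.62             # get_page_from_freelist path

    # Relative change in the reported share of the profile:
    print((patched_pct - base_pct) / base_pct * 100)   # ~ -41.5%, close to the -41.7% above

    # Approximate absolute cycles spent in that path:
    base_abs = base_total * base_pct / 100              # ~ 4.1e10
    patched_abs = patched_total * patched_pct / 100     # ~ 2.3e10
    print((patched_abs - base_abs) / base_abs * 100)    # ~ -44.6%

On those assumptions the absolute cost of the get_page_from_freelist path
falls as well, not just its share of the profile, which is consistent with
reading the report as an improvement rather than a regression.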