Date: Mon, 18 Apr 2011 08:13:33 +0800
From: Wu Fengguang
To: "sedat.dilek@gmail.com"
Cc: Andrew Morton, Jan Kara, Christoph Hellwig, Trond Myklebust,
	Dave Chinner, "Theodore Ts'o", Chris Mason, Peter Zijlstra,
	Mel Gorman, Rik van Riel, KOSAKI Motohiro, Greg Thelen,
	Minchan Kim, Vivek Goyal, Andrea Righi, Balbir Singh,
	linux-mm, "linux-fsdevel@vger.kernel.org", LKML
Subject: Re: [PATCH 00/12] IO-less dirty throttling v7
Message-ID: <20110418001333.GA8890@localhost>
References: <20110416132546.765212221@intel.com>
	<20110417014430.GA9419@localhost>
	<20110417041003.GA17032@localhost>
In-Reply-To: <20110417041003.GA17032@localhost>
User-Agent: Mutt/1.5.20 (2009-06-14)

Hi Sedat,

> Please revert the last commit. It's not necessary anyway.
>
> commit 84a9890ddef487d9c6d70934c0a2addc65923bcf
> Author: Wu Fengguang
> Date:   Sat Apr 16 18:38:41 2011 -0600
>
>     writeback: scale dirty proportions period with writeout bandwidth
>
>     CC: Peter Zijlstra
>     Signed-off-by: Wu Fengguang

Please do revert that commit, because I found a sleep-inside-spinlock bug
with it. Here is the fixed one (but you don't have to track this optional
patch).

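A side note on the bug itself, before the patch: it is the usual
sleep-inside-spinlock pattern. The sketch below shows the broken vs. fixed
ordering. The "broken" half is my reconstruction for illustration only (and
the claim that prop_change_shift() may sleep assumes it still takes the
descriptor mutex in lib/proportions.c); the fixed ordering is what the patch
below actually does:

	/* broken: update_completion_period() called under dirty_lock */
	spin_lock(&dirty_lock);
	update_completion_period();	/* -> prop_change_shift() may sleep */
	spin_unlock(&dirty_lock);

	/* fixed: drop the spinlock first, then update the period */
	spin_unlock(&dirty_lock);
	if (gbdi->bw_time_stamp == now)
		update_completion_period();
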
Thanks,
Fengguang
---
Subject: writeback: scale dirty proportions period with writeout bandwidth
Date: Sat Apr 16 18:38:41 CST 2011

CC: Peter Zijlstra
Signed-off-by: Wu Fengguang
---
 mm/page-writeback.c |   24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

--- linux-next.orig/mm/page-writeback.c	2011-04-17 20:52:13.000000000 +0800
+++ linux-next/mm/page-writeback.c	2011-04-18 07:57:01.000000000 +0800
@@ -121,20 +121,13 @@ static struct prop_descriptor vm_complet
 static struct prop_descriptor vm_dirties;
 
 /*
- * couple the period to the dirty_ratio:
+ * couple the period to global write throughput:
  *
- * period/2 ~ roundup_pow_of_two(dirty limit)
+ * period/2 ~ roundup_pow_of_two(write IO throughput)
  */
 static int calc_period_shift(void)
 {
-	unsigned long dirty_total;
-
-	if (vm_dirty_bytes)
-		dirty_total = vm_dirty_bytes / PAGE_SIZE;
-	else
-		dirty_total = (vm_dirty_ratio * determine_dirtyable_memory()) /
-				100;
-	return 2 + ilog2(dirty_total - 1);
+	return 2 + ilog2(default_backing_dev_info.avg_write_bandwidth);
 }
 
 /*
@@ -143,6 +136,13 @@ static int calc_period_shift(void)
 static void update_completion_period(void)
 {
 	int shift = calc_period_shift();
+
+	if (shift > PROP_MAX_SHIFT)
+		shift = PROP_MAX_SHIFT;
+
+	if (abs(shift - vm_completions.pg[0].shift) <= 1)
+		return;
+
 	prop_change_shift(&vm_completions, shift);
 	prop_change_shift(&vm_dirties, shift);
 }
@@ -180,7 +180,6 @@ int dirty_ratio_handler(struct ctl_table
 
 	ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
 	if (ret == 0 && write && vm_dirty_ratio != old_ratio) {
-		update_completion_period();
 		vm_dirty_bytes = 0;
 	}
 	return ret;
@@ -196,7 +195,6 @@ int dirty_bytes_handler(struct ctl_table
 
 	ret = proc_doulongvec_minmax(table, write, buffer, lenp, ppos);
 	if (ret == 0 && write && vm_dirty_bytes != old_bytes) {
-		update_completion_period();
 		vm_dirty_ratio = 0;
 	}
 	return ret;
@@ -1044,6 +1042,8 @@ snapshot:
 	bdi->bw_time_stamp = now;
 unlock:
 	spin_unlock(&dirty_lock);
+	if (gbdi->bw_time_stamp == now)
+		update_completion_period();
 }
 
 static unsigned long max_pause(struct backing_dev_info *bdi,
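
As a back-of-the-envelope illustration of the new period (my own numbers,
assuming avg_write_bandwidth is accounted in pages per second and a 4k
PAGE_SIZE; the patch itself does not spell this out):

	100 MB/s writeout => ~25600  pages/s => shift = 2 + ilog2(25600)  = 16
	  1 GB/s writeout => ~262144 pages/s => shift = 2 + ilog2(262144) = 20

So the proportion period now follows a few seconds' worth of completed
writeback rather than the dirty limits, the shift is clamped to
PROP_MAX_SHIFT, and update_completion_period() ignores changes of at most
one step to avoid needless flip-flopping.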