Subject: Re: [PATCH 1/2] percpu-rwsem: use synchronize_sched_expedited
From: Jeff Chua
To: Mikulas Patocka
Cc: Linus Torvalds, Jens Axboe, Lai Jiangshan, Jan Kara, lkml, linux-fsdevel
Date: Wed, 28 Nov 2012 22:19:04 +0800
References: <20121120180949.GG1408@quack.suse.cz> <50AF7901.20401@kernel.dk> <50B46E05.70906@kernel.dk>

On Wed, Nov 28, 2012 at 11:59 AM, Mikulas Patocka wrote:
>
> On Tue, 27 Nov 2012, Jeff Chua wrote:
>
>> On Tue, Nov 27, 2012 at 3:38 PM, Jens Axboe wrote:
>> > On 2012-11-27 06:57, Jeff Chua wrote:
>> >> On Sun, Nov 25, 2012 at 7:23 AM, Jeff Chua wrote:
>> >>> On Sun, Nov 25, 2012 at 5:09 AM, Mikulas Patocka wrote:
>> >>>> So it's better to slow down mount.
>> >>>
>> >>> I'm quite proud of the Linux boot time pitted against other OSes. Even
>> >>> with 10 partitions, Linux can boot up in just a few seconds, but now
>> >>> you're saying we need to do this semaphore check at boot up. Doing so
>> >>> adds an extra 4 seconds to boot.
>> >>
>> >> By the way, I'm using a pretty fast SSD (Samsung PM830) and a fast CPU
>> >> (2.8GHz). For those on a slower hard disk or slower CPU, I wonder what
>> >> kind of degradation this would cause, or whether it would be the same.
>> >
>> > It'd likely be the same slowdown time-wise, but as a percentage it
>> > would appear smaller on a slower disk.
>> >
>> > Could you please test Mikulas' suggestion of changing
>> > synchronize_sched() in include/linux/percpu-rwsem.h to
>> > synchronize_sched_expedited()?
>>
>> Tested. It seems as fast as before, but may be a "tick" slower; just
>> perception. I was getting pretty much 0.012s with everything reverted.
>> With synchronize_sched_expedited(), it's 0.012s ~ 0.013s.
>> So, it's good.
>>
>> > linux-next also has a rewrite of the per-cpu rw sems, out of Andrew's
>> > tree. It would be a good data point if you could test that, too.
>>
>> Tested. It's slower: 0.350s. But still faster than the 0.500s without
>> the patch.
>>
>> # time mount /dev/sda1 /mnt; sync; sync; umount /mnt
>>
>> So, here's the comparison ...
>>
>> 0.500s  3.7.0-rc7
>> 0.168s  3.7.0-rc2
>> 0.012s  3.6.0
>> 0.013s  3.7.0-rc7 + synchronize_sched_expedited()
>> 0.350s  3.7.0-rc7 + Oleg's patch
>>
>> Thanks,
>> Jeff.
>
> OK, I'm sending two patches to reduce mount times. If it is possible to
> put them in 3.7.0, put them there.
>
> Mikulas
>
> ---
>
> percpu-rwsem: use synchronize_sched_expedited
>
> Use synchronize_sched_expedited() instead of synchronize_sched()
> to improve mount speed.
>
> This patch improves mount time from 0.500s to 0.013s.
>
> Note: if realtime people complain about the use of
> synchronize_sched_expedited() and synchronize_rcu_expedited(), I suggest
> that they introduce an option CONFIG_REALTIME or
> /proc/sys/kernel/realtime and turn off these *_expedited functions when
> the option is enabled (i.e. turn synchronize_sched_expedited into
> synchronize_sched and synchronize_rcu_expedited into synchronize_rcu).
>
> Signed-off-by: Mikulas Patocka
>
> ---
>  include/linux/percpu-rwsem.h | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> Index: linux-3.7-rc7/include/linux/percpu-rwsem.h
> ===================================================================
> --- linux-3.7-rc7.orig/include/linux/percpu-rwsem.h	2012-11-28 02:41:03.000000000 +0100
> +++ linux-3.7-rc7/include/linux/percpu-rwsem.h	2012-11-28 02:41:15.000000000 +0100
> @@ -13,7 +13,7 @@ struct percpu_rw_semaphore {
>  };
>
>  #define light_mb()	barrier()
> -#define heavy_mb()	synchronize_sched()
> +#define heavy_mb()	synchronize_sched_expedited()
>
>  static inline void percpu_down_read(struct percpu_rw_semaphore *p)
>  {
> @@ -51,7 +51,7 @@ static inline void percpu_down_write(str
>  {
>  	mutex_lock(&p->mtx);
>  	p->locked = true;
> -	synchronize_sched(); /* make sure that all readers exit the rcu_read_lock_sched region */
> +	synchronize_sched_expedited(); /* make sure that all readers exit the rcu_read_lock_sched region */
>  	while (__percpu_count(p->counters))
>  		msleep(1);
>  	heavy_mb(); /* C, between read of p->counter and write to data, paired with B */

Mikulas,

Tested this one and it's good! Back to 3.6.0 behavior.

As for the 2nd patch (block_dev.c), it didn't really make any difference
for ext2/3/4, but for reiserfs it does. So, wouldn't just the patch above
(synchronize_sched_expedited) be good enough?

Thanks,
Jeff

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/