Date: Tue, 25 Aug 2015 17:22:20 -0700
From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: Oleg Nesterov
Cc: Ingo Molnar, Linus Torvalds, Peter Zijlstra, Tejun Heo,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 0/8] Add rcu_sync infrastructure to avoid _expedited() in percpu-rwsem
Message-ID: <20150826002220.GZ11078@linux.vnet.ibm.com>
Reply-To: paulmck@linux.vnet.ibm.com
References: <20150821174230.GA17867@redhat.com> <20150822163810.GV11078@linux.vnet.ibm.com> <20150824153431.GB24949@redhat.com>
In-Reply-To: <20150824153431.GB24949@redhat.com>

On Mon, Aug 24, 2015 at 05:34:31PM +0200, Oleg Nesterov wrote:
> On 08/22, Paul E. McKenney wrote:
> >
> > Queued for testing, thank you, Oleg!
>
> Thanks Paul!

I cannot really test it, but thus far it has at least not broken
anything else.

> > Right now, this is mostly relying on 0day and -next testing.  Any thoughts
> > for a useful torture test for this?
>
> Right now I do not have any idea how to write the meaningful test for
> rcu_sync... Perhaps something like
>
> 	struct rcu_sync_struct rss;
> 	spinlock_t lock;
>
> 	int A, B;
>
> 	void read(void)
> 	{
> 		rcu_read_lock();
>
> 		bool need_lock = !rcu_sync_is_idle(&rss);
> 		if (need_lock)
> 			spin_lock(&lock);
>
> 		BUG_ON(A != B);
>
> 		if (need_lock)
> 			spin_unlock(&lock);
>
> 		rcu_read_unlock();
> 	}
>
> 	void modify(void)
> 	{
> 		rcu_sync_enter(&rss);
>
> 		spin_lock(&lock);
> 		A++; B++;
> 		spin_unlock(&lock);
>
> 		rcu_sync_exit(&rss);
> 	}
>
> makes sense... I'll try to think.

That looks like a promising start.  There would need to be some sleep
time in modify() between rcu_sync_exit() and rcu_sync_enter().
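For example, an untested sketch of the writer side (the kthread wrapper,
the modify_kthread name, and the 10ms interval are arbitrary choices for
illustration, not something the eventual test would have to use):

	/*
	 * Uses the rss/lock/A/B from the sketch above; needs
	 * <linux/kthread.h> for kthread_should_stop() and
	 * <linux/delay.h> for msleep().
	 */
	static int modify_kthread(void *arg)
	{
		while (!kthread_should_stop()) {
			rcu_sync_enter(&rss);	/* readers forced onto the lock */

			spin_lock(&lock);
			A++; B++;
			spin_unlock(&lock);

			rcu_sync_exit(&rss);

			/*
			 * Give rcu_sync a chance to drop back into its
			 * idle state, so that the readers also exercise
			 * the lockless rcu_sync_is_idle() fast path.
			 */
			msleep(10);
		}
		return 0;
	}

Otherwise back-to-back writers keep rcu_sync out of its idle state
forever, and the readers' fast path never gets tested.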
> > One approach would be to treat it
> > like a reader-writer lock.  Other thoughts?
>
> I booted the kernel with the additional patch below, and nothing bad has
> happened, it continues to print
>
> 	Writes:  Total: 2  Max/Min: 0/0   Fail: 0
> 	Reads :  Total: 2  Max/Min: 0/0   Fail: 0
>
> However, I do not know what this code actually does, so currently I have
> no idea if this test makes any sense for percpu_rw_semaphore.

Actually, unless I am really confused, that does not look good...
I would expect something like this, from a run with rwsem_lock:

[   16.336057] Writes:  Total: 473  Max/Min: 0/0   Fail: 0
[   16.337615] Reads :  Total: 219  Max/Min: 0/0   Fail: 0
[   31.338152] Writes:  Total: 959  Max/Min: 0/0   Fail: 0
[   31.339114] Reads :  Total: 437  Max/Min: 0/0   Fail: 0
[   46.340167] Writes:  Total: 1365  Max/Min: 0/0   Fail: 0
[   46.341952] Reads :  Total: 653  Max/Min: 0/0   Fail: 0
[   61.343027] Writes:  Total: 1795  Max/Min: 0/0   Fail: 0
[   61.343968] Reads :  Total: 865  Max/Min: 0/0   Fail: 0
[   76.344034] Writes:  Total: 2220  Max/Min: 0/0   Fail: 0
[   76.345243] Reads :  Total: 1071  Max/Min: 0/0   Fail: 0

The "Total" counts should keep increasing for both writes and reads.  If
you are just seeing "Total: 2" over and over, that indicates that either
the torture test or rcu_sync got stuck somewhere.

							Thanx, Paul

> Oleg.
> ---
>
> diff --git a/kernel/locking/locktorture.c b/kernel/locking/locktorture.c
> index ec8cce2..62561ec 100644
> --- a/kernel/locking/locktorture.c
> +++ b/kernel/locking/locktorture.c
> @@ -55,7 +55,7 @@ torture_param(int, stutter, 5, "Number of jiffies to run/halt test, 0=disable");
>  torture_param(bool, verbose, true,
>  	     "Enable verbose debugging printk()s");
>  
> -static char *torture_type = "spin_lock";
> +static char *torture_type = "rwsem_lock";
>  module_param(torture_type, charp, 0444);
>  MODULE_PARM_DESC(torture_type,
>  		 "Type of lock to torture (spin_lock, spin_lock_irq, mutex_lock, ...)");
> @@ -361,10 +361,12 @@ static struct lock_torture_ops mutex_lock_ops = {
>  	.name		= "mutex_lock"
>  };
>  
> -static DECLARE_RWSEM(torture_rwsem);
> -static int torture_rwsem_down_write(void) __acquires(torture_rwsem)
> +#include <linux/percpu-rwsem.h>
> +static struct percpu_rw_semaphore pcpu_rwsem;
> +
> +static int torture_rwsem_down_write(void) __acquires(pcpu_rwsem)
>  {
> -	down_write(&torture_rwsem);
> +	percpu_down_write(&pcpu_rwsem);
>  	return 0;
>  }
>  
> @@ -384,14 +386,14 @@ static void torture_rwsem_write_delay(struct torture_random_state *trsp)
>  #endif
>  }
>  
> -static void torture_rwsem_up_write(void) __releases(torture_rwsem)
> +static void torture_rwsem_up_write(void) __releases(pcpu_rwsem)
>  {
> -	up_write(&torture_rwsem);
> +	percpu_up_write(&pcpu_rwsem);
>  }
>  
> -static int torture_rwsem_down_read(void) __acquires(torture_rwsem)
> +static int torture_rwsem_down_read(void) __acquires(pcpu_rwsem)
>  {
> -	down_read(&torture_rwsem);
> +	percpu_down_read(&pcpu_rwsem);
>  	return 0;
>  }
>  
> @@ -411,9 +413,9 @@ static void torture_rwsem_read_delay(struct torture_random_state *trsp)
>  #endif
>  }
>  
> -static void torture_rwsem_up_read(void) __releases(torture_rwsem)
> +static void torture_rwsem_up_read(void) __releases(pcpu_rwsem)
>  {
> -	up_read(&torture_rwsem);
> +	percpu_up_read(&pcpu_rwsem);
>  }
>  
>  static struct lock_torture_ops rwsem_lock_ops = {
> @@ -645,6 +647,11 @@ static int __init lock_torture_init(void)
>  		&rwsem_lock_ops,
>  	};
>  
> +	/*
> +	 * TODO: DECLARE_PERCPU_RWSEM(). The patch already exists.
> +	 */
> +	BUG_ON(percpu_init_rwsem(&pcpu_rwsem));
> +
>  	if (!torture_init_begin(torture_type, verbose, &torture_runnable))
>  		return -EBUSY;
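P.S.  For anyone rerunning this: torture_type is an ordinary module
parameter, so (assuming the usual module-parameter plumbing; untested
here) the default change above should not really be needed.  Something
like

	modprobe locktorture torture_type=rwsem_lock

or, with locktorture built in, booting with

	locktorture.torture_type=rwsem_lock

ought to select the percpu-rwsem-backed ops directly.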