Date: Wed, 26 Aug 2015 07:29:48 -0700
From: "Paul E. McKenney"
Reply-To: paulmck@linux.vnet.ibm.com
To: Oleg Nesterov
Cc: Ingo Molnar, Linus Torvalds, Peter Zijlstra, Tejun Heo, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 0/8] Add rcu_sync infrastructure to avoid _expedited() in percpu-rwsem
Message-ID: <20150826142948.GE11078@linux.vnet.ibm.com>
References: <20150821174230.GA17867@redhat.com> <20150822163810.GV11078@linux.vnet.ibm.com> <20150824153431.GB24949@redhat.com> <20150826002220.GZ11078@linux.vnet.ibm.com> <20150826121643.GA10831@redhat.com> <20150826125215.GA25142@redhat.com>
In-Reply-To: <20150826125215.GA25142@redhat.com>

On Wed, Aug 26, 2015 at 02:52:15PM +0200, Oleg Nesterov wrote:
> On 08/26, Oleg Nesterov wrote:
> >
> > On 08/25, Paul E. McKenney wrote:
> > >
> > > On Mon, Aug 24, 2015 at 05:34:31PM +0200, Oleg Nesterov wrote:
> > > >
> > > > I booted the kernel with the additional patch below, and nothing bad has
> > > > happened, it continues to print
> > > > Writes: Total: 2 Max/Min: 0/0 Fail: 0
> > > > Reads : Total: 2 Max/Min: 0/0 Fail: 0
> > > > However, I do not know what this code actually does, so currently I have
> > > > no idea if this test makes any sense for percpu_rw_semaphore.
> > >
> > > Actually, unless I am really confused, that does not look good...
> > >
> > > I would expect something like this, from a run with rwsem_lock:
> > >
> > > [ 16.336057] Writes: Total: 473 Max/Min: 0/0 Fail: 0
> > > [ 16.337615] Reads : Total: 219 Max/Min: 0/0 Fail: 0
> > > [ 31.338152] Writes: Total: 959 Max/Min: 0/0 Fail: 0
> > > [ 31.339114] Reads : Total: 437 Max/Min: 0/0 Fail: 0
> > > [ 46.340167] Writes: Total: 1365 Max/Min: 0/0 Fail: 0
> > > [ 46.341952] Reads : Total: 653 Max/Min: 0/0 Fail: 0
> > > [ 61.343027] Writes: Total: 1795 Max/Min: 0/0 Fail: 0
> > > [ 61.343968] Reads : Total: 865 Max/Min: 0/0 Fail: 0
> > > [ 76.344034] Writes: Total: 2220 Max/Min: 0/0 Fail: 0
> > > [ 76.345243] Reads : Total: 1071 Max/Min: 0/0 Fail: 0
> > >
> > > The "Total" should increase for writes and for reads -- if you are
> > > just seeing "Total: 2" over and over, that indicates that either
> > > the torture test or rcu_sync got stuck somewhere.
> >
> > Hmm. I reverted the change in locktorture.c, and I see the same
> > numbers when I boot the kernel with
> >
> > locktorture.verbose=1 locktorture.torture_type=rwsem_lock
> >
> > parameters.
> >
> > Writes: Total: 2 Max/Min: 0/0 Fail: 0
> > Reads : Total: 2 Max/Min: 0/0 Fail: 0
> >
> > "Total" doesn't grow. Looks like something is wrong with locktorture.
> > I'll try to re-check...
>
> Heh ;) torture threads spin in stutter_wait().
> Added another parameter,
>
> locktorture.torture_runnable=1
>
> now I see similar numbers
>
> Writes: Total: 1242 Max/Min: 0/0 Fail: 0
> Reads : Total: 892 Max/Min: 0/0 Fail: 0
> Writes: Total: 2485 Max/Min: 0/0 Fail: 0
> Reads : Total: 1796 Max/Min: 0/0 Fail: 0
> Writes: Total: 3786 Max/Min: 0/0 Fail: 0
> Reads : Total: 2713 Max/Min: 0/0 Fail: 0
> Writes: Total: 5045 Max/Min: 0/0 Fail: 0
> Reads : Total: 3636 Max/Min: 0/0 Fail: 0
>
> with or without the s/rw_semaphore/percpu_rw_semaphore/ change in locktorture.c

Whew!!! ;-)

							Thanx, Paul
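
For readers following along, here is a rough idea of what the s/rw_semaphore/percpu_rw_semaphore/ swap in kernel/locking/locktorture.c could look like. This is a hedged sketch modeled on the existing rwsem_lock ops table, not the patch Oleg actually tested: the helper names, the "percpu_rwsem_lock" ops name, and the reuse of the existing torture_rwsem_write_delay()/torture_rwsem_read_delay() helpers are assumptions; only percpu_init_rwsem(), percpu_down_read()/percpu_up_read(), and percpu_down_write()/percpu_up_write() are the real percpu_rw_semaphore primitives.

/*
 * Hedged sketch, not the patch posted in this thread.  Assumes it is
 * placed in locktorture.c after the existing torture_rwsem_*_delay()
 * helpers so they can be reused for the delay callbacks.
 */
#include <linux/percpu-rwsem.h>

static struct percpu_rw_semaphore pcpu_rwsem;

static void torture_percpu_rwsem_init(void)
{
	BUG_ON(percpu_init_rwsem(&pcpu_rwsem));
}

static int torture_percpu_rwsem_down_write(void) __acquires(pcpu_rwsem)
{
	percpu_down_write(&pcpu_rwsem);
	return 0;
}

static void torture_percpu_rwsem_up_write(void) __releases(pcpu_rwsem)
{
	percpu_up_write(&pcpu_rwsem);
}

static int torture_percpu_rwsem_down_read(void) __acquires(pcpu_rwsem)
{
	percpu_down_read(&pcpu_rwsem);
	return 0;
}

static void torture_percpu_rwsem_up_read(void) __releases(pcpu_rwsem)
{
	percpu_up_read(&pcpu_rwsem);
}

static struct lock_torture_ops percpu_rwsem_lock_ops = {
	.init		= torture_percpu_rwsem_init,
	.writelock	= torture_percpu_rwsem_down_write,
	.writeunlock	= torture_percpu_rwsem_up_write,
	.readlock	= torture_percpu_rwsem_down_read,
	.readunlock	= torture_percpu_rwsem_up_read,
	.write_delay	= torture_rwsem_write_delay,	/* reuse rwsem delays (assumed) */
	.read_delay	= torture_rwsem_read_delay,
	.name		= "percpu_rwsem_lock"
};

Once such an ops table is added to locktorture's torture_ops[] array, the run above would presumably be started with locktorture.torture_type=percpu_rwsem_lock (matching the .name chosen in this sketch) together with locktorture.torture_runnable=1, the parameter that unblocked the statistics in the first place.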