Subject: Re: [PATCH 4/5] locking/rwsem: Avoid deceiving lock spinners
From: Tim Chen
To: Jason Low
Cc: Davidlohr Bueso, Peter Zijlstra, Ingo Molnar, "Paul E. McKenney",
 Michel Lespinasse, linux-kernel@vger.kernel.org
Date: Tue, 03 Feb 2015 13:48:05 -0800
Message-ID: <1423000085.9530.98.camel@schen9-desk2.jf.intel.com>
In-Reply-To: <1422997472.2368.10.camel@j-VirtualBox>
References: <1422609267-15102-1-git-send-email-dave@stgolabs.net>
 <1422609267-15102-5-git-send-email-dave@stgolabs.net>
 <1422669098.9530.33.camel@schen9-desk2.jf.intel.com>
 <1422671289.28351.1.camel@stgolabs.net>
 <1422983812.9530.43.camel@schen9-desk2.jf.intel.com>
 <1422986041.2368.3.camel@j-VirtualBox>
 <1422992616.9530.78.camel@schen9-desk2.jf.intel.com>
 <1422997472.2368.10.camel@j-VirtualBox>

On Tue, 2015-02-03 at 13:04 -0800, Jason Low wrote:
> On Tue, 2015-02-03 at 11:43 -0800, Tim Chen wrote:
> > On Tue, 2015-02-03 at 09:54 -0800, Jason Low wrote:
> > > On Tue, 2015-02-03 at 09:16 -0800, Tim Chen wrote:
> > > > > > >
> > > > > > > +        if (READ_ONCE(sem->owner))
> > > > > > > +                return true; /* new owner, continue spinning */
> > > > > > > +
> > > > > >
> > > > > > Do you have some comparison data of whether it is more advantageous
> > > > > > to continue spinning when the owner changes?  After the above change,
> > > > > > rwsem will behave more like a spin lock for the write lock and
> > > > > > will keep spinning when the lock changes ownership.
> > > > >
> > > > > But recall we still abort when need_resched, so the spinning isn't
> > > > > infinite. Never has been.
> > > > >
> > > > > > Now during heavy
> > > > > > lock contention, if we don't continue spinning and sleep, we may use the
> > > > > > clock cycles for actually running other threads.
> > > > >
> > > > > Under heavy contention, time spinning will force us to ultimately block
> > > > > anyway.
> > > >
> > > > The question is, under heavy contention, if we are going to block anyway,
> > > > won't it be more advantageous not to continue spinning so we can use
> > > > the cycles for useful tasks?
> > >
> > > Hi Tim,
> > >
> > > Now that we have the OSQ logic, under heavy contention there will still
> > > only be one thread spinning on the owner at a time.
> >
> > That's true. We cannot have the lock grabbed by a new write
> > contender, as any new write contender of the lock will be
> > queued by the OSQ logic.  Only the thread doing the optimistic
> > spin is attempting the write lock.  In other words, switching
> > of the rwsem's write owner to a new owner cannot happen.
>
> Another thread can still obtain the write lock in the fast path though,
> right?  We try to obtain the write lock once before calling
> rwsem_down_write_failed().

True. The owner-change check is still needed then.  Thinking more about
this, I now agree that continuing to spin is the right thing to do.
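
For readers without the patch at hand: the hunk quoted above sits at the
tail of the owner-spin loop.  Below is a simplified sketch of that shape,
modeled on rwsem_spin_on_owner() in kernel/locking/rwsem-xadd.c of this
era; it is abridged rather than the literal patch text, so take it as an
outline only.

/* Sketch only: simplified, not the literal patch text. */
static noinline
bool rwsem_spin_on_owner(struct rw_semaphore *sem, struct task_struct *owner)
{
        long count;

        rcu_read_lock();
        while (sem->owner == owner) {
                /*
                 * Compiler barrier: re-read sem->owner each pass and keep
                 * the owner->on_cpu load after the owner check; the RCU
                 * read section keeps the task_struct valid meanwhile.
                 */
                barrier();

                /* Abort when the owner is not running or we must reschedule. */
                if (!owner->on_cpu || need_resched()) {
                        rcu_read_unlock();
                        return false;
                }

                cpu_relax_lowlatency();
        }
        rcu_read_unlock();

        /*
         * The hunk under discussion: a non-NULL owner here means the lock
         * was handed off to another writer, so keep spinning instead of
         * treating the owner change as a reason to block.
         */
        if (READ_ONCE(sem->owner))
                return true; /* new owner, continue spinning */

        /*
         * Owner is NULL: the lock could be free or held by readers.
         * Check the count to tell which before deciding to keep spinning.
         */
        count = READ_ONCE(sem->count);
        return count == 0 || count == RWSEM_WAITING_BIAS;
}

With that check in place, a straight writer-to-writer hand-off keeps the
spinner going; only a cleared owner field (lock free or reader-held) falls
through to the count check.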
The number of threads that can contend for the write lock has been greatly
reduced by the OSQ logic.  Most of the time, a new thread attempting the
write lock will try it only once and then go directly into the OSQ.  The
probability that the thread at the head of the OSQ succeeds when it retries
the write lock is therefore high, so we should do it.

Davidlohr, you can add my Ack for this patch.

Thanks.

Tim
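
As a reference for the spin-then-queue path described above, here is a
simplified sketch modeled on rwsem_optimistic_spin() from
kernel/locking/rwsem-xadd.c of this era.  Helper names follow that file,
but the body is abridged, so treat it as an outline rather than the exact
source.

/* Sketch only: simplified, not the literal source. */
static bool rwsem_optimistic_spin(struct rw_semaphore *sem)
{
        struct task_struct *owner;
        bool taken = false;

        preempt_disable();

        if (!rwsem_can_spin_on_owner(sem))
                goto done;

        /*
         * Only one task spins on sem->owner at a time; everyone else
         * queues on the MCS-style OSQ and spins locally on its own node.
         */
        if (!osq_lock(&sem->osq))
                goto done;

        while (true) {
                owner = READ_ONCE(sem->owner);
                if (owner && !rwsem_spin_on_owner(sem, owner))
                        break;

                /* The thread at the head of the OSQ retries the write lock. */
                if (rwsem_try_write_lock_unqueued(sem)) {
                        taken = true;
                        break;
                }

                /* No owner to watch: give up if we should reschedule. */
                if (!owner && (need_resched() || rt_task(current)))
                        break;

                cpu_relax_lowlatency();
        }
        osq_unlock(&sem->osq);
done:
        preempt_enable();
        return taken;
}

A task that cannot take the OSQ or that gives up the spin falls back to
the sleeping part of rwsem_down_write_failed() and waits on the rwsem's
wait list, which is why write-side contention on sem->owner stays limited
to a single spinner at a time.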