Subject: Re: [PATCH tip/locking/core v9 6/6] locking/pvqspinlock: Queue node adaptive spinning
From: Waiman Long
Date: Mon, 09 Nov 2015 11:51:20 -0500
To: Peter Zijlstra
Cc: Ingo Molnar, Thomas Gleixner, "H. Peter Anvin", x86@kernel.org,
    linux-kernel@vger.kernel.org, Scott J Norton, Douglas Hatch,
    Davidlohr Bueso
Message-ID: <5640CF08.2010001@hpe.com>
In-Reply-To: <20151106203709.GD17308@twins.programming.kicks-ass.net>
References: <1446247597-61863-1-git-send-email-Waiman.Long@hpe.com>
 <1446247597-61863-7-git-send-email-Waiman.Long@hpe.com>
 <20151106150149.GV17308@twins.programming.kicks-ass.net>
 <563CE93E.6090000@hpe.com>
 <20151106203709.GD17308@twins.programming.kicks-ass.net>

On 11/06/2015 03:37 PM, Peter Zijlstra wrote:
> On Fri, Nov 06, 2015 at 12:54:06PM -0500, Waiman Long wrote:
>>>> +static void pv_wait_node(struct mcs_spinlock *node, struct mcs_spinlock *prev)
>>>>  {
>>>>  	struct pv_node *pn = (struct pv_node *)node;
>>>> +	struct pv_node *pp = (struct pv_node *)prev;
>>>>  	int waitcnt = 0;
>>>>  	int loop;
>>>> +	bool wait_early;
>>>>
>>>>  	/* waitcnt processing will be compiled out if !QUEUED_LOCK_STAT */
>>>>  	for (;; waitcnt++) {
>>>> -		for (loop = SPIN_THRESHOLD; loop; loop--) {
>>>> +		for (wait_early = false, loop = SPIN_THRESHOLD; loop; loop--) {
>>>>  			if (READ_ONCE(node->locked))
>>>>  				return;
>>>> +			if (pv_wait_early(pp, loop)) {
>>>> +				wait_early = true;
>>>> +				break;
>>>> +			}
>>>>  			cpu_relax();
>>>>  		}
>>>>
>>> So if prev points to another node, it will never see vcpu_running. Was
>>> that fully intended?
>> I had added code in pv_wait_head_or_lock to set the state appropriately for
>> the queue head vCPU.
> Yes, but that's the head, for nodes we'll always have halted or hashed.

The node state is initialized to vcpu_running. In pv_wait_node(), it will be
changed to vcpu_halted before sleeping and back to vcpu_running after waking
up, so it is not true that a queued node is always either halted or hashed.
In case it was changed to vcpu_hashed, it will be changed back to vcpu_running
in pv_wait_head_or_lock() before entering the active spinning loop. There are
certainly small windows of time where the node state does not reflect the
actual vCPU state, but that is the best we can do so far.

Cheers,
Longman
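
For readers following the thread, here is a rough sketch of the state machine
and the early-wait check under discussion. The identifiers (vcpu_running,
vcpu_halted, vcpu_hashed, struct pv_node, pv_wait_early()) come from the patch
series; the bodies and the PV_PREV_CHECK_MASK value below are simplified
reconstructions for illustration, not the exact v9 code:

/*
 * Simplified sketch (not the actual v9 patch code) of how the node state
 * and the early-wait check interact.
 */

enum vcpu_state {
	vcpu_running = 0,	/* initial state of a queue node */
	vcpu_halted,		/* set in pv_wait_node() just before pv_wait() */
	vcpu_hashed,		/* lock hashed; vCPU to be kicked at unlock time */
};

struct pv_node {
	struct mcs_spinlock	mcs;
	int			cpu;
	u8			state;
};

/*
 * Only sample the previous node's state once every so many loop iterations
 * (assumed mask value; the real constant may differ).
 */
#define PV_PREV_CHECK_MASK	0xff

static bool pv_wait_early(struct pv_node *prev, int loop)
{
	if ((loop & PV_PREV_CHECK_MASK) != 0)
		return false;

	/*
	 * If the previous vCPU is not running (i.e. halted or hashed),
	 * further spinning on ->locked is unlikely to succeed soon, so
	 * give up the spin early and go to sleep.
	 */
	return READ_ONCE(prev->state) != vcpu_running;
}

/*
 * State transitions described in the reply above:
 *
 *   queue node init          state = vcpu_running
 *   pv_wait_node()           state = vcpu_halted   (before pv_wait())
 *                            state = vcpu_running  (after waking up, unless
 *                                                    the unlocker set
 *                                                    vcpu_hashed in between)
 *   pv_wait_head_or_lock()   state = vcpu_running  (before the active
 *                                                    spinning loop)
 */

Under this reading, a spinning waiter can indeed observe its predecessor as
vcpu_running; the predecessor only reads as halted or hashed while that vCPU
is actually sleeping in pv_wait() or has had the lock hashed for a later kick.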