Date: Mon, 19 Nov 2012 16:10:50 +0100
From: Oleg Nesterov
To: Ivo Sieben
Cc: linux-kernel@vger.kernel.org, Andi Kleen, Peter Zijlstra, Ingo Molnar,
    linux-serial@vger.kernel.org, Alan Cox, Greg KH
Subject: Re: [REPOST-v2] sched: Prevent wakeup to enter critical section needlessly
Message-ID: <20121119151050.GA4270@redhat.com>
References: <1351159974-980-1-git-send-email-meltedpianoman@gmail.com>
 <1353310211-3011-1-git-send-email-meltedpianoman@gmail.com>
In-Reply-To: <1353310211-3011-1-git-send-email-meltedpianoman@gmail.com>

On 11/19, Ivo Sieben wrote:
>
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -3090,9 +3090,22 @@ void __wake_up(wait_queue_head_t *q, unsigned int mode,
>  {
>  	unsigned long flags;
>
> -	spin_lock_irqsave(&q->lock, flags);
> -	__wake_up_common(q, mode, nr_exclusive, 0, key);
> -	spin_unlock_irqrestore(&q->lock, flags);
> +	/*
> +	 * Check for list emptiness outside the lock. This prevents the wakeup
> +	 * from entering the critical section needlessly when the task list is
> +	 * empty.
> +	 *
> +	 * Place a full memory barrier before checking list emptiness to make
> +	 * sure this function sees an up-to-date view of the list. Note that
> +	 * other code that manipulates the list does so under the spinlock and
> +	 * therefore doesn't need additional memory barriers.
> +	 */
> +	smp_mb();
> +	if (!list_empty(&q->task_list)) {

	waitqueue_active() ?

> +		spin_lock_irqsave(&q->lock, flags);
> +		__wake_up_common(q, mode, nr_exclusive, 0, key);
> +		spin_unlock_irqrestore(&q->lock, flags);
> +	}

I am wondering whether it makes sense to do this unconditionally. A lot of
callers already do

	if (waitqueue_active(q))
		wake_up(...);

so this patch makes that open-coded optimization pointless and adds an mb().
But I won't argue.

Oleg.
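
[Editor's note] For readers unfamiliar with the idiom, below is a minimal
sketch of the open-coded waitqueue_active() pattern Oleg refers to, and of
where the barrier added by the patch would fit in. It is illustrative only:
the my_wq, data_ready, my_wait_for_data() and my_post_data() names are
invented and are not part of the patch or the kernel.

/*
 * Illustrative sketch only -- not from the patch. The my_wq, data_ready,
 * my_wait_for_data() and my_post_data() names are invented.
 */
#include <linux/wait.h>
#include <linux/sched.h>

static DECLARE_WAIT_QUEUE_HEAD(my_wq);
static int data_ready;

/*
 * Waiter side: prepare_to_wait() sets the task state (with the implied
 * memory barrier) before the condition is re-checked, so a concurrent
 * waker cannot miss this task.
 */
static void my_wait_for_data(void)
{
	DEFINE_WAIT(wait);

	for (;;) {
		prepare_to_wait(&my_wq, &wait, TASK_INTERRUPTIBLE);
		if (data_ready)
			break;
		schedule();
	}
	finish_wait(&my_wq, &wait);
}

/*
 * Waker side: the open-coded waitqueue_active() check avoids taking
 * q->lock when nobody is waiting. It needs a full barrier between the
 * "data_ready = 1" store and the emptiness check -- the same role the
 * smp_mb() added by the patch plays inside __wake_up() itself.
 */
static void my_post_data(void)
{
	data_ready = 1;
	smp_mb();
	if (waitqueue_active(&my_wq))
		wake_up_interruptible(&my_wq);
}

With the patch applied, a waker could call wake_up_interruptible()
unconditionally and rely on the smp_mb() plus emptiness check inside
__wake_up(), which is why Oleg notes that the open-coded optimization
becomes pointless while every caller now pays for the barrier.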