Date: Mon, 16 Nov 2009 21:13:35 +0100
From: Ingo Molnar
To: Stijn Devriendt
Cc: Linus Torvalds, Mike Galbraith, Peter Zijlstra, Andrea Arcangeli,
    Thomas Gleixner, Andrew Morton, peterz@infradead.org,
    linux-kernel@vger.kernel.org
Subject: Re: [RFC] observe and act upon workload parallelism:
    PERF_TYPE_PARALLELISM (Was: [RFC][PATCH] sched_wait_block: wait
    for blocked threads)
Message-ID: <20091116201335.GC360@elte.hu>
References: <1258311859-6189-1-git-send-email-HIGHGuY@gmail.com>
    <20091116083521.GC20672@elte.hu>

* Stijn Devriendt wrote:

> One extra catch I didn't even think of in the original approach is
> that you still need a way of telling the kernel: no more work here.
>
> My original approach fails bluntly and I will happily take credit
> for that ;) The perf approach perfectly allows for this, by waking
> up the "controller" thread, which does exactly nothing as there's
> no work left.

Note, the perf approach does not require a 'controller thread'. The
most efficient approach using perf events would be:

 - Have the pool threads block in poll(perf_event_fd). (All threads
   block in poll() on the same fd.)

 - Blocking threads wake_up() the pool and cause them to drop out of
   poll() (with no intermediary) - if there are fewer than
   perf_event::min_concurrency tasks running.

 - Waking threads observe the event state and only run if we are
   still below perf_event::max_concurrency - otherwise they re-queue
   to the poll() waitqueue.

Basically the perf-event fd creates the 'group of tasks'. This group
can be created voluntarily by cooperating threads - or involuntarily
as well, via PID attach or CPU attach.

There's no 'tracing' overhead or notification overhead: we maintain
shared state, and the 'notifications' are straight wakeups that bring
the pool members out of poll(), to drive the workload further.

Such a special sw-event, with min_concurrency == max_concurrency == 1,
would implement Linus's interface - using standard facilities like
poll(). (The only 'special' act is the setup of the group itself.)

So various concurrency controls could be implemented that way -
including the one Linus suggests - and even an HPC workload-queueing
daemon that shepherds 100% uncooperative tasks could be built as
well.

I don't think this 'fancy' approach is actually a performance drag:
it would really do precisely the same thing Linus's facility does
(unless I'm missing something subtle - or something less subtle -
about Linus's scheme), with the two parameters set to '1'.
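
For illustration, here's a rough userspace sketch of the pool-thread
side of this scheme. To be clear: PERF_TYPE_PARALLELISM and the
min_concurrency/max_concurrency attr fields are proposed names only,
nothing below exists in today's ABI - the placeholder #define and the
helper names (open_parallelism_event(), pool_thread(), do_work()) are
made up so the sketch compiles:

#include <poll.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

/*
 * Made-up placeholder: PERF_TYPE_PARALLELISM does not exist in the
 * kernel - the value is only here so the sketch compiles.
 */
#ifndef PERF_TYPE_PARALLELISM
#define PERF_TYPE_PARALLELISM	6
#endif

/* Open the (hypothetical) parallelism event for the current task. */
static int open_parallelism_event(void)
{
	struct perf_event_attr attr = { 0 };

	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_PARALLELISM;	/* proposed, not real */

	/*
	 * Proposed fields - struct perf_event_attr has no such
	 * members today, hence commented out:
	 *
	 *	attr.min_concurrency = 1;
	 *	attr.max_concurrency = 1;
	 */

	/* pid == 0: attach to the current task, cpu == -1: any CPU */
	return syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
}

/* Stand-in for one unit of application work. */
static void do_work(void)
{
}

/* Pool thread loop - every pool thread poll()s the same event fd. */
static void pool_thread(int event_fd)
{
	struct pollfd pfd = { .fd = event_fd, .events = POLLIN };

	for (;;) {
		/*
		 * Block until the kernel sees fewer than
		 * min_concurrency runnable tasks in the group and
		 * wakes the pool out of poll().
		 */
		if (poll(&pfd, 1, -1) < 0)
			break;

		/*
		 * A woken thread runs work while the group is below
		 * max_concurrency; the re-queue-to-waitqueue policy
		 * is assumed to live on the kernel side.
		 */
		do_work();
	}
}

Note that in this sketch all the actual gating is assumed to happen
kernel-side: a thread woken out of poll() that would push the group
above max_concurrency simply gets re-queued to the waitqueue, so the
userspace loop needs no extra synchronization.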
( It would also enable a lot of other things, and it would not tie
the queueing implementation into the scheduler. )

Only trying it would tell us for sure though - maybe I'm wrong.

	Thanks,

		Ingo