Message-Id: <20121025121617.617683848@chello.nl>
Date: Thu, 25 Oct 2012 14:16:17 +0200
From: Peter Zijlstra
To: Rik van Riel, Andrea Arcangeli, Mel Gorman, Johannes Weiner, Thomas Gleixner, Linus Torvalds, Andrew Morton
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Peter Zijlstra, Ingo Molnar
Subject: [PATCH 00/31] numa/core patches

Hi all,

Here's a re-post of the NUMA scheduling and migration improvement patches that we are working on. These combine techniques from AutoNUMA and the sched/numa tree into a unified basis that includes all the bits that look good and mergeable.

With these patches applied, the mbind() system call is extended with new modes for lazy-migration binding, and if the CONFIG_SCHED_NUMA=y .config option is enabled the scheduler will automatically sample the working set of tasks via page faults. Based on that information the scheduler then tries to balance smartly, placing tasks on a home node and migrating CPU work and memory onto the same node. (A small illustrative mbind() sketch follows at the end of this mail.)

The patches are functional in their current state and have been tested on a variety of x86 NUMA hardware.

These patches will continue their life in tip:numa/core and, unless there are major showstoppers, they are intended for the v3.8 merge window. We believe that they provide a solid basis for future work.

Please review once again and holler if you see anything funny! :-)
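
For illustration only, a minimal userspace sketch of how the lazy-migration binding mentioned above might be requested through mbind(). The MPOL_MF_LAZY flag name and its value are taken from the patch series rather than from this cover letter, so treat them as an assumption about the eventual ABI; everything else is the existing mbind(2) interface.

/*
 * Hedged sketch: bind an anonymous region to node 1 and ask for lazy
 * migration instead of synchronous page movement.  With the series
 * applied, the pages should then be migrated by the NUMA page faults
 * as they are next touched.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/syscall.h>

#define MPOL_BIND        2         /* existing policy */
#define MPOL_MF_MOVE     (1 << 1)  /* existing flag */
#define MPOL_MF_LAZY     (1 << 3)  /* assumed: new flag from this series */

int main(void)
{
	size_t len = 4UL << 20;                 /* 4 MiB test region */
	unsigned long nodemask = 1UL << 1;      /* target node 1 */
	void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	memset(buf, 0, len);                    /* fault pages in locally */

	/*
	 * MPOL_MF_MOVE alone would migrate the pages right here;
	 * adding MPOL_MF_LAZY defers the actual migration to the
	 * fault path sampled by the scheduler.
	 */
	if (syscall(SYS_mbind, buf, len, MPOL_BIND, &nodemask,
		    sizeof(nodemask) * 8 + 1,
		    MPOL_MF_MOVE | MPOL_MF_LAZY) < 0) {
		perror("mbind");
		return 1;
	}

	printf("lazy-migration binding requested for %zu bytes\n", len);
	munmap(buf, len);
	return 0;
}

Build with "gcc -o lazy_mbind lazy_mbind.c" and run on a NUMA machine with CONFIG_SCHED_NUMA=y; on kernels without the series the extra flag should simply be rejected.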