From: Nai Xia
Reply-To: nai.xia@gmail.com
Organization: NJU
Date: Sat, 30 Jun 2012 02:09:44 +0800
To: Andrea Arcangeli
CC: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Hillf Danton, Dan Smith, Linus Torvalds, Andrew Morton, Thomas Gleixner, Ingo Molnar, Paul Turner, Suresh Siddha, Mike Galbraith, "Paul E. McKenney", Lai Jiangshan, Bharata B Rao, Lee Schermerhorn, Rik van Riel, Johannes Weiner, Srivatsa Vaddagiri, Christoph Lameter, Alex Shi, Mauricio Faria de Oliveira, Konrad Rzeszutek Wilk, Don Morris, Benjamin Herrenschmidt
Subject: Re: [PATCH 13/40] autonuma: CPU follow memory algorithm
Message-ID: <4FEDEF68.6000708@gmail.com>
References: <1340888180-15355-1-git-send-email-aarcange@redhat.com> <1340888180-15355-14-git-send-email-aarcange@redhat.com> <1340894776.28750.44.camel@twins> <4FEDB797.3050804@gmail.com> <20120629163025.GP6676@redhat.com>
In-Reply-To: <20120629163025.GP6676@redhat.com>

On 30 Jun 2012 00:30, Andrea Arcangeli wrote:
> Hi Nai,
>
> On Fri, Jun 29, 2012 at 10:11:35PM +0800, Nai Xia wrote:
>> If one process does very intensive visits of a small set of pages on
>> this node, but occasional visits of a large set of pages on another
>> node, will this algorithm make a very bad judgment?
>> I guess the answer would be: it's possible, and this judgment depends
>> on the racing pattern between the process and your knuma_scand.
>
> Depending on whether knuma_scand/scan_pass_sleep_millisecs is more or
> less occasional than the visits of the large set of pages, it may
> behave differently, correct.

I suspect this race is more subtle than that, but since you admit the
judgment depends on a race, it doesn't matter exactly how subtle it is.

> Note that every algorithm will have a limit on how smart it can be.
>
> Just to make a random example: if you look up some pagecache a million
> times and some other pagecache a dozen times, their "aging"
> information in the pagecache will end up identical. Yet we know one
> set of pages is clearly higher priority than the other. We've only so
> many levels of lrus and so many referenced/active bitflags per
> page. Once you get to the top, all is equal.
>
> Does this mean the "active" list working set detection is useless just
> because we can't differentiate a million lookups on a few pages from a
> dozen lookups on lots of pages?

I knew you would give us an LRU example. ;D But unfortunately the LRU
approximation cannot justify your case: there are cases where the LRU
approximation behaves very badly, but decades of research have shown
that the vast majority of workloads conform to this kind of
approximation, and programmers are even taught to write LRU-friendly
programs. We have no idea how well real-world workloads will conform to
your algorithm, especially its racing pattern.

> Last but not least, in the very example you mention it's not even
> clear that the process should be scheduled on the CPU where there is
> the small set of pages accessed frequently, or on the CPU where
> there's the large set of pages accessed occasionally.
> If the small set of pages fits in the 8 MBytes of L2 cache, then it's
> better to put the process on the other CPU, where the large set of
> pages can't fit in the L2 cache. Lots of hardware details would have
> to be evaluated to really know the right thing to do in such a case,
> even if it were you having to decide.

That is exactly why I think it is more subtle, and why I am not
confident about your algorithm -- its effectiveness depends on so many
uncertain things.

> But the real reason why the above isn't an issue, and why we don't
> need to solve that problem perfectly: there's not just a CPU-follow-
> memory algorithm in AutoNUMA. There's also the memory-follow-CPU
> algorithm. AutoNUMA will do its best to change the layout of your
> example to one that has only one clear solution: the occasional
> lookups of the large set of pages will eventually make those pages go
> onto the node together with the small set of pages (or the other way
> around), and this is how it's solved.

I am not sure I follow: if you fall back on this, then why all the
complexity? This fallback amounts to a "just group all the pages onto
the running node" policy.

> In any case, whatever wrong decision it takes, it will at least be a
> better decision than numa/sched, where there's absolutely zero
> information about what pages the process is accessing. And best of
> all, with AutoNUMA you also know which pages each _thread_ is
> accessing, so it can take optimal decisions if there are more threads
> than CPUs in a node (as long as not all thread accesses are shared).

Yes, we need the information; how to make the best use of it is the big
problem. I don't think you can address my question by reasoning alone
if you have no survey in hand of the common page access patterns of
real-world workloads. Maybe your algorithm's assumptions are right,
maybe not...

> Hope this explains things better.
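To make the aging-saturation point concrete, here is a toy model (plain
Python, not kernel code; MAX_LEVEL is an invented stand-in for the
handful of lru levels and referenced/active bits a page really has):
once the per-page aging information saturates, a million lookups and a
dozen lookups become indistinguishable.

```python
# Toy model of aging saturation -- NOT kernel code. MAX_LEVEL stands in
# for the few aging states (lru levels, referenced/active bits) a page
# can actually record.
MAX_LEVEL = 3

def age(accesses, max_level=MAX_LEVEL):
    """Aging level reached after `accesses` lookups, capped at max_level."""
    return min(accesses, max_level)

hot = age(1_000_000)   # a few pages, a million lookups each
warm = age(12)         # lots of pages, a dozen lookups each
print(hot, warm)       # both saturate at the top level: 3 3
```

Both workloads end up at the same level, yet the active-list mechanism
is still considered useful; saturation alone does not condemn an
approximation.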
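Andrea's memory-follows-CPU convergence argument can likewise be
sketched as a toy simulation (the `scan_pass` name and the
migrate-on-remote-access rule are my own simplification, not the actual
knuma_scand/migration code): pages occasionally touched from a remote
node get pulled toward the node the task runs on, until the layout has
a single obvious placement.

```python
# Toy sketch of "memory follows CPU" -- an invented simplification, not
# the real AutoNUMA migration path.
def scan_pass(page_nodes, accessed, task_node):
    """One scan pass: every page the task touched that sits on a remote
    node is migrated to the task's node."""
    return [task_node if (i in accessed and node != task_node) else node
            for i, node in enumerate(page_nodes)]

# Small hot set already on node 0; a large set on node 1 is touched only
# occasionally. Over repeated passes the occasional touches pull the
# large set onto node 0, leaving one clear placement.
layout = [0, 0, 1, 1, 1]
for touched in ({2}, {3}, {4}):          # occasional remote accesses
    layout = scan_pass(layout, touched, task_node=0)
print(layout)  # [0, 0, 0, 0, 0]
```

Whether this convergence happens fast enough on real access patterns is
exactly the open question in the thread above.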
> Andrea

Thanks,
Nai
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/