Date: Fri, 8 Jan 2010 12:33:52 -0600 (CST)
From: Christoph Lameter
To: Linus Torvalds
Cc: Peter Zijlstra, KAMEZAWA Hiroyuki, Minchan Kim, "Paul E. McKenney",
    linux-kernel@vger.kernel.org, linux-mm@kvack.org, Hugh Dickins,
    Nick Piggin, Ingo Molnar
Subject: Re: [RFC][PATCH 6/8] mm: handle_speculative_fault()

On Fri, 8 Jan 2010, Linus Torvalds wrote:

> I bet it won't be a problem. It's when things go cross-socket that they
> suck. So 16 CPUs across two sockets I wouldn't worry about.
>
> > > Because let's face it - if your workload does several million page
> > > faults per second, you're just doing something fundamentally _wrong_.
> >
> > You may just want to get your app running and it's trying to initialize
> > its memory in parallel on all threads. Nothing wrong with that.
>
> Umm. That's going to be limited by the memset/memcpy, not the rwlock, I
> bet.

That may be true for a system with 2 threads. As the number of threads
increases, so does the cacheline contention. On larger systems the
memset/memcpy is negligible.

> The benchmark in question literally did a single byte write to each page
> in order to show just the kernel component. That really isn't realistic
> for any real load.

Each anon fault also includes zeroing the page before it is ready to be
written to. The cachelines will be hot after a fault, so initialization
of any variables in the page will be fast due to that warming-up effect.
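
For illustration, here is a minimal sketch of the kind of microbenchmark
under discussion (a reconstruction, not the actual benchmark from the
thread; it assumes Linux with MAP_ANONYMOUS and C99): mmap an anonymous
region and write one byte per page, so that nearly all of the measured
time is kernel fault handling (mmap_sem, page allocation, page zeroing)
rather than userspace memory traffic.

#include <stdio.h>
#include <sys/mman.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
	size_t pagesize = (size_t)sysconf(_SC_PAGESIZE);
	size_t npages = 1 << 18;	/* ~1 GB with 4K pages */
	size_t len = npages * pagesize;
	struct timespec start, end;
	char *p;
	size_t i;

	/* Anonymous private mapping: every first touch takes a fault. */
	p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	clock_gettime(CLOCK_MONOTONIC, &start);
	for (i = 0; i < npages; i++)
		p[i * pagesize] = 1;	/* one byte per page: one fault each */
	clock_gettime(CLOCK_MONOTONIC, &end);

	double secs = (end.tv_sec - start.tv_sec) +
		      (end.tv_nsec - start.tv_nsec) / 1e9;
	printf("%zu faults in %.3f s (%.0f faults/s)\n",
	       npages, secs, npages / secs);

	munmap(p, len);
	return 0;
}

Run single-threaded, this measures the per-fault kernel cost in
isolation; the contention argued about above shows up when many threads
of one process run such a loop concurrently against the same address
space, bouncing the mmap_sem cacheline across sockets.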