Date: Fri, 8 Jan 2010 09:52:58 -0800 (PST)
From: Linus Torvalds
To: Christoph Lameter
cc: Peter Zijlstra, KAMEZAWA Hiroyuki, Minchan Kim, "Paul E. McKenney",
    linux-kernel@vger.kernel.org, linux-mm@kvack.org, "hugh.dickins",
    Nick Piggin, Ingo Molnar
Subject: Re: [RFC][PATCH 6/8] mm: handle_speculative_fault()

On Fri, 8 Jan 2010, Christoph Lameter wrote:
>
> Can we at least consider a typical standard business server, dual quad
> core hyperthreaded with 16 "cpus"? Cacheline contention will increase
> significantly there.

I bet it won't be a problem. It's when things go cross-socket that they
suck. So 16 CPUs across two sockets I wouldn't worry about.

> > Because let's face it - if your workload does several million page faults
> > per second, you're just doing something fundamentally _wrong_.
>
> You may just want to get your app running and it's trying to initialize
> its memory in parallel on all threads. Nothing wrong with that.

Umm. That's going to be limited by the memset/memcpy, not the rwlock, I
bet.

The benchmark in question literally did a single byte write to each page
in order to show just the kernel component. That really isn't realistic
for any real load.

		Linus
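
For illustration, below is a minimal sketch of the two cases being contrasted
above, assuming Linux, glibc and an anonymous private mapping: touching one
byte per page, which measures little beyond the kernel's fault path, versus
memset()ing every page, which is closer to what an application actually does
when it initializes its memory and is dominated by the memory writes
themselves. This is not the benchmark referenced in the thread; the mapping
size and output format are invented for the example.

/*
 * Sketch only: single byte per page vs. full-page initialization.
 * Build: cc -O2 -o faultbench faultbench.c
 */
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <time.h>
#include <unistd.h>

static double time_init(char *buf, size_t npages, size_t page, int full)
{
	struct timespec t0, t1;

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (size_t i = 0; i < npages; i++) {
		if (full)
			memset(buf + i * page, 1, page); /* initialize the whole page */
		else
			buf[i * page] = 1;	/* one byte: mostly fault-path cost */
	}
	clock_gettime(CLOCK_MONOTONIC, &t1);
	return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void)
{
	size_t page = (size_t)sysconf(_SC_PAGESIZE);
	size_t npages = 1 << 18;	/* ~1 GB of address space with 4k pages */
	size_t len = npages * page;

	for (int full = 0; full <= 1; full++) {
		char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
				 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (buf == MAP_FAILED) {
			perror("mmap");
			return 1;
		}
		double secs = time_init(buf, npages, page, full);
		printf("%-20s %zu faults in %.3f s (%.0f faults/s)\n",
		       full ? "memset each page:" : "one byte per page:",
		       npages, secs, npages / secs);
		munmap(buf, len);
	}
	return 0;
}

A multi-threaded variant of the one-byte case is the kind of load that puts
the fault path, and with it the locking and cacheline contention discussed in
this thread, front and center.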