Date: Mon, 26 Nov 2001 22:09:02 +0100 (CET)
From: Ingo Molnar
To: Momchil Velikov
Cc: "David S. Miller", Linus Torvalds
Subject: Re: [PATCH] Scalable page cache
In-Reply-To: <87vgfxqwd3.fsf@fadata.bg>

hm, as far as i can see, your patch dirties both cachelines of the
target struct page on every lookup (page_splay_tree_find()), correct?
If so, is this just an implementation detail, or something more
fundamental?

i've been thinking about getting rid of some of the cacheline dirtying
in the page lookup code, i.e. something like this:

- #define SetPageReferenced(page)	set_bit(PG_referenced, &(page)->flags)
+ #define SetPageReferenced(page) \
+	if (!test_bit(PG_referenced, &(page)->flags)) \
+		set_bit(PG_referenced, &(page)->flags)

this would have the benefit of not touching the cacheline during a
simple read(), as long as the referenced bit is still set. (and it is
not cleared too eagerly in the common no-VM-pressure case.) And it
should not be a problem that this is not race-free: occasionally
failing to set the referenced bit is not a big deal.

	Ingo
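
For reference, a minimal top-down splay lookup in the Sleator/Tarjan
style, illustrating why the question above comes up at all. The node
layout and function names here are hypothetical stand-ins, not those of
the actual patch: the point is only that find() splays the accessed
node to the root, so every lookup stores to left/right pointers along
the search path, dirtying those cachelines even on a pure read.

	#include <stddef.h>

	struct node {
		unsigned long	key;		/* e.g. a page index */
		struct node	*left, *right;
	};

	/*
	 * Classic top-down splay: returns the new root, with the node
	 * closest to 'key' rotated to the top.  Note the pointer
	 * stores on every step of the descent -- this is what dirties
	 * node cachelines even though the caller only wanted a lookup.
	 */
	static struct node *splay(struct node *root, unsigned long key)
	{
		struct node N, *l, *r, *t;

		if (!root)
			return NULL;
		N.left = N.right = NULL;
		l = r = &N;
		t = root;

		for (;;) {
			if (key < t->key) {
				if (!t->left)
					break;
				if (key < t->left->key) {
					struct node *y = t->left;	/* rotate right */
					t->left = y->right;
					y->right = t;
					t = y;
					if (!t->left)
						break;
				}
				r->left = t;				/* link right */
				r = t;
				t = t->left;
			} else if (key > t->key) {
				if (!t->right)
					break;
				if (key > t->right->key) {
					struct node *y = t->right;	/* rotate left */
					t->right = y->left;
					y->left = t;
					t = y;
					if (!t->right)
						break;
				}
				l->right = t;				/* link left */
				l = t;
				t = t->right;
			} else
				break;
		}
		l->right = t->left;					/* assemble */
		r->left = t->right;
		t->left = N.right;
		t->right = N.left;
		return t;
	}

	/*
	 * Lookup is splay-then-compare: the tree is restructured
	 * (written to) whether or not the key is present.
	 */
	static struct node *splay_tree_find(struct node **rootp, unsigned long key)
	{
		*rootp = splay(*rootp, key);
		return (*rootp && (*rootp)->key == key) ? *rootp : NULL;
	}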
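
And a self-contained userspace sketch of the test-before-set idea,
with GCC builtins standing in for the kernel's test_bit()/set_bit()
(the harness and the flags-pointer parameter are illustrative only,
assuming the usual atomic-bitop semantics). The common case becomes a
plain read of a possibly shared cacheline; only the 0 -> 1 transition
performs the dirtying atomic read-modify-write.

	#include <stdio.h>

	#define PG_referenced	2

	/* Read-only test: does not dirty the cacheline. */
	static inline int test_bit(int nr, const volatile unsigned long *addr)
	{
		return (*addr >> nr) & 1;
	}

	/* Atomic RMW: dirties the cacheline even if the bit was already set. */
	static inline void set_bit(int nr, volatile unsigned long *addr)
	{
		__sync_fetch_and_or(addr, 1UL << nr);
	}

	/*
	 * The proposed form: skip the write when the bit is already set.
	 * Deliberately racy -- two CPUs can both see the bit clear and
	 * both perform the set, which is harmless since it is idempotent.
	 */
	#define SetPageReferenced(flags)				\
		do {							\
			if (!test_bit(PG_referenced, (flags)))		\
				set_bit(PG_referenced, (flags));	\
		} while (0)

	int main(void)
	{
		unsigned long flags = 0;

		SetPageReferenced(&flags);	/* bit clear: performs the write */
		SetPageReferenced(&flags);	/* bit set: read-only, no dirtying */
		printf("flags = %#lx\n", flags);
		return 0;
	}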