From: Nick Piggin
To: "Eric W. Biederman"
Subject: Re: [PATCH] rd: Mark ramdisk buffers heads dirty
Date: Tue, 16 Oct 2007 18:19:10 +1000
Cc: Andrew Morton, Christian Borntraeger, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Martin Schwidefsky, "Theodore Ts'o"
References: <200710151028.34407.borntraeger@de.ibm.com>
Message-Id: <200710161819.11231.nickpiggin@yahoo.com.au>

On Tuesday 16 October 2007 08:42, Eric W. Biederman wrote:
> I have not observed this case, but it is possible to get a dirty page
> cache with clean buffer heads if we get a clean ramdisk page with
> buffer heads generated by a filesystem calling __getblk and then write
> to that page from user space through the block device. Then we just
> need to hit the proper window and try_to_free_buffers() will mark that
> page clean and eventually drop it. Ouch!
>
> To fix this, use the generic __set_page_dirty_buffers in the ramdisk
> code so that when we mark a page dirty we also mark its buffer heads
> dirty.

Hmm, so we can also have some filesystems writing their own buffers
out by hand (clear_buffer_dirty, submit the buffer for IO). Other
places will do similarly dodgy things with filesystem metadata
(fsync_buffers_list, for example). So your buffers get cleaned again,
and then your pages get cleaned.

While I said it was a good fix when I saw the patch earlier, I think
it does not close the entire hole, and as such, Christian's patch is
probably the way to go for stable.

For mainline, *if* we want to keep the old rd.c around at all, I don't
see any harm in this patch as long as Christian's is merged as well.
Sharing common code is always good.

>
> Signed-off-by: Eric W. Biederman
> ---
>  drivers/block/rd.c |   13 +------------
>  1 files changed, 1 insertions(+), 12 deletions(-)
>
> diff --git a/drivers/block/rd.c b/drivers/block/rd.c
> index 701ea77..84163da 100644
> --- a/drivers/block/rd.c
> +++ b/drivers/block/rd.c
> @@ -178,23 +178,12 @@ static int ramdisk_writepages(struct address_space *mapping,
>  	return 0;
>  }
>
> -/*
> - * ramdisk blockdev pages have their own ->set_page_dirty() because we don't
> - * want them to contribute to dirty memory accounting.
> - */
> -static int ramdisk_set_page_dirty(struct page *page)
> -{
> -	if (!TestSetPageDirty(page))
> -		return 1;
> -	return 0;
> -}
> -
>  static const struct address_space_operations ramdisk_aops = {
>  	.readpage	= ramdisk_readpage,
>  	.prepare_write	= ramdisk_prepare_write,
>  	.commit_write	= ramdisk_commit_write,
>  	.writepage	= ramdisk_writepage,
> -	.set_page_dirty	= ramdisk_set_page_dirty,
> +	.set_page_dirty	= __set_page_dirty_buffers,
>  	.writepages	= ramdisk_writepages,
>  };
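
To make the hole and the fix concrete, here is a rough sketch of what the
switch to __set_page_dirty_buffers buys. The helper name below is made up
for illustration, and the real fs/buffer.c implementation additionally
takes mapping->private_lock and does the dirty accounting and radix-tree
tagging omitted here; the sketch only shows the part that matters for the
race: the buffer heads attached to the page are dirtied along with the
page flag, so try_to_free_buffers() no longer finds a dirty page sitting
on clean buffers.

#include <linux/mm.h>
#include <linux/buffer_head.h>

/*
 * Illustrative sketch only.  The removed ramdisk_set_page_dirty()
 * set just the page dirty flag; dirtying every attached buffer head
 * as well is what closes the window Eric describes above.
 */
static int set_page_and_buffers_dirty(struct page *page)
{
	if (page_has_buffers(page)) {
		struct buffer_head *head = page_buffers(page);
		struct buffer_head *bh = head;

		do {
			set_buffer_dirty(bh);	/* old helper never did this */
			bh = bh->b_this_page;
		} while (bh != head);
	}
	/* Same return convention as the old helper: 1 if newly dirtied. */
	return !TestSetPageDirty(page);
}

Even with the buffer heads dirtied at set_page_dirty time, the paths
mentioned above (a filesystem doing clear_buffer_dirty and submitting the
buffer itself, or fsync_buffers_list) can clean those buffer heads again
behind the block device's back, after which the page can be cleaned and
dropped as well; that is why this patch alone does not close the whole
hole and Christian's fix is still wanted for stable.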