Date: Fri, 26 Jun 2009 08:16:18 +0300 (EEST)
From: Pekka J Enberg
To: Theodore Tso
Cc: Eric Sandeen, Andreas Dilger, David Rientjes, Andrew Morton,
    Linus Torvalds, arjan@infradead.org, linux-kernel@vger.kernel.org,
    cl@linux-foundation.org, npiggin@suse.de, linux-ext4@vger.kernel.org
Subject: Re: upcoming kerneloops.org item: get_page_from_freelist
In-Reply-To: <20090626011155.GI6472@mit.edu>
References: <20090624150714.c7264768.akpm@linux-foundation.org>
            <20090625132544.GB9995@mit.edu> <20090625193806.GA6472@mit.edu>
            <20090625194423.GB6472@mit.edu> <20090625203743.GD6472@mit.edu>
            <20090625212628.GO3385@webber.adilger.int>
            <20090625220504.GG6472@mit.edu> <4A43F60D.2040801@redhat.com>
            <20090626011155.GI6472@mit.edu>

Hi Ted,

On Thu, Jun 25, 2009 at 05:11:25PM -0500, Eric Sandeen wrote:
> > ecryptfs used to do kmalloc(PAGE_CACHE_SIZE) & virt_to_page on that, and
> > with SLUB + slub debug, that gave back non-aligned memory, causing
> > eventual corruption ...

On Thu, 25 Jun 2009, Theodore Tso wrote:
> Grumble.  Any chance we could add a kmem_cache option which requires
> the memory to be aligned?  Otherwise we could rewrite our own sub-page
> allocator in ext4 that only handled aligned filesystem block sizes
> (i.e., 1k, 2k, 4k, etc.), but that would be really silly and be extra
> code that really should be done once as core functionality.

We already have SLAB_HWCACHE_ALIGN, but I wonder if this is a plain old
bug in SLUB. Christoph, Nick, don't we need something like this in the
allocator? Eric, does this fix your case?

			Pekka

diff --git a/mm/slub.c b/mm/slub.c
index 819f056..7cd1e69 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2400,7 +2400,7 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
 	 * user specified and the dynamic determination of cache line size
 	 * on bootup.
 	 */
-	align = calculate_alignment(flags, align, s->objsize);
+	align = calculate_alignment(flags, align, size);
 
 	/*
 	 * SLUB stores one object immediately after another beginning from
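
For context, the pattern Eric describes boils down to something like the
sketch below. This is a hypothetical illustration, not the actual ecryptfs
call site; it only shows why a kmalloc() return value that is not page
aligned breaks code that converts it with virt_to_page():

	/*
	 * Sketch of the fragile pattern: allocate a page-sized buffer with
	 * kmalloc() and then treat it as a whole page.  With slub_debug
	 * enabled, red zones and other debug metadata can shift the object
	 * inside its slab, so the returned pointer may not be page aligned.
	 * virt_to_page() still returns the page containing the buffer, but
	 * the buffer no longer starts at offset 0 of that page, so anything
	 * that later accesses the page from offset 0 reads or writes the
	 * wrong bytes.
	 */
	unsigned char *buf;
	struct page *pg;

	buf = kmalloc(PAGE_CACHE_SIZE, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;

	/* Only safe if buf happens to be page aligned. */
	pg = virt_to_page(buf);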
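
As for Ted's request for guaranteed alignment without a new flag:
kmem_cache_create() already takes an explicit align argument, so a
filesystem can create a private cache whose objects are both sized and
aligned to the block size. A minimal sketch of what an ext4-private cache
for 4k blocks might look like (the cache name and variable are made up for
illustration):

	/* Hypothetical cache whose objects are 4k-sized and 4k-aligned. */
	struct kmem_cache *ext4_block_cachep;

	ext4_block_cachep = kmem_cache_create("ext4_block_4k",
					      4096,	/* object size */
					      4096,	/* required alignment */
					      0,	/* flags */
					      NULL);	/* no constructor */
	if (!ext4_block_cachep)
		return -ENOMEM;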