Message-ID: <3BBB03B3.6000003@antefacto.com>
Date: Wed, 03 Oct 2001 13:25:23 +0100
From: Padraig Brady
To: Pierre PEIFFER
Cc: linux-kernel@vger.kernel.org
Subject: Re: e2compress in kernel 2.4
In-Reply-To: <3BBACF29.7BB980C4@sxb.bsf.alcatel.fr>

Would it not be better to do all the (de)compression at the page cache
level (http://linuxcompressed.sourceforge.net/)? Then you get the other
advantages of that approach as well. You would just use the compression
bit in ext2 to mark blocks that should not be decompressed before being
passed down towards the disk, and vice versa. Note also that since
ramfs uses the page cache directly, it would get transparent
compression for free.

Padraig.

Pierre PEIFFER wrote:

> Hi!
>
> We want to port e2compress from the 2.2 kernel series to 2.4, and we
> are looking for the right way to port the compression on the write
> side.
>
> For the read operation, we can adapt the original design: the 2.2
> code of e2compress can be integrated into the 2.4 version easily. The
> write side is a little more complicated.
>
> As we understand it, in the 2.2 kernel the compression sits between
> the page cache and the buffer cache, i.e. the data pointed to by the
> pages always remains uncompressed, but compression happens on the
> buffers: the data pointed to by the buffers becomes compressed
> whenever the system decides to compress it.
>
> What we also saw is that in 2.2, in ext2_file_write, the write goes
> to the buffers; after that, the system looks for the corresponding
> page and, if it is present, updates the data in that page as well.
>
> But under 2.4, as we see in generic_file_write, the write operates
> on pages, no longer on buffers as in 2.2, and the needed buffers are
> created and attached to the page, i.e. the b_data field of each
> buffer points into the data of that page.
>
> So here we are a little confused, because we don't know where to
> introduce the compression if we keep the idea of the 2.2 design. On
> the one hand, once the buffers are compressed, the pages would become
> compressed too; on the other hand, we don't want the pages to be
> compressed, because pages, once registered and linked to the inode,
> are supposed to hold uncompressed data.
>
> So our idea was to introduce a notion of "cluster of pages",
> analogous to the cluster of blocks: perform the write on several
> pages at a time, then compress the buffers corresponding to those
> pages. But for that, the data of the buffers would have to be split
> off from the data of the pages, and that is our problem: we don't
> know how to do this. Is there a way to do it?
>
> And, from a more general point of view, do you think our approach
> has a chance of succeeding?
>
> If you have any questions, feel free to ask for more explanations.
>
> Thanks,
>
> Pierre & Denis
>
> PS: Please cc me personally in your answer; I am not subscribed to
> the list.
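To make the "split the buffer data from the page data" step concrete,
here is a rough sketch of the intended data flow against the 2.4
buffer_head API. It is an illustration only, not actual e2compress
code: ecomp_write_cluster() and compress_cluster() are made-up names,
the cluster size is arbitrary, and locking, error handling and freeing
of the compressed area after I/O are all omitted.

/*
 * Sketch (hypothetical code): compress a cluster of page-cache pages
 * into a private kmalloc()ed area and repoint the attached
 * buffer_heads at it, so the pages keep uncompressed data while the
 * buffers written to disk carry compressed data.
 */
#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/string.h>
#include <linux/errno.h>

#define ECOMP_CLUSTER_PAGES 4           /* example cluster size */

/* Stand-in for the e2compress compression routine. */
extern int compress_cluster(char *dst, const char *src, int len);

static int ecomp_write_cluster(struct page **pages)
{
        char *raw, *packed;
        struct buffer_head *bh, *head;
        int i, packed_len, off = 0;

        raw = kmalloc(ECOMP_CLUSTER_PAGES * PAGE_SIZE, GFP_KERNEL);
        packed = kmalloc(ECOMP_CLUSTER_PAGES * PAGE_SIZE, GFP_KERNEL);
        if (!raw || !packed) {
                kfree(raw);             /* kfree(NULL) is a no-op */
                kfree(packed);
                return -ENOMEM;
        }

        /* 1. Gather the uncompressed cluster from the page cache. */
        for (i = 0; i < ECOMP_CLUSTER_PAGES; i++)
                memcpy(raw + i * PAGE_SIZE,
                       page_address(pages[i]), PAGE_SIZE);

        /* 2. Compress into the private area, away from the pages. */
        packed_len = compress_cluster(packed, raw,
                                      ECOMP_CLUSTER_PAGES * PAGE_SIZE);

        /*
         * 3. The "split": point the buffers at the compressed copy
         * instead of into the pages.  Only the buffers covered by
         * packed_len need to reach the disk.
         */
        for (i = 0; i < ECOMP_CLUSTER_PAGES && off < packed_len; i++) {
                bh = head = pages[i]->buffers;
                do {
                        bh->b_data = packed + off;
                        off += bh->b_size;
                        mark_buffer_dirty(bh);
                        bh = bh->b_this_page;
                } while (bh != head && off < packed_len);
        }

        kfree(raw);
        return packed_len;      /* 'packed' is freed after I/O completes */
}

Whether the rest of the buffer cache tolerates b_data pointing outside
the page a buffer is attached to is exactly the open question in the
mail above, so treat this as a statement of the desired data flow, not
as a working implementation.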
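For the page-cache-level alternative suggested at the top, the
filesystem's own involvement could shrink to consulting the existing
per-inode ext2 compression flag, leaving the actual (de)compression to
a generic layer such as the linuxcompressed patch. A minimal,
hypothetical sketch; only EXT2_COMPR_FL itself comes from the real
linux/ext2_fs.h:

#include <linux/fs.h>
#include <linux/ext2_fs.h>

/*
 * Hypothetical helper: the ext2 "compress file" flag only marks which
 * inodes' blocks must stay compressed on their way to and from disk;
 * pages in the page cache always stay uncompressed.
 */
static inline int ecomp_on_disk_compressed(struct inode *inode)
{
        return (inode->u.ext2_i.i_flags & EXT2_COMPR_FL) != 0;
}

Under that split, ramfs, which has no backing store and lives entirely
in the page cache, would indeed pick up compression without any
filesystem changes.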