From: Dan Magenheimer
To: Seth Jennings, Konrad Wilk, Minchan Kim, Bob Liu, Robert Jennings, Nitin Gupta, Wanpeng Li, Andrew Morton, Mel Gorman
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Date: Wed, 27 Mar 2013 14:42:08 -0700 (PDT)
Subject: zsmalloc/lzo compressibility vs entropy

This might be obvious to those of you who are better mathematicians than I am, but I ran some experiments to confirm the relationship between entropy and compressibility and thought I should report the results to the list.

Using the LZO code in the kernel via zsmalloc and some hacks in zswap, I measured the compression of pages generated by get_random_bytes(), and then of pages where half the page is generated by get_random_bytes() and the other half of the page is zero-filled.

For a fully random page, one would expect the number of zero-bits and one-bits to be equal (highest entropy), and that proved true: the mean number of one-bits in a fully random page was 16384 (x86, so PAGE_SIZE = 4096 bytes * 8 bits/byte = 32768 bits) with a stddev of 93 (sample size > 500000). For this sample of pages, zsize had a mean of 4116 and a stddev of 16. So for fully random pages, LZO compression results in "negative" compression... the size of the compressed page is slightly larger than a page.

For a "half random" page -- a fully random page with the first half of the page overwritten with zeroes -- the zsize mean is 2077 with a stddev of 6. So a half-random page compresses by about a factor of 2. (Just to be sure, I reran the experiment with the first half of the page overwritten with ones instead of zeroes, and the result was approximately the same.)

For extra credit, I ran a "quarter random" page... zsize mean is 1052 with a stddev of 45.

For more extra credit, I tried a fully random page with every OTHER byte forced to zero, so half the bytes are random and half are zero. The result: mean zsize is 3841 with a stddev of 33. Then I tried a fully random page with every other PAIR of bytes forced to zero. The result: zsize mean is 4029 with a stddev of 67. (Worse!)

So LZO page compression works better when there are many more zeroes than ones in a page (or vice versa), but it works best when long runs of bits -- bytes, really, since LZO matches repeated byte sequences -- are the same.

All this still leaves open the question of what the page entropy (and zsize distribution) will be over a large set of pages, over a large set of workloads, AND across different classes of data (e.g. frontswap pages vs cleancache pages), but at least we have some theory to guide us.
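As a back-of-the-envelope check (not part of the original experiments, just standard binomial and entropy arithmetic), the reported numbers are about what one should expect. The one-bit count of a fully random 32768-bit page is Binomial(n, 1/2) with n = 32768, so

$$ \mu = \frac{n}{2} = 16384, \qquad \sigma = \sqrt{n \cdot \tfrac{1}{2} \cdot \tfrac{1}{2}} = \sqrt{8192} \approx 90.5 $$

which matches the measured mean of 16384 and stddev of 93. A uniformly random page carries 8 bits of Shannon entropy per byte, so no lossless compressor can shrink it on average; the zsize mean of 4116 is PAGE_SIZE plus LZO's framing/literal overhead (the kernel bounds the worst case by lzo1x_worst_compress(4096) = 4096 + 4096/16 + 64 + 3 = 4419 bytes). Likewise, a half-zeroed page carries at most 2048 bytes of entropy, consistent with the measured zsize mean of 2077.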
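For anyone who wants to reproduce a single data point, here is a minimal, self-contained sketch using the in-kernel LZO1X API. This is not the zswap instrumentation actually used for the numbers above; the module and function names are made up for illustration.

/*
 * Sketch: fill a page with get_random_bytes(), zero the first half to
 * make a "half random" page, compress it with LZO1X, and log the
 * one-bit count and compressed size (zsize).
 */
#include <linux/module.h>
#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/string.h>
#include <linux/random.h>
#include <linux/bitmap.h>
#include <linux/lzo.h>

static int __init zsize_probe_init(void)
{
	u8 *src, *dst, *wrkmem;
	size_t dst_len = 0;
	int ret;

	src = kmalloc(PAGE_SIZE, GFP_KERNEL);
	/* LZO can expand incompressible data, so size dst for the worst case */
	dst = kmalloc(lzo1x_worst_compress(PAGE_SIZE), GFP_KERNEL);
	wrkmem = kmalloc(LZO1X_1_MEM_COMPRESS, GFP_KERNEL);
	if (!src || !dst || !wrkmem)
		goto out;

	get_random_bytes(src, PAGE_SIZE);	/* fully random page */
	memset(src, 0, PAGE_SIZE / 2);		/* make it a "half random" page */

	ret = lzo1x_1_compress(src, PAGE_SIZE, dst, &dst_len, wrkmem);
	if (ret == LZO_E_OK)
		pr_info("one-bits=%d zsize=%zu\n",
			bitmap_weight((unsigned long *)src, PAGE_SIZE * 8),
			dst_len);
out:
	kfree(wrkmem);
	kfree(dst);
	kfree(src);
	return -EAGAIN;		/* one-shot probe; don't stay loaded */
}
module_init(zsize_probe_init);
MODULE_LICENSE("GPL");

To sample a distribution rather than a single point, the compress-and-log step would go in a loop (with fresh random data each iteration), which is essentially what the zswap hack above did against real anonymous pages instead of synthetic ones.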