Date: Tue, 28 Mar 2017 14:57:22 +0900
From: Minchan Kim
To: Sergey Senozhatsky
Cc: Joonsoo Kim, Andrew Morton, Sergey Senozhatsky, Seth Jennings, Dan Streetman
Subject: Re: [PATCH 4/4] zram: make deduplication feature optional
Message-ID: <20170328055722.GA11745@bbox>
References: <1489632398-31501-1-git-send-email-iamjoonsoo.kim@lge.com> <1489632398-31501-5-git-send-email-iamjoonsoo.kim@lge.com> <20170322000059.GB30149@bbox> <20170323030530.GC17486@js1304-P5Q-DELUXE> <20170327081105.GA390@jagdpanzerIV.localdomain> <20170328010217.GB8462@js1304-P5Q-DELUXE> <20170328022244.GB10573@jagdpanzerIV.localdomain> <20170328025045.GA8573@bbox> <20170328051203.GC10573@jagdpanzerIV.localdomain>
In-Reply-To: <20170328051203.GC10573@jagdpanzerIV.localdomain>
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Mar 28, 2017 at 02:12:04PM +0900, Sergey Senozhatsky wrote:
> Hello Minchan,
>
> On (03/28/17 11:50), Minchan Kim wrote:
[..]
> > > the reason I asked was that both zram and zswap are sort of trying
> > > to have the same optimizations - zero-filled page handling, for
> > > example. zram is a bit ahead now (to the best of my knowledge),
> > > because of the recent 'same element' filled pages. zswap will
> > > probably have something like this as well some day. or maybe it
> > > won't, up to Seth and Dan. de-duplication can definitely improve
> > > both zram and zswap, which, once again, suggests that at some point
> > > zswap will have its own implementation. well, or it won't.
> >
> > As I pointed out, at least, dedup showed no benefit for the swap case.
> > I don't want to disrupt zsmalloc without any *proved* benefit.
> > Even though it *might* have a benefit, it shouldn't be in the
> > allocator layer unless it's a really huge benefit like performance.
>
> sure.
>
> zpool, I meant zpool. I mistakenly used the word 'allocator'.
>
> I meant some intermediate layer between zram and the actual memory
> allocator, a common layer which both zram and zswap can use and which
> can hold common functionality. just an idea. haven't really thought
> about it yet.
>
> > It makes it hard to change zram's allocator in the future.
> > And please consider that zswap was born for latency in server
> > workloads while zram targets memory efficiency in the embedded world.
>
> maybe. I do suspect zswap is used in embedded as well [1]. there is
> even a brand new allocator that 'reportedly' uses less memory than
> zsmalloc and outperforms zsmalloc in embedded setups [1] (once again,
> reportedly. I haven't tried it).
>
> if z3fold is actually this good (I'm not saying it is not, haven't
> tested it), then it makes sense to switch to the zpool API in zram and
> let zram users select the allocator that fits their setups better.
>
> just saying.
>
> [1] http://events.linuxfoundation.org/sites/events/files/slides/zram1.pdf

I do not want to support multiple allocators in zram. It's a real
maintenance headache, and it makes zram's goal drift.
If a new allocator *saves* much memory compared to zsmalloc, it could be
a good candidate to replace zsmalloc. If so, feel free to send patches
with a test workload, without *any noise*. Please do not just say "it's
good" based on a simple test. What we need is *why* it's good, so that
we can investigate what the current problem is; if it is caused by
zsmalloc's design and is therefore hard to change, then we might think
about a new allocator seriously.

Anyway, this is off-topic for Joonsoo's patch.
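(For context, Sergey's suggestion above - having zram allocate through
the zpool layer instead of calling zsmalloc directly - would look roughly
like the following. This is an untested, hypothetical pseudo-code sketch
for illustration only, not a proposal; the `backend` parameter and the
two helper function names are made up, while `zpool_create_pool()`,
`zpool_malloc()`, `zpool_map_handle()` and `zpool_unmap_handle()` are the
existing kernel zpool API.)

```c
/* Hypothetical sketch: zram going through zpool so the backend
 * ("zsmalloc", "z3fold", "zbud") could be picked at pool-creation
 * time, e.g. via a module parameter.
 */
#include <linux/zpool.h>

static char *backend = "zsmalloc";	/* example module parameter */

static struct zpool *zram_create_pool(const char *dev_name)
{
	/* no evict ops: zram never writes back from the pool itself */
	return zpool_create_pool(backend, dev_name, GFP_KERNEL, NULL);
}

static int zram_store_obj(struct zpool *pool, const void *src,
			  size_t comp_len, unsigned long *handle)
{
	void *dst;
	int ret;

	ret = zpool_malloc(pool, comp_len, GFP_NOIO, handle);
	if (ret)
		return ret;

	/* map write-only, copy the compressed object in, unmap */
	dst = zpool_map_handle(pool, *handle, ZPOOL_MM_WO);
	memcpy(dst, src, comp_len);
	zpool_unmap_handle(pool, *handle);
	return 0;
}
```

Whether the extra indirection is acceptable for zram's fast path is
exactly the maintenance/performance question being debated above.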