Date: Wed, 10 Apr 2013 09:50:05 +0900
From: Minchan Kim <minchan@kernel.org>
To: Dan Magenheimer
Cc: Andrew Morton, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 Hugh Dickins, Seth Jennings, Nitin Gupta, Konrad Rzeszutek Wilk,
 Shaohua Li, Bob Liu, Shuah Khan
Subject: Re: zsmalloc defrag (Was: [PATCH] mm: remove compressed copy from zram in-memory)
Message-ID: <20130410005004.GF6836@blaptop>
References: <1365400862-9041-1-git-send-email-minchan@kernel.org>
 <20130409012719.GB3467@blaptop>
User-Agent: Mutt/1.5.21 (2010-09-15)

On Tue, Apr 09, 2013 at 01:25:45PM -0700, Dan Magenheimer wrote:
> > From: Minchan Kim [mailto:minchan@kernel.org]
> > Subject: Re: zsmalloc defrag (Was: [PATCH] mm: remove compressed copy from zram in-memory)
> >
> > Hi Dan,
> >
> > On Mon, Apr 08, 2013 at 09:32:38AM -0700, Dan Magenheimer wrote:
> > > > From: Minchan Kim [mailto:minchan@kernel.org]
> > > > Sent: Monday, April 08, 2013 12:01 AM
> > > > Subject: [PATCH] mm: remove compressed copy from zram in-memory
> > >
> > > (patch removed)
> > >
> > > > Fragment ratio is almost same but memory consumption and compile time
> > > > is better. I am working to add defragment function of zsmalloc.
> > >
> > > Hi Minchan --
> > >
> > > I would be very interested in your design thoughts on
> > > how you plan to add defragmentation for zsmalloc. In
> >
> > What I can say now about is only just a word "Compaction".
> > As you know, zsmalloc has a transparent handle so we can do whatever
> > under user. Of course, there is a tradeoff between performance
> > and memory efficiency. I'm biased to latter for embedded usecase.
>
> Have you designed or implemented this yet? I have a couple
> of concerns:

Not implemented yet; I have only had a little time to think about it.
There will surely be some obstacles, so I would like to show the code
and numbers once I have made a prototype and measured the performance.
Of course, if it turns out to have a severe problem, I will drop it
without wasting everyone's time.

>
> 1) The handle is transparent to the "user", but it is still a form
> of a "pointer" to a zpage. Are you planning on walking zram's
> tables and changing those pointers? That may be OK for zram
> but for more complex data structures than tables (as in zswap
> and zcache) it may not be as easy, due to races, or as efficient
> because you will have to walk potentially very large trees.

The rough concept is as follows. I am considering having zsmalloc hand
out a transparent *fake* handle and keep the mapping from that fake
handle to the real one internally. Because the mapping lives entirely
inside zsmalloc, there is no race the users have to worry about, and
nothing outside zsmalloc ever needs its pointers rewritten.
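Very roughly, and only to show the idea, the indirection could look
something like the toy code below. All of the names (zs_slot,
zs_pool_sketch, zs_malloc_sketch and so on) are made up for
illustration; this is not the existing zsmalloc API, and the real
thing would live inside zsmalloc's zspage machinery rather than on
top of malloc().

/*
 * Toy sketch of the fake-handle idea, with made-up names; not the
 * real zsmalloc code.  The caller only ever holds the fake handle
 * (an index into an internal table), so the allocator may move the
 * object and simply update its own table entry.  Error handling is
 * omitted to keep it short.
 */
#include <stdlib.h>
#include <string.h>

struct zs_slot {
	void *real;		/* where the object currently lives */
	size_t size;
};

struct zs_pool_sketch {
	struct zs_slot *slots;	/* fake handle == index into this array */
	unsigned long nr_slots;
};

/* Allocate an object and return an opaque fake handle. */
unsigned long zs_malloc_sketch(struct zs_pool_sketch *pool, size_t size)
{
	unsigned long handle = pool->nr_slots++;

	pool->slots = realloc(pool->slots,
			      pool->nr_slots * sizeof(*pool->slots));
	pool->slots[handle].real = malloc(size);
	pool->slots[handle].size = size;
	return handle;
}

/* Translate the fake handle into the object's current address. */
void *zs_map_sketch(struct zs_pool_sketch *pool, unsigned long handle)
{
	return pool->slots[handle].real;
}

/*
 * "Compaction" for one object: copy it to a new place (standing in
 * for a less fragmented zspage) and fix up the internal slot.  The
 * handle held by zram/zswap/zcache never changes, so no user data
 * structure has to be walked and no external race exists.
 */
void zs_compact_one_sketch(struct zs_pool_sketch *pool, unsigned long handle)
{
	struct zs_slot *slot = &pool->slots[handle];
	void *dst = malloc(slot->size);

	memcpy(dst, slot->real, slot->size);
	free(slot->real);
	slot->real = dst;
}

The extra translation on every access is exactly the performance cost
I mentioned above, which is why I want to show prototype numbers
before arguing about it further.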
> 2) Compaction in the kernel is heavily dependent on page migration
> and page migration is dependent on using flags in the struct page.
> There's a lot of code in those two code modules and there
> are going to be a lot of implementation differences between
> compacting pages vs compacting zpages.

Kernel page compaction is a completely separate thing from zsmalloc's
compaction; it would not reuse the page migration code or struct page
flags at all.

>
> I'm also wondering if you will be implementing "variable length
> zspages". Without that, I'm not sure compaction will help
> enough. (And that is a good example of the difference between

Why do you think so? Variable-length zspages could be a further
improvement, but they are not the only way to solve fragmentation.

> the kernel page compaction design/code and zspage compaction.)
>
> > > particular, I am wondering if your design will also
> > > handle the requirements for zcache (especially for
> > > cleancache pages) and perhaps also for ramster.
> >
> > I don't know requirements for cleancache pages but compaction is
> > general as you know well so I expect you can get a benefit from it
> > if you are concern on memory efficiency but not sure it's valuable
> > to compact cleancache pages for getting more slot in RAM.
> > Sometime, just discarding would be much better, IMHO.
>
> Zcache has page reclaim. Zswap has zpage reclaim. I am
> concerned that these continue to work in the presence of
> compaction. With no reclaim at all, zram is a simpler use
> case but if you implement compaction in a way that can't be
> used by either zcache or zswap, then zsmalloc is essentially
> forking.

Don't go too far. If it really is a problem for zswap and zcache,
maybe we could make the feature optional.

>
> > > In https://lkml.org/lkml/2013/3/27/501 I suggested it
> > > would be good to work together on a common design, but
> > > you didn't reply. Are you thinking that zsmalloc
> >
> > I saw the thread but explicit agreement is really matter?
> > I believe everybody want it although they didn't reply. :)
> >
> > You can make the design/post it or prototyping/post it.
> > If there are some conflit with something in my brain,
> > I will be happy to feedback. :)
> >
> > Anyway, I think my above statement "COMPACTION" would be enough to
> > express my current thought to avoid duplicated work and you can catch up.
> >
> > I will get around to it after LSF/MM.
> >
> > > improvements should focus only on zram, in which case
> >
> > Just focusing zsmalloc.
>
> Right. Again, I am asking if you are changing zsmalloc in
> a way that helps zram but hurts zswap and makes it impossible
> for zcache to ever use the improvements to zsmalloc.

As I said, I am biased toward memory efficiency rather than
performance. Of course, a severe performance drop would be a disaster,
but a small drop should be acceptable on systems where memory
efficiency is the main concern.

>
> If so, that's fine, but please make it clear that is your goal.

Simple: help memory-hungry systems. :)

>
> > > we may -- and possibly should -- end up with a different
> > > allocator for frontswap-based/cleancache-based compression
> > > in zcache (and possibly zswap)?
> > >
> > > I'm just trying to determine if I should proceed separately
> > > with my design (with Bob Liu, who expressed interest) or if
> > > it would be beneficial to work together.
> >
> > Just posting and if it affects zsmalloc/zram/zswap and goes the way
> > I don't want, I will involve the discussion because our product uses
> > zram heavily and consider zswap, too.
> >
> > I really appreciate your enthusiastic collaboration model to find
> > optimal solution!
>
> My goal is to have compression be an integral part of Linux
> memory management. It may be tied to a config option, but
> the goal is that distros turn it on by default. I don't think
> zsmalloc meets that objective yet, but it may be fine for
> your needs.
> If so it would be good to understand exactly why
> it doesn't meet the other zproject needs.

--
Kind regards,
Minchan Kim