From: Vitaly Wool
Date: Sat, 26 Nov 2016 10:09:27 +0100
Subject: Re: [PATCH] z3fold: extend compaction function
To: Dan Streetman
Cc: Linux-MM, linux-kernel, Andrew Morton

On Fri, Nov 25, 2016 at 10:17 PM, Dan Streetman wrote:
> On Fri, Nov 25, 2016 at 9:43 AM, Dan Streetman wrote:
>> On Thu, Nov 3, 2016 at 5:04 PM, Vitaly Wool wrote:
>>> z3fold_compact_page() currently only handles the situation when
>>> there's a single middle chunk within the z3fold page. However, it
>>> may be worth moving the middle chunk closer to either the first or
>>> the last chunk, whichever is there, if the gap between them is big
>>> enough.
>>>
>>> This patch adds the relevant code, using the BIG_CHUNK_GAP define as
>>> a threshold for when the middle chunk is worth moving.
>>>
>>> Signed-off-by: Vitaly Wool
>>
>> With the bikeshedding comments below, looks good.
>>
>> Acked-by: Dan Streetman
>>
>>> ---
>>>  mm/z3fold.c | 60 +++++++++++++++++++++++++++++++++++++++++++++++-------------
>>>  1 file changed, 47 insertions(+), 13 deletions(-)
>>>
>>> diff --git a/mm/z3fold.c b/mm/z3fold.c
>>> index 4d02280..fea6791 100644
>>> --- a/mm/z3fold.c
>>> +++ b/mm/z3fold.c
>>> @@ -250,26 +250,60 @@ static void z3fold_destroy_pool(struct z3fold_pool *pool)
>>>         kfree(pool);
>>>  }
>>>
>>> +static inline void *mchunk_memmove(struct z3fold_header *zhdr,
>>> +                                  unsigned short dst_chunk)
>>> +{
>>> +       void *beg = zhdr;
>>> +       return memmove(beg + (dst_chunk << CHUNK_SHIFT),
>>> +                      beg + (zhdr->start_middle << CHUNK_SHIFT),
>>> +                      zhdr->middle_chunks << CHUNK_SHIFT);
>>> +}
>>> +
>>> +#define BIG_CHUNK_GAP  3
>>>  /* Has to be called with lock held */
>>>  static int z3fold_compact_page(struct z3fold_header *zhdr)
>>>  {
>>>         struct page *page = virt_to_page(zhdr);
>>> -       void *beg = zhdr;
>>> +       int ret = 0;
>>> +
>>> +       if (test_bit(MIDDLE_CHUNK_MAPPED, &page->private))
>>> +               goto out;
>>>
>>> +       if (zhdr->middle_chunks != 0) {
>>
>> bikeshed: this check could be moved up also, as if there's no middle
>> chunk there is no compacting to do and we can just return 0. Saves a
>> tab in all the code below.
>>
>>> +               if (zhdr->first_chunks == 0 && zhdr->last_chunks == 0) {
>>> +                       mchunk_memmove(zhdr, 1); /* move to the beginning */
>>> +                       zhdr->first_chunks = zhdr->middle_chunks;
>>> +                       zhdr->middle_chunks = 0;
>>> +                       zhdr->start_middle = 0;
>>> +                       zhdr->first_num++;
>>> +                       ret = 1;
>>> +                       goto out;
>>> +               }
>>>
>>> -       if (!test_bit(MIDDLE_CHUNK_MAPPED, &page->private) &&
>>> -           zhdr->middle_chunks != 0 &&
>>> -           zhdr->first_chunks == 0 && zhdr->last_chunks == 0) {
>>> -               memmove(beg + ZHDR_SIZE_ALIGNED,
>>> -                       beg + (zhdr->start_middle << CHUNK_SHIFT),
>>> -                       zhdr->middle_chunks << CHUNK_SHIFT);
>>> -               zhdr->first_chunks = zhdr->middle_chunks;
>>> -               zhdr->middle_chunks = 0;
>>> -               zhdr->start_middle = 0;
>>> -               zhdr->first_num++;
>>> -               return 1;
>>> +               /*
>>> +                * moving data is expensive, so let's only do that if
>>> +                * there's substantial gain (at least BIG_CHUNK_GAP chunks)
>>> +                */
>>> +               if (zhdr->first_chunks != 0 && zhdr->last_chunks == 0 &&
>>> +                   zhdr->start_middle > zhdr->first_chunks + BIG_CHUNK_GAP) {
>>> +                       mchunk_memmove(zhdr, zhdr->first_chunks + 1);
>>> +                       zhdr->start_middle = zhdr->first_chunks + 1;
>>> +                       ret = 1;
>>> +                       goto out;
>>> +               }
>>> +               if (zhdr->last_chunks != 0 && zhdr->first_chunks == 0 &&
>>> +                   zhdr->middle_chunks + zhdr->last_chunks <=
>>> +                               NCHUNKS - zhdr->start_middle - BIG_CHUNK_GAP) {
>>> +                       unsigned short new_start = NCHUNKS - zhdr->last_chunks -
>>> +                               zhdr->middle_chunks;
>
> After closer review, I see that this is wrong. NCHUNKS isn't the
> total number of page chunks, it's the total number of chunks minus the
> header chunk(s). So that calculation of where the new start is, is
> wrong. It should use the total page chunks, not NCHUNKS, because
> start_middle already accounts for the header chunk(s). Probably a new
> macro would help.
>
> Also, the num_free_chunks() function makes the same mistake:
>
>         int nfree_after = zhdr->last_chunks ?
>                 0 : NCHUNKS - zhdr->start_middle - zhdr->middle_chunks;
>
> That's wrong; it should be something like:
>
> #define TOTAL_CHUNKS (PAGE_SIZE >> CHUNK_SHIFT)
> ...
>         int nfree_after = zhdr->last_chunks ?
>                 0 : TOTAL_CHUNKS - zhdr->start_middle - zhdr->middle_chunks;

Right, will fix.

~vitaly
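
For reference, a minimal sketch (untested, not an actual follow-up patch) of
the early-return structure Dan's bikeshed comment suggests: both bail-out
checks hoisted to the top of z3fold_compact_page(), so the compaction cases
below lose one level of indentation:

    /* Has to be called with lock held */
    static int z3fold_compact_page(struct z3fold_header *zhdr)
    {
            struct page *page = virt_to_page(zhdr);

            /* the middle chunk is currently mapped, so it cannot be moved */
            if (test_bit(MIDDLE_CHUNK_MAPPED, &page->private))
                    return 0;

            /* no middle chunk means there is nothing to compact */
            if (zhdr->middle_chunks == 0)
                    return 0;

            /* ... the three compaction cases follow, one tab shallower ... */
            return 0;
    }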
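
Likewise, a sketch of the "move middle towards last" case recomputed with the
TOTAL_CHUNKS macro Dan proposes. Untested; the quoted hunk is cut off above,
so the trailing mchunk_memmove() and start_middle update are assumed to
mirror the other two cases:

    /* all chunks in a page, including the header chunk(s) */
    #define TOTAL_CHUNKS (PAGE_SIZE >> CHUNK_SHIFT)

            /*
             * start_middle is an offset from the start of the page, so the
             * free space after the middle chunk must be computed against
             * TOTAL_CHUNKS, not NCHUNKS (which excludes the header chunks).
             */
            if (zhdr->last_chunks != 0 && zhdr->first_chunks == 0 &&
                zhdr->middle_chunks + zhdr->last_chunks <=
                                TOTAL_CHUNKS - zhdr->start_middle - BIG_CHUNK_GAP) {
                    unsigned short new_start = TOTAL_CHUNKS - zhdr->last_chunks -
                            zhdr->middle_chunks;

                    mchunk_memmove(zhdr, new_start);
                    zhdr->start_middle = new_start;
                    ret = 1;
                    goto out;
            }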