Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S932458AbcKOQAv (ORCPT );
	Tue, 15 Nov 2016 11:00:51 -0500
Received: from mail-pf0-f193.google.com ([209.85.192.193]:36607 "EHLO
	mail-pf0-f193.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1753501AbcKOQAp (ORCPT );
	Tue, 15 Nov 2016 11:00:45 -0500
Date: Tue, 15 Nov 2016 17:00:38 +0100
From: Vitaly Wool
To: Linux-MM , linux-kernel@vger.kernel.org
Cc: Dan Streetman , Andrew Morton
Subject: [PATCH 3/3] z3fold: discourage use of pages that weren't compacted
Message-Id: <20161115170038.75e127739b66f850e50d7fc1@gmail.com>
In-Reply-To: <20161115165538.878698352bd45e212751b57a@gmail.com>
References: <20161115165538.878698352bd45e212751b57a@gmail.com>
X-Mailer: Sylpheed 3.4.1 (GTK+ 2.24.23; x86_64-pc-linux-gnu)
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org
Content-Length: 1988
Lines: 62

If a z3fold page couldn't be compacted, we don't want it to be used for the
next object allocation in the first place. It makes more sense to add it to
the end of the relevant unbuddied list. If that page gets compacted later,
it will be added to the beginning of the list then.

This simple idea gives 5-7% improvement in randrw fio tests and about 10%
improvement in fio sequential read/write.
Signed-off-by: Vitaly Wool
---
 mm/z3fold.c | 22 +++++++++++++++++-----
 1 file changed, 17 insertions(+), 5 deletions(-)

diff --git a/mm/z3fold.c b/mm/z3fold.c
index ffd9353..e282ba0 100644
--- a/mm/z3fold.c
+++ b/mm/z3fold.c
@@ -539,11 +539,19 @@ static void z3fold_free(struct z3fold_pool *pool, unsigned long handle)
 		free_z3fold_page(zhdr);
 		atomic64_dec(&pool->pages_nr);
 	} else {
-		z3fold_compact_page(zhdr);
+		int compacted = z3fold_compact_page(zhdr);
 		/* Add to the unbuddied list */
 		spin_lock(&pool->lock);
 		freechunks = num_free_chunks(zhdr);
-		list_add(&zhdr->buddy, &pool->unbuddied[freechunks]);
+		/*
+		 * If the page has been compacted, we want to use it
+		 * in the first place.
+		 */
+		if (compacted)
+			list_add(&zhdr->buddy, &pool->unbuddied[freechunks]);
+		else
+			list_add_tail(&zhdr->buddy,
+				      &pool->unbuddied[freechunks]);
 		spin_unlock(&pool->lock);
 		z3fold_page_unlock(zhdr);
 	}
@@ -672,12 +680,16 @@ static int z3fold_reclaim_page(struct z3fold_pool *pool, unsigned int retries)
 			spin_lock(&pool->lock);
 			list_add(&zhdr->buddy, &pool->buddied);
 		} else {
-			z3fold_compact_page(zhdr);
+			int compacted = z3fold_compact_page(zhdr);
 			/* add to unbuddied list */
 			spin_lock(&pool->lock);
 			freechunks = num_free_chunks(zhdr);
-			list_add(&zhdr->buddy,
-				 &pool->unbuddied[freechunks]);
+			if (compacted)
+				list_add(&zhdr->buddy,
+					 &pool->unbuddied[freechunks]);
+			else
+				list_add_tail(&zhdr->buddy,
+					      &pool->unbuddied[freechunks]);
 		}
 	}
--
2.4.2