Subject: Re: [PATCH v6 2/2] erofs: decompress in endio if possible
To: Huang Jianan
References: <20210316031515.90954-1-huangjianan@oppo.com>
 <20210316031515.90954-2-huangjianan@oppo.com>
From: Chao Yu
Date: Tue, 16 Mar 2021 16:26:38 +0800
In-Reply-To: <20210316031515.90954-2-huangjianan@oppo.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Hi Jianan,

On 2021/3/16 11:15, Huang Jianan via Linux-erofs wrote:
> z_erofs_decompressqueue_endio may not be executed in the atomic
> context, for example, when dm-verity is turned on. In this scenario,
> data can be decompressed directly to get rid of additional kworker
> scheduling overhead. Also, it makes no sense to apply synchronous
> decompression for such a case.

It looks like this patch does more than one thing:

- combine dm-verity and the erofs workqueue
- change the policy of decompression in thread context

Normally we do one thing per patch; that way we benefit when
backporting patches and when bisecting a problematic patch with minimum
granularity, and it also helps reviewers focus on a single piece of
code logic by following the patch's goal.

So IMO, it would be better to separate this patch into two.

One more thing: could you explain a little bit more about why we need
to change the policy of decompression in thread context? Is it for
better performance?

BTW, the code looks clean to me. :)

Thanks,

>
> Signed-off-by: Huang Jianan
> Signed-off-by: Guo Weichao
> Reviewed-by: Gao Xiang
> ---
>  fs/erofs/internal.h |  2 ++
>  fs/erofs/super.c    |  1 +
>  fs/erofs/zdata.c    | 15 +++++++++++++--
>  3 files changed, 16 insertions(+), 2 deletions(-)
>
> diff --git a/fs/erofs/internal.h b/fs/erofs/internal.h
> index 67a7ec945686..fbc4040715be 100644
> --- a/fs/erofs/internal.h
> +++ b/fs/erofs/internal.h
> @@ -50,6 +50,8 @@ struct erofs_fs_context {
>  #ifdef CONFIG_EROFS_FS_ZIP
>  	/* current strategy of how to use managed cache */
>  	unsigned char cache_strategy;
> +	/* strategy of sync decompression (false - auto, true - force on) */
> +	bool readahead_sync_decompress;
>
>  	/* threshold for decompression synchronously */
>  	unsigned int max_sync_decompress_pages;
> diff --git a/fs/erofs/super.c b/fs/erofs/super.c
> index d5a6b9b888a5..0445d09b6331 100644
> --- a/fs/erofs/super.c
> +++ b/fs/erofs/super.c
> @@ -200,6 +200,7 @@ static void erofs_default_options(struct erofs_fs_context *ctx)
>  #ifdef CONFIG_EROFS_FS_ZIP
>  	ctx->cache_strategy = EROFS_ZIP_CACHE_READAROUND;
>  	ctx->max_sync_decompress_pages = 3;
> +	ctx->readahead_sync_decompress = false;
>  #endif
>  #ifdef CONFIG_EROFS_FS_XATTR
>  	set_opt(ctx, XATTR_USER);
> diff --git a/fs/erofs/zdata.c b/fs/erofs/zdata.c
> index 6cb356c4217b..25a0c4890d0a 100644
> --- a/fs/erofs/zdata.c
> +++ b/fs/erofs/zdata.c
> @@ -706,9 +706,12 @@ static int z_erofs_do_read_page(struct z_erofs_decompress_frontend *fe,
>  	goto out;
>  }
>
> +static void z_erofs_decompressqueue_work(struct work_struct *work);
>  static void z_erofs_decompress_kickoff(struct z_erofs_decompressqueue *io,
>  				       bool sync, int bios)
>  {
> +	struct erofs_sb_info *const sbi = EROFS_SB(io->sb);
> +
>  	/* wake up the caller thread for sync decompression */
>  	if (sync) {
>  		unsigned long flags;
> @@ -720,8 +723,15 @@ static void z_erofs_decompress_kickoff(struct z_erofs_decompressqueue *io,
>  		return;
>  	}
>
> -	if (!atomic_add_return(bios, &io->pending_bios))
> +	if (atomic_add_return(bios, &io->pending_bios))
> +		return;
> +	/* Use workqueue and sync decompression for atomic contexts only */
> +	if (in_atomic() || irqs_disabled()) {
>  		queue_work(z_erofs_workqueue, &io->u.work);
> +		sbi->ctx.readahead_sync_decompress = true;
> +		return;
> +	}
> +	z_erofs_decompressqueue_work(&io->u.work);
>  }
>
>  static bool z_erofs_page_is_invalidated(struct page *page)
> @@ -1333,7 +1343,8 @@ static void z_erofs_readahead(struct readahead_control *rac)
>  	struct erofs_sb_info *const sbi = EROFS_I_SB(inode);
>
>  	unsigned int nr_pages = readahead_count(rac);
> -	bool sync = (nr_pages <= sbi->ctx.max_sync_decompress_pages);
> +	bool sync = (sbi->ctx.readahead_sync_decompress &&
> +		     nr_pages <= sbi->ctx.max_sync_decompress_pages);
>  	struct z_erofs_decompress_frontend f = DECOMPRESS_FRONTEND_INIT(inode);
>  	struct page *page, *head = NULL;
>  	LIST_HEAD(pagepool);
>
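
For completeness, here is a minimal, self-contained sketch of the dispatch
idea the zdata.c hunk above implements: run the (possibly sleeping)
decompression inline when the endio callback is reached from a sleepable
context, and only fall back to a kworker when called in atomic context.
This is an illustration only, not the erofs code; the my_* names are
hypothetical.

    /*
     * Sketch only: mirrors the "decompress in endio if possible" idea.
     * my_do_decompress / my_endio_work / my_endio_kickoff are made-up
     * names for illustration, not erofs symbols.
     */
    #include <linux/workqueue.h>
    #include <linux/preempt.h>
    #include <linux/irqflags.h>

    static void my_do_decompress(struct work_struct *work)
    {
    	/* heavy, possibly sleeping decompression would go here */
    }

    static DECLARE_WORK(my_endio_work, my_do_decompress);

    static void my_endio_kickoff(void)
    {
    	/*
    	 * In atomic context (e.g. a bio end_io run from softirq) we
    	 * must not sleep, so hand the work off to a kworker.
    	 */
    	if (in_atomic() || irqs_disabled()) {
    		schedule_work(&my_endio_work);
    		return;
    	}

    	/*
    	 * Otherwise (e.g. the endio was invoked from dm-verity's own
    	 * workqueue) run the decompression inline and skip the extra
    	 * kworker scheduling round-trip.
    	 */
    	my_do_decompress(&my_endio_work);
    }

The benefit claimed in the commit message maps onto the inline branch:
when dm-verity completes the bio from its own worker thread, running the
decompression directly avoids another wakeup and scheduling round-trip
through z_erofs_workqueue.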