From: Chao Yu
Subject: [PATCH v2 2/2] f2fs: compress: delay temp page allocation
Date: Sat, 25 Jul 2020 09:17:48 +0800
Message-ID: <20200725011748.43688-1-yuchao0@huawei.com>

Currently, the temp pages used to pad holes in a cluster are allocated
during read IO submission, but they may not be released until
f2fs_decompress_pages() runs much later. Since they are only used as
temp output buffers in the decompression context, do the allocation in
that context instead, to shorten the time the memory pool resource is
occupied.
Signed-off-by: Chao Yu
---
v2:
- fix to assign return value in error path

 fs/f2fs/compress.c | 37 +++++++++++++++++++++----------------
 1 file changed, 21 insertions(+), 16 deletions(-)

diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
index a20c9f3272af..6e7db450006c 100644
--- a/fs/f2fs/compress.c
+++ b/fs/f2fs/compress.c
@@ -670,6 +670,7 @@ void f2fs_decompress_pages(struct bio *bio, struct page *page, bool verity)
 	const struct f2fs_compress_ops *cops =
 		f2fs_cops[fi->i_compress_algorithm];
 	int ret;
+	int i;
 
 	dec_page_count(sbi, F2FS_RD_DATA);
 
@@ -688,6 +689,26 @@ void f2fs_decompress_pages(struct bio *bio, struct page *page, bool verity)
 		goto out_free_dic;
 	}
 
+	dic->tpages = f2fs_kzalloc(sbi, sizeof(struct page *) *
+					dic->cluster_size, GFP_NOFS);
+	if (!dic->tpages) {
+		ret = -ENOMEM;
+		goto out_free_dic;
+	}
+
+	for (i = 0; i < dic->cluster_size; i++) {
+		if (dic->rpages[i]) {
+			dic->tpages[i] = dic->rpages[i];
+			continue;
+		}
+
+		dic->tpages[i] = f2fs_compress_alloc_page();
+		if (!dic->tpages[i]) {
+			ret = -ENOMEM;
+			goto out_free_dic;
+		}
+	}
+
 	if (cops->init_decompress_ctx) {
 		ret = cops->init_decompress_ctx(dic);
 		if (ret)
@@ -1449,22 +1470,6 @@ struct decompress_io_ctx *f2fs_alloc_dic(struct compress_ctx *cc)
 		dic->cpages[i] = page;
 	}
 
-	dic->tpages = f2fs_kzalloc(sbi, sizeof(struct page *) *
-					dic->cluster_size, GFP_NOFS);
-	if (!dic->tpages)
-		goto out_free;
-
-	for (i = 0; i < dic->cluster_size; i++) {
-		if (cc->rpages[i]) {
-			dic->tpages[i] = cc->rpages[i];
-			continue;
-		}
-
-		dic->tpages[i] = f2fs_compress_alloc_page();
-		if (!dic->tpages[i])
-			goto out_free;
-	}
-
 	return dic;
 
 out_free:
-- 
2.26.2
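For readers outside the f2fs code base, the core idea of the patch can be sketched as a standalone userspace analogue: allocation of the temp buffers is moved from the setup routine into the routine that actually consumes them, so they are held for a shorter window. All names below (dec_ctx, ctx_alloc, ctx_decompress) are illustrative only, not f2fs APIs, and the error handling is simplified.

```c
#include <stdlib.h>

/* Hypothetical analogue of struct decompress_io_ctx: rpages are the
 * caller-supplied pages (NULL entries are holes in the cluster), and
 * tpages are the temp output buffers used during decompression. */
struct dec_ctx {
	int cluster_size;
	char **rpages;
	char **tpages;
};

/* Setup path: after the patch, this no longer allocates tpages
 * (mirrors the hunk removed from f2fs_alloc_dic). */
struct dec_ctx *ctx_alloc(int cluster_size, char **rpages)
{
	struct dec_ctx *c = calloc(1, sizeof(*c));

	if (!c)
		return NULL;
	c->cluster_size = cluster_size;
	c->rpages = rpages;
	return c;
}

/* Decompression path: tpages are allocated here, just before use
 * (mirrors the hunk added to f2fs_decompress_pages). Real rpages are
 * reused directly; only holes get a freshly allocated buffer. */
int ctx_decompress(struct dec_ctx *c)
{
	int i;

	c->tpages = calloc(c->cluster_size, sizeof(char *));
	if (!c->tpages)
		return -1;

	for (i = 0; i < c->cluster_size; i++) {
		if (c->rpages[i]) {
			c->tpages[i] = c->rpages[i];
			continue;
		}
		c->tpages[i] = malloc(4096);
		if (!c->tpages[i])
			return -1;
	}
	return 0;
}
```

The trade-off is the usual one for lazy allocation: the setup path gets cheaper and the buffers are pinned for less time, but the allocation failure (-ENOMEM in the real patch) now has to be handled on the decompression path instead of at setup.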