Subject: Re: [PATCH v2 1/2] f2fs: compress: introduce page array slab cache
From: Chao Yu
To: Jaegeuk Kim
CC: , ,
Date: Tue, 29 Sep 2020 16:44:50 +0800
Message-ID: <6e7639db-9120-d406-0a46-ec841845bb28@huawei.com>
In-Reply-To: <20200929082306.GA1567825@google.com>
References: <20200914090514.50102-1-yuchao0@huawei.com> <20200929082306.GA1567825@google.com>
List-ID: linux-kernel@vger.kernel.org

On 2020/9/29 16:23, Jaegeuk Kim wrote:
> I found a bug related to the number of page pointer allocation related to
> nr_cpages.
Jaegeuk,

If I didn't miss anything, you mean that nr_cpages could be larger than
nr_rpages, right? The problematic case here is lzo/lzo-rle:

	cc->clen = lzo1x_worst_compress(PAGE_SIZE << cc->log_cluster_size);

as we can't limit clen the way we did for lz4/zstd:

	cc->clen = cc->rlen - PAGE_SIZE - COMPRESS_HEADER_SIZE;

>
> diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
> index f086ac43ca825..3a18666725fef 100644
> --- a/fs/f2fs/compress.c
> +++ b/fs/f2fs/compress.c
> @@ -20,22 +20,20 @@
>  static struct kmem_cache *cic_entry_slab;
>  static struct kmem_cache *dic_entry_slab;
>  
> -static void *page_array_alloc(struct inode *inode)
> +static void *page_array_alloc(struct inode *inode, int nr)
>  {
>  	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
> -	unsigned int size = sizeof(struct page *) <<
> -				F2FS_I(inode)->i_log_cluster_size;
> +	unsigned int size = sizeof(struct page *) * nr;
>  
>  	if (likely(size == sbi->page_array_slab_size))
>  		return kmem_cache_zalloc(sbi->page_array_slab, GFP_NOFS);
>  	return f2fs_kzalloc(sbi, size, GFP_NOFS);
>  }
>  
> -static void page_array_free(struct inode *inode, void *pages)
> +static void page_array_free(struct inode *inode, void *pages, int nr)
>  {
>  	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
> -	unsigned int size = sizeof(struct page *) <<
> -				F2FS_I(inode)->i_log_cluster_size;
> +	unsigned int size = sizeof(struct page *) * nr;
>  
>  	if (!pages)
>  		return;
> @@ -162,13 +160,13 @@ int f2fs_init_compress_ctx(struct compress_ctx *cc)
>  	if (cc->rpages)
>  		return 0;
>  
> -	cc->rpages = page_array_alloc(cc->inode);
> +	cc->rpages = page_array_alloc(cc->inode, cc->cluster_size);
>  	return cc->rpages ? 0 : -ENOMEM;
>  }
>  
>  void f2fs_destroy_compress_ctx(struct compress_ctx *cc)
>  {
> -	page_array_free(cc->inode, cc->rpages);
> +	page_array_free(cc->inode, cc->rpages, cc->cluster_size);
>  	cc->rpages = NULL;
>  	cc->nr_rpages = 0;
>  	cc->nr_cpages = 0;
> @@ -602,7 +600,8 @@ static int f2fs_compress_pages(struct compress_ctx *cc)
>  	struct f2fs_inode_info *fi = F2FS_I(cc->inode);
>  	const struct f2fs_compress_ops *cops =
>  				f2fs_cops[fi->i_compress_algorithm];
> -	unsigned int max_len, nr_cpages;
> +	unsigned int max_len, new_nr_cpages;
> +	struct page **new_cpages;
>  	int i, ret;
>  
>  	trace_f2fs_compress_pages_start(cc->inode, cc->cluster_idx,
> @@ -617,7 +616,7 @@ static int f2fs_compress_pages(struct compress_ctx *cc)
>  	max_len = COMPRESS_HEADER_SIZE + cc->clen;
>  	cc->nr_cpages = DIV_ROUND_UP(max_len, PAGE_SIZE);
>  
> -	cc->cpages = page_array_alloc(cc->inode);
> +	cc->cpages = page_array_alloc(cc->inode, cc->nr_cpages);
>  	if (!cc->cpages) {
>  		ret = -ENOMEM;
>  		goto destroy_compress_ctx;
> @@ -659,16 +658,28 @@ static int f2fs_compress_pages(struct compress_ctx *cc)
>  	for (i = 0; i < COMPRESS_DATA_RESERVED_SIZE; i++)
>  		cc->cbuf->reserved[i] = cpu_to_le32(0);
>  
> -	nr_cpages = DIV_ROUND_UP(cc->clen + COMPRESS_HEADER_SIZE, PAGE_SIZE);
> +	new_nr_cpages = DIV_ROUND_UP(cc->clen + COMPRESS_HEADER_SIZE, PAGE_SIZE);
> +
> +	/* Now we're going to cut unnecessary tail pages */
> +	new_cpages = page_array_alloc(cc->inode, new_nr_cpages);
> +	if (!new_cpages) {
> +		ret = -ENOMEM;
> +		goto out_vunmap_cbuf;
> +	}
>  
>  	/* zero out any unused part of the last page */
>  	memset(&cc->cbuf->cdata[cc->clen], 0,
> -			(nr_cpages * PAGE_SIZE) - (cc->clen + COMPRESS_HEADER_SIZE));
> +			(new_nr_cpages * PAGE_SIZE) -
> +			(cc->clen + COMPRESS_HEADER_SIZE));
>  
>  	vm_unmap_ram(cc->cbuf, cc->nr_cpages);
>  	vm_unmap_ram(cc->rbuf, cc->cluster_size);
>  
> -	for (i = nr_cpages; i < cc->nr_cpages; i++) {
> +	for (i = 0; i < cc->nr_cpages; i++) {
> +		if (i < new_nr_cpages) {
> +			new_cpages[i] = cc->cpages[i];
> +			continue;
> +		}
>  		f2fs_compress_free_page(cc->cpages[i]);
>  		cc->cpages[i] = NULL;
>  	}
> @@ -676,7 +687,9 @@ static int f2fs_compress_pages(struct compress_ctx *cc)
>  	if (cops->destroy_compress_ctx)
>  		cops->destroy_compress_ctx(cc);
>  
> -	cc->nr_cpages = nr_cpages;
> +	page_array_free(cc->inode, cc->cpages, cc->nr_cpages);
> +	cc->cpages = new_cpages;
> +	cc->nr_cpages = new_nr_cpages;
>  
>  	trace_f2fs_compress_pages_end(cc->inode, cc->cluster_idx,
>  							cc->clen, ret);
> @@ -691,7 +704,7 @@ static int f2fs_compress_pages(struct compress_ctx *cc)
>  		if (cc->cpages[i])
>  			f2fs_compress_free_page(cc->cpages[i]);
>  	}
> -	page_array_free(cc->inode, cc->cpages);
> +	page_array_free(cc->inode, cc->cpages, cc->nr_cpages);
>  	cc->cpages = NULL;
>  destroy_compress_ctx:
>  	if (cops->destroy_compress_ctx)
> @@ -730,7 +743,7 @@ void f2fs_decompress_pages(struct bio *bio, struct page *page, bool verity)
>  		goto out_free_dic;
>  	}
>  
> -	dic->tpages = page_array_alloc(dic->inode);
> +	dic->tpages = page_array_alloc(dic->inode, dic->cluster_size);
>  	if (!dic->tpages) {
>  		ret = -ENOMEM;
>  		goto out_free_dic;
> @@ -1203,7 +1216,7 @@ static int f2fs_write_compressed_pages(struct compress_ctx *cc,
>  	cic->magic = F2FS_COMPRESSED_PAGE_MAGIC;
>  	cic->inode = inode;
>  	atomic_set(&cic->pending_pages, cc->nr_cpages);
> -	cic->rpages = page_array_alloc(cc->inode);
> +	cic->rpages = page_array_alloc(cc->inode, cc->cluster_size);
>  	if (!cic->rpages)
>  		goto out_put_cic;
>  
> @@ -1301,7 +1314,7 @@ static int f2fs_write_compressed_pages(struct compress_ctx *cc,
>  	return 0;
>  
>  out_destroy_crypt:
> -	page_array_free(cc->inode, cic->rpages);
> +	page_array_free(cc->inode, cic->rpages, cc->cluster_size);
>  
>  	for (--i; i >= 0; i--)
>  		fscrypt_finalize_bounce_page(&cc->cpages[i]);
> @@ -1345,7 +1358,7 @@ void f2fs_compress_write_end_io(struct bio *bio, struct page *page)
>  		end_page_writeback(cic->rpages[i]);
>  	}
>  
> -	page_array_free(cic->inode, cic->rpages);
> +	page_array_free(cic->inode, cic->rpages, cic->nr_rpages);
>  	kmem_cache_free(cic_entry_slab, cic);
>  }
>  
> @@ -1442,7 +1455,7 @@ int f2fs_write_multi_pages(struct compress_ctx *cc,
>  
>  		err = f2fs_write_compressed_pages(cc, submitted,
>  							wbc, io_type);
> -		page_array_free(cc->inode, cc->cpages);
> +		page_array_free(cc->inode, cc->cpages, cc->nr_cpages);
>  		cc->cpages = NULL;
>  		if (!err)
>  			return 0;
> @@ -1468,7 +1481,7 @@ struct decompress_io_ctx *f2fs_alloc_dic(struct compress_ctx *cc)
>  	if (!dic)
>  		return ERR_PTR(-ENOMEM);
>  
> -	dic->rpages = page_array_alloc(cc->inode);
> +	dic->rpages = page_array_alloc(cc->inode, cc->cluster_size);
>  	if (!dic->rpages) {
>  		kmem_cache_free(dic_entry_slab, dic);
>  		return ERR_PTR(-ENOMEM);
>  	}
> @@ -1487,7 +1500,7 @@ struct decompress_io_ctx *f2fs_alloc_dic(struct compress_ctx *cc)
>  		dic->rpages[i] = cc->rpages[i];
>  	dic->nr_rpages = cc->cluster_size;
>  
> -	dic->cpages = page_array_alloc(dic->inode);
> +	dic->cpages = page_array_alloc(dic->inode, dic->nr_cpages);
>  	if (!dic->cpages)
>  		goto out_free;
>  
> @@ -1522,7 +1535,7 @@ void f2fs_free_dic(struct decompress_io_ctx *dic)
>  				continue;
>  			f2fs_compress_free_page(dic->tpages[i]);
>  		}
> -		page_array_free(dic->inode, dic->tpages);
> +		page_array_free(dic->inode, dic->tpages, dic->cluster_size);
>  	}
>  
>  	if (dic->cpages) {
> @@ -1531,10 +1544,10 @@ void f2fs_free_dic(struct decompress_io_ctx *dic)
>  				continue;
>  			f2fs_compress_free_page(dic->cpages[i]);
>  		}
> -		page_array_free(dic->inode, dic->cpages);
> +		page_array_free(dic->inode, dic->cpages, dic->nr_cpages);
>  	}
> 
> -	page_array_free(dic->inode, dic->rpages);
> +	page_array_free(dic->inode, dic->rpages, dic->nr_rpages);
>  	kmem_cache_free(dic_entry_slab, dic);
>  }
> 