Subject: Re: [PATCH] f2fs: use round_up()/DIV_ROUND_UP()
To: Jaegeuk Kim
References: <20200330100349.56127-1-yuchao0@huawei.com> <20200330184230.GB34947@google.com>
From: Chao Yu
Message-ID: <2b5ec2a6-218a-a291-a6fc-d87cd40be4db@huawei.com>
Date: Tue, 31 Mar 2020 08:48:49 +0800
In-Reply-To: <20200330184230.GB34947@google.com>

On 2020/3/31 2:42, Jaegeuk Kim wrote:
> On 03/30, Chao Yu wrote:
>> .i_cluster_size should be power of 2, so we can use round_up() instead
>> of roundup() to enhance the calculation.
>>
>> In addition, use DIV_ROUND_UP to clean up codes.
>>
>> Signed-off-by: Chao Yu
>> ---
>>  fs/f2fs/data.c | 16 ++++++----------
>>  fs/f2fs/file.c | 17 +++++------------
>>  2 files changed, 11 insertions(+), 22 deletions(-)
>>
>> diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
>> index 0a829a89f596..8257d5e7aa3b 100644
>> --- a/fs/f2fs/data.c
>> +++ b/fs/f2fs/data.c
>> @@ -1969,8 +1969,6 @@ static int f2fs_read_single_page(struct inode *inode, struct page *page,
>>  					bool is_readahead)
>>  {
>>  	struct bio *bio = *bio_ret;
>> -	const unsigned blkbits = inode->i_blkbits;
>> -	const unsigned blocksize = 1 << blkbits;
>>  	sector_t block_in_file;
>>  	sector_t last_block;
>>  	sector_t last_block_in_file;
>> @@ -1979,8 +1977,8 @@ static int f2fs_read_single_page(struct inode *inode, struct page *page,
>>
>>  	block_in_file = (sector_t)page_index(page);
>>  	last_block = block_in_file + nr_pages;
>> -	last_block_in_file = (f2fs_readpage_limit(inode) + blocksize - 1) >>
>> -							blkbits;
>> +	last_block_in_file = DIV_ROUND_UP(f2fs_readpage_limit(inode),
>> +							PAGE_SIZE);
>
> What if PAGE_SIZE is bigger than 4KB?

We don't support 8kb+ sized-page, right?

static int __init init_f2fs_fs(void)
{
	int err;

	if (PAGE_SIZE != F2FS_BLKSIZE) {
		printk("F2FS not supported on PAGE_SIZE(%lu) != %d\n",
				PAGE_SIZE, F2FS_BLKSIZE);
		return -EINVAL;
	}

Thanks,
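
For readers following along, here is a minimal userspace sketch of why the substitution is safe: as long as PAGE_SIZE equals the f2fs block size (which the init_f2fs_fs() check quoted above enforces), DIV_ROUND_UP(limit, PAGE_SIZE) yields the same value as the old (limit + blocksize - 1) >> blkbits. The macro body below is a simplified stand-in for the kernel's DIV_ROUND_UP(), and the 4KB block size is an example value; this is an illustration, not kernel code.

#include <assert.h>
#include <stdio.h>

/* Simplified stand-in for the kernel macro, for illustration only. */
#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

int main(void)
{
	/* example: PAGE_SIZE == F2FS_BLKSIZE == 4KB */
	const unsigned long long blocksize = 4096;
	const unsigned int blkbits = 12;	/* log2(blocksize) */
	unsigned long long limit;

	for (limit = 0; limit < 5 * blocksize; limit += 511) {
		unsigned long long old_val = (limit + blocksize - 1) >> blkbits;
		unsigned long long new_val = DIV_ROUND_UP(limit, blocksize);

		assert(old_val == new_val);
	}
	printf("shift-based rounding and DIV_ROUND_UP() agree\n");
	return 0;
}
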
>
>>  	if (last_block > last_block_in_file)
>>  		last_block = last_block_in_file;
>>
>> @@ -2062,7 +2060,7 @@ static int f2fs_read_single_page(struct inode *inode, struct page *page,
>>  	 */
>>  	f2fs_wait_on_block_writeback(inode, block_nr);
>>
>> -	if (bio_add_page(bio, page, blocksize, 0) < blocksize)
>> +	if (bio_add_page(bio, page, PAGE_SIZE, 0) < PAGE_SIZE)
>>  		goto submit_and_realloc;
>>
>>  	inc_page_count(F2FS_I_SB(inode), F2FS_RD_DATA);
>> @@ -2091,16 +2089,14 @@ int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret,
>>  	struct bio *bio = *bio_ret;
>>  	unsigned int start_idx = cc->cluster_idx << cc->log_cluster_size;
>>  	sector_t last_block_in_file;
>> -	const unsigned blkbits = inode->i_blkbits;
>> -	const unsigned blocksize = 1 << blkbits;
>>  	struct decompress_io_ctx *dic = NULL;
>>  	int i;
>>  	int ret = 0;
>>
>>  	f2fs_bug_on(sbi, f2fs_cluster_is_empty(cc));
>>
>> -	last_block_in_file = (f2fs_readpage_limit(inode) +
>> -					blocksize - 1) >> blkbits;
>> +	last_block_in_file = DIV_ROUND_UP(f2fs_readpage_limit(inode),
>> +					PAGE_SIZE);
>>
>>  	/* get rid of pages beyond EOF */
>>  	for (i = 0; i < cc->cluster_size; i++) {
>> @@ -2197,7 +2193,7 @@ int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret,
>>
>>  	f2fs_wait_on_block_writeback(inode, blkaddr);
>>
>> -	if (bio_add_page(bio, page, blocksize, 0) < blocksize)
>> +	if (bio_add_page(bio, page, PAGE_SIZE, 0) < PAGE_SIZE)
>>  		goto submit_and_realloc;
>>
>>  	inc_page_count(sbi, F2FS_RD_DATA);
>> diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
>> index c2d38a1c4972..0f8be076620c 100644
>> --- a/fs/f2fs/file.c
>> +++ b/fs/f2fs/file.c
>> @@ -736,16 +736,9 @@ int f2fs_truncate_blocks(struct inode *inode, u64 from, bool lock)
>>  	 * for compressed file, only support cluster size
>>  	 * aligned truncation.
>>  	 */
>> -	if (f2fs_compressed_file(inode)) {
>> -		size_t cluster_shift = PAGE_SHIFT +
>> -				F2FS_I(inode)->i_log_cluster_size;
>> -		size_t cluster_mask = (1 << cluster_shift) - 1;
>> -
>> -		free_from = from >> cluster_shift;
>> -		if (from & cluster_mask)
>> -			free_from++;
>> -		free_from <<= cluster_shift;
>> -	}
>> +	if (f2fs_compressed_file(inode))
>> +		free_from = round_up(from,
>> +			F2FS_I(inode)->i_cluster_size << PAGE_SHIFT);
>>  #endif
>>
>>  	err = f2fs_do_truncate_blocks(inode, free_from, lock);
>> @@ -3537,7 +3530,7 @@ static int f2fs_release_compress_blocks(struct file *filp, unsigned long arg)
>>
>>  	end_offset = ADDRS_PER_PAGE(dn.node_page, inode);
>>  	count = min(end_offset - dn.ofs_in_node, last_idx - page_idx);
>> -	count = roundup(count, F2FS_I(inode)->i_cluster_size);
>> +	count = round_up(count, F2FS_I(inode)->i_cluster_size);
>>
>>  	ret = release_compress_blocks(&dn, count);
>>
>> @@ -3689,7 +3682,7 @@ static int f2fs_reserve_compress_blocks(struct file *filp, unsigned long arg)
>>
>>  	end_offset = ADDRS_PER_PAGE(dn.node_page, inode);
>>  	count = min(end_offset - dn.ofs_in_node, last_idx - page_idx);
>> -	count = roundup(count, F2FS_I(inode)->i_cluster_size);
>> +	count = round_up(count, F2FS_I(inode)->i_cluster_size);
>>
>>  	ret = reserve_compress_blocks(&dn, count);
>>
>> --
>> 2.18.0.rc1
> .
>
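
As a closing illustration of the helpers being swapped: roundup() works for any divisor because it divides, while round_up() relies on a power-of-two divisor and reduces to a mask operation, which is exactly the property the commit message cites for i_cluster_size. Below is a minimal userspace sketch with simplified stand-ins for the kernel macros (example values for the cluster size and page shift, not kernel code); it also checks that round_up(from, i_cluster_size << PAGE_SHIFT) matches the open-coded shift/mask logic removed from f2fs_truncate_blocks().

#include <assert.h>
#include <stdio.h>

/* Simplified stand-ins mirroring the kernel macros, for illustration only. */
#define roundup(x, y)	((((x) + (y) - 1) / (y)) * (y))		/* any divisor */
#define round_up(x, y)	(((x) + (y) - 1) & ~((unsigned long long)(y) - 1)) /* y must be a power of 2 */

int main(void)
{
	const unsigned int page_shift = 12;		/* example: 4KB pages */
	const unsigned long long cluster_size = 16;	/* example value; i_cluster_size is a power of 2 */
	const unsigned int log_cluster_size = 4;	/* log2(cluster_size) */
	unsigned long long from;

	for (from = 0; from < 3 * (cluster_size << page_shift); from += 4097) {
		/* old open-coded alignment from f2fs_truncate_blocks() */
		unsigned long long cluster_shift = page_shift + log_cluster_size;
		unsigned long long cluster_mask = (1ULL << cluster_shift) - 1;
		unsigned long long free_from = from >> cluster_shift;

		if (from & cluster_mask)
			free_from++;
		free_from <<= cluster_shift;

		/* the patch's replacement */
		assert(free_from == round_up(from, cluster_size << page_shift));
		/* roundup() computes the same value, just with a division */
		assert(free_from == roundup(from, cluster_size << page_shift));
	}
	printf("open-coded alignment, round_up() and roundup() agree\n");
	return 0;
}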