Subject: Re: [PATCH 1/1] block: move the PAGE_SECTORS definition into <linux/blkdev.h>
To: Coly Li
CC: Jens Axboe, Kent Overstreet, Alasdair Kergon, Mike Snitzer, dm-devel, linux-block, linux-kernel, linux-bcache
References: <20200821020345.3358-1-thunder.leizhen@huawei.com> <8aa638b7-6cfd-bf3d-8015-fbe59f28f31f@suse.de>
From: "Leizhen (ThunderTown)"
Date: Mon, 7 Sep 2020 15:39:53 +0800
In-Reply-To: <8aa638b7-6cfd-bf3d-8015-fbe59f28f31f@suse.de>
Hi, Jens Axboe, Alasdair Kergon, Mike Snitzer:

What's your opinion?

On 2020/8/21 15:05, Coly Li wrote:
> On 2020/8/21 14:48, Leizhen (ThunderTown) wrote:
>>
>> On 8/21/2020 12:11 PM, Coly Li wrote:
>>> On 2020/8/21 10:03, Zhen Lei wrote:
>>>> There are too many PAGE_SECTORS definitions, and all of them are the
>>>> same. It looks a bit of a mess. So why not move it into <linux/blkdev.h>,
>>>> so that there is a single, common definition.
>>>>
>>>> Signed-off-by: Zhen Lei
>>>
>>> A lazy question about page size > 4KB: currently the bcache code assumes
>>> the sector size is 512 bytes. If the kernel page size is > 4KB, is it
>>> possible that PAGE_SECTORS in bcache becomes a number > 8?
>>
>> Sorry, I don't fully understand your question. I know that the sector size
>> can be 512 or 4K, and PAGE_SIZE can be 4K or 64K. So even if the sector size
>> is 4K, PAGE_SECTORS would still be greater than 8 for 64K pages, wouldn't it?
>>
>> I'm not sure if the question you're asking is the one Matthew Wilcox has
>> answered before:
>> https://www.spinics.net/lists/raid/msg64345.html
>
> Thank you for the above information. Currently the bcache code assumes the
> sector size is always 512 bytes; you may see how many "<< 9" or ">> 9"
> conversions are used. Therefore I doubt whether the current code works
> stably on e.g. 4Kn SSDs (this is only a doubt, because I don't have such
> an SSD).
>
> Anyway, your patch is fine to me. This change to bcache doesn't make
> things worse or better; maybe it can help surface my above suspicion
> early if people do hit this kind of problem on 4Kn sector SSDs.
>
> For the bcache part of this patch, you may add,
> Acked-by: Coly Li
>
> Thanks.
>
> Coly Li
>
>>>> ---
>>>>  drivers/block/brd.c           | 1 -
>>>>  drivers/block/null_blk_main.c | 1 -
>>>>  drivers/md/bcache/util.h      | 2 --
>>>>  include/linux/blkdev.h        | 5 +++--
>>>>  include/linux/device-mapper.h | 1 -
>>>>  5 files changed, 3 insertions(+), 7 deletions(-)
>>>>
>>>
>>> [snipped]
>>>
>>>> diff --git a/drivers/md/bcache/util.h b/drivers/md/bcache/util.h
>>>> index c029f7443190805..55196e0f37c32c6 100644
>>>> --- a/drivers/md/bcache/util.h
>>>> +++ b/drivers/md/bcache/util.h
>>>> @@ -15,8 +15,6 @@
>>>>
>>>>  #include "closure.h"
>>>>
>>>> -#define PAGE_SECTORS		(PAGE_SIZE / 512)
>>>> -
>>>>  struct closure;
>>>>
>>>>  #ifdef CONFIG_BCACHE_DEBUG
>>>> diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
>>>> index bb5636cc17b91a7..b068dfc5f2ef0ab 100644
>>>> --- a/include/linux/blkdev.h
>>>> +++ b/include/linux/blkdev.h
>>>> @@ -949,11 +949,12 @@ static inline struct request_queue *bdev_get_queue(struct block_device *bdev)
>>>>   * multiple of 512 bytes. Hence these two constants.
>>>>   */
>>>>  #ifndef SECTOR_SHIFT
>>>> -#define SECTOR_SHIFT 9
>>>> +#define SECTOR_SHIFT		9
>>>>  #endif
>>>>  #ifndef SECTOR_SIZE
>>>> -#define SECTOR_SIZE (1 << SECTOR_SHIFT)
>>>> +#define SECTOR_SIZE		(1 << SECTOR_SHIFT)
>>>>  #endif
>>>> +#define PAGE_SECTORS		(PAGE_SIZE / SECTOR_SIZE)
>>>>
>>>>  /*
>>>>   * blk_rq_pos()		: the current sector
>>> [snipped]
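[Editor's note: not part of the thread. As a minimal sketch of the arithmetic Coly asks about, the hypothetical userspace program below mirrors the SECTOR_SIZE/PAGE_SECTORS macros that the patch consolidates, and shows that with the block layer's fixed 512-byte sector unit PAGE_SECTORS is 8 only for 4K pages and grows to 128 for 64K pages.]

/*
 * Standalone illustration (not kernel code) of the macros discussed above.
 * Build with: cc -o page_sectors page_sectors.c
 */
#include <stdio.h>

#define SECTOR_SHIFT	9			/* block layer sector unit: 512 bytes */
#define SECTOR_SIZE	(1UL << SECTOR_SHIFT)

/* Mirrors the PAGE_SECTORS definition the patch adds to <linux/blkdev.h>. */
static unsigned long page_sectors(unsigned long page_size)
{
	return page_size / SECTOR_SIZE;
}

int main(void)
{
	/* 4K pages (typical x86-64 config): PAGE_SECTORS == 8 */
	printf("PAGE_SIZE=4096  -> PAGE_SECTORS=%lu\n", page_sectors(4096));
	/* 64K pages (possible on arm64/ppc64 configs): PAGE_SECTORS == 128 */
	printf("PAGE_SIZE=65536 -> PAGE_SECTORS=%lu\n", page_sectors(65536));
	return 0;
}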