Date: Mon, 29 Jun 2020 21:42:16 +0200
From: Javier González
To: Niklas Cassel
Cc: Jonathan Corbet, Jens Axboe, Keith Busch, Christoph Hellwig,
	Sagi Grimberg, "James E.J. Bottomley", "Martin K. Petersen",
	linux-scsi@vger.kernel.org, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org,
	linux-block@vger.kernel.org
Subject: Re: [PATCH 2/2] block: add max_active_zones to blk-sysfs
Message-ID: <20200629194216.iykvd74rw6d7vqpl@MacBook-Pro.localdomain>
References: <20200616102546.491961-1-niklas.cassel@wdc.com>
 <20200616102546.491961-3-niklas.cassel@wdc.com>
In-Reply-To: <20200616102546.491961-3-niklas.cassel@wdc.com>

On 16.06.2020 12:25, Niklas Cassel wrote:
>Add a new max_active_zones definition in the sysfs documentation.
>This definition will be common for all devices utilizing the zoned block
>device support in the kernel.
>
>Export max_active_zones according to this new definition for NVMe Zoned
>Namespace devices, ZAC ATA devices (which are treated as SCSI devices by
>the kernel), and ZBC SCSI devices.
>
>Add the new max_active_zones struct member to the request_queue, rather
>than as a queue limit, since this property cannot be split across stacking
>drivers.
>
>For SCSI devices, even though max active zones is not part of the ZBC/ZAC
>spec, export max_active_zones as 0, signifying "no limit".
>
>Signed-off-by: Niklas Cassel
>---
> Documentation/block/queue-sysfs.rst |  7 +++++++
> block/blk-sysfs.c                   | 14 +++++++++++++-
> drivers/nvme/host/zns.c             |  1 +
> drivers/scsi/sd_zbc.c               |  1 +
> include/linux/blkdev.h              | 20 ++++++++++++++++++++
> 5 files changed, 42 insertions(+), 1 deletion(-)
>
>diff --git a/Documentation/block/queue-sysfs.rst b/Documentation/block/queue-sysfs.rst
>index f01cf8530ae4..f261a5c84170 100644
>--- a/Documentation/block/queue-sysfs.rst
>+++ b/Documentation/block/queue-sysfs.rst
>@@ -117,6 +117,13 @@ Maximum number of elements in a DMA scatter/gather list with integrity
> data that will be submitted by the block layer core to the associated
> block driver.
>
>+max_active_zones (RO)
>+---------------------
>+For zoned block devices (zoned attribute indicating "host-managed" or
>+"host-aware"), the sum of zones belonging to any of the zone states:
>+EXPLICIT OPEN, IMPLICIT OPEN or CLOSED, is limited by this value.
>+If this value is 0, there is no limit.
>+
> max_open_zones (RO)
> -------------------
> For zoned block devices (zoned attribute indicating "host-managed" or
>diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
>index fa42961e9678..624bb4d85fc7 100644
>--- a/block/blk-sysfs.c
>+++ b/block/blk-sysfs.c
>@@ -310,6 +310,11 @@ static ssize_t queue_max_open_zones_show(struct request_queue *q, char *page)
> 	return queue_var_show(queue_max_open_zones(q), page);
> }
>
>+static ssize_t queue_max_active_zones_show(struct request_queue *q, char *page)
>+{
>+	return queue_var_show(queue_max_active_zones(q), page);
>+}
>+
> static ssize_t queue_nomerges_show(struct request_queue *q, char *page)
> {
> 	return queue_var_show((blk_queue_nomerges(q) << 1) |
>@@ -677,6 +682,11 @@ static struct queue_sysfs_entry queue_max_open_zones_entry = {
> 	.show = queue_max_open_zones_show,
> };
>
>+static struct queue_sysfs_entry queue_max_active_zones_entry = {
>+	.attr = {.name = "max_active_zones", .mode = 0444 },
>+	.show = queue_max_active_zones_show,
>+};
>+
> static struct queue_sysfs_entry queue_nomerges_entry = {
> 	.attr = {.name = "nomerges", .mode = 0644 },
> 	.show = queue_nomerges_show,
>@@ -776,6 +786,7 @@ static struct attribute *queue_attrs[] = {
> 	&queue_zoned_entry.attr,
> 	&queue_nr_zones_entry.attr,
> 	&queue_max_open_zones_entry.attr,
>+	&queue_max_active_zones_entry.attr,
> 	&queue_nomerges_entry.attr,
> 	&queue_rq_affinity_entry.attr,
> 	&queue_iostats_entry.attr,
>@@ -803,7 +814,8 @@ static umode_t queue_attr_visible(struct kobject *kobj, struct attribute *attr,
> 	    (!q->mq_ops || !q->mq_ops->timeout))
> 		return 0;
>
>-	if (attr == &queue_max_open_zones_entry.attr &&
>+	if ((attr == &queue_max_open_zones_entry.attr ||
>+	     attr == &queue_max_active_zones_entry.attr) &&
> 	    !blk_queue_is_zoned(q))
> 		return 0;
>
>diff --git a/drivers/nvme/host/zns.c b/drivers/nvme/host/zns.c
>index af156529f3b6..502070763266 100644
>--- a/drivers/nvme/host/zns.c
>+++ b/drivers/nvme/host/zns.c
>@@ -83,6 +83,7 @@ int nvme_update_zone_info(struct gendisk *disk, struct nvme_ns *ns,
> 	q->limits.zoned = BLK_ZONED_HM;
> 	blk_queue_flag_set(QUEUE_FLAG_ZONE_RESETALL, q);
> 	blk_queue_max_open_zones(q, le32_to_cpu(id->mor) + 1);
>+	blk_queue_max_active_zones(q, le32_to_cpu(id->mar) + 1);
> free_data:
> 	kfree(id);
> 	return status;
>diff --git a/drivers/scsi/sd_zbc.c b/drivers/scsi/sd_zbc.c
>index aa3564139b40..d8b2c49d645b 100644
>--- a/drivers/scsi/sd_zbc.c
>+++ b/drivers/scsi/sd_zbc.c
>@@ -721,6 +721,7 @@ int sd_zbc_read_zones(struct scsi_disk *sdkp, unsigned char *buf)
> 		blk_queue_max_open_zones(q, 0);
> 	else
> 		blk_queue_max_open_zones(q, sdkp->zones_max_open);
>+	blk_queue_max_active_zones(q, 0);
> 	nr_zones = round_up(sdkp->capacity, zone_blocks) >> ilog2(zone_blocks);
>
> 	/* READ16/WRITE16 is mandatory for ZBC disks */
>diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
>index 2f332f00501d..3776140f8f20 100644
>--- a/include/linux/blkdev.h
>+++ b/include/linux/blkdev.h
>@@ -521,6 +521,7 @@ struct request_queue {
> 	unsigned long		*conv_zones_bitmap;
> 	unsigned long		*seq_zones_wlock;
> 	unsigned int		max_open_zones;
>+	unsigned int		max_active_zones;
> #endif /* CONFIG_BLK_DEV_ZONED */
>
> 	/*
>@@ -741,6 +742,17 @@ static inline unsigned int queue_max_open_zones(const struct request_queue *q)
> {
> 	return q->max_open_zones;
> }
>+
>+static inline void blk_queue_max_active_zones(struct request_queue *q,
>+		unsigned int max_active_zones)
>+{
>+	q->max_active_zones = max_active_zones;
>+}
>+
>+static inline unsigned int queue_max_active_zones(const struct request_queue *q)
>+{
>+	return q->max_active_zones;
>+}
> #else /* CONFIG_BLK_DEV_ZONED */
> static inline unsigned int blk_queue_nr_zones(struct request_queue *q)
> {
>@@ -764,6 +776,14 @@ static inline unsigned int queue_max_open_zones(const struct request_queue *q)
> {
> 	return 0;
> }
>+static inline void blk_queue_max_active_zones(struct request_queue *q,
>+		unsigned int max_active_zones)
>+{
>+}
>+static inline unsigned int queue_max_active_zones(const struct request_queue *q)
>+{
>+	return 0;
>+}
> #endif /* CONFIG_BLK_DEV_ZONED */
>
> static inline bool rq_is_sync(struct request *rq)
>--
>2.26.2

Looks good to me.

Reviewed-by: Javier González