From: Ross Zwisler
To: Toshi Kani, Mike Snitzer, dm-devel@redhat.com
Cc: Ross Zwisler, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-xfs@vger.kernel.org
Subject: [PATCH v2 5/7] dm: remove DM_TYPE_DAX_BIO_BASED dm_queue_mode
Date: Tue, 29 May 2018 13:51:04 -0600
Message-Id: <20180529195106.14268-6-ross.zwisler@linux.intel.com>
In-Reply-To: <20180529195106.14268-1-ross.zwisler@linux.intel.com>
References: <20180529195106.14268-1-ross.zwisler@linux.intel.com>
X-Mailing-List: linux-kernel@vger.kernel.org

The DM_TYPE_DAX_BIO_BASED dm_queue_mode was introduced to prevent DM
devices that could possibly support DAX from transitioning into DM
devices that cannot support DAX. For example, the following transition
will currently fail:

 dm-linear: [fsdax pmem][fsdax pmem]
	 => [fsdax pmem][fsdax raw]
    DM_TYPE_DAX_BIO_BASED	DM_TYPE_BIO_BASED

but these will both succeed:

 dm-linear: [fsdax pmem][brd ramdisk]
	 => [fsdax pmem][fsdax raw]
    DM_TYPE_DAX_BIO_BASED	DM_TYPE_BIO_BASED

 dm-linear: [fsdax pmem][fsdax raw]
	 => [fsdax pmem][fsdax pmem]
    DM_TYPE_BIO_BASED		DM_TYPE_DAX_BIO_BASED

This seems arbitrary, as the choice of whether to use DAX is really
made at filesystem mount time. There is no guarantee that in the first
case (both legs fsdax pmem) the filesystem was mounted with the dax
option at all.

Instead, get rid of DM_TYPE_DAX_BIO_BASED and all the special casing
around it, and make the request queue's QUEUE_FLAG_DAX our one source
of truth: if it is set we can use DAX, and if not, we can't. Keep this
flag up to date in table_load() as the table changes. As with regular
block devices, the filesystem will then know at mount time whether DAX
is a supported mount option.
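The source-of-truth rule described above can be sketched in plain userspace C. This is a simplified model, not kernel code: the queue struct and flag constant are stand-ins for the kernel's request_queue and blk_queue_flag_set()/blk_queue_flag_clear() helpers.

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for QUEUE_FLAG_DAX on the request queue's flag word. */
#define QUEUE_FLAG_DAX	(1u << 0)

/* Simplified stand-in for struct request_queue. */
struct queue_stub {
	unsigned flags;
};

/*
 * Model of the new table_load() behavior: every time a table is
 * loaded, recompute QUEUE_FLAG_DAX from whether the new table
 * supports DAX, instead of freezing the decision into a dm_queue_mode.
 */
static void update_queue_dax_flag(struct queue_stub *q, bool table_supports_dax)
{
	if (table_supports_dax)
		q->flags |= QUEUE_FLAG_DAX;	/* blk_queue_flag_set() */
	else
		q->flags &= ~QUEUE_FLAG_DAX;	/* blk_queue_flag_clear() */
}
```

Because the flag is recomputed on each table load, a later table swap from a DAX-capable to a non-DAX-capable stack simply clears the flag rather than being rejected as a type change.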
Signed-off-by: Ross Zwisler
---
 drivers/md/dm-ioctl.c         | 16 ++++++----------
 drivers/md/dm-table.c         | 25 ++++++++++---------------
 drivers/md/dm.c               |  2 --
 include/linux/device-mapper.h |  8 ++++++--
 4 files changed, 22 insertions(+), 29 deletions(-)

diff --git a/drivers/md/dm-ioctl.c b/drivers/md/dm-ioctl.c
index 5acf77de5945..d1f86d0bb2d0 100644
--- a/drivers/md/dm-ioctl.c
+++ b/drivers/md/dm-ioctl.c
@@ -1292,15 +1292,6 @@ static int populate_table(struct dm_table *table,
 	return dm_table_complete(table);
 }
 
-static bool is_valid_type(enum dm_queue_mode cur, enum dm_queue_mode new)
-{
-	if (cur == new ||
-	    (cur == DM_TYPE_BIO_BASED && new == DM_TYPE_DAX_BIO_BASED))
-		return true;
-
-	return false;
-}
-
 static int table_load(struct file *filp, struct dm_ioctl *param, size_t param_size)
 {
 	int r;
@@ -1343,12 +1334,17 @@ static int table_load(struct file *filp, struct dm_ioctl *param, size_t param_si
 			DMWARN("unable to set up device queue for new table.");
 			goto err_unlock_md_type;
 		}
-	} else if (!is_valid_type(dm_get_md_type(md), dm_table_get_type(t))) {
+	} else if (dm_get_md_type(md) != dm_table_get_type(t)) {
 		DMWARN("can't change device type after initial table load.");
 		r = -EINVAL;
 		goto err_unlock_md_type;
 	}
 
+	if (dm_table_supports_dax(t))
+		blk_queue_flag_set(QUEUE_FLAG_DAX, md->queue);
+	else
+		blk_queue_flag_clear(QUEUE_FLAG_DAX, md->queue);
+
 	dm_unlock_md_type(md);
 
 	/* stage inactive table */
diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index 5bb994b012ca..ea5c4a1e6f5b 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -866,7 +866,6 @@ EXPORT_SYMBOL(dm_consume_args);
 static bool __table_type_bio_based(enum dm_queue_mode table_type)
 {
 	return (table_type == DM_TYPE_BIO_BASED ||
-		table_type == DM_TYPE_DAX_BIO_BASED ||
 		table_type == DM_TYPE_NVME_BIO_BASED);
 }
 
@@ -888,7 +887,7 @@ static int device_supports_dax(struct dm_target *ti, struct dm_dev *dev,
 	return bdev_dax_supported(dev->bdev, PAGE_SIZE);
 }
 
-static bool dm_table_supports_dax(struct dm_table *t)
+bool dm_table_supports_dax(struct dm_table *t)
 {
 	struct dm_target *ti;
 	unsigned i;
@@ -907,6 +906,7 @@ static bool dm_table_supports_dax(struct dm_table *t)
 
 	return true;
 }
+EXPORT_SYMBOL_GPL(dm_table_supports_dax);
 
 static bool dm_table_does_not_support_partial_completion(struct dm_table *t);
 
@@ -944,7 +944,6 @@ static int dm_table_determine_type(struct dm_table *t)
 			/* possibly upgrade to a variant of bio-based */
 			goto verify_bio_based;
 		}
-		BUG_ON(t->type == DM_TYPE_DAX_BIO_BASED);
 		BUG_ON(t->type == DM_TYPE_NVME_BIO_BASED);
 		goto verify_rq_based;
 	}
@@ -981,18 +980,14 @@ static int dm_table_determine_type(struct dm_table *t)
 verify_bio_based:
 	/* We must use this table as bio-based */
 	t->type = DM_TYPE_BIO_BASED;
-	if (dm_table_supports_dax(t) ||
-	    (list_empty(devices) && live_md_type == DM_TYPE_DAX_BIO_BASED)) {
-		t->type = DM_TYPE_DAX_BIO_BASED;
-	} else {
-		/* Check if upgrading to NVMe bio-based is valid or required */
-		tgt = dm_table_get_immutable_target(t);
-		if (tgt && !tgt->max_io_len && dm_table_does_not_support_partial_completion(t)) {
-			t->type = DM_TYPE_NVME_BIO_BASED;
-			goto verify_rq_based; /* must be stacked directly on NVMe (blk-mq) */
-		} else if (list_empty(devices) && live_md_type == DM_TYPE_NVME_BIO_BASED) {
-			t->type = DM_TYPE_NVME_BIO_BASED;
-		}
+
+	/* Check if upgrading to NVMe bio-based is valid or required */
+	tgt = dm_table_get_immutable_target(t);
+	if (tgt && !tgt->max_io_len && dm_table_does_not_support_partial_completion(t)) {
+		t->type = DM_TYPE_NVME_BIO_BASED;
+		goto verify_rq_based; /* must be stacked directly on NVMe (blk-mq) */
+	} else if (list_empty(devices) && live_md_type == DM_TYPE_NVME_BIO_BASED) {
+		t->type = DM_TYPE_NVME_BIO_BASED;
 	}
 	return 0;
 }
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 9728433362d1..0ce06fa292fd 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -2192,7 +2192,6 @@ int dm_setup_md_queue(struct mapped_device *md, struct dm_table *t)
 		}
 		break;
 	case DM_TYPE_BIO_BASED:
-	case DM_TYPE_DAX_BIO_BASED:
 		dm_init_normal_md_queue(md);
 		blk_queue_make_request(md->queue, dm_make_request);
 		break;
@@ -2910,7 +2909,6 @@ struct dm_md_mempools *dm_alloc_md_mempools(struct mapped_device *md, enum dm_qu
 
 	switch (type) {
 	case DM_TYPE_BIO_BASED:
-	case DM_TYPE_DAX_BIO_BASED:
 	case DM_TYPE_NVME_BIO_BASED:
 		pool_size = max(dm_get_reserved_bio_based_ios(), min_pool_size);
 		front_pad = roundup(per_io_data_size, __alignof__(struct dm_target_io)) + offsetof(struct dm_target_io, clone);
diff --git a/include/linux/device-mapper.h b/include/linux/device-mapper.h
index 31fef7c34185..cbf3d7e7ed33 100644
--- a/include/linux/device-mapper.h
+++ b/include/linux/device-mapper.h
@@ -27,8 +27,7 @@ enum dm_queue_mode {
 	DM_TYPE_BIO_BASED	 = 1,
 	DM_TYPE_REQUEST_BASED	 = 2,
 	DM_TYPE_MQ_REQUEST_BASED = 3,
-	DM_TYPE_DAX_BIO_BASED	 = 4,
-	DM_TYPE_NVME_BIO_BASED	 = 5,
+	DM_TYPE_NVME_BIO_BASED	 = 4,
 };
 
 typedef enum { STATUSTYPE_INFO, STATUSTYPE_TABLE } status_type_t;
@@ -460,6 +459,11 @@ void dm_table_add_target_callbacks(struct dm_table *t, struct dm_target_callback
  */
 void dm_table_set_type(struct dm_table *t, enum dm_queue_mode type);
 
+/*
+ * Check to see if this target type and all table devices support DAX.
+ */
+bool dm_table_supports_dax(struct dm_table *t);
+
 /*
  * Finally call this to make the table ready for use.
  */
-- 
2.14.3
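For reference, the dm_table_supports_dax() walk that the patch exports can be modeled in userspace C as below. The struct names are simplified stand-ins (the real kernel code iterates dm_target entries and calls bdev_dax_supported() on each underlying block device); the point is the all-or-nothing rule: one target without direct_access, or one device without DAX support, makes the whole table non-DAX.

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for a table device; records the bdev_dax_supported() result. */
struct dm_dev_stub {
	bool bdev_dax_supported;
};

/* Stand-in for struct dm_target plus its device list. */
struct dm_target_stub {
	bool has_direct_access;		/* ti->type->direct_access != NULL */
	struct dm_dev_stub *devs;
	unsigned num_devs;
};

struct dm_table_stub {
	struct dm_target_stub *targets;
	unsigned num_targets;
};

/*
 * Model of dm_table_supports_dax(): a table supports DAX only if every
 * target has a direct_access method and every device under every
 * target supports DAX.
 */
static bool table_supports_dax(const struct dm_table_stub *t)
{
	unsigned i, j;

	for (i = 0; i < t->num_targets; i++) {
		const struct dm_target_stub *ti = &t->targets[i];

		if (!ti->has_direct_access)
			return false;
		for (j = 0; j < ti->num_devs; j++)
			if (!ti->devs[j].bdev_dax_supported)
				return false;
	}
	return true;
}
```

This matches the commit-message examples: a dm-linear table over two fsdax pmem legs reports DAX support, while swapping one leg for an fsdax-raw device makes the same check return false, which table_load() then mirrors into QUEUE_FLAG_DAX.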