From: John Garry
Subject: [PATCH 01/11] blk-mq: Add blk_mq_init_queue_ops()
Date: Tue, 22 Mar 2022 18:39:35 +0800
Message-ID:
<1647945585-197349-2-git-send-email-john.garry@huawei.com>
X-Mailer: git-send-email 2.8.1
In-Reply-To: <1647945585-197349-1-git-send-email-john.garry@huawei.com>
References: <1647945585-197349-1-git-send-email-john.garry@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Add an API to allocate a request queue which accepts a custom set of
blk_mq_ops for that request queue. The reason we may want custom ops is
to queue requests which we do not want to go through the normal queuing
path.
Signed-off-by: John Garry
---
 block/blk-mq.c         | 23 +++++++++++++++++------
 drivers/md/dm-rq.c     |  2 +-
 include/linux/blk-mq.h |  5 ++++-
 3 files changed, 22 insertions(+), 8 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index f3bf3358a3bb..8ea3447339ca 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -3858,7 +3858,7 @@ void blk_mq_release(struct request_queue *q)
 }
 
 static struct request_queue *blk_mq_init_queue_data(struct blk_mq_tag_set *set,
-		void *queuedata)
+		void *queuedata, const struct blk_mq_ops *ops)
 {
 	struct request_queue *q;
 	int ret;
@@ -3867,27 +3867,35 @@ static struct request_queue *blk_mq_init_queue_data(struct blk_mq_tag_set *set,
 	if (!q)
 		return ERR_PTR(-ENOMEM);
 	q->queuedata = queuedata;
-	ret = blk_mq_init_allocated_queue(set, q);
+	ret = blk_mq_init_allocated_queue(set, q, ops);
 	if (ret) {
 		blk_cleanup_queue(q);
 		return ERR_PTR(ret);
 	}
+
 	return q;
 }
 
 struct request_queue *blk_mq_init_queue(struct blk_mq_tag_set *set)
 {
-	return blk_mq_init_queue_data(set, NULL);
+	return blk_mq_init_queue_data(set, NULL, NULL);
 }
 EXPORT_SYMBOL(blk_mq_init_queue);
 
+struct request_queue *blk_mq_init_queue_ops(struct blk_mq_tag_set *set,
+		const struct blk_mq_ops *custom_ops)
+{
+	return blk_mq_init_queue_data(set, NULL, custom_ops);
+}
+EXPORT_SYMBOL(blk_mq_init_queue_ops);
+
 struct gendisk *__blk_mq_alloc_disk(struct blk_mq_tag_set *set, void *queuedata,
 		struct lock_class_key *lkclass)
 {
 	struct request_queue *q;
 	struct gendisk *disk;
 
-	q = blk_mq_init_queue_data(set, queuedata);
+	q = blk_mq_init_queue_data(set, queuedata, NULL);
 	if (IS_ERR(q))
 		return ERR_CAST(q);
 
@@ -4010,13 +4018,16 @@ static void blk_mq_realloc_hw_ctxs(struct blk_mq_tag_set *set,
 }
 
 int blk_mq_init_allocated_queue(struct blk_mq_tag_set *set,
-		struct request_queue *q)
+		struct request_queue *q, const struct blk_mq_ops *custom_ops)
 {
 	WARN_ON_ONCE(blk_queue_has_srcu(q) !=
 			!!(set->flags & BLK_MQ_F_BLOCKING));
 
 	/* mark the queue as mq asap */
-	q->mq_ops = set->ops;
+	if (custom_ops)
+		q->mq_ops = custom_ops;
+	else
+		q->mq_ops = set->ops;
 
 	q->poll_cb = blk_stat_alloc_callback(blk_mq_poll_stats_fn,
 					     blk_mq_poll_stats_bkt,
diff --git a/drivers/md/dm-rq.c b/drivers/md/dm-rq.c
index 3907950a0ddc..9d93f72a3eec 100644
--- a/drivers/md/dm-rq.c
+++ b/drivers/md/dm-rq.c
@@ -560,7 +560,7 @@ int dm_mq_init_request_queue(struct mapped_device *md, struct dm_table *t)
 	if (err)
 		goto out_kfree_tag_set;
 
-	err = blk_mq_init_allocated_queue(md->tag_set, md->queue);
+	err = blk_mq_init_allocated_queue(md->tag_set, md->queue, NULL);
 	if (err)
 		goto out_tag_set;
 	return 0;
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index d319ffa59354..e12d17c86c52 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -688,8 +688,11 @@ struct gendisk *__blk_mq_alloc_disk(struct blk_mq_tag_set *set, void *queuedata,
 	__blk_mq_alloc_disk(set, queuedata, &__key);		\
 })
 struct request_queue *blk_mq_init_queue(struct blk_mq_tag_set *);
+struct request_queue *blk_mq_init_queue_ops(struct blk_mq_tag_set *,
+		const struct blk_mq_ops *custom_ops);
+
 int blk_mq_init_allocated_queue(struct blk_mq_tag_set *set,
-		struct request_queue *q);
+		struct request_queue *q, const struct blk_mq_ops *custom_ops);
 void blk_mq_unregister_dev(struct device *, struct request_queue *);
 
 int blk_mq_alloc_tag_set(struct blk_mq_tag_set *set);
-- 
2.26.2