Subject: Re: [PATCH] blk-mq: update hctx->cpumask at cpu-hotplug
From: Dongli Zhang
To: Wenbin Zeng
Cc: axboe@kernel.dk, keith.busch@intel.com, hare@suse.com, ming.lei@redhat.com,
    osandov@fb.com, sagi@grimberg.me, bvanassche@acm.org,
    linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, Wenbin Zeng
Date: Tue, 25 Jun 2019 09:30:16 +0800
In-Reply-To: <1561389847-30853-1-git-send-email-wenbinzeng@tencent.com>
References: <1561389847-30853-1-git-send-email-wenbinzeng@tencent.com>
Hi Wenbin,

On 6/24/19 11:24 PM, Wenbin Zeng wrote:
> Currently hctx->cpumask is not updated when hot-plugging new cpus,
> as there are many chances kblockd_mod_delayed_work_on() getting
> called with WORK_CPU_UNBOUND, workqueue blk_mq_run_work_fn may run
> on the newly-plugged cpus, consequently __blk_mq_run_hw_queue()
> reporting excessive "run queue from wrong CPU" messages because
> cpumask_test_cpu(raw_smp_processor_id(), hctx->cpumask) returns false.
>
> This patch added a cpu-hotplug handler into blk-mq, updating
> hctx->cpumask at cpu-hotplug.
>
> Signed-off-by: Wenbin Zeng
> ---
>  block/blk-mq.c         | 29 +++++++++++++++++++++++++++++
>  include/linux/blk-mq.h |  1 +
>  2 files changed, 30 insertions(+)
>
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index ce0f5f4..2e465fc 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -39,6 +39,8 @@
>  #include "blk-mq-sched.h"
>  #include "blk-rq-qos.h"
>
> +static enum cpuhp_state cpuhp_blk_mq_online;
> +
>  static void blk_mq_poll_stats_start(struct request_queue *q);
>  static void blk_mq_poll_stats_fn(struct blk_stat_callback *cb);
>
> @@ -2215,6 +2217,21 @@ int blk_mq_alloc_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags,
>  	return -ENOMEM;
>  }
>
> +static int blk_mq_hctx_notify_online(unsigned int cpu, struct hlist_node *node)
> +{
> +	struct blk_mq_hw_ctx *hctx;
> +
> +	hctx = hlist_entry_safe(node, struct blk_mq_hw_ctx, cpuhp_online);
> +
> +	if (!cpumask_test_cpu(cpu, hctx->cpumask)) {
> +		mutex_lock(&hctx->queue->sysfs_lock);
> +		cpumask_set_cpu(cpu, hctx->cpumask);
> +		mutex_unlock(&hctx->queue->sysfs_lock);
> +	}
> +
> +	return 0;
> +}
> +

As this callback is registered for each hctx, it is called once per hctx
whenever a cpu comes online.

Take a 4-queue nvme as an example (leaving other block devices such as
loop aside). Suppose cpu=2 (out of 0, 1, 2 and 3) is offline. When we
online cpu=2:

blk_mq_hctx_notify_online() called: cpu=2 and blk_mq_hw_ctx->queue_num=3
blk_mq_hctx_notify_online() called: cpu=2 and blk_mq_hw_ctx->queue_num=2
blk_mq_hctx_notify_online() called: cpu=2 and blk_mq_hw_ctx->queue_num=1
blk_mq_hctx_notify_online() called: cpu=2 and blk_mq_hw_ctx->queue_num=0

There is no need to set cpu 2 in the cpumask of
blk_mq_hw_ctx->queue_num=[3, 1, 0]. I am afraid this patch would
erroneously set the cpumask for blk_mq_hw_ctx->queue_num=[3, 1, 0]
(see the sketch at the end of this mail for one way the update could be
restricted to the right hctx).

I previously submitted the patch below, which explains the same issue
for the cpu-offline case; unfortunately it has not been merged yet.

https://patchwork.kernel.org/patch/10889307/

Another point: during initialization, hctx->cpumask should already be
set, even if the cpu is offline. Would you please explain the case in
which hctx->cpumask is not set correctly, e.g., how to reproduce it with
a kvm guest running scsi/virtio/nvme/loop?

Dongli Zhang
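P.S. For illustration only, below is a minimal, untested sketch of the
kind of guard I have in mind: skip any hctx whose queue mapping does not
assign it the onlined cpu. It assumes the blk_mq_queue_map layout of
current mainline (set->map[hctx->type].mq_map[]) and relies on the
cpuhp_online member your patch adds to struct blk_mq_hw_ctx; it is not a
tested or submitted patch.

static int blk_mq_hctx_notify_online(unsigned int cpu, struct hlist_node *node)
{
	struct blk_mq_hw_ctx *hctx =
		hlist_entry_safe(node, struct blk_mq_hw_ctx, cpuhp_online);
	struct blk_mq_tag_set *set = hctx->queue->tag_set;

	/*
	 * The multi-instance cpuhp callback runs once per registered hctx,
	 * so bail out unless the queue mapping assigns this cpu to us.
	 */
	if (set->map[hctx->type].mq_map[cpu] != hctx->queue_num)
		return 0;

	if (!cpumask_test_cpu(cpu, hctx->cpumask)) {
		mutex_lock(&hctx->queue->sysfs_lock);
		cpumask_set_cpu(cpu, hctx->cpumask);
		mutex_unlock(&hctx->queue->sysfs_lock);
	}

	return 0;
}

With a check like that, onlining cpu 2 in the 4-queue nvme example above
would only touch the hctx that the queue mapping actually assigns cpu 2 to.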