From: John Garry
Subject: Question on handling managed IRQs when hotplugging CPUs
To: Christoph Hellwig
CC: Marc Zyngier, axboe@kernel.dk, Keith Busch, Peter Zijlstra, Michael Ellerman, Linuxarm, linux-kernel@vger.kernel.org, Hannes Reinecke
Date: Tue, 29 Jan 2019 11:25:48 +0000

Hi,
I have a question on $subject which I hope you can shed some light on.

According to commit c5cb83bb337c25 ("genirq/cpuhotplug: Handle managed IRQs on CPU hotplug"), if we offline the last CPU in a managed IRQ affinity mask, the IRQ is shut down.

The reasoning is that this IRQ is thought to be associated with a specific queue on an MQ device, and the CPUs in the IRQ affinity mask are the same CPUs associated with the queue. So, if no CPU is using the queue, then there is no need for the IRQ.

However, how does this handle the scenario where the last CPU in the IRQ affinity mask is offlined while IO associated with the queue is still in flight? Or where we decide to use the queue associated with the current CPU, and then that CPU (being the last online CPU in the queue's IRQ affinity mask) goes offline and we finish the delivery on another CPU?

In these cases, when the IO completes, it would not be serviced and would time out. I have actually tried this on my arm64 system and I do see IO timeouts. (A rough sketch of the code path I am referring to is appended at the end of this mail.)

Thanks in advance,
John
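
P.S. For reference, my rough understanding of the check that commit adds in
kernel/irq/cpuhotplug.c:migrate_one_irq() is sketched below. This is a
paraphrase from memory rather than the actual source, and the surrounding
migration logic is elided; it assumes the kernel/irq internal helpers
(internals.h) that cpuhotplug.c already has access to:

/* Sketch only: paraphrased from commit c5cb83bb337c25, not the real code */
static bool migrate_one_irq_sketch(struct irq_desc *desc)
{
	struct irq_data *d = irq_desc_get_irq_data(desc);
	const struct cpumask *affinity = irq_data_get_affinity_mask(d);

	/* Does the affinity mask still contain an online CPU? */
	if (cpumask_any_and(affinity, cpu_online_mask) >= nr_cpu_ids) {
		if (irqd_affinity_is_managed(d)) {
			/*
			 * Managed IRQ: shut it down and leave the affinity
			 * untouched, so it can be restarted when a CPU in
			 * the mask comes back online. From this point on,
			 * any completion still in flight for the associated
			 * queue has no IRQ left to service it.
			 */
			irqd_set_managed_shutdown(d);
			irq_shutdown(desc);
			return false;
		}
		/* Non-managed IRQs have their affinity broken instead */
	}

	/* ... normal migration of non-managed IRQs elided ... */
	return true;
}

That is really my question in code form: as far as I can see, nothing between
the cpumask check and irq_shutdown() drains or fails over requests already
queued on the hardware queue bound to this IRQ.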