Date: Tue, 29 Jan 2019 17:27:23 +0100 (CET)
From: Thomas Gleixner
To: John Garry
Cc: Hannes Reinecke, Christoph Hellwig, Marc Zyngier, "axboe@kernel.dk",
    Keith Busch, Peter Zijlstra, Michael Ellerman, Linuxarm,
    "linux-kernel@vger.kernel.org", SCSI Mailing List
Subject: Re: Question on handling managed IRQs when hotplugging CPUs
References: <5bff8227-16fd-6bca-c16e-3992ef6bec5a@suse.com>

On Tue, 29 Jan 2019, John Garry wrote:
> On 29/01/2019 12:01, Thomas Gleixner wrote:
> > If the last CPU which is associated to a queue (and the corresponding
> > interrupt) goes offline, then the subsystem/driver code has to make sure
> > that:
> >
> > 1) No more requests can be queued on that queue
> >
> > 2) All outstanding requests of that queue have been completed or redirected
> >    (don't know if that's possible at all) to some other queue.
>
> This may not be possible. For the HW I deal with, we have symmetrical delivery
> and completion queues, and a command delivered on DQx will always complete on
> CQx. Each completion queue has a dedicated IRQ.

So you can stop queueing on DQx and wait for all outstanding ones to come
in on CQx, right?

> > That has to be done in that order obviously. Whether any of the
> > subsystems/drivers actually implements this, I can't tell.
>
> Going back to c5cb83bb337c25, it seems to me that the change was made with the
> idea that we can maintain the affinity for the IRQ as we're shutting it down,
> as no interrupts should occur.
>
> However I don't see why we can't instead keep the IRQ up and set the affinity
> to all online CPUs in the offline path, and restore the original affinity in
> the online path. The reason we set the queue affinity to specific CPUs is for
> performance, but I would not say that this matters for handling residual IRQs.

Oh yes it does. The problem is, especially on x86, that if you have a large
number of queues and you take a large number of CPUs offline, then you run
into vector space exhaustion on the remaining online CPUs.

In the worst case a single CPU on x86 has only 186 vectors available for
device interrupts. So just take a quad socket machine with 144 CPUs and
two multiqueue devices with a queue per CPU. ---> FAIL

It probably fails already with one device because there are lots of other
devices which have regular interrupts which cannot be shut down.

Thanks,

	tglx
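
[Editor's note: for illustration, the ordering described above -- stop queueing
first, then drain, and only then let the managed IRQ of CQx go down -- could
look roughly like the sketch below in a driver's CPU-offline path. The my_*
structure and helpers are hypothetical, chosen only to mirror the DQx/CQx
pairing discussed in the thread; they are not an existing kernel API, and the
submission path that increments ->inflight before posting to DQx is omitted.]

#include <linux/atomic.h>
#include <linux/compiler.h>
#include <linux/types.h>
#include <linux/wait.h>

/* Hypothetical per-queue state: delivery queue DQx paired with CQx. */
struct my_queue {
	bool			stopped;	/* no new requests accepted         */
	atomic_t		inflight;	/* sent on DQx, not yet seen on CQx */
	wait_queue_head_t	drain_wq;	/* woken by the completion handler  */
};

/* Called before the last CPU mapped to this queue goes offline. */
static void my_queue_quiesce(struct my_queue *q)
{
	/* 1) Stop queueing: the submission path must check q->stopped and
	 *    pick another queue from now on.                              */
	WRITE_ONCE(q->stopped, true);

	/* 2) Drain: wait until everything already sent on DQx has come back
	 *    on CQx.  Only after this is it safe for the managed IRQ of CQx
	 *    to be shut down, because nothing can be pending on it.        */
	wait_event(q->drain_wq, atomic_read(&q->inflight) == 0);
}

/* Completion path: accounts one CQx entry and wakes a pending drainer. */
static void my_queue_complete_one(struct my_queue *q)
{
	if (atomic_dec_and_test(&q->inflight))
		wake_up(&q->drain_wq);
}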
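
[Editor's note: the vector-exhaustion arithmetic at the end of the mail can be
checked with a few lines of user-space C. The 186, 144 and 2 are the figures
Thomas gives; treating 186 as a hard per-CPU budget and ignoring the regular,
non-managed interrupts is a simplification for illustration only.]

#include <stdio.h>

int main(void)
{
	const int vectors_per_cpu = 186;  /* worst-case device vectors on one x86 CPU */
	const int cpus = 144;             /* quad socket machine from the example     */
	const int devices = 2;            /* multiqueue devices, one queue per CPU    */
	const int queue_irqs = devices * cpus;

	/* If all CPUs but one were taken offline and every queue IRQ were kept
	 * alive and retargeted (instead of being shut down), the last online
	 * CPU would need a vector for each of them, before even counting the
	 * regular interrupts of other devices. */
	printf("%d queue IRQs vs. %d available vectors -> %s\n",
	       queue_irqs, vectors_per_cpu,
	       queue_irqs > vectors_per_cpu ? "vector space exhausted" : "fits");
	return 0;
}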