Date: Tue, 29 Jan 2019 13:01:45 +0100 (CET)
From: Thomas Gleixner
To: Hannes Reinecke
Cc: John Garry, Christoph Hellwig, Marc Zyngier, "axboe@kernel.dk",
    Keith Busch, Peter Zijlstra, Michael Ellerman, Linuxarm,
    "linux-kernel@vger.kernel.org", SCSI Mailing List
Subject: Re: Question on handling managed IRQs when hotplugging CPUs
In-Reply-To: <5bff8227-16fd-6bca-c16e-3992ef6bec5a@suse.com>
References: <5bff8227-16fd-6bca-c16e-3992ef6bec5a@suse.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)

On Tue, 29 Jan 2019, Hannes Reinecke wrote:

> That actually is a very good question, and I have been wondering about
> this for quite some time.
>
> I find it a bit hard to envision a scenario where the IRQ affinity is
> automatically (and, more importantly, atomically!) re-routed to one of
> the other CPUs.
> And even if it were, chances are that there are checks in the driver
> _preventing_ them from handling those requests, seeing that they should
> have been handled by another CPU ...
>
> I guess the safest bet is to implement a 'cleanup' workqueue which is
> responsible for looking through all the outstanding commands (on all
> hardware queues), and then completing those for which no corresponding
> CPU / irq handler can be found.
>
> But I defer to the higher authorities here; maybe I'm totally wrong and
> it's already been taken care of.

TBH, I don't know. I was merely involved in the genirq side of this. But
yes, in order to make this work correctly, the basic contract for the CPU
hotplug case must be:

  If the last CPU which is associated with a queue (and the corresponding
  interrupt) goes offline, then the subsystem/driver code has to make sure
  that:

   1) No more requests can be queued on that queue

   2) All outstanding requests of that queue have been completed or
      redirected (don't know if that's possible at all) to some other
      queue.

That has to be done in that order, obviously. Whether any of the
subsystems/drivers actually implements this, I can't tell.

Thanks,

	tglx