Date: Mon, 25 Mar 2019 14:27:23 +0100 (CET)
From: Thomas Gleixner
To: Peter Xu
Cc: Ming Lei, Christoph Hellwig, Jason Wang, Luiz Capitulino,
    Linux Kernel Mailing List, "Michael S. Tsirkin", minlei@redhat.com
Subject: Re: Virtio-scsi multiqueue irq affinity
In-Reply-To: <20190325094340.GJ9149@xz-x1>
References: <20190318062150.GC6654@xz-x1> <20190325050213.GH9149@xz-x1>
 <20190325070616.GA9642@ming.t460p> <20190325094340.GJ9149@xz-x1>

Peter,

On Mon, 25 Mar 2019, Peter Xu wrote:

> Now I understand it can be guaranteed, so it should not break the
> determinism of the real-time applications. But again, I'm curious
> whether we can specify how to spread the hardware queues of a block
> controller (as I asked in my previous post) instead of using the
> default (which spreads the queues across all the cores). I'll try to
> give a detailed example this time: let's assume we have a host with 2
> nodes and 8 cores (node 0 with CPUs 0-3, node 1 with CPUs 4-7), and a
> SCSI controller with 4 queues. We want to use the 2nd node to run the
> real-time applications, so we set isolcpus=4-7. By default, IIUC, the
> hardware queues will be allocated like this:
>
> - queue 1: CPUs 0,1
> - queue 2: CPUs 2,3
> - queue 3: CPUs 4,5
> - queue 4: CPUs 6,7
>
> And each queue's IRQ will be bound to the same cpuset that the queue
> is bound to.
>
> So my previous question is: since we know that CPUs 4-7 won't generate
> any IO after all (and they shouldn't), could it be possible to
> configure the system somehow to reflect a mapping like below:
>
> - queue 1: CPU 0
> - queue 2: CPU 1
> - queue 3: CPU 2
> - queue 4: CPU 3
>
> Then we disallow CPUs 4-7 from generating IO and return failure if
> they try to.
>
> Again, I'm pretty uncertain whether this case is anything close to
> useful... It just came out of pure curiosity. I think it at least has
> some benefits: we guarantee that the real-time CPUs won't send block
> IO requests (which is good, because such requests could break
> real-time determinism), and we save two queues from sitting totally
> idle (so if we run non-real-time block applications on cores 0-3 we
> still get the throughput of 4 hardware queues rather than 2).

If that _IS_ useful, then the affinity spreading logic can be changed to
accommodate that. It's not really hard to do, but we'd need a proper use
case as justification.

Thanks,

	tglx
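
For reference, both mappings in the example above fall out of a plain
contiguous spread of queues over a CPU list. The snippet below is a
minimal userspace sketch of such a spread; it is not the kernel's
irq_create_affinity_masks() implementation (which additionally groups
CPUs by NUMA node), and the CPU lists and queue count are simply the
ones assumed in the example.

/*
 * Minimal userspace sketch of spreading N hardware queues over a list
 * of CPUs.  NOT the kernel's irq_create_affinity_masks() logic; it only
 * illustrates the resulting queue -> CPU mapping of a contiguous spread.
 */
#include <stdio.h>

static void spread(const char *label, const int *cpus, int ncpus, int nqueues)
{
	printf("%s:\n", label);
	for (int q = 0; q < nqueues; q++) {
		/* Each queue gets an (almost) equal contiguous chunk of CPUs. */
		int first = q * ncpus / nqueues;
		int last = (q + 1) * ncpus / nqueues;

		printf("  queue %d:", q + 1);
		for (int i = first; i < last; i++)
			printf(" CPU %d", cpus[i]);
		printf("\n");
	}
}

int main(void)
{
	int all_cpus[] = { 0, 1, 2, 3, 4, 5, 6, 7 };	/* both nodes */
	int hk_cpus[]  = { 0, 1, 2, 3 };		/* node 0 only, isolcpus=4-7 excluded */

	spread("default spread over all CPUs", all_cpus, 8, 4);
	spread("spread over housekeeping CPUs only", hk_cpus, 4, 4);
	return 0;
}

The first call prints the default two-CPUs-per-queue layout from the
example; the second, restricted to CPUs 0-3, prints the
one-CPU-per-queue layout Peter asks about.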