Date: Sun, 2 Sep 2018 14:02:30 +0200 (CEST)
From: Thomas Gleixner
To: Kashyap Desai
Cc: Ming Lei, Sumit Saxena, Christoph Hellwig, Linux Kernel Mailing List, Shivasharan Srikanteshwara, linux-block
Subject: RE: Affinity managed interrupts vs non-managed interrupts
In-Reply-To: <602cee6381b9f435a938bbaf852d07f9@mail.gmail.com>
References: <20180829084618.GA24765@ming.t460p> <300d6fef733ca76ced581f8c6304bac6@mail.gmail.com> <615d78004495aebc53807156d04d988c@mail.gmail.com> <486f94a563d63c4779498fe8829a546c@mail.gmail.com> <602cee6381b9f435a938bbaf852d07f9@mail.gmail.com>
On Fri, 31 Aug 2018, Kashyap Desai wrote:
> > Ok. I misunderstood the whole thing a bit. So your real issue is that
> > you want to have reply queues which are instantaneous, the per-CPU
> > ones, and then the extra 16 which do batching and are shared over a
> > set of CPUs, right?
>
> Yes, that is correct. The extra 16 (or whatever the number ends up
> being) should be shared over the set of CPUs of the *local* NUMA node
> of the PCI device.

Why restrict this to the local NUMA node of the device? That doesn't
really make sense if you queue lots of requests from CPUs on a
different node.

Why don't you spread these extra interrupts across all nodes and keep
the locality for the request/reply? That would also allow making them
properly managed interrupts: you could shut down the per-node batching
interrupts when all CPUs of that node are offlined, and you would avoid
the whole affinity hint / irq balancer hackery.

Thanks,

	tglx
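
For illustration, here is a minimal sketch of the two kinds of vectors
being discussed, using the existing pci_alloc_irq_vectors_affinity()
API: .post_vectors keeps the extra reply-queue vectors out of the
managed affinity spread, so they stay non-managed and the driver has to
place them via affinity hints by hand, which is exactly the hackery
referred to above. The function name and vector count are made up for
the example; only the PCI/irq API calls are real.

#include <linux/interrupt.h>
#include <linux/pci.h>

#define EXTRA_BATCH_VECTORS	16

static int example_setup_irqs(struct pci_dev *pdev)
{
	/*
	 * Reserve EXTRA_BATCH_VECTORS at the end of the vector space.
	 * pre_vectors/post_vectors are excluded from the managed
	 * affinity spread: they remain non-managed, so the driver (or
	 * irqbalance) must assign their affinity by hand, and they are
	 * not shut down when their CPUs go offline.
	 */
	struct irq_affinity desc = {
		.post_vectors = EXTRA_BATCH_VECTORS,
	};
	int nvec;

	/* One managed vector per CPU plus the shared batching ones. */
	nvec = pci_alloc_irq_vectors_affinity(pdev,
			1 + EXTRA_BATCH_VECTORS,
			num_possible_cpus() + EXTRA_BATCH_VECTORS,
			PCI_IRQ_MSIX | PCI_IRQ_AFFINITY,
			&desc);
	if (nvec < 0)
		return nvec;

	return 0;
}

Making the batching vectors properly managed and spread one per node,
as suggested above, would remove the manual placement, but the API at
this point has no such per-node spreading mode; that is the gap the
suggestion points at.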