Date: Mon, 3 Sep 2018 18:28:06 +0200 (CEST)
From: Thomas Gleixner
To: Kashyap Desai
Cc: Ming Lei, Sumit Saxena, Ming Lei, Christoph Hellwig,
    Linux Kernel Mailing List, Shivasharan Srikanteshwara, linux-block
Subject: RE: Affinity managed interrupts vs non-managed interrupts
In-Reply-To: <66256272c020be186becdd7a3f049302@mail.gmail.com>
References: <20180829084618.GA24765@ming.t460p>
    <300d6fef733ca76ced581f8c6304bac6@mail.gmail.com>
    <615d78004495aebc53807156d04d988c@mail.gmail.com>
    <486f94a563d63c4779498fe8829a546c@mail.gmail.com>
    <602cee6381b9f435a938bbaf852d07f9@mail.gmail.com>
    <66256272c020be186becdd7a3f049302@mail.gmail.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-Mailing-List:
    linux-kernel@vger.kernel.org

On Mon, 3 Sep 2018, Kashyap Desai wrote:
> I am using "for-4.19/block" and this particular patch "a0c9259
> irq/matrix: Spread interrupts on allocation" is included.

Can you please try against 4.19-rc2 or later?

> I can see that 16 extra reply queues via pre_vectors are still assigned to
> CPU 0 (effective affinity).
>
> irq 33, cpu list 0-71

The cpu list is irrelevant because that's the allowed affinity mask. The
effective one is what counts.

> # cat /sys/kernel/debug/irq/irqs/34
> node:     0
> affinity: 0-71
> effectiv: 0

So if all 16 have their effective affinity set to CPU0, then that's strange
at least. Can you please provide the output of
/sys/kernel/debug/irq/domains/VECTOR ?

> Ideally, what we are looking for 16 extra pre_vector reply queue is
> "effective affinity" to be within local numa node as long as that numa
> node has online CPUs. If not, we are ok to have effective cpu from any
> node.

Well, we surely can do the initial allocation and spreading on the local
numa node, but once all CPUs are offline on that node, then the whole
thing goes down the drain and allocates from where it sees fit. I'll
think about it some more, especially how to avoid the proliferation of
the affinity hint.

Thanks,

	tglx
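[Editorial sketch, not part of the original mail: one way to collect the
effective affinity for a whole block of reply-queue interrupts in a single
pass. The irq range 34-49 in the usage comment is illustrative, not taken
from the mail; on a live system this needs root and a mounted debugfs, so
the demo below runs on a captured sample in the exact format quoted above.]

```shell
#!/bin/sh
# eff: print the effective-affinity line of one irq debugfs node.
# On a live system (root, debugfs mounted) it would be invoked as:
#   for i in $(seq 34 49); do eff /sys/kernel/debug/irq/irqs/$i; done
eff() {
    # "effectiv" (sic) is the field name as printed by the irq debugfs.
    awk '/^effectiv/ { print FILENAME ": effective =", $2 }' "$1"
}

# Self-contained demo on a captured sample matching the quoted output:
cat > /tmp/irq34.sample <<'EOF'
node:     0
affinity: 0-71
effectiv: 0
EOF
eff /tmp/irq34.sample
```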