From: Marc Zyngier <maz@kernel.org>
To: John Garry <john.garry@huawei.com>
Cc: Ming Lei <ming.lei@redhat.com>, <tglx@linutronix.de>,
 "chenxiang (M)" <chenxiang66@hisilicon.com>, <bigeasy@linutronix.de>,
 <linux-kernel@vger.kernel.org>, <hare@suse.com>, <hch@lst.de>,
 <axboe@kernel.dk>, <bvanassche@acm.org>, <peterz@infradead.org>,
 <mingo@redhat.com>
Subject: Re: [PATCH RFC 1/1] genirq: Make threaded handler use irq affinity for managed interrupt
Date: Fri, 13 Dec 2019 10:31:10 +0000
In-Reply-To: <2443e657-2ccd-bf85-072c-284ea0b3ce40@huawei.com>
References: <1575642904-58295-1-git-send-email-john.garry@huawei.com>
 <1575642904-58295-2-git-send-email-john.garry@huawei.com>
 <20191207080335.GA6077@ming.t460p>
 <78a10958-fdc9-0576-0c39-6079b9749d39@huawei.com>
 <20191210014335.GA25022@ming.t460p>
 <28424a58-1159-c3f9-1efb-f1366993afcf@huawei.com>
 <048746c22898849d28985c0f65cf2c2a@www.loen.fr>
 <6e513d25d8b0c6b95d37a64df0c27b78@www.loen.fr>
 <06d1e2ff-9ec7-2262-25a0-4503cb204b0b@huawei.com>
 <5caa8414415ab35e74662ac0a30bb4ac@www.loen.fr>
 <2443e657-2ccd-bf85-072c-284ea0b3ce40@huawei.com>
Message-ID: <214947849a681fc702d018383a3f95ac@www.loen.fr>

Hi John,

On 2019-12-13 10:07, John Garry wrote:
> On 11/12/2019 09:41, John Garry wrote:
>> On 10/12/2019 18:32, Marc Zyngier wrote:
>>>>>> The ITS code will make the lowest online CPU in the affinity
>>>>>> mask the target CPU for the interrupt, which may result in some
>>>>>> CPUs handling so many interrupts.
>>>>> If what you want is for the *default* affinity to be spread
>>>>> around, that should be achieved pretty easily. Let me have a
>>>>> think about how to do that.
>>>> Cool, I anticipate that it should help my case.
>>>>
>>>> I can also seek out some NVMe cards to see how it would help a
>>>> more "generic" scenario.
>>> Can you give the following a go? It probably has all kind of warts
>>> on top of the quality debug information, but I managed to get my
>>> D05 and a couple of guests to boot with it. It will probably eat
>>> your data, so use caution! ;-)
>>>
>> Hi Marc,
>> Ok, we'll give it a spin.
>> Thanks,
>> John
>
> Hi Marc,
>
> JFYI, we're still testing this and the patch itself seems to work as
> intended.
>
> Here's the kernel log if you just want to see how the interrupts are
> getting assigned:
> https://pastebin.com/hh3r810g

It is a bit hard to make sense of this dump, especially on such a wide
machine (I want one!) without really knowing the topology of the
system.

> For me, I did get a performance boost for NVMe testing, but my
> colleague Xiang Chen saw a drop for our storage test of interest -
> that's the HiSi SAS controller. We're trying to make sense of it now.

One of the differences is that with this patch, the initial affinity
is picked inside the NUMA node that matches the ITS. In your case,
that's either node 0 or 2. But it is unclear which CPUs these map to.

Given that I see interrupts mapped to CPUs 0-23 on one side, and 48-71
on the other, it looks like half of your machine gets starved, and
that may be because no ITS targets the NUMA nodes those CPUs are part
of.

It would be interesting to see what happens if you manually set the
affinity of the interrupts outside of the NUMA node.

Thanks,

        M.
-- 
Jazz is not dead. It just smells funny...
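As a rough illustration of the "manually set the affinity" experiment
suggested above, the sketch below (not taken from this thread; the IRQ
number and CPU list are made-up placeholders) writes a CPU list to
/proc/irq/<irq>/smp_affinity_list from userspace. Note that fully
managed interrupts normally reject such writes, so this only applies
to IRQs whose affinity is user-settable in the configuration under
test.

/*
 * Minimal sketch: steer an IRQ to an explicit set of CPUs by writing
 * a CPU list (e.g. "24-47") to its procfs affinity file. The default
 * IRQ number (100) and CPU list ("24-47") are illustrative only.
 */
#include <stdio.h>
#include <stdlib.h>

static int set_irq_affinity(unsigned int irq, const char *cpu_list)
{
	char path[64];
	FILE *f;

	snprintf(path, sizeof(path), "/proc/irq/%u/smp_affinity_list", irq);
	f = fopen(path, "w");
	if (!f) {
		perror(path);
		return -1;
	}
	/* The kernel rejects the write (e.g. for managed IRQs); check both
	 * the write and the close, since procfs errors may surface on close. */
	if (fprintf(f, "%s\n", cpu_list) < 0) {
		perror("write");
		fclose(f);
		return -1;
	}
	return fclose(f) ? -1 : 0;
}

int main(int argc, char **argv)
{
	unsigned int irq = argc > 1 ? (unsigned int)atoi(argv[1]) : 100;
	const char *cpus = argc > 2 ? argv[2] : "24-47";

	return set_irq_affinity(irq, cpus) ? EXIT_FAILURE : EXIT_SUCCESS;
}

Invoked as, say, "./set_affinity 150 24-47", this would ask the kernel
to move IRQ 150 onto CPUs 24-47, i.e. onto the other NUMA node in the
topology being discussed.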