Subject: Re: [PATCH RFC 1/1] genirq: Make threaded handler use irq affinity for managed interrupt
To: Marc Zyngier
CC: Ming Lei, linux-kernel@vger.kernel.org
From:
John Garry
Date: Tue, 10 Dec 2019 12:05:20 +0000
Message-ID: <06d1e2ff-9ec7-2262-25a0-4503cb204b0b@huawei.com>
In-Reply-To: <6e513d25d8b0c6b95d37a64df0c27b78@www.loen.fr>

On 10/12/2019 11:36, Marc Zyngier wrote:
> On 2019-12-10 10:59, John Garry wrote:
>>>> There is no lockup, just a potential performance boost in this change.
>>>>
>>>> My colleague Xiang Chen can provide specifics of the test, as he is
>>>> the one running it.
>>>>
>>>> One key bit of info, which I did not think most relevant before, is
>>>> that we have 2x SAS controllers running the throughput test on the
>>>> same host.
>>>>
>>>> As such, the completion queue interrupts would be spread identically
>>>> over the CPUs for each controller. I notice that the ARM GICv3 ITS
>>>> interrupt controller (which we use) does not use the generic irq
>>>> matrix allocator, which I think would really help with this.
>>>>
>>>> Hi Marc,
>>>>
>>>> Is there any reason why we couldn't utilise the generic irq matrix
>>>> allocator for GICv3?
>>
>> Hi Marc,
>>
>>> For a start, the ITS code predates the matrix allocator by about three
>>> years. Also, my understanding of this allocator is that it allows x86
>>> to cope with a very small number of possible interrupt vectors per
>>> CPU. The ITS doesn't have such an issue, as:
>>> 1) the namespace is global, and not per CPU
>>> 2) the namespace is *huge*
>>> Now, what property of the matrix allocator is the ITS code missing?
>>> I'd be more than happy to improve it.
>>
>> I think specifically the property that the matrix allocator will try
>> to find a CPU for irq affinity which "has the lowest number of managed
>> IRQs allocated" - I'm quoting the comment on
>> matrix_find_best_cpu_managed().
>
> But that decision is due to allocation constraints. You can have at
> most 256 interrupts per CPU, so the allocator tries to balance it.
>
> On the contrary, the ITS doesn't care about how many interrupts target
> any given CPU. The whole 2^24 interrupt namespace can be thrown at a
> single CPU.
>
>> The ITS code will make the lowest online CPU in the affinity mask the
>> target CPU for the interrupt, which may result in some CPUs handling
>> too many interrupts.
>
> If what you want is for the *default* affinity to be spread around,
> that should be achieved pretty easily. Let me have a think about how
> to do that.

Cool, I anticipate that it should help my case. I can also seek out some
NVMe cards to see how it would help a more "generic" scenario.

Cheers,
John