Subject: Re: [PATCH 4/4] genirq: use irq's affinity for threaded irq with IRQF_RESCUE_THREAD
From: John Garry
To: Ming Lei, Thomas Gleixner
Cc: Long Li, Ingo Molnar, Peter Zijlstra, Keith Busch, Jens Axboe,
    Christoph Hellwig, Sagi Grimberg, Hannes Reinecke, chenxiang
Date: Fri, 6 Sep 2019 09:50:26 +0100
Message-ID: <0214c66d-6496-10b9-7e37-e5b37d3022ef@huawei.com>
In-Reply-To: <20190827085344.30799-5-ming.lei@redhat.com>
References: <20190827085344.30799-1-ming.lei@redhat.com>
 <20190827085344.30799-5-ming.lei@redhat.com>
List-ID: linux-kernel@vger.kernel.org
On 27/08/2019 09:53, Ming Lei wrote:
> In case of IRQF_RESCUE_THREAD, the threaded handler is only used to
> handle interrupt when IRQ flood comes, use irq's affinity for this thread
> so that scheduler may select other not too busy CPUs for handling the
> interrupt.
>
> Cc: Long Li
> Cc: Ingo Molnar
> Cc: Peter Zijlstra
> Cc: Keith Busch
> Cc: Jens Axboe
> Cc: Christoph Hellwig
> Cc: Sagi Grimberg
> Cc: John Garry
> Cc: Thomas Gleixner
> Cc: Hannes Reinecke
> Cc: linux-nvme@lists.infradead.org
> Cc: linux-scsi@vger.kernel.org
> Signed-off-by: Ming Lei
> ---
>  kernel/irq/manage.c | 13 ++++++++++++-
>  1 file changed, 12 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
> index 1566abbf50e8..03bc041348b7 100644
> --- a/kernel/irq/manage.c
> +++ b/kernel/irq/manage.c
> @@ -968,7 +968,18 @@ irq_thread_check_affinity(struct irq_desc *desc, struct irqaction *action)
>  	if (cpumask_available(desc->irq_common_data.affinity)) {
>  		const struct cpumask *m;
>
> -		m = irq_data_get_effective_affinity_mask(&desc->irq_data);
> +		/*
> +		 * Managed IRQ's affinity is setup gracefull on MUNA locality,

gracefully

> +		 * also if IRQF_RESCUE_THREAD is set, interrupt flood has been
> +		 * triggered, so ask scheduler to run the thread on CPUs
> +		 * specified by this interrupt's affinity.
> +		 */

Hi Ming,

> +		if ((action->flags & IRQF_RESCUE_THREAD) &&
> +		    irqd_affinity_is_managed(&desc->irq_data))

This doesn't look to solve the other issue I reported - that being that
we handle the interrupt in a threaded handler natively, and the hard
irq+threaded handler fully occupies the cpu, limiting throughput.

So can we expand the scope to cover that scenario also? I don't think
that it's right to solve that separately. So if we're continuing this
approach, can we add separate judgment for spreading the cpumask for the
threaded part? I've put a rough sketch of what I mean at the end of this
mail.

Thanks,
John

> +			m = desc->irq_common_data.affinity;
> +		else
> +			m = irq_data_get_effective_affinity_mask(
> +					&desc->irq_data);
>  		cpumask_copy(mask, m);
>  	} else {
>  		valid = false;
>
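To illustrate the "separate judgment" idea above: something like the
below is what I had in mind. Totally untested, and IRQF_SPREAD_THREAD is
just a placeholder name I made up for whatever flag or condition would
mark "spread an ordinary threaded handler as well" - the real check may
need more thought:

	if (cpumask_available(desc->irq_common_data.affinity)) {
		const struct cpumask *m;

		/*
		 * Let the scheduler spread the thread over the whole
		 * affinity mask of a managed interrupt, both for rescue
		 * threads and for ordinary threaded handlers, so that
		 * the hardirq plus its thread cannot saturate one CPU.
		 */
		if (irqd_affinity_is_managed(&desc->irq_data) &&
		    (action->flags & (IRQF_RESCUE_THREAD |
				      IRQF_SPREAD_THREAD)))
			m = desc->irq_common_data.affinity;
		else
			m = irq_data_get_effective_affinity_mask(
					&desc->irq_data);
		cpumask_copy(mask, m);
	} else {
		valid = false;
	}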