From: MaJun
Subject: [RFC PATCH] genirq: Allow a non-balanced irq to be balanced when the CPU it is bound to goes offline
Date: Fri, 1 Apr 2016 11:28:11 +0800
Message-ID: <1459481291-10136-1-git-send-email-majun258@huawei.com>

From: Ma Jun

When the CPU to which a non-balanced irq is bound goes offline, the irq
is migrated to another CPU, usually the first online CPU.

Consider a system with more than one such non-balanced irq. In the
extreme case, all of these irqs are migrated to the same CPU, which then
runs under heavy interrupt pressure and may even hang the system.

So I think we may need to turn a non-balanced irq back into a
balanceable one when its CPU goes offline, to avoid the problem
described above.

This may not be a good solution to the problem; please offer a
suggestion if you have a better one.

Signed-off-by: Ma Jun
---
 kernel/irq/cpuhotplug.c | 2 ++
 1 files changed, 2 insertions(+), 0 deletions(-)

diff --git a/kernel/irq/cpuhotplug.c b/kernel/irq/cpuhotplug.c
index 011f8c4..80d54a5 100644
--- a/kernel/irq/cpuhotplug.c
+++ b/kernel/irq/cpuhotplug.c
@@ -30,6 +30,8 @@ static bool migrate_one_irq(struct irq_desc *desc)
 		return false;
 
 	if (cpumask_any_and(affinity, cpu_online_mask) >= nr_cpu_ids) {
+		if (irq_settings_has_no_balance_set(desc))
+			irqd_clear(d, IRQD_NO_BALANCING);
 		affinity = cpu_online_mask;
 		ret = true;
 	}
-- 
1.7.1
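
Editor's note: for readers without the file open, below is a rough sketch
of how migrate_one_irq() flows with this change applied. It is an
approximation of kernel/irq/cpuhotplug.c around v4.5, not the literal
file: variable names (d, affinity, ret) follow the existing function,
irq_settings_has_no_balance_set() is assumed to be visible from
kernel/irq/settings.h, and the final affinity reprogramming is simplified.

/* Sketch only -- approximates kernel/irq/cpuhotplug.c, not verbatim. */
static bool migrate_one_irq(struct irq_desc *desc)
{
	struct irq_data *d = irq_desc_get_irq_data(desc);
	const struct cpumask *affinity = irq_data_get_affinity_mask(d);
	bool ret = false;

	/* Per-CPU irqs, or irqs not targeting this CPU, need no migration. */
	if (irqd_is_per_cpu(d) ||
	    !cpumask_test_cpu(smp_processor_id(), affinity))
		return false;

	/* No online CPU is left in the affinity mask: fall back to all
	 * online CPUs. */
	if (cpumask_any_and(affinity, cpu_online_mask) >= nr_cpu_ids) {
		/* Proposed change: drop the no-balance restriction so the
		 * irq can later be balanced away from the fallback CPU
		 * instead of piling up there with other non-balanced irqs. */
		if (irq_settings_has_no_balance_set(desc))
			irqd_clear(d, IRQD_NO_BALANCING);
		affinity = cpu_online_mask;
		ret = true;
	}

	/* Reprogram the affinity via the irq chip (error handling omitted). */
	irq_do_set_affinity(d, affinity, false);

	return ret;
}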