From: Prarit Bhargava
To: linux-kernel@vger.kernel.org
Cc: Prarit Bhargava, Andi Kleen, Michel Lespinasse, Seiji Aguchi,
	Yang Zhang, Paul Gortmaker, Janet Morgan, Tony Luck, Ruiv Wang,
	Gong Chen, "H. Peter Anvin", x86@kernel.org, Fengguang Wu
Subject: [PATCH] x86, cpu hotplug, use cpumask stack safe variant cpumask_var_t in check_irq_vectors_for_cpu_disable() [v2]
Date: Mon, 20 Jan 2014 13:57:58 -0500
Message-Id: <1390244278-30024-1-git-send-email-prarit@redhat.com>
In-Reply-To: <20140120085017.GB5243@gchen.bj.intel.com>
References: <20140120085017.GB5243@gchen.bj.intel.com>

kbuild, the 0day kernel build service, reports the warning:

  arch/x86/kernel/irq.c:333:1: warning: the frame size of 2056 bytes is
  larger than 2048 bytes [-Wframe-larger-than=]

because check_irq_vectors_for_cpu_disable() allocates two cpumasks on the
stack.  Fix this by using cpumask_var_t, the stack-safe cpumask variant.

Signed-off-by: Prarit Bhargava
Cc: Andi Kleen
Cc: Michel Lespinasse
Cc: Seiji Aguchi
Cc: Yang Zhang
Cc: Paul Gortmaker
Cc: Janet Morgan
Cc: Tony Luck
Cc: Ruiv Wang
Cc: Gong Chen
Cc: H. Peter Anvin
Cc: x86@kernel.org
Cc: Fengguang Wu

[v2]: switch from GFP_KERNEL to GFP_ATOMIC

(A minimal sketch of the cpumask_var_t allocate/copy/free pattern follows
the patch below.)
---
 arch/x86/kernel/irq.c | 35 +++++++++++++++++++++++++----------
 1 file changed, 25 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kernel/irq.c b/arch/x86/kernel/irq.c
index 4207e8d..c130cfa 100644
--- a/arch/x86/kernel/irq.c
+++ b/arch/x86/kernel/irq.c
@@ -269,15 +269,25 @@ EXPORT_SYMBOL_GPL(vector_used_by_percpu_irq);
  */
 int check_irq_vectors_for_cpu_disable(void)
 {
-	int irq, cpu;
+	int irq, cpu, ret = 0;
 	unsigned int this_cpu, vector, this_count, count;
 	struct irq_desc *desc;
 	struct irq_data *data;
-	struct cpumask affinity_new, online_new;
+	cpumask_var_t affinity_new, online_new;
+
+	if (!alloc_cpumask_var(&online_new, GFP_ATOMIC)) {
+		ret = -ENOMEM;
+		goto out;
+	}
+
+	if (!alloc_cpumask_var(&affinity_new, GFP_ATOMIC)) {
+		ret = -ENOMEM;
+		goto free_online_new;
+	}
 
 	this_cpu = smp_processor_id();
-	cpumask_copy(&online_new, cpu_online_mask);
-	cpu_clear(this_cpu, online_new);
+	cpumask_copy(online_new, cpu_online_mask);
+	__cpu_clear(this_cpu, online_new);
 
 	this_count = 0;
 	for (vector = FIRST_EXTERNAL_VECTOR; vector < NR_VECTORS; vector++) {
@@ -285,8 +295,8 @@ int check_irq_vectors_for_cpu_disable(void)
 		if (irq >= 0) {
 			desc = irq_to_desc(irq);
 			data = irq_desc_get_irq_data(desc);
-			cpumask_copy(&affinity_new, data->affinity);
-			cpu_clear(this_cpu, affinity_new);
+			cpumask_copy(affinity_new, data->affinity);
+			__cpu_clear(this_cpu, affinity_new);
 
 			/* Do not count inactive or per-cpu irqs. */
 			if (!irq_has_action(irq) || irqd_is_per_cpu(data))
@@ -307,8 +317,8 @@ int check_irq_vectors_for_cpu_disable(void)
 			 * mask is not zero; that is the down'd cpu is the
 			 * last online cpu in a user set affinity mask.
 			 */
-			if (cpumask_empty(&affinity_new) ||
-			    !cpumask_subset(&affinity_new, &online_new))
+			if (cpumask_empty(affinity_new) ||
+			    !cpumask_subset(affinity_new, online_new))
 				this_count++;
 		}
 	}
@@ -327,9 +337,14 @@ int check_irq_vectors_for_cpu_disable(void)
 	if (count < this_count) {
 		pr_warn("CPU %d disable failed: CPU has %u vectors assigned and there are only %u available.\n",
 			this_cpu, this_count, count);
-		return -ERANGE;
+		ret = -ERANGE;
 	}
-	return 0;
+
+	free_cpumask_var(affinity_new);
+free_online_new:
+	free_cpumask_var(online_new);
+out:
+	return ret;
 }
 
 /* A cpu has been removed from cpu_online_mask. Reset irq affinities. */
-- 
1.7.9.3
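
For readers unfamiliar with the API, here is a minimal, self-contained
sketch of the cpumask_var_t allocate/copy/free pattern the patch adopts.
The helper example_count_other_online_cpus() and its body are hypothetical
and not part of the patch; the cpumask calls themselves
(alloc_cpumask_var, cpumask_copy, cpumask_clear_cpu, cpumask_weight,
free_cpumask_var) are the stock kernel interfaces:

#include <linux/cpumask.h>
#include <linux/gfp.h>
#include <linux/smp.h>

/*
 * Hypothetical helper, for illustration only: count the online CPUs
 * other than the current one, without placing a struct cpumask on
 * the stack.
 */
static int example_count_other_online_cpus(void)
{
	cpumask_var_t online_new;
	int ret;

	/*
	 * With CONFIG_CPUMASK_OFFSTACK=y this allocates the mask with
	 * the given GFP flags; otherwise cpumask_var_t is a small
	 * on-stack array and the call always succeeds.  GFP_ATOMIC is
	 * used on the assumption that the caller cannot sleep, which is
	 * the same reason the patch moved from GFP_KERNEL in v2.
	 */
	if (!alloc_cpumask_var(&online_new, GFP_ATOMIC))
		return -ENOMEM;

	/* Note: no '&' -- a cpumask_var_t is already usable as a pointer. */
	cpumask_copy(online_new, cpu_online_mask);
	cpumask_clear_cpu(smp_processor_id(), online_new);

	ret = cpumask_weight(online_new);

	free_cpumask_var(online_new);
	return ret;
}

When CONFIG_CPUMASK_OFFSTACK is not set, alloc_cpumask_var() and
free_cpumask_var() compile away and the sketch behaves much like the
original on-stack code, so the heap allocation only occurs on
configurations with very large NR_CPUS, where the on-stack masks were the
cause of the frame-size warning.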