From: Xunlei Pang <xlpang@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: Peter Zijlstra, Ingo Molnar, Steven Rostedt, Xunlei Pang
Subject: [PATCH v2 3/3] sched/cpudeadline: Cleanup of duplicate memory initialization
Date: Thu, 3 Dec 2015 12:45:01 +0800
Message-Id: <1449117901-9561-3-git-send-email-xlpang@redhat.com>
In-Reply-To: <1449117901-9561-1-git-send-email-xlpang@redhat.com>
References: <1449117901-9561-1-git-send-email-xlpang@redhat.com>

cpudl_init() already zeroes '*cp' with memset(), so we can use
alloc_cpumask_var() with __GFP_ZERO instead of zalloc_cpumask_var()
and avoid clearing the free_cpus mask a second time on systems
without CONFIG_CPUMASK_OFFSTACK set. Also drop the redundant
"cp->size = 0;", which the memset() covers as well.

Signed-off-by: Xunlei Pang <xlpang@redhat.com>
---
 kernel/sched/cpudeadline.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/kernel/sched/cpudeadline.c b/kernel/sched/cpudeadline.c
index 5a75b08..ca780ec 100644
--- a/kernel/sched/cpudeadline.c
+++ b/kernel/sched/cpudeadline.c
@@ -211,7 +211,6 @@ int cpudl_init(struct cpudl *cp)
 
 	memset(cp, 0, sizeof(*cp));
 	raw_spin_lock_init(&cp->lock);
-	cp->size = 0;
 
 	cp->elements = kcalloc(nr_cpu_ids,
 			       sizeof(struct cpudl_item),
@@ -219,7 +218,7 @@ int cpudl_init(struct cpudl *cp)
 	if (!cp->elements)
 		return -ENOMEM;
 
-	if (!zalloc_cpumask_var(&cp->free_cpus, GFP_KERNEL)) {
+	if (!alloc_cpumask_var(&cp->free_cpus, GFP_KERNEL | __GFP_ZERO)) {
 		kfree(cp->elements);
 		return -ENOMEM;
 	}
-- 
2.5.0
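
[Editor's note: for readers unfamiliar with the cpumask API, the
equivalence the changelog relies on follows from how the two helpers
are defined. The sketch below paraphrases the kernel-of-that-era
definitions from include/linux/cpumask.h and lib/cpumask.c (comments
added here; not part of the patch):]

	#ifdef CONFIG_CPUMASK_OFFSTACK
	/*
	 * cp->free_cpus is a separately allocated struct cpumask *, so
	 * the memset() of *cp does not touch the mask storage itself and
	 * the allocation must come back zeroed.  Passing __GFP_ZERO makes
	 * the allocator return zeroed memory, which is exactly what
	 * zalloc_cpumask_var() does internally:
	 */
	bool zalloc_cpumask_var(cpumask_var_t *mask, gfp_t flags)
	{
		return alloc_cpumask_var(mask, flags | __GFP_ZERO);
	}
	#else
	/*
	 * cp->free_cpus is embedded in struct cpudl, so the earlier
	 * memset(cp, 0, sizeof(*cp)) already cleared it.
	 * alloc_cpumask_var() is a no-op that ignores the gfp flags,
	 * whereas zalloc_cpumask_var() would clear the mask a second
	 * time -- the duplicate work this patch removes:
	 */
	static inline bool alloc_cpumask_var(cpumask_var_t *mask, gfp_t flags)
	{
		return true;
	}

	static inline bool zalloc_cpumask_var(cpumask_var_t *mask, gfp_t flags)
	{
		cpumask_clear(*mask);
		return true;
	}
	#endif

[So with CONFIG_CPUMASK_OFFSTACK the patched call is equivalent to the
old zalloc_cpumask_var(), and without it the redundant clear simply
disappears.]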