Date: Thu, 3 Dec 2015 19:17:14 +0100
From: Peter Zijlstra
To: bsegall@google.com
Cc: Waiman Long, Ingo Molnar, linux-kernel@vger.kernel.org, Yuyang Du,
	Paul Turner, Morten Rasmussen, Scott J Norton, Douglas Hatch
Subject: Re: [PATCH v2 2/3] sched/fair: Move hot load_avg into its own cacheline
Message-ID: <20151203181714.GT17308@twins.programming.kicks-ass.net>
References: <1449081710-20185-1-git-send-email-Waiman.Long@hpe.com>
	<1449081710-20185-3-git-send-email-Waiman.Long@hpe.com>
	<20151203111209.GX3816@twins.programming.kicks-ass.net>

On Thu, Dec 03, 2015 at 09:56:02AM -0800, bsegall@google.com wrote:
> Peter Zijlstra writes:
> > @@ -7402,11 +7405,12 @@ void __init sched_init(void)
> >  #endif /* CONFIG_RT_GROUP_SCHED */
> >
> >  #ifdef CONFIG_CGROUP_SCHED
> > +	task_group_cache = KMEM_CACHE(task_group, 0);
> > +
> >  	list_add(&root_task_group.list, &task_groups);
> >  	INIT_LIST_HEAD(&root_task_group.children);
> >  	INIT_LIST_HEAD(&root_task_group.siblings);
> >  	autogroup_init(&init_task);
> > -
> >  #endif /* CONFIG_CGROUP_SCHED */
> >
> >  	for_each_possible_cpu(i) {
> > --- a/kernel/sched/sched.h
> > +++ b/kernel/sched/sched.h
> > @@ -248,7 +248,12 @@ struct task_group {
> >  	unsigned long shares;
> >
> >  #ifdef CONFIG_SMP
> > -	atomic_long_t load_avg;
> > +	/*
> > +	 * load_avg can be heavily contended at clock tick time, so put
> > +	 * it in its own cacheline separated from the fields above which
> > +	 * will also be accessed at each tick.
> > +	 */
> > +	atomic_long_t load_avg ____cacheline_aligned;
> >  #endif
> >  #endif
> >
>
> This loses the cacheline-alignment for task_group, is that ok?

I'm a bit dense (it's late), can you spell that out? Did you mean my
killing SLAB_HWCACHE_ALIGN?

That should not matter, because:

#define KMEM_CACHE(__struct, __flags) kmem_cache_create(#__struct,\
		sizeof(struct __struct), __alignof__(struct __struct),\
		(__flags), NULL)

picks up the alignment explicitly.

And struct task_group having one cacheline-aligned member means that the
alignment of the composite object (the struct proper) must be an integer
multiple of this (the multiple typically being 1).
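To make that last point concrete, here is a minimal userspace sketch, not
from the original thread: it assumes a 64-byte cacheline and uses the plain
GCC/Clang aligned attribute in place of the kernel's ____cacheline_aligned,
with a hypothetical struct standing in for task_group. Aligning a single
member raises the alignment of the whole struct, which is exactly what the
__alignof__() inside KMEM_CACHE() then passes to kmem_cache_create():

#include <stdio.h>
#include <stddef.h>

#define CACHELINE_SIZE 64
#define cacheline_aligned __attribute__((__aligned__(CACHELINE_SIZE)))

struct task_group_like {
	unsigned long shares;			/* touched every tick */
	long load_avg cacheline_aligned;	/* pushed onto its own cacheline */
};

int main(void)
{
	/* The member's 64-byte alignment propagates to the containing struct... */
	printf("alignof(struct task_group_like) = %zu\n",
	       (size_t)__alignof__(struct task_group_like));

	/* ...and the annotated member itself starts on a cacheline boundary. */
	printf("offsetof(load_avg)              = %zu\n",
	       offsetof(struct task_group_like, load_avg));
	return 0;
}

Both values print as 64 here, so kmem_cache_create() would receive 64 as
its align argument and every allocation from the cache starts on a
cacheline boundary, even without SLAB_HWCACHE_ALIGN.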