Date: Tue, 5 May 2015 17:19:40 -0700 (PDT)
From: Vikas Shivappa
To: Peter Zijlstra
cc: Vikas Shivappa, vikas.shivappa@intel.com, linux-kernel@vger.kernel.org,
    x86@kernel.org, hpa@zytor.com, tglx@linutronix.de, mingo@kernel.org,
    tj@kernel.org, matt.fleming@intel.com, will.auld@intel.com,
    peter.zijlstra@intel.com, h.peter.anvin@intel.com, kanaka.d.juvva@intel.com
Subject: Re: [PATCH 4/7] x86/intel_rdt: Implement scheduling support for Intel RDT
In-Reply-To: <20150502185131.GI3007@worktop.Skamania.guest>
References: <1430530601-16319-1-git-send-email-vikas.shivappa@linux.intel.com>
            <1430530601-16319-5-git-send-email-vikas.shivappa@linux.intel.com>
            <20150502185131.GI3007@worktop.Skamania.guest>

On Sat, 2 May 2015, Peter Zijlstra wrote:

> On Fri, May 01, 2015 at 06:36:38PM -0700, Vikas Shivappa wrote:
>> Adds support for IA32_PQR_ASSOC MSR writes during task scheduling.
>>
>> The high 32 bits of the per-processor MSR IA32_PQR_ASSOC represent the
>> CLOSid. During context switch the kernel implements this by writing the
>> CLOSid of the cgroup to which the task belongs to the CPU's
>> IA32_PQR_ASSOC MSR.
>>
>> For Cache Allocation, this lets the task fill in the cache 'subset'
>> represented by the cgroup's Cache Bit Mask (CBM).
>>
>
> Are you guys for real?
> Have you even looked at the trainwreck this makes?

The h/w tags the cache lines with the CLOSid, hence we associate the id with
the tasks scheduled. The following considerations keep the effect of the PQR
MSR write on the scheduling hot path minimal:

- The code does not exist on any non-Intel platform.
- On Intel platforms, it does not exist by default unless CGROUP_RDT is
  enabled.
- It remains a no-op when CGROUP_RDT is enabled but the Intel hardware does
  not support the feature.
- When the feature is available and RDT is enabled, no MSR write is done
  until the user manually creates a cgroup *and* assigns a new cache mask.
  Since a child node inherits its parent's cache mask, cgroup creation alone
  has no scheduling hot path impact.
- Per-CPU PQR values are cached, and the MSR write is only done when a task
  with a different PQR value is scheduled on the CPU. Typically, if the task
  groups are bound to be scheduled on a set of CPUs, the number of MSR
  writes is greatly reduced.

However, Matt pointed out I could improve this to

	if (static_key_false)  {
		rdt_sched_in();
	}

instead of a static inline, which I will update.

Will update the commit message to include these details. I can also improve
the code to not enable the feature until a user creates a new cgroup and
assigns a new bitmask.

Thanks,
Vikas

>
>> +static inline bool rdt_enabled(void)
>> +{
>> +	return static_key_false(&rdt_enable_key);
>> +}
>
>> +/*
>> + * rdt_sched_in() - Writes the task's CLOSid to IA32_PQR_MSR
>> + * if the current Closid is different than the new one.
>> + */
>> +static inline void rdt_sched_in(struct task_struct *task)
>> +{
>> +	struct intel_rdt *ir;
>> +	unsigned int clos;
>> +
>> +	if (!rdt_enabled())
>> +		return;
>> +
>> +	/*
>> +	 * This needs to be fixed
>> +	 * to cache the whole PQR instead of just CLOSid.
>> +	 * PQR has closid in high 32 bits and CQM-RMID in low 10 bits.
>> +	 * Should not write a 0 to the low 10 bits of PQR
>> +	 * and corrupt RMID.
>> +	 */
>> +	clos = this_cpu_read(x86_cpu_clos);
>> +
>> +	rcu_read_lock();
>> +	ir = task_rdt(task);
>> +	if (ir->clos == clos) {
>> +		rcu_read_unlock();
>> +		return;
>> +	}
>> +
>> +	wrmsr(MSR_IA32_PQR_ASSOC, 0, ir->clos);
>> +	this_cpu_write(x86_cpu_clos, ir->clos);
>> +	rcu_read_unlock();
>> +}
>
> You inject _ALL_ that into the scheduler hot path. Insane much?
>
>> +
>> +#else
>> +
>> +static inline void rdt_sched_in(struct task_struct *task) {}
>> +
>> #endif
>> #endif
>> diff --git a/arch/x86/include/asm/switch_to.h b/arch/x86/include/asm/switch_to.h
>> index 751bf4b..82ef4b3 100644
>> --- a/arch/x86/include/asm/switch_to.h
>> +++ b/arch/x86/include/asm/switch_to.h
>> @@ -8,6 +8,9 @@ struct tss_struct;
>>  void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p,
>>  		      struct tss_struct *tss);
>>
>> +#include <asm/intel_rdt.h>
>> +#define post_arch_switch(current)	rdt_sched_in(current)
>> +
>>  #ifdef CONFIG_X86_32
>>
>>  #ifdef CONFIG_CC_STACKPROTECTOR
>
>> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
>> index f9123a8..cacb490 100644
>> --- a/kernel/sched/core.c
>> +++ b/kernel/sched/core.c
>> @@ -2241,6 +2241,7 @@ static struct rq *finish_task_switch(struct task_struct *prev)
>>  	prev_state = prev->state;
>>  	vtime_task_switch(prev);
>>  	finish_arch_switch(prev);
>> +	post_arch_switch(current);
>>  	perf_event_task_sched_in(prev, current);
>>  	finish_lock_switch(rq, prev);
>>  	finish_arch_post_lock_switch();
>
> Not a word in the Changelog on this hook; that's double fail.
>
> Please _THINK_ before writing code.