Date: Wed, 29 Jul 2015 16:32:08 -0300
From: Marcelo Tosatti <mtosatti@amt.cnet>
To: "Auld, Will"
Cc: "Shivappa, Vikas", Vikas Shivappa, linux-kernel@vger.kernel.org,
	x86@kernel.org, hpa@zytor.com, tglx@linutronix.de, mingo@kernel.org,
	tj@kernel.org, peterz@infradead.org, "Fleming, Matt",
	"Williamson, Glenn P", "Juvva, Kanaka D"
Subject: Re: [PATCH 3/9] x86/intel_rdt: Cache Allocation documentation and cgroup usage guide
Message-ID: <20150729193208.GC3201@amt.cnet>
In-Reply-To: <96EC5A4F3149B74492D2D9B9B1602C27461EB932@ORSMSX105.amr.corp.intel.com>

On Wed, Jul 29, 2015 at 01:28:38AM +0000, Auld, Will wrote:
> > > Whenever cgroupE has zero tasks, remove exclusivity (by allowing other
> > > cgroups to use the exclusive ways of it).
> >
> > Same comment as above - Cgroup masks can always overlap and other cgroups
> > can allocate the same cache, and hence won't have exclusive cache allocation.
> > [Auld, Will] You can define all the cbm to provide one clos with an exclusive area
> >
> > > So naturally the cgroup with tasks would get to use the cache if it has the same
> > > mask (say representing 50% of cache in your example) as others.
> >
> > [Auld, Will] automatic adjustment of the cbm makes me nervous. There are times
> > when we want to limit the cache for a process independent of whether there is
> > lots of unused cache.

How about this:

	desiredclos (closid  p1 p2 p3 p4)
		 1            1  0  0  0
		 2            0  0  0  1
		 3            0  1  1  0

p means part. closid 1 is an exclusive cgroup. closid 2 is a "cache hog"
class. closid 3 is the "default closid".

desiredclos is what the user has specified.

Transition 1: desiredclos --> effectiveclos.
Clean all bits of unused closids (this must be updated whenever a
closid 1 cgroup goes from empty to nonempty and vice versa).

	effectiveclos (closid  p1 p2 p3 p4)
		 1              0  0  0  0
		 2              0  0  0  1
		 3              0  1  1  0

Transition 2: effectiveclos --> expandedclos.

	expandedclos (closid  p1 p2 p3 p4)
		 1             0  0  0  0
		 2             0  0  0  1
		 3             1  1  1  0

Then you have a different inplacecos for each CPU (see pseudo-code
below), updated on the following events:

- task migration to a new pCPU
- task creation

	id = smp_processor_id();
	for (part = desiredclos.p1; ...; part++)
		/* if my cosid is set and any other cosid is clear
		 * for the part, synchronize desiredclos --> inplacecos */
		if (part[mycosid] == 1 && part[any_othercosid] == 0)
			wrmsr(part, desiredclos);