Date: Wed, 5 Aug 2015 21:24:04 -0300
From: Marcelo Tosatti
To: Matt Fleming
Cc: Tejun Heo, Vikas Shivappa, Vikas Shivappa, linux-kernel@vger.kernel.org,
	x86@kernel.org, hpa@zytor.com, tglx@linutronix.de, mingo@kernel.org,
	peterz@infradead.org, matt.fleming@intel.com, will.auld@intel.com,
	glenn.p.williamson@intel.com, kanaka.d.juvva@intel.com
Subject: Re: [PATCH 5/9] x86/intel_rdt: Add new cgroup and Class of service management
Message-ID: <20150806002404.GA24422@amt.cnet>
References: <1435789270-27010-1-git-send-email-vikas.shivappa@linux.intel.com>
	<1435789270-27010-6-git-send-email-vikas.shivappa@linux.intel.com>
	<20150730194458.GD3504@mtj.duckdns.org>
	<20150802163157.GB32599@mtj.duckdns.org>
	<20150805122257.GD4332@codeblueprint.co.uk>
In-Reply-To: <20150805122257.GD4332@codeblueprint.co.uk>

On Wed, Aug 05, 2015 at 01:22:57PM +0100, Matt Fleming wrote:
> On Sun, 02 Aug, at 12:31:57PM, Tejun Heo wrote:
> >
> > But we're doing it the wrong way around.  You can do most of what
> > the cgroup interface can do with a syscall-like interface with some
> > inconvenience.  The other way doesn't really work.  As I wrote in
> > the other reply, cgroups is a horrible programmable interface and we
> > don't want individual applications to interact with it directly, and
> > CAT's use cases most definitely include each application programming
> > its own cache mask.
>
> I wager that this assertion is wrong. Having individual applications
> program their own cache mask is not going to be the most common
> scenario.

What I like about the syscall interface is that it moves the knowledge
of cache behaviour close to the application launch (or inside the
application itself), which allows the following common scenario, say on
a multi-purpose desktop:

Event: launch a high-performance application: use a cache reservation,
finish quickly.
Event: launch a cache-hog application: do not let it thrash the cache.

The two cache reservations are logically unrelated in terms of
configuration, and when configured separately they do not affect each
other, so they should be configured separately.

Also, a data/code reservation is specific to the application, so its
specification should live close to the application (it is just
cumbersome to maintain that data somewhere else).

> Only in very specific situations would you trust an
> application to do that.

Perhaps ulimit can be used to place a limit on what applications are
allowed to reserve.

> A much more likely use case is having the sysadmin carve up the cache
> for a workload which may include multiple, uncooperating applications.

Sorry, what does "cooperating" mean in this context?

> Yes, a programmable interface would be useful, but only for a limited
> set of workloads. I don't think it's how most people are going to want
> to use this hardware technology.

It seems the syscall interface handles all the use cases that the
cgroup interface handles.

> --
> Matt Fleming, Intel Open Source Technology Center

Tentative interface, please comment.
The "return key/use key" scheme would allow COSid sharing similarly to
shmget. Intra-application, that is functional, but I am not experienced
enough with shmget to judge whether there is a better alternative. We
would have to think about how a cross-application setup would work, and
about the simple "cacheset"-style configuration. Also, the interface
should work for other architectures (TODO item; PPC at least has
similar functionality).

enum cache_rsvt_flags {
	CACHE_RSVT_ROUND_UP	= (1 << 0),	/* round "kbytes" up */
	CACHE_RSVT_ROUND_DOWN	= (1 << 1),	/* round "kbytes" down */
	CACHE_RSVT_EXTAGENTS	= (1 << 2),	/* allow usage of area common
						   with external agents */
};

enum cache_rsvt_type {
	CACHE_RSVT_TYPE_CODE = 0,	/* cache reservation is for code */
	CACHE_RSVT_TYPE_DATA,		/* cache reservation is for data */
	CACHE_RSVT_TYPE_BOTH,		/* cache reservation is for code
					   and data */
};

struct cache_reservation {
	size_t kbytes;
	u32 type;
	u32 flags;
};

int sys_cache_reservation(struct cache_reservation *cv);

Returns -ENOMEM if there is not enough space, -EPERM if no permission.
Returns keyid > 0 if the reservation has been successful, copying the
actual number of kbytes reserved to "kbytes".

-----------------

int sys_use_cache_reservation_key(struct cache_reservation *cv, int key);

Returns -EPERM if no permission.
Returns -EINVAL if no such key exists.
Returns 0 if instantiation of the reservation has been successful,
copying the actual reservation to cv.

Backward compatibility for processors with no support for code/data
differentiation: by default, code and data cache allocation types fall
back to CACHE_RSVT_TYPE_BOTH on older processors (and the information
that they have done so is returned via "flags").
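To make the proposal a bit more concrete, here is a rough userspace
sketch of how a launcher could drive the two syscalls before exec'ing a
high-performance application. This is illustrative only: no syscall
numbers exist for a tentative interface, so __NR_cache_reservation and
__NR_use_cache_reservation_key below are placeholders, and the calls go
through syscall(2) directly.

/* Illustrative sketch only: the interface above is a proposal, so the
 * syscall numbers here are placeholders, not real assignments. */
#include <errno.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

#ifndef __NR_cache_reservation
#define __NR_cache_reservation		1000	/* placeholder */
#endif
#ifndef __NR_use_cache_reservation_key
#define __NR_use_cache_reservation_key	1001	/* placeholder */
#endif

enum cache_rsvt_flags {
	CACHE_RSVT_ROUND_UP	= (1 << 0),
	CACHE_RSVT_ROUND_DOWN	= (1 << 1),
	CACHE_RSVT_EXTAGENTS	= (1 << 2),
};

enum cache_rsvt_type {
	CACHE_RSVT_TYPE_CODE = 0,
	CACHE_RSVT_TYPE_DATA,
	CACHE_RSVT_TYPE_BOTH,
};

struct cache_reservation {
	size_t kbytes;
	uint32_t type;
	uint32_t flags;
};

int main(void)
{
	struct cache_reservation cv = {
		.kbytes	= 2048,			/* ask for 2MB of cache */
		.type	= CACHE_RSVT_TYPE_DATA,
		.flags	= CACHE_RSVT_ROUND_DOWN,
	};
	long key;

	/* Create the reservation; on success key > 0 and cv.kbytes
	 * holds the amount actually reserved. */
	key = syscall(__NR_cache_reservation, &cv);
	if (key <= 0) {
		perror("cache_reservation");
		return 1;
	}

	/* Attach the current task to the reservation.  On hardware
	 * without code/data differentiation the type would fall back
	 * to CACHE_RSVT_TYPE_BOTH, reported back through cv.flags. */
	if (syscall(__NR_use_cache_reservation_key, &cv, (int)key)) {
		perror("use_cache_reservation_key");
		return 1;
	}

	printf("reserved %zu kbytes, type %u, flags %#x\n",
	       cv.kbytes, (unsigned)cv.type, (unsigned)cv.flags);

	/* ... exec the high-performance application here ... */
	return 0;
}

The launcher pattern above is what the multi-purpose desktop scenario
earlier in the mail relies on: the reservation is created and attached
right before exec, so the knowledge of cache behaviour stays with the
launcher (or the application itself) rather than in a central
configuration maintained somewhere else.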