Date: Tue, 27 Feb 2018 11:36:52 +0100 (CET)
From: Thomas Gleixner
To: Reinette Chatre
cc: fenghua.yu@intel.com, tony.luck@intel.com, gavin.hindman@intel.com,
    vikas.shivappa@linux.intel.com, dave.hansen@intel.com,
    mingo@redhat.com, hpa@zytor.com, x86@kernel.org,
    linux-kernel@vger.kernel.org
Subject: Re: [RFC PATCH V2 13/22] x86/intel_rdt: Support schemata write - pseudo-locking core
In-Reply-To: <73fb98d2-ce93-0443-b909-fde75908cc1e@intel.com>
Reinette,

On Mon, 26 Feb 2018, Reinette Chatre wrote:
> I started looking at how this implementation may look and would like
> to confirm with you that your intentions behind the new "exclusive"
> and "locked" modes can be maintained. I also have a few questions.

Phew :)

> Focusing on CAT, a resource group represents a closid across all
> domains (cache instances) of all resources (cache levels) on the
> system. A full schemata reflecting the active bitmask associated
> with this closid for each domain of each resource is maintained. The
> current implementation supports partial writes to the schemata, with
> the assumption that only the changed values need to be updated; the
> others remain as is. For the current implementation this works well,
> since what is shown by schemata reflects current hardware settings
> and what is written to schemata will change current hardware
> settings. This is done irrespective of any overlap between bitmasks
> of different closids (the "shareable" mode).

Right.

> A change to start us off with could be to initialize the schemata
> with all the shareable and unused bits set for all domains when a
> new resource group is created.

The new resource group initialization is the least of my worries. The
current mode is to use the default group setting, right?

> Moving to "exclusive" mode, it appears that, when enabled for a
> resource group, all domains of all resources are forced to have an
> "exclusive" region associated with this resource group (closid).
> This is because the schemata reflects the hardware settings of all
> resources and their domains, and the hardware does not accept a
> "zero" bitmask. A user thus cannot just specify a single region of a
> particular cache instance as "exclusive". Does this match your
> intention wrt "exclusive"?

Interesting question. I really did not think about that yet.

> Moving on to the "locked" mode. We cannot support different
> pseudo-locked regions across multiple resources (e.g. L2 and L3).
> In fact, if we did at some point in the future, then a pseudo-locked
> region on one resource could implicitly span a second resource.
> Additionally, we would like to allow a user to enable a single
> pseudo-locked region on a single cache instance.
>
> From the above it follows that "locked" mode cannot simply build on
> top of the "exclusive" mode rules (as I expressed them above), since
> it cannot enforce a locked region on each domain of each resource.
>
> We would like to support something like (as you also have in your
> example):
>
>   mkdir group
>   echo "L2:1=0x3" > schemata
>   echo locked > mode
>
> The above should only pseudo-lock the indicated region and not touch
> any other domain. The problem is that the schemata always contains
> non-zero bitmasks for all domains, so at the time "locked" is
> written it is not known which cache region needs to be locked. I am
> currently unable to see a simple way to build on top of the current
> schemata design to support the "locked" mode as you intended. It
> does seem as though the user's intention to create a pseudo-locked
> region needs to be communicated before the schemata is written, but
> from what I understand this does not seem to be supported by the
> mode/schemata combination. Please do correct me where I am wrong.

You could make it:

   echo locksetup > mode
   echo $CONF > schemata
   echo locked > mode

Or something like that.
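Spelled out against your L2 example, purely as a sketch (the mode
names and the /sys/fs/resctrl paths are a strawman, nothing of this
exists yet):

   # create the group; schemata comes up with the default bits
   mkdir /sys/fs/resctrl/lock1
   # announce the intent to lock; nothing is locked yet
   echo locksetup > /sys/fs/resctrl/lock1/mode
   # only the domain written while in locksetup mode is selected for
   # locking; all other domains keep their existing masks
   echo "L2:1=0x3" > /sys/fs/resctrl/lock1/schemata
   # lock down the selected region
   echo locked > /sys/fs/resctrl/lock1/mode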
> To continue, when we overcome the above obstacle:
> A scenario could be where a single resource group will contain all
> the pseudo-locked regions (to avoid wasting closids). It is not
> clear to me how to easily support such a usage though, since writes
> to the schemata are "changes only". If, for example, two
> pseudo-locked regions exist:
>
>   # mkdir group
>   # echo "L2:1=0x3" > schemata
>   # echo locked > mode
>   # cat schemata
>   L2:1=0x3
>   # echo "L2:0=0xf" > schemata
>   # cat schemata
>   L2:0=0xf;1=0x3
>
> How can the user remove one of the pseudo-locked regions without
> affecting the other? Could we perhaps allow zero bitmask writes when
> a region is locked?

That might work. Though it looks hacky.

> Another point I would like to highlight is that when we talked about
> keeping the closid associated with the pseudo-locked region, I
> mentioned that some resources may have few closids (for example, 4).
> As discussed, this seems ok when there are only 8 bits in the
> bitmask. What I did not highlight at that time is that the closids
> are limited to the smallest number supported by all resources. So,
> if this same platform has a second resource (with more bits in a
> bitmask) with more closids, they would also be limited to 4. In this
> case it does seem removing a closid from service would have a bigger
> impact.

Is that a real issue or just an academic exercise?

Let's assume it's real, so you could do the following:

   mkdir group              <- acquires closid
   echo locksetup > mode    <- creates 'lockarea' file
   echo L2:0 > lockarea
   echo 'L2:0=0xf' > schemata
   echo locked > mode       <- locks down all files, does the lock
                               setup and drops the closid

That would solve quite a few of the other issues as well. Hmm?

Thanks,

	tglx
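P.S.: To illustrate the consequence of dropping the closid after
locking: every pseudo-locked region could then live in its own group
without eating into the closid budget, and removing one region would
not affect any other. Again a pure sketch on top of the strawman
above, assuming that removing the group releases the locked region:

   # two independent pseudo-locked regions, each in its own group
   mkdir /sys/fs/resctrl/lock1 /sys/fs/resctrl/lock2
   # ... locksetup/lockarea/schemata/locked sequence for each ...
   # tear down one region; lock2 is not affected at all
   rmdir /sys/fs/resctrl/lock1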