Date: Fri, 28 Jul 2017 09:10:10 +0100
From: "Daniel P. Berrange" <berrange@redhat.com>
To: Alex Williamson
Cc: "Gao, Ping A", "Tian, Kevin", kvm@vger.kernel.org, libvir-list@redhat.com, Jike Song, Zhenyu Wang, linux-kernel@vger.kernel.org, kwankhede@nvidia.com
Subject: Re: [libvirt] [RFC]Add new mdev interface for QoS
Message-ID: <20170728081010.GA31495@redhat.com>
References: <9951f9cf-89dd-afa4-a9f7-9a795e4c01af@intel.com> <20170726104343.5bfa51d5@w520.home> <20170727161748.GA2555@redhat.com> <20170727120158.2a48f9ea@w520.home>
In-Reply-To: <20170727120158.2a48f9ea@w520.home>

On Thu, Jul 27, 2017 at 12:01:58PM -0600, Alex Williamson wrote:
> On Thu, 27 Jul 2017 17:17:48 +0100
> "Daniel P. Berrange" <berrange@redhat.com> wrote:
>
> > On Wed, Jul 26, 2017 at 10:43:43AM -0600, Alex Williamson wrote:
> > > [cc +libvir-list]
> > >
> > > On Wed, 26 Jul 2017 21:16:59 +0800
> > > "Gao, Ping A" wrote:
> > >
> > > > vfio-mdev lets different guests share the same physical device
> > > > through mediated sharing. As a result, we need a way to control
> > > > that sharing: a QoS-related interface for mdev to manage virtual
> > > > device resources.
> > > >
> > > > E.g. in practice, vGPUs assigned to different guests almost always
> > > > have different performance requirements: some guests may need
> > > > higher priority for real-time usage, others may need a larger
> > > > share of the GPU to get higher 3D performance. Correspondingly,
> > > > we can define interfaces like weight/cap for overall budget
> > > > control and priority for per-submission control.
> > > >
> > > > So I suggest adding some common, vendor-agnostic attributes to
> > > > the mdev core sysfs for QoS purposes.
> > >
> > > I think what you're asking for is just some standardization of a QoS
> > > attribute_group which a vendor can optionally include within the
> > > existing mdev_parent_ops.mdev_attr_groups. The mdev core will
> > > transparently enable this, but it really only provides the standard;
> > > all of the support code is left to the vendor. I'm fine with that,
> > > but of course the trouble with any sort of standardization is
> > > arriving at an agreed-upon standard. Are there QoS knobs that are
> > > generic across any mdev device type? Are there others that are more
> > > specific to vGPU? Are there existing examples of this whose
> > > specification we can steal?
> > >
> > > Also, mdev devices are not necessarily the exclusive users of the
> > > hardware; we can have a native user such as a local X client.
> > > They're not an mdev user, so we can't support them via the
> > > mdev_attr_group.
> > > Does there need to be a per-parent QoS attribute_group standard for
> > > somehow defining the QoS of all the child mdev devices, or perhaps
> > > for representing the remaining host QoS attributes?
> > >
> > > Ultimately libvirt and upper-level management tools would be the
> > > consumers of these control knobs, so let's get libvirt involved in
> > > the discussion immediately. Thanks,
> >
> > My view on this from the libvirt side is pretty much unchanged since
> > the last time we discussed it.
> >
> > We would like the kernel maintainers to define standard sets of
> > properties for mdevs, whether global to all mdevs or scoped to
> > certain classes of mdev (e.g. class=gpu). These properties would be
> > exported in sysfs, with one file per property.
>
> Yes, I think that much of the mechanics is obvious (standardized sysfs
> layout, one property per file, properties under the device node in
> sysfs, etc.). Are you saying that you don't want to be consulted on
> which properties are exposed and how they operate, and therefore won't
> complain regardless of what we implement in the kernel? ;)

Well, ultimately the kernel maintainers know what is possible from the
hardware/driver POV, so yes, I think we can mostly leave it up to you
which individual things need to be exposed - not much different from
all the knobs we already expose for physical devices.

> I'm hoping that libvirt folks have some experience managing basic
> scheduling-level QoS attributes and might have some input as to what
> sorts of things work well vs. what seems like a good idea but falls
> apart or isn't useful in practice.

Sure, happy to give feedback where desired.

Regards,
Daniel

-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
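[Editor's sketch] To make the "one file per property" sysfs layout under
discussion concrete: since no such standard exists yet, the snippet below
simulates it with a temporary directory. Every path and attribute name here
(the mdev_qos/ group, weight, cap, priority) is a hypothetical placeholder
drawn from the proposal above, not an agreed kernel interface.

```shell
# Hypothetical layout only - a temp dir stands in for
# /sys/bus/mdev/devices/<uuid>, since no such standard exists yet.
MDEV=$(mktemp -d)

# One file per property, grouped under a QoS attribute directory.
mkdir -p "$MDEV/mdev_qos"
echo 50 > "$MDEV/mdev_qos/weight"    # relative share of the parent device
echo 80 > "$MDEV/mdev_qos/cap"       # hard ceiling, in percent
echo 2  > "$MDEV/mdev_qos/priority"  # submission scheduling priority

# A management tool such as libvirt would then read one value per file:
for prop in weight cap priority; do
    printf '%s=%s\n' "$prop" "$(cat "$MDEV/mdev_qos/$prop")"
done
# prints:
#   weight=50
#   cap=80
#   priority=2
```

The per-file granularity matters for the management layer: each knob can be
read, written, and permission-controlled independently, matching how sysfs
attributes behave for physical devices today.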