Subject: Re: [PATCH RFC 00/39] x86/KVM: Xen HVM guest support
To: Juergen Gross, Joao Martins
Cc: Paolo Bonzini, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    Boris Ostrovsky, Radim Krčmář, Thomas Gleixner, Ingo Molnar,
    Borislav Petkov,
Peter Anvin" , x86@kernel.org, Stefano Stabellini , xen-devel@lists.xenproject.org References: <20190220201609.28290-1-joao.m.martins@oracle.com> <35051310-c497-8ad5-4434-1b8426a317d2@redhat.com> <8b1f4912-4f92-69ae-ae01-d899d5640572@oracle.com> <3ee91f33-2973-c2db-386f-afbf138081b4@redhat.com> <59676804-786d-3df8-7752-8e45dec6d65b@oracle.com> <94738323-ebdf-d58e-55b6-313e27c923b0@oracle.com> <585163c2-8dea-728d-7556-9cb3559f0eca@suse.com> <97808492-58ee-337f-c894-900b34b7b1a5@oracle.com> <59deb041-2b5d-8451-32c7-644fe36e053b@suse.com> From: Ankur Arora Message-ID: <8e2e1d56-3490-365c-e2de-c3fd262518ba@oracle.com> Date: Tue, 9 Apr 2019 23:55:59 -0700 User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Thunderbird/60.4.0 MIME-Version: 1.0 In-Reply-To: <59deb041-2b5d-8451-32c7-644fe36e053b@suse.com> Content-Type: text/plain; charset=utf-8; format=flowed Content-Language: en-US Content-Transfer-Encoding: 7bit X-Proofpoint-Virus-Version: vendor=nai engine=5900 definitions=9222 signatures=668685 X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 suspectscore=0 malwarescore=0 phishscore=0 bulkscore=0 spamscore=0 mlxscore=0 mlxlogscore=999 adultscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.0.1-1810050000 definitions=main-1904100050 X-Proofpoint-Virus-Version: vendor=nai engine=5900 definitions=9222 signatures=668685 X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 priorityscore=1501 malwarescore=0 suspectscore=0 phishscore=0 bulkscore=0 spamscore=0 clxscore=1015 lowpriorityscore=0 mlxscore=0 impostorscore=0 mlxlogscore=999 adultscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.0.1-1810050000 definitions=main-1904100050 Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On 2019-04-08 10:04 p.m., Juergen Gross wrote: > On 08/04/2019 19:31, Joao Martins wrote: >> On 4/8/19 11:42 AM, Juergen Gross wrote: >>> On 08/04/2019 12:36, Joao Martins wrote: >>>> On 4/8/19 7:44 AM, Juergen Gross wrote: >>>>> On 12/03/2019 18:14, Joao Martins wrote: >>>>>> On 2/22/19 4:59 PM, Paolo Bonzini wrote: >>>>>>> On 21/02/19 12:45, Joao Martins wrote: >>>>>>>> On 2/20/19 9:09 PM, Paolo Bonzini wrote: >>>>>>>>> On 20/02/19 21:15, Joao Martins wrote: >>>>>>>>>> 2. PV Driver support (patches 17 - 39) >>>>>>>>>> >>>>>>>>>> We start by redirecting hypercalls from the backend to routines >>>>>>>>>> which emulate the behaviour that PV backends expect i.e. grant >>>>>>>>>> table and interdomain events. Next, we add support for late >>>>>>>>>> initialization of xenbus, followed by implementing >>>>>>>>>> frontend/backend communication mechanisms (i.e. grant tables and >>>>>>>>>> interdomain event channels). Finally, introduce xen-shim.ko, >>>>>>>>>> which will setup a limited Xen environment. This uses the added >>>>>>>>>> functionality of Xen specific shared memory (grant tables) and >>>>>>>>>> notifications (event channels). >>>>>>>>> >>>>>>>>> I am a bit worried by the last patches, they seem really brittle and >>>>>>>>> prone to breakage. I don't know Xen well enough to understand if the >>>>>>>>> lack of support for GNTMAP_host_map is fixable, but if not, you have to >>>>>>>>> define a completely different hypercall. >>>>>>>>> >>>>>>>> I guess Ankur already answered this; so just to stack this on top of his comment. >>>>>>>> >>>>>>>> The xen_shim_domain() is only meant to handle the case where the backend >>>>>>>> has/can-have full access to guest memory [i.e. 
>>>>>>>
>>>>>>> Ultimately it's up to the Xen people. It does make their API uglier,
>>>>>>> especially the in/out change for the parameter. If you can at least
>>>>>>> avoid that, it would alleviate my concerns quite a bit.
>>>>>>
>>>>>> In my view, we have two options overall:
>>>>>>
>>>>>> 1) Make explicit the changes we have to make to the PV drivers in order
>>>>>> to support xen_shim_domain(). This could mean e.g. a) adding a callback
>>>>>> argument to gnttab_map_refs() that is invoked for every page that gets
>>>>>> looked up successfully, and inside this callback the PV driver may
>>>>>> update its tracking page. Here we no longer have this in/out parameter
>>>>>> in gnttab_map_refs, and all shim_domain-specific bits would be a little
>>>>>> more abstracted from Xen PV backends. See the netback example below the
>>>>>> scissors mark. Or b) having a sort of translate_gref() and put_gref()
>>>>>> API that Xen PV drivers use, which makes it even more explicit that
>>>>>> there are no grant ops involved. The latter is more invasive.
>>>>>>
>>>>>> 2) The second option is to support guest grant mapping/unmapping [*] to
>>>>>> allow hosting PV backends inside the guest. This would remove the Xen
>>>>>> changes in this series completely. But it would require another guest
>>>>>> being used as netback/blkback/xenstored, and would give less performance
>>>>>> than 1) (though, in theory, it would be equivalent to what Xen does with
>>>>>> grants/events). The only change in Linux Xen code is adding xenstored
>>>>>> domain support, but that is useful on its own outside the scope of this
>>>>>> work.
>>>>>>
>>>>>> I think there's value in both; 1) is probably more familiar for KVM
>>>>>> users (as it is similar to what vhost does?), while 2) equates to
>>>>>> implementing Xen disaggregation capabilities in KVM.
>>>>>>
>>>>>> Thoughts? Xen maintainers, what's your take on this?
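
As a rough illustration of the two sub-options in 1) above, the prototypes
might look something like the following. These signatures are guesses for
discussion only (the netback example behind the scissors mark is not
reproduced here), not an existing or agreed-upon API; only the
gnttab_map_refs() prototype quoted in the comment reflects the current
interface:

/*
 * Hypothetical prototypes only -- neither variant exists; they merely
 * paraphrase options 1.a) and 1.b) above. For reference, today's
 * signature is roughly:
 *
 *   int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 *                       struct gnttab_map_grant_ref *kmap_ops,
 *                       struct page **pages, unsigned int count);
 */
#include <linux/mm.h>
#include <xen/interface/grant_table.h>

/*
 * 1.a) Callback variant: invoked once per successfully looked-up page so a
 * PV backend (e.g. netback) can update its own tracking page, removing the
 * need to treat 'pages' as an in/out parameter.
 */
typedef void (*gnttab_page_cb_t)(void *data, unsigned int idx,
				 struct page *page);

int gnttab_map_refs_cb(struct gnttab_map_grant_ref *map_ops,
		       struct gnttab_map_grant_ref *kmap_ops,
		       struct page **pages, unsigned int count,
		       gnttab_page_cb_t cb, void *data);

/*
 * 1.b) Explicit translate/put variant: no grant op is issued at all; the
 * backend asks for the page behind a gref and releases it when done.
 */
struct page *xen_translate_gref(domid_t dom, grant_ref_t ref);
void xen_put_gref(struct page *page);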
>>>>>
>>>>> What I'd like best would be a new handle (e.g. xenhost_t *) used as an
>>>>> abstraction layer for this kind of stuff. It should be passed to the
>>>>> backends and those would pass it on to low-level Xen drivers (xenbus,
>>>>> event channels, grant table, ...).
>>>>>
>>>> So, if I read this correctly, backends would use the xenhost layer to
>>>> access grants or frames referenced by grants, and that would cover some
>>>> of this. IOW, you would have two implementors of xenhost: one for nested
>>>> remote/local events+grants and another for this "shim domain"?
>>>
>>> As I'd need that for nested Xen, I guess that would make it 3 variants.
>>> Probably the xen-shim variant would need more hooks, but that should be
>>> no problem.
>>>
>> I probably messed up in the short description, but "nested remote/local
>> events+grants" was referring to nested Xen (FWIW, remote meant L0 and
>> local L1). So maybe only 2 variants are needed?
>
> I need one xenhost variant for the "normal" case as today: talking to
> the single hypervisor (or in the nested case: to the L1 hypervisor).
>
> Then I need a variant for the nested case talking to the L0 hypervisor.
>
> And you need a variant talking to xen-shim.
>
> The first two variants can be active in the same system in case of
> nested Xen: the backends of L2 dom0 are talking to the L1 hypervisor,
> while its frontends are talking with the L0 hypervisor.

Thanks, this is clarifying.

So, essentially, backend drivers get a xenhost_t handle and communicate
with the Xen low-level drivers etc. through that same handle; however, when
they communicate with frontend drivers to access the "real" world, they
exclusively use standard mechanisms (Linux or hypercalls)?

In this scenario L2 dom0 xen-netback and L2 dom0 xen-netfront should just
be able to use Linux interfaces. But if L2 dom0 xenbus-backend needs to
talk to L2 dom0 xenbus-frontend, then do you see them layered, or are they
still exclusively talking via the standard mechanisms?

Ankur

>
>>
>>>>> I was planning to do that (the xenhost_t * stuff) soon in order to add
>>>>> support for nested Xen using PV devices (you need two Xenstores for
>>>>> that, as the nested dom0 is acting as Xen backend server, while using
>>>>> PV frontends for accessing the "real" world outside).
>>>>>
>>>>> The xenhost_t should be used for:
>>>>>
>>>>> - accessing Xenstore
>>>>> - issuing and receiving events
>>>>> - doing hypercalls
>>>>> - grant table operations
>>>>>
>>>> In the text above, I sort of suggested a slice of this in 1.b) with a
>>>> translate_gref() and put_gref() API -- to get the page from a gref. This
>>>> was because of the flags|host_addr hurdle we depicted above wrt using
>>>> grant maps/unmaps. Do you think some of the xenhost layer would be
>>>> amenable to supporting this case?
>>>
>>> I think so, yes.
>>>
>>>>
>>>>> So exactly the kind of stuff you want to do, too.
>>>>>
>>>> Cool idea!
>>>
>>> In the end you might make my life easier for nested Xen. :-)
>>>
>> Hehe :)
>>
>>> Do you want to have a try with that idea or should I do that? I might be
>>> able to start working on that in about a month.
>>>
>> Ankur (CC'ed) will give it a shot, and should start a new thread on this
>> xenhost abstraction layer.
>
> Great, looking forward to it!
>
>
> Juergen
>
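
For illustration, one possible shape of the xenhost_t handle discussed in
this thread, with the regular/L1, nested-L0 and xen-shim cases each
supplying their own ops table. The struct layout and operation names below
are assumptions for discussion, not a proposed interface:

/*
 * Illustrative sketch of the xenhost_t idea only; the struct layout and
 * operation names are assumptions, not a proposed interface. Each variant
 * (regular/L1 Xen, nested L0, xen-shim) would provide its own ops table,
 * and backends would pass the handle down to the low-level xenbus,
 * event-channel and grant-table code.
 */
#include <linux/types.h>
#include <linux/mm.h>
#include <xen/interface/event_channel.h>
#include <xen/interface/grant_table.h>

typedef struct xenhost xenhost_t;

struct xenhost_ops {
	/* Hypercalls: native, via the L0 hypervisor, or shim_hypercall(). */
	long (*hypercall)(xenhost_t *xh, unsigned int op, void *arg);

	/* Xenstore access (a nested dom0 would need two Xenstores). */
	int  (*xs_read)(xenhost_t *xh, const char *path, char *val, size_t len);
	int  (*xs_write)(xenhost_t *xh, const char *path, const char *val);

	/* Issuing and receiving interdomain events. */
	int  (*evtchn_send)(xenhost_t *xh, evtchn_port_t port);

	/* Grant-table operations (or plain gref translation for xen-shim). */
	int  (*grant_map)(xenhost_t *xh, struct gnttab_map_grant_ref *ops,
			  struct page **pages, unsigned int count);
};

struct xenhost {
	const struct xenhost_ops *ops;
	void *priv;		/* variant-specific state */
};

A backend would then go through xh->ops (e.g. xh->ops->grant_map(xh, ...))
instead of calling HYPERVISOR_grant_table_op() directly, which is what
would let a xen-shim variant short-circuit into something like the
shim_hcall_gntmap() sketch earlier in this message.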