From: Paul Durrant
To: Paul Durrant, "xen-devel@lists.xenproject.org", "linux-kernel@vger.kernel.org", "x86@kernel.org"
CC: Boris Ostrovsky, Juergen Gross, Thomas Gleixner, Ingo Molnar
Subject: RE: [PATCH] xen/privcmd: add IOCTL_PRIVCMD_MMAP_RESOURCE
Date: Thu, 5 Apr 2018 13:58:56 +0000
Message-ID: <99e465a1e2d84f01958ab608b929872f@AMSPEX02CL03.citrite.net>
References: <20180405093206.3624-1-paul.durrant@citrix.com>
In-Reply-To: <20180405093206.3624-1-paul.durrant@citrix.com>
> -----Original Message-----
> From: Paul Durrant [mailto:paul.durrant@citrix.com]
> Sent: 05 April 2018 10:32
> To: xen-devel@lists.xenproject.org; linux-kernel@vger.kernel.org; x86@kernel.org
> Cc: Paul Durrant; Boris Ostrovsky; Juergen Gross; Thomas Gleixner; Ingo Molnar
> Subject: [PATCH] xen/privcmd: add IOCTL_PRIVCMD_MMAP_RESOURCE
>
> My recent Xen patch series introduces a new HYPERVISOR_memory_op to
> support direct priv-mapping of certain guest resources (such as ioreq
> pages, used by emulators) by a tools domain, rather than having to access
> such resources via the guest P2M.
>
> This patch adds the necessary infrastructure to the privcmd driver and
> Xen MMU code to support direct resource mapping.
>
> NOTE: The adjustment in the MMU code is partially cosmetic. Xen will now
> allow a PV tools domain to map guest pages either by GFN or MFN, thus
> the term 'gfn' has been swapped for 'pfn' in the lower layers of the
> remap code.
>
> Signed-off-by: Paul Durrant

Unfortunately I have just found a bug in this patch when it comes to
mapping multiple frames. I will send a v2 shortly.

Apologies for the noise.

  Paul

> ---
> Cc: Boris Ostrovsky
> Cc: Juergen Gross
> Cc: Thomas Gleixner
> Cc: Ingo Molnar
> ---
>  arch/x86/xen/mmu.c             |  50 ++++++++++++-----
>  drivers/xen/privcmd.c          | 119 +++++++++++++++++++++++++++++++++++++++++
>  include/uapi/xen/privcmd.h     |  11 ++++
>  include/xen/interface/memory.h |  67 +++++++++++++++++++++++
>  include/xen/interface/xen.h    |   7 +--
>  include/xen/xen-ops.h          |  24 ++++++++-
>  6 files changed, 260 insertions(+), 18 deletions(-)
>
> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
> index d33e7dbe3129..8453d7be415c 100644
> --- a/arch/x86/xen/mmu.c
> +++ b/arch/x86/xen/mmu.c
> @@ -65,37 +65,42 @@ static void xen_flush_tlb_all(void)
>  #define REMAP_BATCH_SIZE 16
>
>  struct remap_data {
> -	xen_pfn_t *mfn;
> +	xen_pfn_t *pfn;
>  	bool contiguous;
> +	bool no_translate;
>  	pgprot_t prot;
>  	struct mmu_update *mmu_update;
>  };
>
> -static int remap_area_mfn_pte_fn(pte_t *ptep, pgtable_t token,
> +static int remap_area_pfn_pte_fn(pte_t *ptep, pgtable_t token,
>  				 unsigned long addr, void *data)
>  {
>  	struct remap_data *rmd = data;
> -	pte_t pte = pte_mkspecial(mfn_pte(*rmd->mfn, rmd->prot));
> +	pte_t pte = pte_mkspecial(mfn_pte(*rmd->pfn, rmd->prot));
>
>  	/* If we have a contiguous range, just update the mfn itself,
>  	   else update pointer to be "next mfn". */
>  	if (rmd->contiguous)
> -		(*rmd->mfn)++;
> +		(*rmd->pfn)++;
>  	else
> -		rmd->mfn++;
> +		rmd->pfn++;
>
> -	rmd->mmu_update->ptr = virt_to_machine(ptep).maddr | MMU_NORMAL_PT_UPDATE;
> +	rmd->mmu_update->ptr = virt_to_machine(ptep).maddr;
> +	rmd->mmu_update->ptr |= rmd->no_translate ?
> +		MMU_PT_UPDATE_NO_TRANSLATE :
> +		MMU_NORMAL_PT_UPDATE;
>  	rmd->mmu_update->val = pte_val_ma(pte);
>  	rmd->mmu_update++;
>
>  	return 0;
>  }
>
> -static int do_remap_gfn(struct vm_area_struct *vma,
> +static int do_remap_pfn(struct vm_area_struct *vma,
>  			unsigned long addr,
> -			xen_pfn_t *gfn, int nr,
> +			xen_pfn_t *pfn, int nr,
>  			int *err_ptr, pgprot_t prot,
> -			unsigned domid,
> +			unsigned int domid,
> +			bool no_translate,
>  			struct page **pages)
>  {
>  	int err = 0;
> @@ -106,11 +111,12 @@ static int do_remap_gfn(struct vm_area_struct *vma,
>
>  	BUG_ON(!((vma->vm_flags & (VM_PFNMAP | VM_IO)) == (VM_PFNMAP | VM_IO)));
>
> -	rmd.mfn = gfn;
> +	rmd.pfn = pfn;
>  	rmd.prot = prot;
>  	/* We use the err_ptr to indicate if there we are doing a contiguous
>  	 * mapping or a discontigious mapping. */
>  	rmd.contiguous = !err_ptr;
> +	rmd.no_translate = no_translate;
>
>  	while (nr) {
>  		int index = 0;
> @@ -121,7 +127,7 @@ static int do_remap_gfn(struct vm_area_struct *vma,
>
>  		rmd.mmu_update = mmu_update;
>  		err = apply_to_page_range(vma->vm_mm, addr, range,
> -					  remap_area_mfn_pte_fn, &rmd);
> +					  remap_area_pfn_pte_fn, &rmd);
>  		if (err)
>  			goto out;
>
> @@ -175,7 +181,8 @@ int xen_remap_domain_gfn_range(struct vm_area_struct *vma,
>  	if (xen_feature(XENFEAT_auto_translated_physmap))
>  		return -EOPNOTSUPP;
>
> -	return do_remap_gfn(vma, addr, &gfn, nr, NULL, prot, domid, pages);
> +	return do_remap_pfn(vma, addr, &gfn, nr, NULL, prot, domid, false,
> +			    pages);
>  }
>  EXPORT_SYMBOL_GPL(xen_remap_domain_gfn_range);
>
> @@ -183,7 +190,7 @@ int xen_remap_domain_gfn_array(struct vm_area_struct *vma,
>  			       unsigned long addr,
>  			       xen_pfn_t *gfn, int nr,
>  			       int *err_ptr, pgprot_t prot,
> -			       unsigned domid, struct page **pages)
> +			       unsigned int domid, struct page **pages)
>  {
>  	if (xen_feature(XENFEAT_auto_translated_physmap))
>  		return xen_xlate_remap_gfn_array(vma, addr, gfn, nr, err_ptr,
> @@ -194,10 +201,25 @@ int xen_remap_domain_gfn_array(struct vm_area_struct *vma,
>  	 * cause of "wrong memory was mapped in".
>  	 */
>  	BUG_ON(err_ptr == NULL);
> -	return do_remap_gfn(vma, addr, gfn, nr, err_ptr, prot, domid, pages);
> +	return do_remap_pfn(vma, addr, gfn, nr, err_ptr, prot, domid,
> +			    false, pages);
>  }
>  EXPORT_SYMBOL_GPL(xen_remap_domain_gfn_array);
>
> +int xen_remap_domain_mfn_array(struct vm_area_struct *vma,
> +			       unsigned long addr,
> +			       xen_pfn_t *mfn, int nr,
> +			       int *err_ptr, pgprot_t prot,
> +			       unsigned int domid, struct page **pages)
> +{
> +	if (xen_feature(XENFEAT_auto_translated_physmap))
> +		return -EOPNOTSUPP;
> +
> +	return do_remap_pfn(vma, addr, mfn, nr, err_ptr, prot, domid,
> +			    true, pages);
> +}
> +EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_array);
> +
>  /* Returns: 0 success */
>  int xen_unmap_domain_gfn_range(struct vm_area_struct *vma,
>  			       int nr, struct page **pages)
> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
> index 1c909183c42a..e8b7e07658f2 100644
> --- a/drivers/xen/privcmd.c
> +++ b/drivers/xen/privcmd.c
> @@ -33,6 +33,7 @@
>  #include
>  #include
>  #include
> +#include
>  #include
>  #include
>  #include
> @@ -722,6 +723,120 @@ static long privcmd_ioctl_restrict(struct file *file, void __user *udata)
>  	return 0;
>  }
>
> +struct remap_pfn {
> +	struct mm_struct *mm;
> +	struct page **pages;
> +	pgprot_t prot;
> +	unsigned long i;
> +};
> +
> +static int remap_pfn(pte_t *ptep, pgtable_t token, unsigned long addr,
> +		     void *data)
> +{
> +	struct remap_pfn *r = data;
> +	struct page *page = r->pages[r->i];
> +	pte_t pte = pte_mkspecial(pfn_pte(page_to_pfn(page), r->prot));
> +
> +	set_pte_at(r->mm, addr, ptep, pte);
> +	r->i++;
> +
> +	return 0;
> +}
> +
> +static long privcmd_ioctl_mmap_resource(struct file *file, void __user *udata)
> +{
> +	struct privcmd_data *data = file->private_data;
> +	struct mm_struct *mm = current->mm;
> +	struct vm_area_struct *vma;
> +	struct privcmd_mmap_resource kdata;
> +	xen_pfn_t *pfns = NULL;
> +	struct xen_mem_acquire_resource xdata;
> +	int rc;
> +
> +	if (copy_from_user(&kdata, udata, sizeof(kdata)))
> +		return -EFAULT;
> +
> +	/* If restriction is in place, check the domid matches */
> +	if (data->domid != DOMID_INVALID && data->domid != kdata.dom)
> +		return -EPERM;
> +
> +	down_write(&mm->mmap_sem);
> +
> +	vma = find_vma(mm, kdata.addr);
> +	if (!vma || vma->vm_ops != &privcmd_vm_ops) {
> +		rc = -EINVAL;
> +		goto out;
> +	}
> +
> +	pfns = kcalloc(kdata.num, sizeof(*pfns), GFP_KERNEL);
> +	if (!pfns) {
> +		rc = -ENOMEM;
> +		goto out;
> +	}
> +
> +	if (xen_feature(XENFEAT_auto_translated_physmap)) {
> +		struct page **pages;
> +		unsigned int i;
> +
> +		rc = alloc_empty_pages(vma, kdata.num);
> +		if (rc < 0)
> +			goto out;
> +
> +		pages = vma->vm_private_data;
> +		for (i = 0; i < kdata.num; i++) {
> +			pfns[i] = page_to_pfn(pages[i]);
> +			pr_info("pfn[%u] = %p\n", i, (void *)pfns[i]);
> +		}
> +	} else
> +		vma->vm_private_data = PRIV_VMA_LOCKED;
> +
> +	memset(&xdata, 0, sizeof(xdata));
> +	xdata.domid = kdata.dom;
> +	xdata.type = kdata.type;
> +	xdata.id = kdata.id;
> +	xdata.frame = kdata.idx;
> +	xdata.nr_frames = kdata.num;
> +	set_xen_guest_handle(xdata.frame_list, pfns);
> +
> +	xen_preemptible_hcall_begin();
> +	rc = HYPERVISOR_memory_op(XENMEM_acquire_resource, &xdata);
> +	xen_preemptible_hcall_end();
> +
> +	if (rc)
> +		goto out;
> +
> +	if (xen_feature(XENFEAT_auto_translated_physmap)) {
> +		struct remap_pfn r = {
> +			.mm = vma->vm_mm,
> +			.pages = vma->vm_private_data,
> +			.prot = vma->vm_page_prot,
> +		};
> +
> +		rc = apply_to_page_range(r.mm, kdata.addr,
> +					 kdata.num << PAGE_SHIFT,
> +					 remap_pfn, &r);
> +	} else {
> +		unsigned int domid =
> +			(xdata.flags & XENMEM_rsrc_acq_caller_owned) ?
> +			DOMID_SELF : kdata.dom;
> +
> +		rc = xen_remap_domain_mfn_array(vma,
> +						kdata.addr & PAGE_MASK,
> +						pfns, kdata.num, NULL,
> +						vma->vm_page_prot,
> +						domid,
> +						vma->vm_private_data);
> +	}
> +
> +	rc = rc > 0 ? 0 : rc;
> +
> +out:
> +	kfree(pfns);
> +
> +	up_write(&mm->mmap_sem);
> +	return rc;
> +}
> +
>  static long privcmd_ioctl(struct file *file,
>  			  unsigned int cmd, unsigned long data)
>  {
> @@ -753,6 +868,10 @@ static long privcmd_ioctl(struct file *file,
>  		ret = privcmd_ioctl_restrict(file, udata);
>  		break;
>
> +	case IOCTL_PRIVCMD_MMAP_RESOURCE:
> +		ret = privcmd_ioctl_mmap_resource(file, udata);
> +		break;
> +
>  	default:
>  		break;
>  	}
> diff --git a/include/uapi/xen/privcmd.h b/include/uapi/xen/privcmd.h
> index 39d3e7b8e993..d2029556083e 100644
> --- a/include/uapi/xen/privcmd.h
> +++ b/include/uapi/xen/privcmd.h
> @@ -89,6 +89,15 @@ struct privcmd_dm_op {
>  	const struct privcmd_dm_op_buf __user *ubufs;
>  };
>
> +struct privcmd_mmap_resource {
> +	domid_t dom;
> +	__u32 type;
> +	__u32 id;
> +	__u32 idx;
> +	__u64 num;
> +	__u64 addr;
> +};
> +
>  /*
>   * @cmd: IOCTL_PRIVCMD_HYPERCALL
>   * @arg: &privcmd_hypercall_t
> @@ -114,5 +123,7 @@ struct privcmd_dm_op {
>  	_IOC(_IOC_NONE, 'P', 5, sizeof(struct privcmd_dm_op))
>  #define IOCTL_PRIVCMD_RESTRICT				\
>  	_IOC(_IOC_NONE, 'P', 6, sizeof(domid_t))
> +#define IOCTL_PRIVCMD_MMAP_RESOURCE			\
> +	_IOC(_IOC_NONE, 'P', 7, sizeof(struct privcmd_mmap_resource))
>
>  #endif /* __LINUX_PUBLIC_PRIVCMD_H__ */
> diff --git a/include/xen/interface/memory.h b/include/xen/interface/memory.h
> index 583dd93b3016..b110142ea996 100644
> --- a/include/xen/interface/memory.h
> +++ b/include/xen/interface/memory.h
> @@ -265,4 +265,71 @@ struct xen_remove_from_physmap {
>  };
>  DEFINE_GUEST_HANDLE_STRUCT(xen_remove_from_physmap);
>
> +/*
> + * Get the pages for a particular guest resource, so that they can be
> + * mapped directly by a tools domain.
> + */
> +#define XENMEM_acquire_resource 28
> +struct xen_mem_acquire_resource {
> +	/* IN - The domain whose resource is to be mapped */
> +	domid_t domid;
> +	/* IN - the type of resource */
> +	uint16_t type;
> +
> +#define XENMEM_resource_ioreq_server 0
> +#define XENMEM_resource_grant_table 1
> +
> +	/*
> +	 * IN - a type-specific resource identifier, which must be zero
> +	 *      unless stated otherwise.
> +	 *
> +	 * type == XENMEM_resource_ioreq_server -> id == ioreq server id
> +	 * type == XENMEM_resource_grant_table -> id defined below
> +	 */
> +	uint32_t id;
> +
> +#define XENMEM_resource_grant_table_id_shared 0
> +#define XENMEM_resource_grant_table_id_status 1
> +
> +	/* IN/OUT - As an IN parameter number of frames of the resource
> +	 *          to be mapped. However, if the specified value is 0 and
> +	 *          frame_list is NULL then this field will be set to the
> +	 *          maximum value supported by the implementation on return.
> +	 */
> +	uint32_t nr_frames;
> +	/*
> +	 * OUT - Must be zero on entry. On return this may contain a bitwise
> +	 *       OR of the following values.
> +	 */
> +	uint32_t flags;
> +
> +	/* The resource pages have been assigned to the calling domain */
> +#define _XENMEM_rsrc_acq_caller_owned 0
> +#define XENMEM_rsrc_acq_caller_owned (1u << _XENMEM_rsrc_acq_caller_owned)
> +
> +	/*
> +	 * IN - the index of the initial frame to be mapped. This parameter
> +	 *      is ignored if nr_frames is 0.
> +	 */
> +	uint64_t frame;
> +
> +#define XENMEM_resource_ioreq_server_frame_bufioreq 0
> +#define XENMEM_resource_ioreq_server_frame_ioreq(n) (1 + (n))
> +
> +	/*
> +	 * IN/OUT - If the tools domain is PV then, upon return, frame_list
> +	 *          will be populated with the MFNs of the resource.
> +	 *          If the tools domain is HVM then it is expected that, on
> +	 *          entry, frame_list will be populated with a list of GFNs
> +	 *          that will be mapped to the MFNs of the resource.
> +	 *          If -EIO is returned then the frame_list has only been
> +	 *          partially mapped and it is up to the caller to unmap all
> +	 *          the GFNs.
> +	 *          This parameter may be NULL if nr_frames is 0.
> +	 */
> +	GUEST_HANDLE(xen_pfn_t) frame_list;
> +};
> +typedef struct xen_mem_acquire_resource xen_mem_acquire_resource_t;
> +DEFINE_GUEST_HANDLE_STRUCT(xen_mem_acquire_resource);
> +
>  #endif /* __XEN_PUBLIC_MEMORY_H__ */
> diff --git a/include/xen/interface/xen.h b/include/xen/interface/xen.h
> index 4f4830ef8f93..8bfb242f433e 100644
> --- a/include/xen/interface/xen.h
> +++ b/include/xen/interface/xen.h
> @@ -265,9 +265,10 @@
>   *
>   * PAT (bit 7 on) --> PWT (bit 3 on) and clear bit 7.
>   */
> -#define MMU_NORMAL_PT_UPDATE       0 /* checked '*ptr = val'. ptr is MA.      */
> -#define MMU_MACHPHYS_UPDATE        1 /* ptr = MA of frame to modify entry for */
> -#define MMU_PT_UPDATE_PRESERVE_AD  2 /* atomically: *ptr = val | (*ptr&(A|D)) */
> +#define MMU_NORMAL_PT_UPDATE        0 /* checked '*ptr = val'. ptr is MA.      */
> +#define MMU_MACHPHYS_UPDATE         1 /* ptr = MA of frame to modify entry for */
> +#define MMU_PT_UPDATE_PRESERVE_AD   2 /* atomically: *ptr = val | (*ptr&(A|D)) */
> +#define MMU_PT_UPDATE_NO_TRANSLATE  3 /* checked '*ptr = val'. ptr is MA.      */
>
>  /*
>   * MMU EXTENDED OPERATIONS
> diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
> index fd23e42c6024..fd18c974a619 100644
> --- a/include/xen/xen-ops.h
> +++ b/include/xen/xen-ops.h
> @@ -63,7 +63,7 @@ static inline void xen_destroy_contiguous_region(phys_addr_t pstart,
>  struct vm_area_struct;
>
>  /*
> - * xen_remap_domain_gfn_array() - map an array of foreign frames
> + * xen_remap_domain_gfn_array() - map an array of foreign frames by gfn
>   * @vma: VMA to map the pages into
>   * @addr: Address at which to map the pages
>   * @gfn: Array of GFNs to map
> @@ -86,6 +86,28 @@ int xen_remap_domain_gfn_array(struct vm_area_struct *vma,
>  			       unsigned domid,
>  			       struct page **pages);
>
> +/*
> + * xen_remap_domain_mfn_array() - map an array of foreign frames by mfn
> + * @vma: VMA to map the pages into
> + * @addr: Address at which to map the pages
> + * @mfn: Array of MFNs to map
> + * @nr: Number entries in the MFN array
> + * @err_ptr: Returns per-MFN error status.
> + * @prot: page protection mask
> + * @domid: Domain owning the pages
> + * @pages: Array of pages if this domain has an auto-translated physmap
> + *
> + * @mfn and @err_ptr may point to the same buffer, the MFNs will be
> + * overwritten by the error codes after they are mapped.
> + *
> + * Returns the number of successfully mapped frames, or a -ve error
> + * code.
> + */
> +int xen_remap_domain_mfn_array(struct vm_area_struct *vma,
> +			       unsigned long addr, xen_pfn_t *mfn, int nr,
> +			       int *err_ptr, pgprot_t prot,
> +			       unsigned int domid, struct page **pages);
> +
>  /* xen_remap_domain_gfn_range() - map a range of foreign frames
>   * @vma: VMA to map the pages into
>   * @addr: Address at which to map the pages
> --
> 2.11.0
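
For context, below is a minimal userspace sketch of how a tools domain might
drive the new IOCTL_PRIVCMD_MMAP_RESOURCE. It is illustrative only, not part
of the patch: it assumes 4K pages, the privcmd node at /dev/xen/privcmd, and
that the uAPI header added above is installed as <xen/privcmd.h>; the
XENMEM_resource_* values are copied locally because memory.h is not exported
to userspace, and a real toolstack (e.g. libxenforeignmemory) would carry its
own definitions and do proper lifetime management.

/*
 * Illustrative sketch only: map the ioreq pages of an ioreq server
 * belonging to domain 'dom' into the calling process.
 */
#include <fcntl.h>
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

#include <xen/privcmd.h>

/* Values lifted from the memory.h hunk above (not exported to userspace). */
#define XENMEM_resource_ioreq_server                0
#define XENMEM_resource_ioreq_server_frame_ioreq(n) (1 + (n))

void *map_ioreq_pages(uint16_t dom, uint32_t ioreq_server_id, uint64_t num)
{
	struct privcmd_mmap_resource req;
	void *addr;
	int fd;

	fd = open("/dev/xen/privcmd", O_RDWR | O_CLOEXEC);
	if (fd < 0)
		return NULL;

	/* Reserve a privcmd-backed VA range for the resource to land in. */
	addr = mmap(NULL, num * 4096, PROT_READ | PROT_WRITE, MAP_SHARED,
		    fd, 0);
	if (addr == MAP_FAILED)
		goto err;

	memset(&req, 0, sizeof(req));
	req.dom = dom;
	req.type = XENMEM_resource_ioreq_server;
	req.id = ioreq_server_id;
	req.idx = XENMEM_resource_ioreq_server_frame_ioreq(0);
	req.num = num;
	req.addr = (uint64_t)(uintptr_t)addr;

	if (ioctl(fd, IOCTL_PRIVCMD_MMAP_RESOURCE, &req)) {
		munmap(addr, num * 4096);
		goto err;
	}

	/*
	 * fd is deliberately left open to keep the sketch short; a real
	 * caller would track it and munmap()/close() on teardown.
	 */
	return addr;

err:
	close(fd);
	return NULL;
}

On a PV tools domain this exercises the xen_remap_domain_mfn_array() path
added above; on an HVM/PVH tools domain it instead goes through the
alloc_empty_pages()/remap_pfn path, with the GFNs of the empty pages passed
to XENMEM_acquire_resource in frame_list.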